CN110599528B - Unsupervised three-dimensional medical image registration method and system based on neural network - Google Patents

Unsupervised three-dimensional medical image registration method and system based on neural network

Info

Publication number
CN110599528B
CN110599528B
Authority
CN
China
Prior art keywords
image
layer
feature map
slice
upsampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910828807.7A
Other languages
Chinese (zh)
Other versions
CN110599528A (en)
Inventor
赵秀阳
马英君
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201910828807.7A
Publication of CN110599528A
Application granted
Publication of CN110599528B

Classifications

    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06T 7/33 — Image analysis; determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/20081 — Indexing scheme for image analysis or image enhancement; special algorithmic details; training; learning
    • G06T 2207/20084 — Indexing scheme for image analysis or image enhancement; special algorithmic details; artificial neural networks [ANN]

Abstract

The invention provides an unsupervised three-dimensional medical image registration method and system based on a neural network, comprising the following steps: collecting images; preprocessing the acquired three-dimensional medical images; training a neural network to obtain a trained neural network model; and inputting the medical image to be registered into the trained neural network model for registration, to obtain and output a registered image. Based on the deformation fields φ1, φ2, φ3 and φ4 predicted by the network and the corresponding deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4, a loss function is used to compute the loss between the fixed image I_F and the deformed images, and back-propagation optimization is performed on the neural network until the computed loss value no longer decreases or network training reaches a preset number of training iterations, at which point training is complete and the trained neural network model is obtained. The invention is used for realizing medical image registration.

Description

Unsupervised three-dimensional medical image registration method and system based on neural network
Technical Field
The invention relates to the field of medical image registration, in particular to an unsupervised three-dimensional medical image registration method and system based on a neural network, which are mainly used for registering three-dimensional human brain images.
Background
Medical image registration refers to seeking a spatial transformation, or a series of spatial transformations, for one medical image so that it comes into spatial correspondence with the corresponding points of another medical image. This correspondence means that the same anatomical point of the body has the same spatial position in the two matched images. Generally, the image to be registered is called the floating image (moving image), and the transformation target image is called the fixed image or reference image.
There is currently much open-source software available for medical image segmentation, registration and related tasks. For example: FreeSurfer can be used to analyze and visualize structural and functional neuroimaging data from slices or time series, and performs well in skull stripping, B1 bias field correction, grey/white matter segmentation and morphological difference measurement; FSL, similar to FreeSurfer, allows comprehensive analysis of fMRI, MRI and DTI brain imaging data; the ITK software package can be used for the segmentation and registration of multi-dimensional images; NiftyReg implements rigid, affine and nonlinear registration methods for NIfTI images and supports GPU computation; Elastix is ITK-based open-source software that includes common algorithms for medical image registration; and ANTs, one of the better current registration tools, implements diffeomorphic deformable registration. The above registration tools fit the deformation of the image using conventional registration methods. In addition, many deep-learning-based registration methods have recently been proposed, such as DIRNet, BIRNet and VoxelMorph, which use a neural network to obtain the image transformation parameters and then use a transformation network to transform the floating image to obtain the registration result.
Although the above registration tools and methods all achieve good results in image registration, the following problems still exist:
(1) some methods require manual labeling and supervision information, have high registration specialty requirements, and have relatively low registration rates. The marking of the feature points and the feature areas of the medical image and the acquisition of the image supervision information need to be finished by professional medical imaging doctors, which is very difficult for registration workers without related medical experience, meanwhile, the feature marks obtained by different doctors and the same doctor at different times may have differences, the manual marking process is time-consuming and labor-consuming, and the subjective judgment of the doctors has a great influence on the registration result.
(2) The registration accuracy is relatively low. Although methods proposed in recent years have made great progress in registration results, their accuracy still leaves room for improvement.
Therefore, the invention provides an unsupervised three-dimensional medical image registration method and system based on a neural network, which are used for solving the problems.
Disclosure of Invention
In view of the above disadvantages of the prior art, the present invention provides an unsupervised three-dimensional medical image registration method and system based on a neural network, which realize rapid registration of medical images while also improving registration accuracy.
In a first aspect, the present invention provides an unsupervised three-dimensional medical image registration method based on a neural network, comprising the steps of:
L1, image acquisition: acquire three-dimensional medical images from the public data sets OASIS and ADNI, and/or acquire three-dimensional medical images from the DICOM interface of a CT, MRI or ultrasound scanner;
L2, preprocess the acquired three-dimensional medical images: the preprocessing comprises image segmentation, cropping, normalization and affine alignment; any one image is selected from the affine-aligned images as the fixed image I_F, and the remaining images are taken as floating images I_M; the cropped images are of consistent size;
L3, based on the fixed image I_F and floating images I_M obtained after preprocessing, train the neural network to obtain a trained neural network model;
L4, input the medical image to be registered into the trained neural network model for registration, to obtain and output a registered image of the medical image to be registered;
wherein, in step L3, training the neural network based on the fixed image I_F and floating images I_M obtained after preprocessing to obtain a trained neural network model comprises:
S1, input the fixed image I_F and floating images I_M obtained after preprocessing into the input layer of the neural network, where each set of input data comprises the fixed image I_F and one floating image I_M;
S2, down-sample the fixed image I_F and floating image I_M from the input layer and output a feature map;
the down-sampling comprises 3 down-sampling processes, followed by a convolution computation with a kernel size of 3 × 3 × 3 and a LeakyReLU activation function computation after the 3 down-sampling processes; the 3 down-sampling processes correspond to 3 down-sampling process layers, denoted, in order of execution, the first, second and third down-sampling process layers; each down-sampling process layer comprises a convolution layer with a kernel size of 3 × 3 × 3, a LeakyReLU activation function layer and a max-pooling layer;
S3, perform feature re-weighting on the feature maps output by the LeakyReLU activation function layers in the first, second and third down-sampling process layers, respectively, to obtain three weighted feature maps, in turn: a first weighted feature map, a second weighted feature map and a third weighted feature map;
s4, performing 1 × 1 × 1 convolution on the feature map output in the step S2, and outputting a floating image IMTo a fixed picture IFDeformation field of
Figure BDA0002189971650000031
S5, input the feature map output in step S2 into an up-sampling layer for up-sampling; the up-sampling layer comprises 3 up-sampling process layers, each comprising an UpSampling layer and a convolution layer with a kernel size of 3 × 3 × 3, with a LeakyReLU activation function layer after the convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes and are denoted, in order, the first, second and third up-sampling process layers;
the feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the second up-sampling process layer; and the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the third up-sampling process layer;
s6, respectively carrying out 1 multiplied by 1 convolution on the feature maps output by the first up-sampling process layer, the second up-sampling process layer and the third up-sampling process layer, and outputting the floating images I corresponding to the first up-sampling process layer, the second up-sampling process layer and the third up-sampling process layerMTo a fixed picture IFIn turn is a deformation field
Figure BDA0002189971650000041
Deformation field
Figure BDA0002189971650000042
And deformation field
Figure BDA0002189971650000043
S7, input the floating image I_M and the output deformation field φ1 into a spatial transformation network, input the floating image I_M and the output deformation field φ2 into the spatial transformation network, input the floating image I_M and the output deformation field φ3 into the spatial transformation network, and input the floating image I_M and the output deformation field φ4 into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images corresponding to the floating image I_M are correspondingly obtained, in turn the deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4;
S8, based on the output deformation fields and on the obtained deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4, compute the loss function value between the fixed image I_F and the deformed images using the loss function, and perform back-propagation optimization on the neural network until the computed loss function value no longer decreases or network training reaches a preset number of training iterations, at which point the neural network training is finished and the trained neural network model is obtained;
the computational expression of the loss function is:
Figure BDA0002189971650000051
in the formula (i), the first and second groups,
Figure BDA0002189971650000052
represents the calculated loss function value, α and β are both constants, α + β is 1,
Figure BDA0002189971650000053
is a regularization term, λ is a regularization control constant parameter,
Figure BDA0002189971650000054
representing a predetermined image I fixed by saidFDescending miningSampling the obtained three-dimensional medical image; three-dimensional medical image
Figure BDA0002189971650000055
Sequentially with the deformed image
Figure BDA0002189971650000056
Equivalent, three-dimensional medical image
Figure BDA0002189971650000057
Figure BDA0002189971650000058
Are successively lower and are each smaller than the fixed image IFThe resolution of (a) of (b),
Figure BDA0002189971650000059
representing said fixed image IFAnd the deformed image
Figure BDA00021899716500000510
A measure of the degree of similarity between the two,
Figure BDA00021899716500000511
representing the three-dimensional medical image
Figure BDA00021899716500000512
And the deformed image
Figure BDA00021899716500000513
A measure of the degree of similarity between the two,
Figure BDA00021899716500000514
representing the three-dimensional medical image
Figure BDA00021899716500000515
And the deformed image
Figure BDA00021899716500000516
Similarity between themThe measurement is carried out by measuring the time of the measurement,
Figure BDA00021899716500000517
representing the three-dimensional medical image
Figure BDA00021899716500000518
And the deformed image
Figure BDA00021899716500000519
A measure of similarity between;
Figure BDA00021899716500000520
the same similarity metric function is used.
Further, the step of implementing feature re-weighting in step S3 comprises:
step S31, record each feature map output in the down-sampling that is to be feature re-weighted as a feature map X, where X ∈ R^(H×W×D); slice the feature map X along its D dimension, and use a global average pooling strategy to perform global average pooling on each slice x ∈ R^(H×W) obtained by slicing, to obtain a slice descriptor z for each slice x in the D dimension of the feature map X, where each slice descriptor z is computed as:
z = (1 / (H × W)) · Σ_{i=1..H} Σ_{j=1..W} x(i, j),
where (i, j) denotes a pixel point on the slice x, and x(i, j) denotes the gray value of the slice x at the pixel point (i, j);
step S32, obtain the weight s of each slice x in the D dimension of the feature map X, where the weight s of each slice x is computed as:
s = σ(δ(z)),
where σ denotes the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of the slice x obtained in step S31;
step S33, load each weight s acquired in step S32 onto the corresponding slice, to obtain the re-weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, where the feature re-weighting corresponding to each slice x is computed as:
x̃ = F_scale(x, s) = s · x,
where F_scale(x, s) denotes the multiplication operation between a slice x and its corresponding weight s;
step S34, based on the re-weighted slices x̃ obtained in step S33 for the slices x in the D dimension of the feature map X, correspondingly obtain the re-weighted feature map X̃ corresponding to the feature map X.
Furthermore, the similarity measure function adopts a cross-correlation function.
Furthermore, the spatial transformation network adopts an STN (spatial transformer network).
Further, the preprocessing also comprises data enhancement;
the data enhancement comprises the following steps: respectively performing bending transformation on each obtained floating image to obtain a bent transformed image corresponding to each obtained floating image; the resulting warped images are newly added floating images.
In a second aspect, the present invention provides an unsupervised three-dimensional medical image registration system based on a neural network, comprising:
an image acquisition unit, which acquires three-dimensional medical images from the public data sets OASIS and ADNI, and/or acquires three-dimensional medical images from the DICOM interface of a CT, MRI or ultrasound scanner;
an image preprocessing unit, which preprocesses the acquired three-dimensional medical images, the preprocessing including image segmentation, cropping, normalization and affine alignment, and which selects any one image from the affine-aligned images as the fixed image I_F, with the remaining images taken as floating images I_M; the cropped images are of consistent size;
a neural network training unit, which trains the neural network based on the fixed image I_F and floating images I_M obtained after preprocessing, to obtain a trained neural network model;
an image registration unit, which inputs the medical image to be registered into the trained neural network model for registration, to obtain and output a registered image of the medical image to be registered;
wherein, the neural network training unit comprises:
an input module for preprocessing the obtained fixed image IFAnd a floating image IMInputting the neural network as an input layer of the neural network, wherein each set of input data comprises the fixed image IFAnd one said floating image IM
A down-sampling module for fixed image I input in the input layerFAnd a floating image IMPerforming down sampling to output a characteristic diagram; the down-sampling comprises 3 down-sampling processes, a convolution calculation process with convolution kernel size of 3 multiplied by 3 and a LeakyReLU activation function calculation process after the 3 down-sampling processes; 3 downsampling processes correspond to 3 downsampling process layers; the 3 down-sampling process layers are sequentially marked as a first down-sampling process layer, a second down-sampling process layer and a third down-sampling process layer according to the execution sequence of the down-sampling process; each downsampling process layer comprises a convolution layer with convolution kernel size of 3 multiplied by 3, a LeakyReLU activation function layer and a maximum pooling layer;
the reweighting module is used for respectively performing characteristic reweighting on the characteristic graphs output by the LeakyReLU activation function layer in the first downsampling process layer, the second downsampling process layer and the third downsampling process layer corresponding to downsampling to obtain three weighted characteristic graphs, and the three weighted characteristic graphs are sequentially: a first weighted feature map, a second weighted feature map, and a third weighted feature map;
a first deformation field output module, which performs a 1 × 1 × 1 convolution on the feature map output by the down-sampling module and outputs the deformation field φ1 from the floating image I_M to the fixed image I_F;
an up-sampling module, which inputs the feature map output by the down-sampling module into an up-sampling layer for up-sampling; the up-sampling layer comprises 3 up-sampling process layers, each comprising an UpSampling layer and a convolution layer with a kernel size of 3 × 3 × 3, with a LeakyReLU activation function layer after the convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes and are denoted, in order, the first, second and third up-sampling process layers; the feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the second up-sampling process layer; and the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the third up-sampling process layer;
a second deformation field output module, which performs a 1 × 1 × 1 convolution on the feature maps output by the first, second and third up-sampling process layers, respectively, and outputs the deformation fields from the floating image I_M to the fixed image I_F corresponding to these layers, in turn the deformation fields φ2, φ3 and φ4;
a spatial transformation module, which inputs the floating image I_M and the output deformation field φ1 into a spatial transformation network, inputs the floating image I_M and the output deformation field φ2 into the spatial transformation network, inputs the floating image I_M and the output deformation field φ3 into the spatial transformation network, and inputs the floating image I_M and the output deformation field φ4 into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images corresponding to the floating image I_M are correspondingly obtained, in turn the deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4;
a neural network optimization module, which, based on the output deformation fields and on the obtained deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4, computes the loss function value between the fixed image I_F and the deformed images using the loss function, and performs back-propagation optimization on the neural network until the computed loss function value no longer decreases or network training reaches a preset number of training iterations, at which point the neural network training is finished and the trained neural network model is obtained;
the computational expression of the loss function is:
L_total = α·Sim(I_F, Ĩ4) + β·[Sim(I_F(1), Ĩ3) + Sim(I_F(2), Ĩ2) + Sim(I_F(3), Ĩ1)] + λ·R,
in the formula, L_total denotes the computed loss function value; α and β are both constants, with α + β = 1; R is a regularization term on the predicted deformation fields, and λ is a regularization control constant parameter; I_F(1), I_F(2) and I_F(3) denote three-dimensional medical images obtained in advance by down-sampling the fixed image I_F, whose resolutions are equal, in turn, to those of the deformed images Ĩ3, Ĩ2 and Ĩ1, are successively lower, and are all smaller than the resolution of I_F; Sim(I_F, Ĩ4) denotes the similarity measure between the fixed image I_F and the deformed image Ĩ4, and Sim(I_F(1), Ĩ3), Sim(I_F(2), Ĩ2) and Sim(I_F(3), Ĩ1) denote the similarity measures between the three-dimensional medical images I_F(1), I_F(2), I_F(3) and the deformed images Ĩ3, Ĩ2, Ĩ1, respectively; the same similarity metric function Sim(·, ·) is used for all four terms.
Further, the re-weighting module comprises:
a descriptor acquisition module, which records each feature map output in the down-sampling that is to be feature re-weighted as a feature map X, where X ∈ R^(H×W×D); slices the feature map X along its D dimension, and uses a global average pooling strategy to perform global average pooling on each slice x ∈ R^(H×W) obtained by slicing, to obtain a slice descriptor z for each slice x in the D dimension of the feature map X, where each slice descriptor z is computed as:
z = (1 / (H × W)) · Σ_{i=1..H} Σ_{j=1..W} x(i, j),
where (i, j) denotes a pixel point on the slice x, and x(i, j) denotes the gray value of the slice x at the pixel point (i, j);
a slice weight calculation module, which acquires the weight s of each slice x in the D dimension of the feature map X, where the weight s of each slice x is computed as:
s = σ(δ(z)),
where σ denotes the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of the slice x obtained by the descriptor acquisition module;
a weighting module, which loads each weight s obtained by the slice weight calculation module onto the corresponding slice, to obtain the re-weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, where the feature re-weighting corresponding to each slice x is computed as:
x̃ = F_scale(x, s) = s · x,
where F_scale(x, s) denotes the multiplication operation between a slice x and its corresponding weight s;
a weighted-map acquisition module, which, based on the re-weighted slices x̃ corresponding to the slices x in the D dimension of the feature map X, correspondingly obtains the re-weighted feature map X̃ corresponding to the feature map X.
Furthermore, the similarity measure function adopts a cross-correlation function.
Furthermore, the spatial transformation network adopts an STN (spatial transformer network).
Further, the preprocessing in the image preprocessing unit also includes data enhancement, wherein the data enhancement comprises: performing a warping transformation on each obtained floating image, to obtain a warped image corresponding to each floating image; the resulting warped images are newly added floating images.
The beneficial effects of the invention are as follows:
(1) The unsupervised three-dimensional medical image registration method and system both use an unsupervised registration mode: no labeling information or registration supervision information is needed in the registration process. This reduces the demand for labeled data and the errors of subjective human judgment, helps medical workers without relevant medical experience to perform image registration, increases the registration speed to a certain extent, saves registration time, and saves manpower and material resources.
(2) By means of feature-weight fusion and a loss function with multi-level loss supervision, the unsupervised three-dimensional medical image registration method and system provided by the invention fuse the feature maps obtained along the down-sampling path into the up-sampling path according to their contribution, while supervising the model loss at different resolutions. This achieves more effective feature reuse and model supervision and improves the registration accuracy to a certain extent.
In addition, the invention has a reliable design principle, a simple structure and a very wide application prospect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention.
FIG. 2 is a schematic diagram of the process for obtaining the deformation fields φ1, φ2, φ3 and φ4 in the method of FIG. 1.
FIG. 3 is a schematic block diagram of a system of one embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following explains key terms appearing in the present invention.
FIG. 1 and FIG. 2 are schematic diagrams of the method of one embodiment of the invention. This embodiment takes the registration of 3D human brain images as an example.
As shown in FIG. 1, the method 100 includes:
Step one: image acquisition.
Human brain image data are downloaded from the public data sets OASIS and ADNI.
Specifically, those skilled in the art can also acquire the required three-dimensional medical images from the DICOM interface of a CT, MRI or ultrasound scanner.
Step two: preprocessing.
The three-dimensional medical images acquired in step one are preprocessed.
The human brain data collected in step one include redundant parts such as the neck, oral cavity, nasal cavity and skull, and differ in size and gray scale. For this reason, standard preprocessing is performed on these image data. First, the images are segmented: the brain is separated from the original data, and the obtained brain images are cropped to a consistent size. Then normalization is performed to normalize the voxel values to [0, 1], followed by affine alignment. Finally, one image is selected from the affine-aligned data as the fixed image I_F, and the rest are used as floating images I_M.
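As a minimal sketch of the cropping and normalization steps only, assuming a NumPy implementation: the target shape and the function name are illustrative choices rather than values given by the patent, and segmentation and affine alignment would be performed beforehand with external tools such as those named in the background section.

    import numpy as np

    def crop_and_normalize(volume, target_shape=(160, 192, 160)):
        """Center-crop (or zero-pad) a brain volume to a common shape and min-max normalize to [0, 1]."""
        out = np.zeros(target_shape, dtype=np.float32)
        src, dst = [], []
        for s, t in zip(volume.shape, target_shape):
            if s >= t:  # crop: take the centered t voxels
                start = (s - t) // 2
                src.append(slice(start, start + t))
                dst.append(slice(0, t))
            else:       # pad: place the s voxels in the center
                start = (t - s) // 2
                src.append(slice(0, s))
                dst.append(slice(start, start + s))
        out[tuple(dst)] = volume[tuple(src)]
        # Normalize voxel values to [0, 1].
        out = (out - out.min()) / (out.max() - out.min() + 1e-8)
        return out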
In addition, in order to enhance the robustness and generalization capability of the neural network model, the preprocessing in this step also includes data enhancement: each floating image obtained during preprocessing is subjected to a warping transformation, and all images obtained after the warping transformation are newly added floating images. That is, the new images produced by the warping transformation also belong to the floating images obtained by the preprocessing in this step.
In this embodiment, three warping transformations of different degrees may be adopted to realize the data enhancement. In a concrete implementation, the number of warping transformations of different degrees can be increased or decreased according to the actual situation.
All the floating images obtained by preprocessing in the step constitute a training set of the neural network.
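The warping-based enhancement could be sketched as below, assuming a PyTorch implementation; the random smooth displacement field, its magnitude and the grid step are illustrative assumptions, with the magnitude playing the role of the "degree" of warping (three different magnitudes would give the three transformations mentioned above).

    import torch
    import torch.nn.functional as F

    def random_warp(vol, magnitude=2.0, grid_step=16):
        """vol: (1, 1, D, H, W) float tensor; returns a randomly warped copy."""
        _, _, D, H, W = vol.shape
        # Coarse random displacement (in voxels), upsampled to full size so it is smooth.
        coarse = torch.randn(1, 3, D // grid_step, H // grid_step, W // grid_step) * magnitude
        disp = F.interpolate(coarse, size=(D, H, W), mode='trilinear', align_corners=True)
        # Identity sampling grid in normalized [-1, 1] coordinates.
        theta = torch.eye(3, 4).unsqueeze(0)
        grid = F.affine_grid(theta, vol.shape, align_corners=True)
        # Convert the displacement to normalized coordinates ((x, y, z) order) and add it.
        scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
        grid = grid + disp.permute(0, 2, 3, 4, 1) * scale
        return F.grid_sample(vol, grid, mode='bilinear', align_corners=True)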
Step three: train the neural network to obtain a trained neural network model. The fixed image I_F obtained after preprocessing and the training set are used to train the neural network, yielding the trained neural network model.
Step four: input the three-dimensional medical image to be registered into the neural network model for registration, and finally obtain and output a registered image of the three-dimensional medical image to be registered.
When the invention is used, image acquisition is carried out first, and the acquired images are then preprocessed; the neural network is trained based on the fixed image I_F and the training set obtained by preprocessing, giving a trained neural network model; finally, the three-dimensional medical image to be registered is input into the trained neural network model for registration, and the corresponding registered image is obtained and output.
The method works particularly well when the number of images to be registered (in this embodiment, human brain images) is relatively large. Specifically, after the trained neural network model is obtained, the images to be registered are input into it one by one, and the registered image corresponding to each input is obtained. Each time an image to be registered is input, the corresponding registered image is output; after the previous registered image is output, the next image to be registered can be input into the trained model, until the registration of all images is completed.
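Purely as a usage illustration of this one-image-at-a-time workflow, assuming a PyTorch model: RegistrationNet, load_volume, save_volume, the checkpoint path and the file names are hypothetical placeholders, not names defined by the patent.

    import torch

    model = RegistrationNet()                               # hypothetical trained network
    model.load_state_dict(torch.load('registration_model.pt'))
    model.eval()

    fixed = load_volume('fixed.nii.gz')                     # (1, 1, D, H, W) tensor
    with torch.no_grad():
        for path in ['subject01.nii.gz', 'subject02.nii.gz']:
            moving = load_volume(path)                      # next image to be registered
            warped, flow = model(fixed, moving)             # registered image + deformation field
            save_volume(warped, path.replace('.nii.gz', '_registered.nii.gz'))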
In the third step, the training of the neural network to obtain a trained neural network model includes:
s1, fixing image I obtained after preprocessingFAnd a floating image IMInputting the neural network as an input layer of the neural network, wherein each set of input data comprises the fixed image IFAnd one said floating image IM
Each set of input data IFAnd IMAnd the 3D images are subjected to lossless splicing to form 2 channels and then are sent to a neural network input layer.
S2, down-sample the fixed image I_F and floating image I_M from the input layer, and output the feature map of the input fixed image I_F and floating image I_M.
The down-sampling comprises 3 down-sampling processes, followed by a convolution computation with a kernel size of 3 × 3 × 3 and a LeakyReLU activation function computation after the 3 down-sampling processes; the 3 down-sampling processes correspond to 3 down-sampling process layers, denoted, in order of execution, the first, second and third down-sampling process layers; each down-sampling process layer comprises a convolution layer with a kernel size of 3 × 3 × 3, a LeakyReLU activation function layer and a max-pooling layer.
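A minimal sketch of this down-sampling path, assuming a PyTorch implementation; the channel counts (16/32/32) and the LeakyReLU slope are illustrative assumptions, since the patent does not specify them.

    import torch
    import torch.nn as nn

    class DownBlock(nn.Module):
        """One down-sampling process layer: 3x3x3 conv -> LeakyReLU -> 2x2x2 max-pool."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
            self.act = nn.LeakyReLU(0.2)
            self.pool = nn.MaxPool3d(2)

        def forward(self, x):
            feat = self.act(self.conv(x))       # feature map kept for re-weighting and fusion
            return feat, self.pool(feat)

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.down1 = DownBlock(2, 16)       # input: fixed + floating image, 2 channels
            self.down2 = DownBlock(16, 32)
            self.down3 = DownBlock(32, 32)
            # Final 3x3x3 conv + LeakyReLU after the three down-sampling processes.
            self.bottom = nn.Sequential(nn.Conv3d(32, 32, 3, padding=1), nn.LeakyReLU(0.2))

        def forward(self, fixed, moving):
            x = torch.cat([fixed, moving], dim=1)  # lossless splicing into 2 channels
            f1, x = self.down1(x)
            f2, x = self.down2(x)
            f3, x = self.down3(x)
            return self.bottom(x), (f1, f2, f3)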
S3, perform feature re-weighting on the feature maps output by the LeakyReLU activation function layers in the first, second and third down-sampling process layers, respectively, to obtain three weighted feature maps, in turn: a first weighted feature map, a second weighted feature map and a third weighted feature map.
The step of implementing the feature re-weighting in step S3 comprises:
Step S31, record each feature map output in the down-sampling that is to be feature re-weighted as a feature map X, where X ∈ R^(H×W×D); slice the feature map X along its D dimension, and use a global average pooling strategy to perform global average pooling on each slice x ∈ R^(H×W) obtained by slicing, to obtain a slice descriptor z for each slice x in the D dimension of the feature map X, where each slice descriptor z is computed as:
z = (1 / (H × W)) · Σ_{i=1..H} Σ_{j=1..W} x(i, j),
where (i, j) denotes a pixel point on the slice x, and x(i, j) denotes the gray value of the slice x at the pixel point (i, j);
Step S32, obtain the weight s of each slice x in the D dimension of the feature map X, where the weight s of each slice x is computed as:
s = σ(δ(z)),
where σ denotes the sigmoid activation function, δ is the ReLU activation function, and z is the slice descriptor of the slice x obtained in step S31;
Step S33, load each weight s acquired in step S32 onto the corresponding slice, to obtain the re-weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, where the feature re-weighting corresponding to each slice x is computed as:
x̃ = F_scale(x, s) = s · x,
where F_scale(x, s) denotes the multiplication operation between a slice x and its corresponding weight s;
Step S34, based on the re-weighted slices x̃ obtained in step S33 for the slices x in the D dimension of the feature map X, correspondingly obtain the re-weighted feature map X̃ corresponding to the feature map X.
For example, for a feature map X ∈ R^(H×W×D) whose slices in the D dimension are x1, x2, …, xD, step S33 yields the corresponding re-weighted slices x̃1, x̃2, …, x̃D, which together constitute the re-weighted feature map X̃ corresponding to the feature map X.
S4, perform a 1 × 1 × 1 convolution on the feature map output in step S2, and output the deformation field φ1 from the floating image I_M to the fixed image I_F.
S5, input the feature map output in step S2 into an up-sampling layer for up-sampling; the up-sampling layer comprises 3 up-sampling process layers, each comprising an UpSampling layer and a convolution layer with a kernel size of 3 × 3 × 3, with a LeakyReLU activation function layer after the convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes and are denoted, in order, the first, second and third up-sampling process layers.
The feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the second up-sampling process layer; and the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the third up-sampling process layer.
S6, perform a 1 × 1 × 1 convolution on the feature maps output by the first, second and third up-sampling process layers, respectively, and output the deformation fields from the floating image I_M to the fixed image I_F corresponding to these layers, in turn the deformation fields φ2, φ3 and φ4.
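The sketch below continues the encoder sketch above (same assumed channel counts) and covers the UpSampling-plus-fusion layers of step S5 together with the four 1 × 1 × 1 flow heads of steps S4 and S6; fusion is shown as channel concatenation.

    import torch
    import torch.nn as nn

    class UpBlock(nn.Module):
        """One up-sampling process layer: UpSampling -> fuse skip -> 3x3x3 conv -> LeakyReLU."""
        def __init__(self, in_ch, skip_ch, out_ch):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=True)
            self.conv = nn.Conv3d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1)
            self.act = nn.LeakyReLU(0.2)

        def forward(self, x, skip):
            x = self.up(x)
            x = torch.cat([x, skip], dim=1)     # fuse with the re-weighted encoder feature map
            return self.act(self.conv(x))

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.up1 = UpBlock(32, 32, 32)      # fuses the third weighted feature map
            self.up2 = UpBlock(32, 32, 32)      # fuses the second weighted feature map
            self.up3 = UpBlock(32, 16, 16)      # fuses the first weighted feature map
            # 1x1x1 convolutions producing a 3-channel deformation field at each scale.
            self.flow1 = nn.Conv3d(32, 3, 1)    # phi_1 at 1/8 resolution (step S4)
            self.flow2 = nn.Conv3d(32, 3, 1)    # phi_2 at 1/4 resolution
            self.flow3 = nn.Conv3d(32, 3, 1)    # phi_3 at 1/2 resolution
            self.flow4 = nn.Conv3d(16, 3, 1)    # phi_4 at full resolution

        def forward(self, bottom, skips):
            f1, f2, f3 = skips                  # re-weighted feature maps from the encoder
            u1 = self.up1(bottom, f3)
            u2 = self.up2(u1, f2)
            u3 = self.up3(u2, f1)
            return self.flow1(bottom), self.flow2(u1), self.flow3(u2), self.flow4(u3)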
S7, input the floating image I_M and the output deformation field φ1 into a spatial transformation network, input the floating image I_M and the output deformation field φ2 into the spatial transformation network, input the floating image I_M and the output deformation field φ3 into the spatial transformation network, and input the floating image I_M and the output deformation field φ4 into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images corresponding to the floating image I_M are correspondingly obtained, in turn the deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4.
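A hedged sketch of the spatial transformation step, assuming the STN resampling is realized with PyTorch's grid_sample and that the deformation field stores voxel displacements in (x, y, z) channel order; for the coarser fields φ1 to φ3, the floating image would first be resized to the field's resolution.

    import torch
    import torch.nn.functional as F

    def spatial_transform(moving, flow):
        """Warp `moving` (N, 1, D, H, W) with a voxel-displacement field `flow` (N, 3, D, H, W)."""
        N, _, D, H, W = flow.shape
        # Identity sampling grid in normalized [-1, 1] coordinates.
        theta = torch.eye(3, 4, device=flow.device).unsqueeze(0).repeat(N, 1, 1)
        grid = F.affine_grid(theta, (N, 1, D, H, W), align_corners=True)
        # Convert voxel displacements to normalized coordinates and add them to the grid.
        scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)], device=flow.device)
        grid = grid + flow.permute(0, 2, 3, 4, 1) * scale
        return F.grid_sample(moving, grid, mode='bilinear', align_corners=True)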
S8, based on the output deformation fields and on the obtained deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4, compute the loss function value between the fixed image I_F and the deformed images using the loss function, and perform back-propagation optimization on the neural network until the computed loss function value no longer decreases or network training reaches the preset number of training iterations, at which point the neural network training is finished and the trained neural network model is obtained.
Wherein the computational expression of the loss function is:
L_total = α·Sim(I_F, Ĩ4) + β·[Sim(I_F(1), Ĩ3) + Sim(I_F(2), Ĩ2) + Sim(I_F(3), Ĩ1)] + λ·R,
in the formula, L_total denotes the computed loss function value; α and β are both constants, with α + β = 1; R is a regularization term on the predicted deformation fields, and λ is a regularization control constant parameter; I_F(1), I_F(2) and I_F(3) denote three-dimensional medical images obtained in advance by down-sampling the fixed image I_F, whose resolutions are equal, in turn, to those of the deformed images Ĩ3, Ĩ2 and Ĩ1, are successively lower, and are all smaller than the resolution of I_F; Sim(I_F, Ĩ4) denotes the similarity measure between the fixed image I_F and the deformed image Ĩ4, and Sim(I_F(1), Ĩ3), Sim(I_F(2), Ĩ2) and Sim(I_F(3), Ĩ1) denote the similarity measures between the three-dimensional medical images I_F(1), I_F(2), I_F(3) and the deformed images Ĩ3, Ĩ2, Ĩ1, respectively; the same similarity metric function Sim(·, ·) is used for all four terms.
Optionally, the similarity metric function described in this embodiment uses a cross-correlation function, and the spatial transformation network adopts an STN (spatial transformer network).
Optionally, the method 100 described in this embodiment may be implemented by using a U-Net neural network as a basic neural network structure.
Alternatively, the fusion mode of the feature maps involved in the present invention may be concatenation; in this embodiment, U-Net-style channel-dimension concatenation can be adopted. In addition, when implementing the feature map fusion of the present invention, a person skilled in the art may select another fusion mode according to the actual situation, for example sum fusion (element-wise addition).
In the present embodiment, the max-pooling down-sampling factor during down-sampling is 2 × 2 × 2; accordingly, I_F(1) can be obtained in advance by reducing the fixed image I_F to 1/2 of its size, I_F(2) by reducing I_F to 1/4, and I_F(3) by reducing I_F to 1/8, so that the resolution of I_F(1) > the resolution of I_F(2) > the resolution of I_F(3) > 0.
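A sketch of the multi-level loss as reconstructed above, assuming PyTorch, a global (rather than windowed) cross-correlation as the similarity measure, a squared-gradient smoothness penalty as the regularization term R, and illustrative values for α, β and λ.

    import torch
    import torch.nn.functional as F

    def ncc(a, b):
        """Global normalized cross-correlation, negated so that lower is better."""
        a = a - a.mean()
        b = b - b.mean()
        return -(a * b).sum() / (a.norm() * b.norm() + 1e-8)

    def gradient_l2(flow):
        """Smoothness regularizer: mean squared finite differences of the field."""
        dz = flow[:, :, 1:] - flow[:, :, :-1]
        dy = flow[:, :, :, 1:] - flow[:, :, :, :-1]
        dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
        return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

    def total_loss(fixed, warped, flows, alpha=0.6, beta=0.4, lam=1.0):
        """fixed: I_F at full resolution; warped: [I~1 (1/8), I~2 (1/4), I~3 (1/2), I~4 (full)];
        flows: the predicted deformation fields; alpha/beta/lam are illustrative values."""
        down = lambda f: F.interpolate(fixed, scale_factor=f, mode='trilinear', align_corners=True)
        f_half, f_quarter, f_eighth = down(0.5), down(0.25), down(0.125)  # I_F(1), I_F(2), I_F(3)
        sim = alpha * ncc(fixed, warped[3]) \
            + beta * (ncc(f_half, warped[2]) + ncc(f_quarter, warped[1]) + ncc(f_eighth, warped[0]))
        reg = sum(gradient_l2(f) for f in flows)
        return sim + lam * reg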
In conclusion, in the unsupervised three-dimensional medical image registration method based on a neural network provided by the invention, the use of the loss function realizes unsupervised registration of three-dimensional medical images: no labeling information or registration supervision information is needed in the registration process. This reduces the demand for labeled data and the errors of subjective human judgment, helps medical workers without relevant medical experience to perform image registration, increases the registration speed to a certain extent, saves registration time, and saves manpower and material resources.
Referring to FIG. 3, an unsupervised three-dimensional medical image registration system 200 based on a neural network according to the present invention comprises:
an image acquisition unit 201 that acquires three-dimensional medical images from public data sets OASIS and ADNI;
an image preprocessing unit 202, which preprocesses the acquired three-dimensional medical images, the preprocessing including image segmentation, cropping, normalization and affine alignment, and which selects any one image from the affine-aligned images as the fixed image I_F, with the remaining images taken as floating images I_M; the cropped images are of consistent size;
a neural network training unit 203, which trains the neural network based on the fixed image I_F and floating images I_M obtained after preprocessing, to obtain a trained neural network model;
an image registration unit 204, which inputs the medical image to be registered into the trained neural network model for registration, and obtains and outputs a registration image of the medical image to be registered;
the neural network training unit 203 includes:
an input module, which inputs the fixed image I_F and floating images I_M obtained after preprocessing into the input layer of the neural network, where each set of input data comprises the fixed image I_F and one floating image I_M;
a down-sampling module, which down-samples the fixed image I_F and floating image I_M from the input layer and outputs a feature map; the down-sampling comprises 3 down-sampling processes, followed by a convolution computation with a kernel size of 3 × 3 × 3 and a LeakyReLU activation function computation after the 3 down-sampling processes; the 3 down-sampling processes correspond to 3 down-sampling process layers, denoted, in order of execution, the first, second and third down-sampling process layers; each down-sampling process layer comprises a convolution layer with a kernel size of 3 × 3 × 3, a LeakyReLU activation function layer and a max-pooling layer;
a re-weighting module, which performs feature re-weighting on the feature maps output by the LeakyReLU activation function layers in the first, second and third down-sampling process layers, respectively, to obtain three weighted feature maps, in turn: a first weighted feature map, a second weighted feature map and a third weighted feature map;
a first deformation field output module, which performs a 1 × 1 × 1 convolution on the feature map output by the down-sampling module and outputs the deformation field φ1 from the floating image I_M to the fixed image I_F;
an up-sampling module, which inputs the feature map output by the down-sampling module into an up-sampling layer for up-sampling; the up-sampling layer comprises 3 up-sampling process layers, each comprising an UpSampling layer and a convolution layer with a kernel size of 3 × 3 × 3, with a LeakyReLU activation function layer after the convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes and are denoted, in order, the first, second and third up-sampling process layers; the feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the second up-sampling process layer; and the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and used as the input of the 3 × 3 × 3 convolution layer in the third up-sampling process layer;
a second deformable field output module for oversampling the first, second and third upsampling process layersRespectively carrying out 1 multiplied by 1 convolution on the characteristic graphs output by the program layer, and outputting floating images I corresponding to the first up-sampling process layer, the second up-sampling process layer and the third up-sampling process layerMTo a fixed picture IFIn turn is a deformation field
Figure BDA0002189971650000201
Deformation field
Figure BDA0002189971650000202
And deformation field
Figure BDA0002189971650000203
a spatial transformation module, which inputs the floating image I_M and the output deformation field φ1 into a spatial transformation network, inputs the floating image I_M and the output deformation field φ2 into the spatial transformation network, inputs the floating image I_M and the output deformation field φ3 into the spatial transformation network, and inputs the floating image I_M and the output deformation field φ4 into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images corresponding to the floating image I_M are correspondingly obtained, in turn the deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4;
a neural network optimization module, which, based on the output deformation fields and on the obtained deformed images Ĩ1, Ĩ2, Ĩ3 and Ĩ4, computes the loss function value between the fixed image I_F and the deformed images using the loss function, and performs back-propagation optimization on the neural network until the computed loss function value no longer decreases or network training reaches a preset number of training iterations, at which point the neural network training is finished and the trained neural network model is obtained;
the computational expression of the loss function is:

L = α·L_sim(I_F, I_M∘φ₃) + β·[L_sim(I_F^(1), I_M∘φ₂) + L_sim(I_F^(2), I_M∘φ₁) + L_sim(I_F^(3), I_M∘φ₀)] + λ·L_reg,

in the formula, L represents the calculated loss function value; α and β are both constants, and α + β = 1; L_reg is a regularization term on the predicted deformation field, and λ is a regularization control constant parameter; I_F^(1), I_F^(2) and I_F^(3) represent the three-dimensional medical images obtained in advance by down-sampling the fixed image I_F; the three-dimensional medical images I_F^(1), I_F^(2) and I_F^(3) are sequentially of the same size as the deformed images I_M∘φ₂, I_M∘φ₁ and I_M∘φ₀, their resolutions are successively lower, and all are lower than the resolution of the fixed image I_F; L_sim(I_F, I_M∘φ₃) represents the similarity measure between the fixed image I_F and the deformed image I_M∘φ₃, and each remaining L_sim term represents the similarity measure between a down-sampled fixed image I_F^(k) and the deformed image of the same size; all L_sim terms adopt the same similarity metric function.
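To make the structure of this loss concrete, the sketch below uses mean squared error as a stand-in for the similarity metric (the optional cross-correlation metric mentioned below is one choice) and a first-order gradient penalty as a stand-in regularizer; which deformation field the regularizer acts on, and the weighting constants, are assumptions:

```python
import torch
import torch.nn.functional as F

def smoothness(flow: torch.Tensor) -> torch.Tensor:
    """First-order gradient penalty on a displacement field (N, 3, D, H, W),
    a common stand-in for the regularization term L_reg."""
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).pow(2).mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).pow(2).mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
    return dz + dy + dx

def registration_loss(fixed, warped_full, warped_coarse, flow,
                      alpha=0.5, beta=0.5, lam=0.01):
    """L = alpha*Sim(I_F, full-res warp) + beta*sum_k Sim(I_F^(k), .) + lam*L_reg.
    `warped_coarse` lists the lower-resolution deformed images; each one is
    compared with the fixed image down-sampled to the same size. MSE stands
    in for the similarity metric; alpha, beta, lam are illustrative values."""
    loss = alpha * F.mse_loss(warped_full, fixed)
    for w in warped_coarse:
        f_k = F.interpolate(fixed, size=w.shape[2:],
                            mode='trilinear', align_corners=False)
        loss = loss + beta * F.mse_loss(w, f_k)
    return loss + lam * smoothness(flow)
```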
Wherein, the re-weighting module comprises:
a descriptor obtaining module for recording each feature map to be feature-re-weighted output in the down-sampling as a feature map X, X ∈ R^(H×W×D); slicing the feature map X in its D dimension, and performing global average pooling on each slice x ∈ R^(H×W) obtained by slicing using a global average pooling strategy, to obtain a slice descriptor z of each slice x in the D dimension of the feature map X, wherein the specific formula of each slice descriptor z is as follows:

z = (1/(H×W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),
wherein (i, j) represents a pixel point on the slice x, and x (i, j) represents a gray value of the slice x at the pixel point (i, j);
the slice weight calculation module is used for acquiring the weight s of each slice X on the D dimension of the feature map X, wherein the calculation formula of the weight s of each slice X is as follows:
s=σ(δ(z)),
wherein, σ represents a sigmoid activation function, δ is a ReLU activation function, and z is a slice descriptor of a slice x obtained by the descriptor obtaining module;
a weighting module for correspondingly loading each weight s obtained by the slice weight calculation module onto the corresponding slice, to obtain the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, wherein the feature re-weighting calculation formula corresponding to each slice x is as follows:

x̃ = F_scale(x, s) = s · x,

in the formula, F_scale(x, s) represents the multiplication operation between the slice x and its corresponding weight s;
a weighted image acquisition module for correspondingly obtaining, based on the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, the re-weighted feature map X̃ corresponding to the feature map X.
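The slice re-weighting just described is, in effect, a squeeze-and-excitation step applied along the D dimension of a feature map. The sketch below takes the formulas literally, so no learned parameters appear; the (H, W, D) tensor layout is an assumption:

```python
import torch

def reweight_depth_slices(X: torch.Tensor) -> torch.Tensor:
    """Feature re-weighting over the D dimension of a feature map X of
    shape (H, W, D): z = mean over (H, W) per slice, s = sigmoid(relu(z)),
    and each slice is rescaled as x~ = s * x."""
    z = X.mean(dim=(0, 1))             # slice descriptors, one per D-slice
    s = torch.sigmoid(torch.relu(z))   # slice weights s = sigma(delta(z))
    return X * s                       # broadcast: slice-wise multiplication
```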
Optionally, the similarity measure function is a cross-correlation function.
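One concrete reading of a cross-correlation similarity metric, offered here only as an assumed realization (a windowed local cross-correlation is an equally plausible one), is the global normalized cross-correlation:

```python
import torch

def ncc(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Global normalized cross-correlation of two equally-shaped images;
    returns a value in [-1, 1], so 1 - ncc(a, b) can serve as a loss term."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)
```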
Optionally, the spatial transformation network adopts an STN spatial transformation network.
Optionally, the preprocessing described in the image preprocessing unit 202 further includes data enhancement; the data enhancement comprises: performing a bending transformation on each obtained floating image respectively, to obtain a bending-transformed image corresponding to each floating image; the resulting bending-transformed images are newly added floating images.
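The bending transformation can be realized, for example, by smoothing a small random displacement field and warping each floating image with it; a sketch under assumed smoothing and magnitude parameters, reusing the warp helper sketched with the spatial transformation module above:

```python
import torch
import torch.nn.functional as F

def random_bend(image: torch.Tensor, magnitude: float = 3.0) -> torch.Tensor:
    """Apply a smooth random warp to an image of shape (N, 1, D, H, W);
    `magnitude` (in voxels) and the 1/8-resolution coarse field are assumed
    augmentation parameters, not values fixed by this embodiment."""
    n, _, d, h, w = image.shape
    coarse = torch.randn(n, 3, d // 8, h // 8, w // 8, device=image.device)
    flow = magnitude * F.interpolate(coarse, size=(d, h, w),
                                     mode='trilinear', align_corners=False)
    return warp(image, flow)  # `warp` as defined in the earlier sketch
```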
Identical or similar parts among the various embodiments in this specification may refer to one another. In particular, the system embodiment is described relatively briefly since it is substantially similar to the method embodiment; for the relevant points, reference may be made to the description of the method embodiment.
Although the present invention has been described in detail with reference to the drawings in connection with the preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from the spirit and scope of the present invention, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An unsupervised three-dimensional medical image registration method based on a neural network, characterized by comprising the following steps:
L1, image acquisition: acquiring three-dimensional medical images from the public data sets OASIS and ADNI, and/or acquiring three-dimensional medical images from the DICOM interface of a CT, MRI or ultrasonic imager;
L2, preprocessing the acquired three-dimensional medical images: the preprocessing comprises image segmentation, cropping, normalization and affine alignment; any one image is selected from the affine-aligned images as the fixed image I_F, and the remaining images are taken as floating images I_M, wherein the sizes of the cropped images are consistent;
L3, training a neural network based on the fixed image I_F and the floating images I_M obtained after preprocessing, to obtain a trained neural network model;
L4, inputting the medical image to be registered into the trained neural network model for registration, to obtain and output a registered image of the medical image to be registered;
wherein, in step L3, the training of the neural network based on the fixed image I_F and the floating images I_M obtained after preprocessing to obtain the trained neural network model comprises:
S1, inputting the fixed image I_F and the floating images I_M obtained after preprocessing into the neural network as its input layer, wherein each set of input data comprises the fixed image I_F and one floating image I_M;
S2, down-sampling the fixed image I_F and the floating image I_M input at the input layer, and outputting a feature map;
the down-sampling comprises 3 down-sampling processes, followed by a convolution calculation process with a convolution kernel size of 3×3×3 and a LeakyReLU activation function calculation process; the 3 down-sampling processes correspond to 3 down-sampling process layers, which are sequentially marked as a first down-sampling process layer, a second down-sampling process layer and a third down-sampling process layer according to the execution order of the down-sampling processes; each down-sampling process layer comprises a convolution layer with a convolution kernel size of 3×3×3, a LeakyReLU activation function layer and a maximum pooling layer;
S3, respectively performing feature re-weighting on the feature maps output by the LeakyReLU activation function layers in the first, second and third down-sampling process layers of the down-sampling, to obtain three weighted feature maps, sequentially: a first weighted feature map, a second weighted feature map and a third weighted feature map;
S4, performing a 1×1×1 convolution on the feature map output in step S2, and outputting the deformation field φ₀ from the floating image I_M to the fixed image I_F;
S5, inputting the feature map output in step S2 into an up-sampling layer for up-sampling, wherein the up-sampling layer comprises 3 up-sampling process layers, each up-sampling process layer comprises an UpSampling layer and a convolution layer with a convolution kernel size of 3×3×3, and a LeakyReLU activation function layer follows the 3×3×3 convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes of the up-sampling, and are sequentially marked as a first up-sampling process layer, a second up-sampling process layer and a third up-sampling process layer according to the order of the 3 up-sampling processes;
the feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and then used as the input of the 3×3×3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and then used as the input of the 3×3×3 convolution layer in the second up-sampling process layer; the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and then used as the input of the 3×3×3 convolution layer in the third up-sampling process layer;
S6, performing a 1×1×1 convolution respectively on the feature maps output by the first up-sampling process layer, the second up-sampling process layer and the third up-sampling process layer, and outputting the deformation fields from the floating image I_M to the fixed image I_F corresponding to the first, second and third up-sampling process layers, denoted in turn as deformation field φ₁, deformation field φ₂ and deformation field φ₃;
S7, inputting the floating image I_M and the deformation field φ₀ output above into a spatial transformation network, inputting the floating image I_M and the deformation field φ₁ output above into the spatial transformation network, inputting the floating image I_M and the deformation field φ₂ output above into the spatial transformation network, and inputting the floating image I_M and the deformation field φ₃ output above into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images of the floating image I_M are correspondingly obtained, denoted in turn as deformed image I_M∘φ₀, deformed image I_M∘φ₁, deformed image I_M∘φ₂ and deformed image I_M∘φ₃;
S8, based on the deformation fields output above and the deformed images I_M∘φ₀, I_M∘φ₁, I_M∘φ₂ and I_M∘φ₃ obtained above, computing the loss between the fixed image I_F and the deformed images using a loss function, and performing back-propagation optimization on the neural network until the calculated loss function value no longer decreases or the network training reaches a preset number of training iterations, whereupon the neural network training is finished and the trained neural network model is obtained;
the computational expression of the loss function is:

L = α·L_sim(I_F, I_M∘φ₃) + β·[L_sim(I_F^(1), I_M∘φ₂) + L_sim(I_F^(2), I_M∘φ₁) + L_sim(I_F^(3), I_M∘φ₀)] + λ·L_reg,

in the formula, L represents the calculated loss function value; α and β are both constants, and α + β = 1; L_reg is a regularization term on the predicted deformation field, and λ is a regularization control constant parameter; I_F^(1), I_F^(2) and I_F^(3) represent the three-dimensional medical images obtained in advance by down-sampling the fixed image I_F; the three-dimensional medical images I_F^(1), I_F^(2) and I_F^(3) are sequentially of the same size as the deformed images I_M∘φ₂, I_M∘φ₁ and I_M∘φ₀, their resolutions are successively lower, and all are lower than the resolution of the fixed image I_F; L_sim(I_F, I_M∘φ₃) represents the similarity measure between the fixed image I_F and the deformed image I_M∘φ₃, and each remaining L_sim term represents the similarity measure between a down-sampled fixed image I_F^(k) and the deformed image of the same size; all L_sim terms adopt the same similarity metric function;
the step of implementing the feature re-weighting in step S3 includes:
step S31, recording each feature map to be feature-re-weighted output in the down-sampling as a feature map X, X ∈ R^(H×W×D); slicing the feature map X in its D dimension, and performing global average pooling on each slice x ∈ R^(H×W) obtained by slicing using a global average pooling strategy, to obtain a slice descriptor z of each slice x in the D dimension of the feature map X, wherein the specific formula of each slice descriptor z is as follows:

z = (1/(H×W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),
wherein (i, j) represents a pixel point on the slice x, and x (i, j) represents a gray value of the slice x at the pixel point (i, j);
step S32, obtaining a weight S of each slice X in the dimension D of the feature map X, where the calculation formula of the weight S of each slice X is as follows:
s=σ(δ(z)),
wherein σ represents a sigmoid activation function, δ is a ReLU activation function, and z is a slice descriptor of the slice x obtained in step S31;
step S33, correspondingly loading each weight s acquired in step S32 onto the corresponding slice, to obtain the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, wherein the feature re-weighting calculation formula corresponding to each slice x is as follows:

x̃ = F_scale(x, s) = s · x,

in the formula, F_scale(x, s) represents the multiplication operation between the slice x and its corresponding weight s;
step S34, based on the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X obtained in step S33, correspondingly obtaining the re-weighted feature map X̃ corresponding to the feature map X.
2. The unsupervised three-dimensional medical image registration method based on neural network as claimed in claim 1, wherein said similarity measure function is a cross-correlation function.
3. The unsupervised three-dimensional medical image registration method based on neural network as claimed in claim 1, wherein the spatial transformation network employs STN spatial transformation network.
4. The method of claim 1, wherein the preprocessing further comprises data enhancement;
the data enhancement comprises the following steps: respectively performing bending transformation on each obtained floating image to obtain a bent transformed image corresponding to each obtained floating image; the resulting warped images are newly added floating images.
5. An unsupervised three-dimensional medical image registration system based on a neural network, comprising:
an image acquisition unit for acquiring three-dimensional medical images from the public data sets OASIS and ADNI, and/or acquiring three-dimensional medical images from the DICOM interface of a CT, MRI or ultrasonic imager;
an image preprocessing unit for preprocessing the acquired three-dimensional medical images, the preprocessing comprising image segmentation, cropping, normalization and affine alignment; any one image is selected from the affine-aligned images as the fixed image I_F, and the remaining images are taken as floating images I_M, wherein the sizes of the cropped images are consistent;
a neural network training unit for training a neural network based on the fixed image I_F and the floating images I_M obtained after preprocessing, to obtain a trained neural network model;
an image registration unit for inputting the medical image to be registered into the trained neural network model for registration, to obtain and output a registered image of the medical image to be registered;
wherein, the neural network training unit comprises:
an input module for inputting the fixed image I_F and the floating images I_M obtained after preprocessing into the neural network as its input layer, wherein each set of input data comprises the fixed image I_F and one floating image I_M;
a down-sampling module for down-sampling the fixed image I_F and the floating image I_M input at the input layer, and outputting a feature map; the down-sampling comprises 3 down-sampling processes, followed by a convolution calculation process with a convolution kernel size of 3×3×3 and a LeakyReLU activation function calculation process; the 3 down-sampling processes correspond to 3 down-sampling process layers, which are sequentially marked as a first down-sampling process layer, a second down-sampling process layer and a third down-sampling process layer according to the execution order of the down-sampling processes; each down-sampling process layer comprises a convolution layer with a convolution kernel size of 3×3×3, a LeakyReLU activation function layer and a maximum pooling layer;
the reweighting module is used for respectively performing characteristic reweighting on the characteristic graphs output by the LeakyReLU activation function layer in the first downsampling process layer, the second downsampling process layer and the third downsampling process layer corresponding to downsampling to obtain three weighted characteristic graphs, and the three weighted characteristic graphs are sequentially: a first weighted feature map, a second weighted feature map, and a third weighted feature map;
a first deformation field output module for performing a 1×1×1 convolution on the feature map output by the down-sampling module, and outputting the deformation field φ₀ from the floating image I_M to the fixed image I_F;
the up-sampling module is used for inputting the feature map output by the down-sampling module into an up-sampling layer for up-sampling; the up-sampling layer comprises 3 up-sampling process layers, each up-sampling process layer comprises an UpSampling layer and a convolution layer with a convolution kernel size of 3×3×3, and a LeakyReLU activation function layer follows the 3×3×3 convolution layer in each up-sampling process layer; the 3 up-sampling process layers correspond to the 3 up-sampling processes of the up-sampling, and are sequentially marked as a first up-sampling process layer, a second up-sampling process layer and a third up-sampling process layer according to the order of the 3 up-sampling processes; the feature map output by the UpSampling layer of the first up-sampling process layer is fused with the third weighted feature map and then used as the input of the 3×3×3 convolution layer in the first up-sampling process layer; the feature map output by the UpSampling layer of the second up-sampling process layer is fused with the second weighted feature map and then used as the input of the 3×3×3 convolution layer in the second up-sampling process layer; the feature map output by the UpSampling layer of the third up-sampling process layer is fused with the first weighted feature map and then used as the input of the 3×3×3 convolution layer in the third up-sampling process layer;
a second deformation field output module for performing a 1×1×1 convolution respectively on the feature maps output by the first up-sampling process layer, the second up-sampling process layer and the third up-sampling process layer, and outputting the deformation fields from the floating image I_M to the fixed image I_F corresponding to the first, second and third up-sampling process layers, denoted in turn as deformation field φ₁, deformation field φ₂ and deformation field φ₃;
a spatial transformation module for inputting the floating image I_M and the deformation field φ₀ output above into a spatial transformation network, inputting the floating image I_M and the deformation field φ₁ output above into the spatial transformation network, inputting the floating image I_M and the deformation field φ₂ output above into the spatial transformation network, and inputting the floating image I_M and the deformation field φ₃ output above into the spatial transformation network; through the spatial transformation of the spatial transformation network, the deformed images of the floating image I_M are correspondingly obtained, denoted in turn as deformed image I_M∘φ₀, deformed image I_M∘φ₁, deformed image I_M∘φ₂ and deformed image I_M∘φ₃;
a neural network optimization module for computing, based on the deformation fields output above and the deformed images I_M∘φ₀, I_M∘φ₁, I_M∘φ₂ and I_M∘φ₃ obtained above, the loss between the fixed image I_F and the deformed images using a loss function, and performing back-propagation optimization on the neural network until the calculated loss function value no longer decreases or the network training reaches a preset number of training iterations, whereupon the neural network training is finished and the trained neural network model is obtained;
the computational expression of the loss function is:

L = α·L_sim(I_F, I_M∘φ₃) + β·[L_sim(I_F^(1), I_M∘φ₂) + L_sim(I_F^(2), I_M∘φ₁) + L_sim(I_F^(3), I_M∘φ₀)] + λ·L_reg,

in the formula, L represents the calculated loss function value; α and β are both constants, and α + β = 1; L_reg is a regularization term on the predicted deformation field, and λ is a regularization control constant parameter; I_F^(1), I_F^(2) and I_F^(3) represent the three-dimensional medical images obtained in advance by down-sampling the fixed image I_F; the three-dimensional medical images I_F^(1), I_F^(2) and I_F^(3) are sequentially of the same size as the deformed images I_M∘φ₂, I_M∘φ₁ and I_M∘φ₀, their resolutions are successively lower, and all are lower than the resolution of the fixed image I_F; L_sim(I_F, I_M∘φ₃) represents the similarity measure between the fixed image I_F and the deformed image I_M∘φ₃, and each remaining L_sim term represents the similarity measure between a down-sampled fixed image I_F^(k) and the deformed image of the same size; all L_sim terms adopt the same similarity metric function;
the re-weighting module comprises:
a descriptor obtaining module for recording each feature map to be feature-re-weighted output in the down-sampling as a feature map X, X ∈ R^(H×W×D); slicing the feature map X in its D dimension, and performing global average pooling on each slice x ∈ R^(H×W) obtained by slicing using a global average pooling strategy, to obtain a slice descriptor z of each slice x in the D dimension of the feature map X, wherein the specific formula of each slice descriptor z is as follows:

z = (1/(H×W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x(i, j),
wherein (i, j) represents a pixel point on the slice x, and x (i, j) represents a gray value of the slice x at the pixel point (i, j);
the slice weight calculation module is used for acquiring the weight s of each slice X on the D dimension of the feature map X, wherein the calculation formula of the weight s of each slice X is as follows:
s=σ(δ(z)),
wherein, σ represents a sigmoid activation function, δ is a ReLU activation function, and z is a slice descriptor of a slice x obtained by the descriptor obtaining module;
a weighting module for correspondingly loading each weight s obtained by the slice weight calculation module onto the corresponding slice, to obtain the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, wherein the feature re-weighting calculation formula corresponding to each slice x is as follows:

x̃ = F_scale(x, s) = s · x,

in the formula, F_scale(x, s) represents the multiplication operation between the slice x and its corresponding weight s;
a weighted image acquisition module for correspondingly obtaining, based on the weighted slice x̃ corresponding to each slice x in the D dimension of the feature map X, the re-weighted feature map X̃ corresponding to the feature map X.
6. The unsupervised three-dimensional medical image registration system based on neural network as claimed in claim 5, wherein said similarity measure function is a cross-correlation function.
7. The unsupervised three-dimensional medical image registration system based on neural network as claimed in claim 5, wherein the spatial transformation network employs STN spatial transformation network.
8. The unsupervised three-dimensional medical image registration system based on neural network as claimed in claim 5, wherein said preprocessing in the image preprocessing unit further comprises data enhancement; wherein said data enhancement comprises: respectively performing bending transformation on each obtained floating image to obtain a bent transformed image corresponding to each obtained floating image; the resulting warped images are newly added floating images.
CN201910828807.7A 2019-09-03 2019-09-03 Unsupervised three-dimensional medical image registration method and system based on neural network Active CN110599528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910828807.7A CN110599528B (en) 2019-09-03 2019-09-03 Unsupervised three-dimensional medical image registration method and system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910828807.7A CN110599528B (en) 2019-09-03 2019-09-03 Unsupervised three-dimensional medical image registration method and system based on neural network

Publications (2)

Publication Number Publication Date
CN110599528A CN110599528A (en) 2019-12-20
CN110599528B true CN110599528B (en) 2022-05-27

Family

ID=68857220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910828807.7A Active CN110599528B (en) 2019-09-03 2019-09-03 Unsupervised three-dimensional medical image registration method and system based on neural network

Country Status (1)

Country Link
CN (1) CN110599528B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027508B (en) * 2019-12-23 2022-09-06 电子科技大学 Remote sensing image coverage change detection method based on deep neural network
CN111242877A (en) * 2019-12-31 2020-06-05 北京深睿博联科技有限责任公司 Mammary X-ray image registration method and device
CN111091575B (en) * 2019-12-31 2022-10-18 电子科技大学 Medical image segmentation method based on reinforcement learning method
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111524170B (en) * 2020-04-13 2023-05-26 中南大学 Pulmonary CT image registration method based on unsupervised deep learning
CN111768379A (en) * 2020-06-29 2020-10-13 深圳度影医疗科技有限公司 Standard section detection method of three-dimensional uterine ultrasound image
CN112102373A (en) * 2020-07-29 2020-12-18 浙江工业大学 Carotid artery multi-mode image registration method based on strong constraint affine deformation feature learning
CN112150425A (en) * 2020-09-16 2020-12-29 北京工业大学 Unsupervised intravascular ultrasound image registration method based on neural network
CN112837357B (en) * 2021-02-25 2024-03-05 平安科技(深圳)有限公司 Medical image registration method, device, computer equipment and storage medium
CN113034453B (en) * 2021-03-16 2023-01-10 深圳先进技术研究院 Mammary gland image registration method based on deep learning
CN112907439B (en) * 2021-03-26 2023-08-08 中国科学院深圳先进技术研究院 Deep learning-based supine position and prone position breast image registration method
CN113344876B (en) * 2021-06-08 2023-05-12 安徽大学 Deformable registration method between CT and CBCT
CN113450397B (en) * 2021-06-25 2022-04-01 广州柏视医疗科技有限公司 Image deformation registration method based on deep learning
CN113409291B (en) * 2021-06-29 2022-04-22 山东大学 Lung 4D-CT medical image registration method and system
CN113763441B (en) * 2021-08-25 2024-01-26 中国科学院苏州生物医学工程技术研究所 Medical image registration method and system without supervision learning
CN113724307B (en) * 2021-09-02 2023-04-28 深圳大学 Image registration method and device based on characteristic self-calibration network and related components
CN114170276A (en) * 2021-10-15 2022-03-11 烟台大学 Magnetic resonance brain image hippocampus registration method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584283A (en) * 2018-11-29 2019-04-05 合肥中科离子医学技术装备有限公司 A kind of Medical Image Registration Algorithm based on convolutional neural networks
CN109767459A (en) * 2019-01-17 2019-05-17 中南大学 Novel ocular base map method for registering
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8805035B2 (en) * 2010-05-03 2014-08-12 Mim Software, Inc. Systems and methods for contouring a set of medical images
KR102294734B1 (en) * 2014-09-30 2021-08-30 삼성전자주식회사 Method and apparatus for image registration, and ultrasonic diagnosis apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584283A (en) * 2018-11-29 2019-04-05 合肥中科离子医学技术装备有限公司 A kind of Medical Image Registration Algorithm based on convolutional neural networks
CN109767459A (en) * 2019-01-17 2019-05-17 中南大学 Novel ocular base map method for registering
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Deformable Image Registration Using a Cue-Aware Deep Regression Network;Xiaohuan Cao 等;《IEEE Transactions on Biomedical Engineering》;20180930;第65卷(第9期);全文 *
Non-rigid image registration using self-supervised fully convolutional networks without training data;Hongming Li 等;《2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)》;20180524;全文 *
Non-rigid image registration based on image features and optical flow field; Ji Huizhong et al.; Optics and Precision Engineering; 2017-09-30; Vol. 25, No. 9; full text *
Fitting-precision-guided diffusion-weighted image registration; Zhang Wenhua et al.; Computer Science; 2018-05-31; Vol. 45, No. 5; full text *

Also Published As

Publication number Publication date
CN110599528A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110599528B (en) Unsupervised three-dimensional medical image registration method and system based on neural network
US20210232924A1 (en) Method for training smpl parameter prediction model, computer device, and storage medium
CN113077471B (en) Medical image segmentation method based on U-shaped network
CN110321920B (en) Image classification method and device, computer readable storage medium and computer equipment
CN108776969B (en) Breast ultrasound image tumor segmentation method based on full convolution network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
Eppenhof et al. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks
US7945117B2 (en) Methods and systems for registration of images
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
WO2020007941A1 (en) Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning
Li et al. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images
CN111640120A (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN115830016B (en) Medical image registration model training method and equipment
Liu et al. The measurement of Cobb angle based on spine X-ray images using multi-scale convolutional neural network
WO2023044605A1 (en) Three-dimensional reconstruction method and apparatus for brain structure in extreme environments, and readable storage medium
CN111080658A (en) Cervical MRI image segmentation method based on deformable registration and DCNN
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
CN115115676A (en) Image registration method, device, equipment and storage medium
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN113822323A (en) Brain scanning image identification processing method, device, equipment and storage medium
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN111369598B (en) Deep learning model training method and device, and application method and device
CN114787816A (en) Data enhancement for machine learning methods
CN114565626A (en) Lung CT image segmentation algorithm based on PSPNet improvement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant