CN116523983B - Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance - Google Patents


Info

Publication number
CN116523983B
Authority
CN
China
Prior art keywords
image
layer
feature
convolution
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310755028.5A
Other languages
Chinese (zh)
Other versions
CN116523983A (en)
Inventor
叶颀
陈文善
朱致鹏
方驰华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202310755028.5A
Publication of CN116523983A
Application granted
Publication of CN116523983B
Legal status: Active
Anticipated expiration


Classifications

    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06T 7/0012: Biomedical image inspection
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/30004: Biomedical image processing

Abstract

The invention relates to the technical field of pancreatic CT image registration and discloses a pancreatic CT image registration method that fuses multipath features with organ morphology guidance, comprising the following steps. S1: acquire a reference image and an image to be registered, both pancreatic CT images; S2: preprocess the acquired reference image and image to be registered; S3: register the preprocessed image to be registered through an organ-morphology-guided registration network to obtain a pancreas-morphology-guided deformation field; S4: extract and fuse multipath features of the preprocessed image to be registered through a multipath feature registration network to obtain a multipath feature deformation field; S5: composite the pancreas-morphology-guided deformation field with the multipath feature deformation field; S6: apply the composite deformation field obtained in step S5 to the preprocessed image to be registered to obtain a deformed image, completing the registration of the pancreatic CT images. The resulting registered image has higher accuracy and better robustness and smoothness.

Description

Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance
Technical Field
The invention relates to the technical field of pancreatic CT image registration, and in particular to a pancreatic CT image registration method that fuses multipath features with organ morphology guidance.
Background
Pancreatic CT image registration and fusion are important both for preoperative diagnosis of pancreatic cancer and for postoperative monitoring. On one hand, two scans, one in the arterial phase and one in the venous phase, are usually taken during examination to capture the features that enhance in each phase. Registering the two phases fuses those features, such as the enhancement details of the blood vessels, the pancreas, and the surrounding organs, which helps doctors infer the patient's condition more accurately and plan treatment at the preoperative diagnosis stage. On the other hand, patients usually require follow-up after surgery; registering preoperative and postoperative CT images makes it possible to monitor postoperative recovery effectively and deliver more targeted treatment. Accurate registration of the pancreas therefore aids both preoperative diagnosis and postoperative assessment.
Currently, registration methods for CT images fall into two main categories: traditional model-driven registration techniques and data-driven deep learning methods.
Traditional model-driven registration techniques mainly include the following. (1) Feature-matching-based registration, which registers images by extracting feature points or feature regions. It suits images with distinct feature points or regions, but it has mainly been applied to two-dimensional images, and mature feature-based methods for three-dimensional images, especially abdominal CT images, are lacking. (2) Non-parametric-model-based registration, whose classical models are Demons and its variants such as Symmetric Demons and Diffeomorphic Demons. These are global coordinate-transformation algorithms that are not robust to noise and artifacts, struggle with large, complex deformations, and are computationally expensive: every pair of input images requires a long iterative optimization. (3) Physical-model-driven registration, which approximates the deformation of an image as the physical motion of an object. Its robustness on images with noise and artifacts is poor; its time complexity is high, since the whole iterative process must be repeated for every registration; and it has many parameters that must be tuned by hand for each type of image to achieve the best result.
Data-driven deep learning registration techniques mainly include the following. (1) STN (Spatial Transformer Network) registration, which realizes registration with a spatial transformer network. It can only handle transformations such as translation, rotation, and scaling and cannot capture complex deformations; it places high demands on the accuracy and quality of the input data, requiring careful preprocessing; and it needs large amounts of training data and compute to reach its best results, so it may be unsuitable for small datasets. (2) VoxelMorph registration, which uses a convolutional neural network to predict a transform field between two three-dimensional medical images and maps each pixel of the floating image onto the fixed image through that field to realize registration. Its results on images with noise and artifacts are unsatisfactory, it likewise requires careful data preprocessing, and it needs large amounts of training data and compute; with a small training set it may overfit.
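Neither method's code appears in the patent, but the warping step both share, mapping each pixel of the floating image through a dense transform field, can be illustrated in 2-D with NumPy/SciPy (the function name and example values are ours, not the patent's):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, disp):
    """Warp a 2-D image with a dense displacement field.

    moving : (H, W) array, the floating image
    disp   : (2, H, W) array; disp[0] holds row offsets, disp[1] column offsets.
    Each output pixel p is sampled from `moving` at p + disp(p) (bilinear).
    """
    h, w = moving.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + disp[0], cols + disp[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# A constant displacement of (0, 1) samples each pixel from one column to its right.
img = np.arange(16.0).reshape(4, 4)
disp = np.zeros((2, 4, 4))
disp[1] = 1.0
warped = warp_image(img, disp)
```

A learned registration network would predict `disp` instead of it being handcrafted; the sampling step is the same in 3-D with a third coordinate plane.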
In summary, although traditional registration methods are interpretable, their accuracy is usually lower than that of deep-learning models, and their efficiency is extremely low: for every new pair of images to be registered, the program must be rerun through a long iterative optimization to obtain a result. Deep-learning registration instead fixes the model parameters after training on a large dataset; afterwards, feeding images into the network yields a highly accurate registration within seconds, and it can handle more complex deformations than traditional methods. However, it depends heavily on data and needs a large amount of carefully preprocessed data to avoid the risk of overfitting. Moreover, both families of methods handle the artifacts and noise present in CT images poorly.
Furthermore, while the above registration algorithms achieve good results on brain images or on organs such as the chest, lungs, or liver, the organs in abdominal CT images adhere closely to one another, the pancreas occupies only a small region, and abdominal background noise interferes; as a result, pancreas enhancement and registration are worse than for those other regions, and registration sometimes fails entirely.
Disclosure of Invention
The invention aims to provide a pancreatic CT image registration method that fuses multipath features with organ morphology guidance and achieves high accuracy together with good robustness and smoothness.
To achieve the above object, the present invention provides a pancreatic CT image registration method fusing multipath features and organ morphology guidance, comprising the following steps:
S1: acquire a reference image and an image to be registered, both pancreatic CT images;
S2: preprocess the acquired reference image and image to be registered;
S3: register the preprocessed image to be registered through an organ-morphology-guided registration network to obtain a pancreas-morphology-guided deformation field;
S4: extract and fuse multipath features of the preprocessed image to be registered through a multipath feature registration network to obtain a multipath feature deformation field;
S5: composite the pancreas-morphology-guided deformation field and the multipath feature deformation field;
S6: apply the composite deformation field obtained in step S5 to the preprocessed image to be registered to obtain a deformed image, completing the registration of the pancreatic CT images.
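Steps S5 and S6 composite the two deformation fields and apply the result, but the patent does not state the composition rule. One common convention, treating both as displacement fields and resampling the first along the second, can be sketched as follows (the helper name and ordering convention are our assumptions, shown in 2-D for brevity):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u_first, u_second):
    """Displacement field equivalent to warping with u_first, then u_second.

    Both fields have shape (2, H, W). With the warping convention
    out(x) = img(x + u(x)), the composite is
    u(x) = u_second(x) + u_first(x + u_second(x)).
    """
    h, w = u_second.shape[1:]
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sample = np.stack([rows + u_second[0], cols + u_second[1]])
    # Resample u_first at the positions the second warp reads from.
    u_first_resampled = np.stack(
        [map_coordinates(u_first[c], sample, order=1, mode="nearest") for c in (0, 1)]
    )
    return u_second + u_first_resampled

# Two constant shifts compose to their sum.
u_a = np.zeros((2, 4, 4))
u_a[0] = 1.0   # shift rows by 1
u_b = np.zeros((2, 4, 4))
u_b[1] = 2.0   # shift columns by 2
u = compose_displacements(u_a, u_b)
```

Which field plays which role, and whether simple addition suffices instead, is a design choice the patent leaves open.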
Preferably, step S2 includes:
S201: resample the reference image and the image to be registered so that they share the same size and pixel spacing;
S202: pre-register the resampled reference image and image to be registered to align their coordinates;
S203: assign pseudo labels to the pre-registered reference image and image to be registered to obtain the pseudo-label images corresponding to each.
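Resampling to a common spacing (S201) is commonly done with a medical-imaging toolkit such as SimpleITK; the spacing arithmetic itself can be sketched minimally with NumPy/SciPy (the function name is ours, not the patent's):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(volume, spacing, target_spacing):
    """Resample a CT volume so its voxel spacing matches target_spacing.

    volume         : (D, H, W) array of intensities
    spacing        : current (z, y, x) voxel spacing in mm
    target_spacing : desired (z, y, x) voxel spacing in mm
    Halving the spacing doubles the number of voxels along that axis.
    """
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)  # linear interpolation

vol = np.zeros((10, 32, 32))
resampled = resample_to_spacing(vol, spacing=(2.0, 1.0, 1.0),
                                target_spacing=(1.0, 1.0, 1.0))
```

After both volumes share a spacing, a crop or pad to a common array size completes the "same size" requirement of S201.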
Preferably, step S3 includes:
S301: crop the pseudo-label images corresponding to the reference image and the image to be registered obtained in step S203 to obtain the pancreas target-area maps of both images;
S302: combine the pancreas target-area map of the reference image cropped in S301 with the pancreas target-area map of the image to be registered;
S303: input the combined image of step S302 into the organ-morphology-guided registration network for encoding and extract a feature map;
S304: decode the encoded feature map and output the pancreas-morphology-guided deformation field.
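S301 does not spell out the cropping procedure. A plausible sketch, assuming the pseudo label is a binary pancreas mask and using a helper name of our own, crops the image to the mask's bounding box (2-D for brevity):

```python
import numpy as np

def crop_to_mask(image, mask, margin=1):
    """Crop `image` to the bounding box of a binary `mask`, plus a margin."""
    idx = np.argwhere(mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, mask.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return image[slices]

img = np.arange(100.0).reshape(10, 10)
mask = np.zeros((10, 10))
mask[4:7, 3:6] = 1          # hypothetical pancreas pseudo label
roi = crop_to_mask(img, mask, margin=1)
```

The two cropped target-area maps would then be stacked channel-wise to form the combined network input described in S302.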
Preferably, in step S3 the organ-morphology-guided registration network comprises an encoding stage, a decoding stage, and a deformation stage. The encoding stage extracts features from the input image to be registered; the decoding stage outputs the registered pancreas-morphology-guided deformation field; and the deformation stage uses the deformation field output by the decoding network as its network parameter vector to deform the image to be registered into the registered image.
Preferably, the encoding stage comprises a first coding layer and a downsampling layer. The first coding layer contains 16 convolution kernels of size 3×3, a regularization layer, and an activation-function layer; the kernels slide with stride 1 to produce a feature map, with 2 input channels and 16 output channels. The downsampling layer applies mean pooling so that the downsampled feature map is 1/2 the size of the original image.
The decoding stage comprises a first decoding layer, an upsampling layer, a second decoding layer, a first convolution layer, and a second convolution layer connected in sequence. The first decoding layer contains 32 convolution kernels of size 3×3, a regularization layer, and an activation-function layer, with 16 input channels and 32 output channels. The upsampling layer doubles the size of the feature map, and a skip connection concatenates the same-sized feature maps from the first coding layer and the first decoding layer into a 48-channel feature map. The second decoding layer contains 32 convolution kernels of size 3×3, a regularization layer, and an activation-function layer, with 48 input channels and 32 output channels. The first convolution layer contains 16 convolution kernels of size 3×3, a regularization layer, and an activation-function layer, with 32 input channels and 16 output channels. The second convolution layer contains 3 convolution kernels of size 3×3, a regularization layer, and an activation-function layer, with 32 input channels and 3 output channels, and outputs a 3-channel deformation field of the same size as the input image.
Preferably, the deformation stage adopts an STN, and the loss function of the organ-morphology-guided registration network adopts the NCC (normalized cross-correlation) loss.
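Registration networks typically use a local, windowed NCC; as a minimal illustration of the similarity measure named here, a global-NCC loss in NumPy (our own formulation, not code from the patent):

```python
import numpy as np

def ncc_loss(fixed, warped, eps=1e-8):
    """Negative normalized cross-correlation between two images.

    Returns a value in [-1, 0]: -1 means perfectly correlated, so minimizing
    the loss drives the warped image towards the fixed one.
    """
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    ncc = (f * w).sum() / (np.sqrt((f * f).sum() * (w * w).sum()) + eps)
    return -ncc

a = np.random.default_rng(0).normal(size=(8, 8))
loss_same = ncc_loss(a, a)            # identical images
loss_scaled = ncc_loss(a, 3 * a + 2)  # NCC ignores affine intensity changes
```

The intensity-invariance shown in the second call is why NCC is preferred over plain mean-squared error for CT phases whose contrast enhancement differs.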
Preferably, in step S4 the multipath features extracted by the multipath feature registration network come from three input paths: the first path takes the window image that, in radiomics practice, is best suited to observing the abdominal organs; the second path takes the window image corresponding to the edge-feature image screened out by adaptive-threshold Canny edge detection; the third path takes a window image that has been denoised and enhanced relative to the original image.
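The patent's adaptive-threshold Canny detector is not reproduced here. As a hedged stand-in showing how an edge map with a data-driven threshold can be computed, the sketch below combines Sobel gradient magnitude with Otsu's threshold (both substitutions are ours, not the patent's exact pipeline):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, bin_edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(p)            # cumulative class-0 weight
    m = np.cumsum(p * centers)   # cumulative mean
    mt = m[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

def edge_map(image):
    """Binary edge map: Sobel gradient magnitude thresholded by Otsu."""
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    mag = np.hypot(gx, gy)
    return mag > otsu_threshold(mag.ravel())

# A vertical step: edges should appear only along the boundary columns.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = edge_map(img)
```

A full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis; the point here is only that the threshold adapts to the image's own gradient statistics rather than being fixed by hand.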
Preferably, step S4 includes:
S401: window the coordinate-aligned reference image and image to be registered obtained in step S202 to produce images of different window widths;
S402: select, for the reference image and the image to be registered, the windowed images with window level 45 and window width 350, window level 0 and window width 150, and window level 100 and window width 1000, and concatenate the images sharing the same window settings to obtain three concatenated images;
S403: input the three concatenated images obtained in step S402 into the multipath feature registration network, taking the window-level-45, window-width-350 image as the input of the first path, the window-level-0, window-width-150 image as the input of the second path, and the window-level-100, window-width-1000 image as the input of the third path;
S404: apply multi-level encoding to the images input on the three paths to obtain feature maps of different levels and resolutions, and concatenate and fuse the same-level, same-resolution feature maps across the three paths to obtain encoded feature maps at each resolution;
S405: decode the encoded feature maps obtained in step S404 and output the multipath feature deformation field.
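The window settings in S402 and S403 (level/width pairs 45/350, 0/150, and 100/1000) correspond to standard Hounsfield-unit windowing, which can be sketched as follows (the function name is ours, not the patent's):

```python
import numpy as np

def apply_window(hu, level, width):
    """Clip a CT image (Hounsfield units) to a window and scale to [0, 1].

    level : window center in HU; width : window width in HU.
    Values below level - width/2 map to 0, above level + width/2 to 1.
    """
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

hu = np.array([-1000.0, -130.0, 45.0, 220.0, 1000.0])
soft_tissue = apply_window(hu, level=45, width=350)   # the patent's first path
```

The narrower 0/150 window concentrates contrast in soft tissue for the edge-detection path, while the wide 100/1000 window preserves something close to the original dynamic range for the third path.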
Preferably, the multipath feature registration network comprises an encoding stage, a decoding stage, and a deformation stage. The encoding stage is divided into three paths: the first path takes the image with window level 45 and window width 350; the second path takes the feature map produced by adaptive-threshold Canny edge detection on the image with window level 0 and window width 150; and the third path takes the image with window level 100 and window width 1000, which is close to the original image.
The first and second paths share the same encoding structure: five coding layers, each comprising a convolution block that fuses the SENet, ResNet, and Inception network structures, with mean-pooling downsampling between coding layers so that each feature map is 1/2 the size of the layer above.
The third path differs from the first two and comprises both encoding and decoding stages. Its encoding stage also has five layers, each structured like those of the first two paths, but before each coding layer, the feature maps of the corresponding resolution level from the first and second paths are first fused by one convolution, and the fused feature map is then fused with the same-resolution feature map of the third path by a further round of convolution.
After the encoding stage, the fused feature maps of the three paths are available at five resolution levels, and the decoding stage begins. The decoding stage likewise has five layers. The input to each decoding layer is the feature map of the corresponding resolution level from the third path's encoding stage together with the upsampled feature map from the previous decoding layer, joined by a skip connection; each decoding layer applies two rounds of convolution to its input, the upsampled feature map being twice the size of the layer above. A final convolution over the feature map of the last decoding layer yields the multipath feature deformation field.
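The repeated halving between coding layers is plain mean pooling. In 2-D (the patent operates on volumes, but the idea is identical), a stride-2 mean pool can be sketched without any framework:

```python
import numpy as np

def mean_pool_2x(feat):
    """2x2 mean pooling: halves each spatial dimension, as between coding layers.

    feat : (H, W) array with even H and W.
    """
    h, w = feat.shape
    return feat.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

f = np.arange(16.0).reshape(4, 4)
pooled = mean_pool_2x(f)   # each output value averages a 2x2 block
```

Applied four times, as in the five-level encoder, this takes a feature map down to 1/16 of its original size, matching the resolution ladder described above.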
Preferably, the encoding structure of the first and second paths comprises, connected in sequence, a first coding layer, a first downsampling layer, a second coding layer, a second downsampling layer, a third coding layer, a third downsampling layer, a fourth coding layer, a fourth downsampling layer, and a fifth coding layer:
the first coding layer applies one round of 3×3 convolutional encoding to the input image, changing the number of channels from 2 to 16;
the first downsampling layer downsamples the feature map from the first coding layer by mean pooling, making it 1/2 the size of the original image;
the second coding layer applies one round of 3×3 convolutional encoding to the feature map from the first downsampling layer, changing the number of channels from 32 to 64;
the second downsampling layer downsamples the feature map from the second coding layer by mean pooling, making it 1/4 the size of the original image;
the third coding layer applies one round of 3×3 convolutional encoding to the feature map from the second downsampling layer, changing the number of channels from 64 to 128;
the third downsampling layer downsamples the feature map from the third coding layer by mean pooling, making it 1/8 the size of the original image;
the fourth coding layer applies one round of 3×3 convolutional encoding to the feature map from the third downsampling layer, changing the number of channels from 128 to 256;
the fourth downsampling layer downsamples the feature map from the fourth coding layer by mean pooling, making it 1/16 the size of the original image;
the fifth coding layer applies one round of 3×3 convolutional encoding to the feature map from the fourth downsampling layer, changing the number of channels from 256 to 512.
The decoding stage comprises, connected in sequence, a first decoding layer, a first upsampling layer, a second decoding layer, a second upsampling layer, a third decoding layer, a third upsampling layer, a fourth decoding layer, a fourth upsampling layer, and a fifth decoding layer:
the first decoding layer comprises one convolution process, applying one round of 3×3 convolutional decoding to the feature map from the fifth coding layer to obtain a 256-channel feature map;
the first upsampling layer deconvolves the feature map from the first decoding layer to 1/8 the size of the original image and, via a skip connection, merges the upsampled feature map with the feature map from the fourth coding layer into a 256-channel feature map;
the second decoding layer comprises two convolution processes: 256 convolution kernels first produce a 256-channel feature map from the concatenated output of the first upsampling layer, and 128 kernels then produce a 128-channel feature map;
the second upsampling layer deconvolves the feature map from the second decoding layer to 1/4 the size of the original image and, via a skip connection, merges it with the feature map from the third coding layer into a 128-channel feature map;
the third decoding layer comprises two convolution processes: 128 kernels first produce a 128-channel feature map from the concatenated output of the second upsampling layer, and 64 kernels then produce a 64-channel feature map;
the third upsampling layer deconvolves the feature map from the third decoding layer to 1/2 the size of the original image and, via a skip connection, merges it with the feature map from the second coding layer into a 64-channel feature map;
the fourth decoding layer comprises two convolution processes: 64 kernels first produce a 64-channel feature map from the concatenated output of the third upsampling layer, and 32 kernels then produce a 32-channel feature map;
the fourth upsampling layer deconvolves the feature map from the fourth decoding layer back to the size of the original image and, via a skip connection, merges it with the feature map from the first coding layer into a 32-channel feature map;
the fifth decoding layer comprises two convolution processes: 32 kernels first produce a 32-channel feature map from the concatenated output of the fourth upsampling layer, and 16 kernels then produce a 16-channel feature map.
Compared with the prior art, the invention has the following beneficial effects: the organ-morphology-guided registration network produces a pancreas-morphology-guided deformation field that realizes targeted registration of the pancreas, and the multipath feature registration network performs multipath feature extraction and fusion on the pancreatic CT image, yielding abdominal feature maps that are more target-specific and less noisy, which improves registration accuracy.
Drawings
Fig. 1 is a flowchart of a pancreatic CT image registration method according to an embodiment of the invention.
FIG. 2 compares a pancreatic CT image before and after segmentation according to an embodiment of the present invention, in which (a) is the unsegmented image and (b) is the segmented image.
Fig. 3 shows pancreatic CT images at different window settings according to an embodiment of the present invention, where (a) is the original image, (b) is the image with window level 45 and window width 350, (c) is the image with window level 0 and window width 150, and (d) is the image with window level 100 and window width 1000.
Fig. 4 compares a pancreatic CT image before and after cropping according to an embodiment of the present invention, in which (a) is the uncropped image and (b) is the cropped image.
Fig. 5 is a block diagram of an organ morphology-guided registration network in accordance with an embodiment of the present invention.
Fig. 6 is a comparison chart of the pancreatic CT image edge extraction according to an embodiment of the present invention, wherein (a) is an original image and (b) is an edge feature image.
Fig. 7 is a block diagram of an organ morphology-guided registration network in accordance with an embodiment of the present invention.
Fig. 8 is a block diagram of a registration network incorporating multipath feature network and organ morphology guidance in accordance with an embodiment of the present invention.
Fig. 9 is a flow chart of information flow of a fused multi-path signature network and organ morphology guidance in accordance with an embodiment of the present invention.
Fig. 10 is a flowchart of an example of registration of an embodiment of the present invention.
Fig. 11 is a result diagram of a registration example of an embodiment of the present invention, in which (a) is a reference image, (b) is an image to be registered, and (c) is a registered image.
FIG. 12 compares the results of the method of the present invention with those of prior registration methods, wherein (a) is the registration result of the present method, (a1) is the registration difference map of the present method, and (a2) is the registration difference map of the present method with the pancreas label image superimposed; (b) is the registration result of the VM model, (b1) is the registration difference map of the VM model, and (b2) is the registration difference map of the VM model with the pancreas label image superimposed; (c) is the registration result of the Demons algorithm, (c1) is the registration difference map of the Demons algorithm, and (c2) is the registration difference map of the Demons algorithm with the pancreas label image superimposed.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, a pancreatic CT image registration method for fusing multipath features with organ morphology guidance according to a preferred embodiment of the present invention includes the following steps:
s1: acquiring a reference image and an image to be registered, wherein the reference image and the image to be registered are pancreatic CT images;
s2: preprocessing the acquired reference image and the image to be registered;
s3: registering the preprocessed image to be registered through an organ morphology guiding registration network to obtain a pancreas morphology guiding deformation field;
s4: extracting and fusing multipath features of the preprocessed image to be registered through a multipath feature registration network to obtain a multipath feature deformation field;
s5: compounding the pancreas morphology guide deformation field and the multipath characteristic deformation field;
s6: applying the composite deformation field obtained in step S5 to the preprocessed image to be registered to obtain the deformed image, realizing registration of the pancreatic CT image.
According to this embodiment, the pancreas morphology guiding deformation field is obtained through the organ morphology guiding registration network, realizing directional registration of the pancreas. The multipath feature registration network performs multipath feature extraction and fusion on the pancreatic CT image, obtaining an abdominal feature image that is more target-specific and less noisy, which improves registration accuracy. Combining organ morphology guided registration with multipath feature registration lets the multipath feature registration refine, adjust and correct the result after the directional registration, finally yielding a more robust and smoother deformation field, so that the resulting registered image is more accurate while also being more robust and smooth.
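Step S5's compositing of the two deformation fields can be sketched as follows. This is an illustrative NumPy sketch, not the patent's exact scheme (the patent does not specify the compositing formula or sampling method): it assumes dense displacement fields and composes them by sampling one field at the locations displaced by the other, using nearest-neighbour sampling for brevity.

```python
import numpy as np

def compose_fields(u_first, u_second):
    """Compose two dense 2-D displacement fields of shape (2, H, W).

    The composite moves each pixel by u_second, then by u_first sampled at
    the displaced location: u(x) = u_second(x) + u_first(x + u_second(x)).
    Nearest-neighbour sampling keeps the sketch short; interpolation would
    be used in practice. Which field is applied first is a modelling choice.
    """
    _, h, w = u_first.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.clip(np.round(rr + u_second[0]).astype(int), 0, h - 1)
    c = np.clip(np.round(cc + u_second[1]).astype(int), 0, w - 1)
    return u_second + u_first[:, r, c]

u1 = np.zeros((2, 4, 4)); u1[0] += 1.0   # e.g. organ-morphology field: one row down
u2 = np.zeros((2, 4, 4)); u2[1] += 1.0   # e.g. multipath field: one column right
comp = compose_fields(u1, u2)
print(comp[:, 0, 0])   # net displacement (1, 1)
```

With constant fields the composite is simply their sum; with spatially varying fields the resampling step matters.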
In this embodiment, a pancreatic CT image is acquired, and pre-processing is performed. The preprocessing step is step S2, comprising:
s201: resampling the reference image and the image to be registered to keep the same size and pixel spacing of the reference image and the image to be registered.
In medical image processing, three-dimensional medical images are typically stacked from a series of slices. Each slice contains certain medical image data, and the size and resolution of these data are typically determined by the size and Spacing of the pixels. Spacing refers to the physical distance between adjacent pixels in a medical image. If the Spacing of the medical image data is inconsistent, it may be difficult to compare and match different medical image data during analysis and processing, and thus the Spacing of the medical images needs to be unified. Based on this, the present invention resamples the images to the same Spacing (1, 1); the resampled images maintain the same size and the same pixel pitch.
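The resampling step above can be illustrated with a minimal sketch. `resample_to_spacing` is a hypothetical helper using nearest-neighbour sampling on a 2-D slice; production pipelines would typically use SimpleITK/ITK with linear or B-spline interpolation.

```python
import numpy as np

def resample_to_spacing(img, spacing, new_spacing=(1.0, 1.0)):
    """Nearest-neighbour resample of a 2-D slice to a target pixel spacing.

    img: 2-D array; spacing: (row, col) physical distance per pixel in mm.
    Halving the spacing doubles the pixel count along that axis.
    """
    old_shape = np.array(img.shape, dtype=float)
    factor = np.array(spacing, dtype=float) / np.array(new_spacing, dtype=float)
    new_shape = np.maximum(np.round(old_shape * factor), 1).astype(int)
    # Map each output pixel back to the nearest input pixel.
    rows = np.minimum((np.arange(new_shape[0]) / factor[0]).astype(int), img.shape[0] - 1)
    cols = np.minimum((np.arange(new_shape[1]) / factor[1]).astype(int), img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

slice_2mm = np.arange(16, dtype=float).reshape(4, 4)   # toy slice at 2 mm spacing
resampled = resample_to_spacing(slice_2mm, spacing=(2.0, 2.0))
print(resampled.shape)  # (8, 8): 4 px * 2 mm resampled to 1 mm spacing
```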
S202, pre-registering the resampled reference image and the image to be registered to realize the alignment of the coordinates of the reference image and the image to be registered.
In three-dimensional medical image registration, pre-registration refers to simple preliminary processing of an original medical image before formal registration is performed, so that accuracy and stability of registration are improved. The primary roles of preregistration include the following:
1. improving medical image quality: the pre-registration can perform denoising, smoothing, scaling and other treatments on the original medical image so as to reduce noise interference and improve the image quality, thereby improving the accuracy and reliability of registration.
2. Acceleration of the matching process: the pre-registration can perform cutting, downsampling and other processes on the original medical image so as to reduce the calculated amount and time complexity in the matching process, thereby improving the speed and efficiency of registration.
Based on the method, the image is preregistered, one fixed image is selected, and other images are preregistered on the fixed image to realize the alignment of coordinates.
S203: and distributing pseudo labels to the pre-registered reference image and the image to be registered to obtain pseudo label images corresponding to the reference image and the image to be registered.
In three-dimensional medical image registration, pseudo-Labeling refers to marking unknown medical images with known label information to facilitate registration and classification tasks. In particular, in three-dimensional medical image registration, the present invention generally has a set of known medical image data, where each medical image has been labeled as a specific tissue structure and organ. The invention can use the known marking information to mark the unknown medical image, thereby facilitating registration, classification and other tasks.
Based on this, a simple segmentation network is trained on a portion of labeled images to obtain a segmentation network for the pancreatic organ, and this segmentation network is applied to the images to be registered to segment the pancreatic organ, as shown in fig. 2.
In addition, preprocessing of pancreatic CT images has also been performed using windowing techniques. In three-dimensional medical image registration, a Windowing technology (window) is used for adjusting the window level and window width of medical image display, so that different tissue structures in the medical image are highlighted, noise is reduced to a certain extent, and the accuracy and robustness of registration are improved. Based on this, the original image is windowed using a windowing technique to obtain images under multiple paths, as shown in fig. 3.
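The windowing operation described above is a clip-and-rescale of the Hounsfield values. A minimal sketch follows, with `apply_window` as an illustrative helper; the level/width values match those named in the text.

```python
import numpy as np

def apply_window(hu, level, width, out_range=(0.0, 1.0)):
    """Map CT Hounsfield values through a display window.

    Values below level - width/2 clip to the bottom of out_range,
    values above level + width/2 clip to the top; in between is linear.
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    w = np.clip((hu.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return out_range[0] + w * (out_range[1] - out_range[0])

ct = np.array([-1000.0, -130.0, 45.0, 220.0, 1000.0])  # air .. bone (HU)
soft_tissue = apply_window(ct, level=45, width=350)     # the abdominal window used above
print(soft_tissue)
```

Applying the three window settings (45/350, 0/150, 100/1000) to the same volume yields the three path inputs used later.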
When training the organ morphology guide registration network and the multipath feature registration network, acquiring a data set, wherein the data set is a plurality of pancreas CT images, randomly selecting a part of data as a training set, and the rest of data as a test set. All images of the dataset were pre-processed, including resampling, pre-registration, and assignment of pseudo-labels, prior to training the organ morphology-guided registration network and the multi-path feature registration network. Resampling keeps all images of the dataset at the same size and pixel pitch; pre-registration refers to randomly selecting one image from a data set as a reference image, and then aligning coordinates of other images by taking the reference image as a standard; in this embodiment, the pseudo tag assigned to the pancreatic image is obtained by using the existing pancreatic pre-segmentation technique.
Example two
The difference between this embodiment and the first embodiment is that, based on the first embodiment, this embodiment further describes step S3 and the organ morphology guidance registration network.
In this embodiment, step S3 includes:
S301, cropping the pseudo-label images corresponding to the reference image and the image to be registered obtained in step S203 to obtain the pancreas target region maps of both. The image is cropped to the target region (centered on the pancreas) using the preprocessed pancreatic organ pseudo-label as the reference, as shown in fig. 4.
S302, combining the pancreas target area map of the reference image obtained by cutting in the S301 and the pancreas target area map of the image to be registered.
S303, inputting the combined images in the step S302 into an organ morphology guiding registration network for encoding, and extracting to obtain a feature map.
S304, decoding the coded characteristic diagram, and outputting pancreas morphology guiding deformation field.
The organ morphology guiding registration network adopted in step S303 of this embodiment is trained. To train it, registration training is performed with a known data set, and the registration network parameters are fixed once the best registration effect is achieved. The specific algorithm is as follows: the organ morphology guiding registration network comprises an encoding stage, a decoding stage and a deformation stage. The encoding stage performs feature extraction on the input image data; the decoding stage outputs the registration deformation field; the deformation field is applied to the pseudo-label image to be registered to obtain the registered pseudo-label image; the similarity loss function between the registered pseudo-label image and the pseudo-label image of the reference image is then computed, backward gradient propagation is performed, and the network parameters are updated.
Therefore, the finally obtained organ morphology guiding registration network likewise comprises an encoding stage, a decoding stage and a deformation stage: the encoding stage performs feature extraction on the input pseudo-label image to be registered; the decoding stage outputs the pancreas morphology guiding deformation field; and the deformation stage takes the deformation field output by the decoding network as its parameter vector and deforms the pseudo-label image to be registered to obtain the registered pseudo-label image.
Registering by using a trained organ morphology guiding registration network, specifically comprising:
Encoding: comprises a first coding layer and a downsampling layer. The first coding layer comprises 16 convolution kernels of size 3×3, a regularization layer and an activation function layer; the convolution kernels slide with stride 1 to obtain the feature map. The first coding layer has 2 input channels and 16 output channels. The downsampling layer applies mean pooling to the feature map so that the downsampled feature map is 1/2 the size of the original image;
decoding: comprises a first decoding layer, an up-sampling layer, a second decoding layer, a first convolution layer and a second convolution layer which are connected in sequence,
1. The first decoding layer comprises 32 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the first decoding layer is 16, and the number of output channels of the first decoding layer is 32;
2. the up-sampling layer samples the feature images, the size of the feature images is changed into twice of the original size, and the feature images with the same size in the first encoding stage and the first decoding stage are combined by using jump connection to obtain a 48-channel feature image;
3. the second decoding layer comprises 32 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the second decoding layer is 48, and the number of output channels of the second decoding layer is 32;
4. the first convolution layer comprises 16 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the first convolution layer is 32, and the number of output channels of the first convolution layer is 16;
5. The second convolution layer comprises 3 convolution kernels of size 3×3, a regularization layer and an activation function layer; the number of input channels of the second convolution layer is 16 and the number of output channels is 3, and it outputs a 3-channel deformation field of the same size as the input image.
Deformation stage: the present embodiment employs an STN network during the deformation phase. And the deformation field output in the decoding stage is used as an STN network parameter vector, and the image to be registered is deformed to obtain a registered image.
An STN (Spatial Transformer Network) is a neural network that learns geometric transformations of images and applies them to the input image. It consists of three main parts: a localization network, a grid generator and a sampler.
1. Localization network: takes the original image as input and outputs a parameter vector representing the geometric transformation (such as rotation, scaling, translation, etc.);
2. Grid generator: uses the parameter vector output by the localization network to generate a sampling grid for geometrically transforming the original image;
3. Sampler: samples the original image using the generated grid to obtain the transformed image.
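The sampler step above can be sketched for a dense displacement field. This is a simplified nearest-neighbour version for illustration; a real STN uses differentiable bilinear sampling so that gradients can flow back through the warp.

```python
import numpy as np

def warp_nearest(img, flow):
    """Apply a dense displacement field to a 2-D image (the sampler of an STN).

    flow has shape (2, H, W): per-pixel (row, col) displacements. Each output
    pixel samples the input at its displaced location.
    """
    h, w = img.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rr + flow[0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cc + flow[1]).astype(int), 0, w - 1)
    return img[src_r, src_c]

img = np.arange(9, dtype=float).reshape(3, 3)
flow = np.zeros((2, 3, 3)); flow[1] += 1.0       # sample one column to the right
warped = warp_nearest(img, flow)
print(warped)
```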
In addition, the loss function of the organ morphology-guided registration network of the present embodiment employs an NCC loss function.
NCC (Normalized Cross Correlation) loss function is a similarity measure commonly used in medical image registration. The NCC loss function is based on cross-correlation calculations of the images for evaluating the similarity between the two medical images, thereby optimizing the registration process of the medical images. The calculation formula of the NCC loss function is as follows:
NCC(F, M) = (1/N) · Σ_x [ (F(x) − μ_F) · (M(x) − μ_M) / (σ_F · σ_M) ]
wherein F and M respectively represent the two medical images, μ_F and μ_M represent the means of the two images in the local region, σ_F and σ_M represent the standard deviations of the two images in the local region, and N represents the number of pixels contained in the local region of the image. The value range of the NCC loss function is [−1, 1]: when the two medical images are perfectly aligned, the value of NCC is 1, and when they are completely mismatched, the value of NCC is −1.
In medical image registration, the NCC loss function is typically used as an optimized objective function of a registration algorithm for minimizing a similarity metric between two medical images, thereby achieving registration of the medical images.
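A minimal sketch of the NCC similarity follows. For brevity it computes a single global NCC over the whole image, whereas the loss described above averages the same quantity over local regions; the registration loss would typically be the negative of this value.

```python
import numpy as np

def ncc(f, m, eps=1e-8):
    """Normalized cross-correlation between two images.

    Returns 1.0 for images identical up to brightness/contrast changes and
    -1.0 for perfectly anti-correlated images.
    """
    f = f.astype(float).ravel(); m = m.astype(float).ravel()
    fz = f - f.mean(); mz = m - m.mean()
    return float((fz * mz).sum() / (np.sqrt((fz**2).sum() * (mz**2).sum()) + eps))

a = np.array([[0.0, 1.0], [2.0, 3.0]])
print(round(ncc(a, 2.0 * a + 5.0), 6))   # invariant to linear intensity changes
print(round(ncc(a, -a), 6))              # perfectly anti-correlated
```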
In summary, by registering with the pancreatic pseudo-label images obtained by segmentation, the invention obtains a registration network that focuses more on the morphological information of the pancreatic organ. The structure of the organ morphology guiding registration network is shown in fig. 5. The input images are the pseudo-label images corresponding to the image to be registered and the reference image; since these images are relatively simple, a shallow convolution structure is adopted to acquire the pancreas morphology guiding deformation field.
Example III
The difference between the present embodiment and the second embodiment is that, based on the second embodiment, the present embodiment further describes step S4 and the multipath feature registration network.
In step S4, the multipath features extracted by the multipath feature registration network comprise features extracted from three path input images: the input of the first path is the window image best suited, according to radiomics theory, to observing the abdominal organs; the input of the second path is the window image corresponding to the edge feature image screened out using adaptive-threshold Canny edge detection; and the input of the third path is the window image obtained by denoising and enhancing the original image.
The edge features of the images refer to edges or contours which can be distinguished and identified due to brightness or contrast variation of the images, are widely applied in the fields of image processing and computer vision, are beneficial to object identification and scene segmentation, and are beneficial to template matching between the images.
In three-dimensional medical images, the Canny algorithm can be used to extract edge features. A three-dimensional medical image is generally composed of multiple two-dimensional slices; each slice can be treated as a grayscale image, the edge features of each slice are extracted with the Canny algorithm, and the edge features of all slices are then stacked to obtain the edge feature image of the three-dimensional image. Fig. 6 shows edge features extracted by the Canny algorithm.
Multi-path image feature combination refers to fusing medical images at different window widths to obtain more comprehensive feature information. The invention takes the images at different window widths obtained by the windowing technique in the first step as the inputs of the network. The image at window level 45 with window width 350 is, by established medical imaging principles, the image best suited to observing the abdominal organs; inputting it into the network yields the salient features of the abdominal organs. The image at window level 0 with window width 150 is better suited to extracting edge features; this path's image is input into the network after Canny edge feature extraction. Window level 100 with window width 1000 is a window setting close to the original image after filtering out part of the noise; it retains most of the information of the original image in the network input.
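The edge-feature path can be illustrated with a simplified gradient-magnitude detector. This stands in for the Canny detector named in the text, which additionally performs Gaussian smoothing, non-maximum suppression and hysteresis thresholding.

```python
import numpy as np

def gradient_edges(img, thresh):
    """Simplified per-slice edge map via central-difference gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

img = np.zeros((6, 6)); img[:, 3:] = 100.0   # vertical step edge in a toy slice
edges = gradient_edges(img, thresh=10.0)
print(edges[0])   # the edge responds around the intensity step
```

For a 3-D volume, this per-slice map would be computed for every slice and the results stacked, as described above.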
Specifically, step S4 of the present embodiment includes:
s401, windowing the reference image with aligned coordinates obtained in the step S202 and the image to be registered to obtain images with different window widths;
S402, for the selected reference image and image to be registered, taking the windowed images at window level 45 with window width 350, window level 0 with window width 150, and window level 100 with window width 1000, and splicing the images at the same window setting to obtain three spliced images;
S403, inputting the three spliced images obtained in the step S402 into a multi-path feature registration network, and taking the image of the window level 45 and the window width 350 as the input of a first path; taking an image of window level 0 and window width 150 as input of a second path; taking an image with a window level of 100 and a window width of 1000 as an input of a third path;
s404, respectively carrying out multi-level coding on the images input by the three paths to obtain different-level feature images with different resolutions, and splicing and fusing the feature images with the same resolution of the same level on the three paths to obtain coding feature images with different resolutions;
specifically, in this embodiment, five layers of encoding are performed on the input image by using three paths, so as to obtain a feature map of 5 layers corresponding to the three paths; and then splicing and fusing the feature images of the same layer of the three paths, wherein the resolution of the feature images of the same layer is the same. The feature images extracted by the three paths are all 5 layers from top to bottom, and are respectively: the feature map size of the first layer is 1 times the size of the input image, and the feature map size of the second layer is 1/2 of the size of the input image; the feature map size of the third layer is 1/4 of the size of the input image; the feature map size of the fourth layer is 1/8 of the size of the input image; the feature map size of the fifth layer is 1/16 of the size of the input image. As shown in fig. 7.
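The five-level resolution pyramid produced by repeated mean-pooling downsampling can be checked with a short sketch (2-D here for brevity; the network operates on 3-D volumes):

```python
import numpy as np

def mean_pool2x(x):
    """2x2 mean pooling, the downsampling used between encoder levels."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

level = np.random.default_rng(0).random((128, 128))
sizes = [level.shape]
for _ in range(4):                       # five pyramid levels in total
    level = mean_pool2x(level)
    sizes.append(level.shape)
print(sizes)   # sizes 1, 1/2, 1/4, 1/8, 1/16 of the input, as described above
```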
S405, decoding the coding feature map obtained in the step S404, and outputting a multipath feature deformation field.
The multipath characteristic registration network adopted in the embodiment is a model which is fixed after iterative training. The training of the multipath characteristic registration network is to input all images to be registered and reference images into the registration network, and fix network parameters after the training obtains an optimal effect. The specific process is as follows: the multipath feature registration network consists of an encoding stage and a decoding stage and a deformation stage. The encoding stage extracts the characteristic diagram of the input image data, the decoding stage outputs a registered deformation field, the deformation field acts on the image to be registered to obtain a registered image, then a similarity loss function between the registered image and a reference image is calculated, inverse gradient propagation is carried out, and network parameters are updated.
The trained multipath feature registration network also comprises an encoding stage, a decoding stage and a deformation stage, wherein the deformation stage adopts an STN network.
Registering with a trained multipath feature registration network comprises an encoding stage, a decoding stage and a deformation stage. The encoding stage is divided into three paths: the first path inputs the image at window level 45 with window width 350; the second path inputs the feature map obtained after adaptive-threshold Canny edge detection on the image at window level 0 with window width 150; and the third path inputs the image at window level 100 with window width 1000, which is close to the original image;
The coding structure of the first path is the same as that of the second path. The encoding stage has five layers; each coding layer comprises one convolution block fusing the SENet, ResNet and Inception network structures, and between coding layers the feature maps acquired by convolution are downsampled by mean pooling, so that the feature map size becomes 1/2 of the previous layer;
the third path is different from the encoding structures of the first path and the second path and comprises encoding and decoding stages, the encoding stages are also divided into five layers, each encoding layer structure is similar to the first two paths, but before each encoding layer, the characteristic images of the corresponding resolution levels acquired by the first path and the second path are subjected to convolution fusion for one time, and then the fused characteristic images and the characteristic images of the same resolution level of the third path are subjected to convolution fusion for one round;
after the encoding stage is finished, obtaining resolution characteristic images corresponding to five levels after three paths are fused, and then entering a decoding stage; five layers are correspondingly arranged in the decoding stage, the input of each decoding layer is a feature map of a resolution layer corresponding to the third path coding stage and a feature map obtained by upsampling a decoding layer on a third path, jump connection is carried out during decoding, and each decoding layer carries out two-round convolution operation on the input feature map, wherein the size of the feature map of the decoding layer after upsampling becomes 2 times of that of the upper layer; and performing convolution operation on the feature map obtained by the last decoding layer to finally obtain the multipath feature deformation field.
Wherein the coding structure of the first path and the second path comprises a first coding layer, a first downsampling layer, a second coding layer, a second downsampling layer, a third coding layer, a third downsampling layer, a fourth coding layer, a fourth downsampling layer and a fifth coding layer which are sequentially connected,
1. First coding layer: performs one round of convolutional encoding on the input feature image using 3×3 convolution blocks; the number of channels of the feature map changes from 2 to 16;
2. First downsampling layer: downsamples the feature map obtained by the first coding layer using mean pooling; the feature map size becomes 1/2 of the original;
3. Second coding layer: performs one round of convolutional encoding on the feature map acquired by the first downsampling layer using 3×3 convolution blocks; the number of channels changes from 16 to 32;
4. Second downsampling layer: downsamples the feature map obtained by the second coding layer using mean pooling; the feature map size becomes 1/4 of the original;
5. Third coding layer: performs one round of convolutional encoding on the feature map acquired by the second downsampling layer using 3×3 convolution blocks; the number of channels changes from 32 to 64;
6. Third downsampling layer: downsamples the feature map obtained by the third coding layer using mean pooling; the feature map size becomes 1/8 of the original;
7. Fourth coding layer: performs one round of convolutional encoding on the feature map acquired by the third downsampling layer using 3×3 convolution blocks; the number of channels changes from 64 to 128;
8. Fourth downsampling layer: downsamples the feature map obtained by the fourth coding layer using mean pooling; the feature map size becomes 1/16 of the original;
9. Fifth coding layer: performs one round of convolutional encoding on the feature map acquired by the fourth downsampling layer using 3×3 convolution blocks; the number of channels changes from 128 to 256;
the structure of the decoding stage comprises a first decoding layer, a first upsampling layer, a second decoding layer, a second upsampling layer, a third decoding layer, a third upsampling layer, a fourth decoding layer, a fourth upsampling layer and a fifth decoding layer which are sequentially connected,
1. first decoding layer: the method comprises 1 convolution process, wherein a convolution block of 3 multiplied by 3 is utilized to carry out one round of convolution decoding on the feature map obtained by the fifth coding layer, so as to obtain a feature map with 256 channels;
2. first upsampling layer: performing a deconvolution operation on the feature image obtained by the first decoding layer to change the size of the feature image into 1/8 of the original image, performing jump connection, and combining the feature image obtained by the up-sampling with the feature image obtained by the fourth coding layer to obtain a feature image with 256 channels;
3. Second decoding layer: the method comprises 2 convolution processes, namely, firstly, carrying out one convolution on the characteristic image spliced by the first upsampling layer by utilizing 256 convolution structures to obtain a characteristic image with 256 channels; then, the characteristic image is convolved again by utilizing 128 convolution structures, so that a characteristic image with 128 channels is obtained;
4. second upsampling layer: performing a deconvolution operation on the feature image obtained by the second decoding layer to change the size of the feature image into 1/4 of the original image, performing jump connection, and combining the feature image obtained by the up-sampling with the feature image obtained by the third coding layer to obtain a feature image with 128 channels;
5. third decoding layer: the method comprises the steps of 2 convolution processes, namely, firstly, carrying out one convolution on the feature images spliced by the second upsampling layer by utilizing 128 convolution structures to obtain feature images with the channel number of 128; then, the characteristic image is convolved again by using 64 convolution structures to obtain a characteristic image with the channel number of 64;
6. third upsampling layer: performing a deconvolution operation on the feature image obtained by the third decoding layer to change the size of the feature image into 1/2 of the original image, performing jump connection, and combining the feature image obtained by the up-sampling with the feature image obtained by the second coding layer to obtain a feature image with 64 channels;
7. Fourth decoding layer: the method comprises 2 convolution processes, namely, firstly, carrying out one-time convolution on the characteristic image spliced by the third upsampling layer by using 64 convolution structures to obtain a characteristic image with the channel number of 64; then, the characteristic image is convolved again by utilizing 32 convolution structures to obtain a characteristic image with the channel number of 32;
8. fourth upsampling layer: performing a deconvolution operation on the feature image obtained by the fourth decoding layer to change the feature image size into the original image size of the original image, performing jump connection, and combining the feature image obtained by the up-sampling with the feature image obtained by the first coding layer to obtain a feature image with the channel number of 32;
9. fifth decoding layer: the method comprises 2 convolution processes, namely, firstly, carrying out primary convolution on the characteristic image spliced by the fourth upsampling layer by utilizing 32 convolution structures to obtain a characteristic image with the channel number of 32; and then, the characteristic image is convolved again by utilizing 16 convolution structures, so that the characteristic image with the channel number of 16 is obtained.
10. Obtaining a final deformation field: and finally, carrying out convolution on the characteristic image by using 3 convolution structures to obtain a three-channel deformation field for the characteristic image with the channel number of 16 obtained by the fifth decoding layer.
Fig. 7 is a block diagram of the multipath feature registration network of this embodiment. The first, second and third paths are counted from left to right. The input of the first path is the window CT image best suited, according to radiomics theory, to observing the abdominal organs; the input of the second path is the edge image obtained by extracting edge features from the image; and the third path is the CT image obtained by denoising and enhancing the original image. During encoding, the feature maps of the third path (drawn darker in the figure) are fused with the corresponding feature maps of the first and second paths. The decoding process corresponds to the rightmost path: the fused features acquired from the three paths are decoded over multiple layers, finally yielding the deformation field corresponding to the multipath feature registration network.
In the organ morphology-guided registration network of this embodiment, the invention improves the coding layer, which comprises a layer of convolution blocks fusing the SENet, ResNet and Inception network structures.
The coding layer of this embodiment fuses the Inception, SENet and ResNet convolution structures for encoding, extracting more comprehensive image features. The Inception convolution structure lets the network consider the feature values within each single-channel image of the feature map both globally and locally during encoding, unlike the traditional convolution structure, which attends only to a local region and ignores global features. SENet (Squeeze-and-Excitation Network) is a convolutional neural network structure based on a channel attention mechanism. The images acquired during convolutional encoding are usually multi-channel, and a traditional convolution structure pays the same attention to every channel in the next round of convolution. The SENet structure introduced by the invention adds an attention mechanism: it performs a channel attention operation on the multi-channel feature map, assigns a weight parameter to each channel feature map, and adaptively learns and adjusts these weights according to the importance of each channel in the feature map, so that the convolution focuses on the feature information of the important channels. Inception addresses the global and local importance of the feature values within each channel of the feature map, while SENet addresses the importance between channels; combining the two makes the features extracted by the network during convolution more comprehensive, stable and accurate.
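A minimal numpy sketch of the channel-attention idea described above, assuming the standard Squeeze-and-Excitation form (global average pool, bottleneck with reduction ratio r, sigmoid gate). The weights and the reduction ratio are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation over a (C, D, H, W) feature map:
    global average pool per channel, two small linear layers, sigmoid gate."""
    squeeze = feat.mean(axis=(1, 2, 3))                # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)             # ReLU bottleneck, shape (C/r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid weights in (0, 1), shape (C,)
    return feat * gate[:, None, None, None]            # reweight each channel map

rng = np.random.default_rng(1)
c, r = 16, 4                                           # channel count and reduction ratio (assumed)
feat = rng.standard_normal((c, 2, 2, 2))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
out = se_block(feat, w1, w2)
print(out.shape)  # (16, 2, 2, 2), same as input: only channel importance changes
```

Note that the spatial size and channel count are unchanged; the block only rescales each channel by a learned importance weight.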
In addition, a ResNet (Residual Network) structure is introduced between the two convolution structures. Because the network structure is complex and involves many model parameters, and to avoid problems such as model degradation and the vanishing gradients needed for optimization during network updates, this embodiment introduces the residual network to ensure that the result obtained after convolving a feature image is no worse than the original, keeping model optimization moving in the correct direction.
The Inception module contains three branch structures:
1. the first branch passes the input through (input channels/2) convolution kernels, followed by BatchNorm and an activation function, to obtain a feature map of unchanged size with (channels/2) channels;
2. the second branch passes the input through (input channels/4) convolution kernels, followed by BatchNorm and an activation function, to obtain a feature map of unchanged size with (channels/4) channels, and then through another (input channels/4) convolution kernels, again followed by BatchNorm and an activation function, to obtain a feature map of unchanged size with (channels/4) channels;
3. the third branch passes the input through (input channels/4) convolution kernels, followed by BatchNorm and an activation function, to obtain a feature map of unchanged size with (channels/4) channels, and then through another (input channels/4) convolution kernels, again followed by BatchNorm and an activation function, to obtain a feature map of unchanged size with (channels/4) channels;
4. finally, the feature maps obtained by the three branch structures are concatenated to obtain a multi-scale fused feature map with unchanged size and unchanged channel number.
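The channel bookkeeping of the three branches can be sketched as follows. Because the branch kernel sizes are not reproduced in the text, the convolutions are reduced to per-voxel channel mixes; the sketch only illustrates that C/2 + C/4 + C/4 channels concatenate back to C with the spatial size unchanged:

```python
import numpy as np

def inception_channels(feat, rng):
    """Channel bookkeeping of the three-branch block: an input with C channels
    is split into branches producing C/2, C/4 and C/4 channels (spatial size
    unchanged), then concatenated back to C channels. Each 'mix' stands in
    for conv + BatchNorm + activation; real kernel sizes differ per branch."""
    c = feat.shape[0]
    def mix(x, c_out):
        w = rng.standard_normal((c_out, x.shape[0])) / np.sqrt(x.shape[0])
        return np.maximum(np.einsum('ok,kdhw->odhw', w, x), 0.0)
    b1 = mix(feat, c // 2)               # first branch  -> C/2 channels
    b2 = mix(mix(feat, c // 4), c // 4)  # second branch -> C/4 channels (two convs)
    b3 = mix(mix(feat, c // 4), c // 4)  # third branch  -> C/4 channels (two convs)
    return np.concatenate([b1, b2, b3], axis=0)

rng = np.random.default_rng(2)
feat = rng.standard_normal((16, 2, 2, 2))
out = inception_channels(feat, rng)
print(out.shape)  # (16, 2, 2, 2): channel count and spatial size unchanged
```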
SENet channel attention module: the output of the Inception module is processed by the channel attention mechanism to further improve the performance and generalization ability of the network.
Residual module: the output of the SENet module is residual-connected to the initial input to avoid problems such as vanishing gradients and model degradation.
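The residual connection itself is one line; this sketch only shows the property the text relies on, namely that the block can fall back to the identity mapping so the result is no worse than its input:

```python
import numpy as np

def residual(x, f):
    """Residual wrapper: the block output is f(x) + x, so the layer can
    fall back to the identity and gradients have a direct path to x."""
    return f(x) + x

x = np.ones((4, 2, 2, 2))
out = residual(x, lambda t: 0.0 * t)   # a block whose learned transform is zero
print(np.allclose(out, x))  # True: the identity is preserved
```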
Finally, in this embodiment, the combined structure of the organ morphology guide registration network and the multi-path feature registration network is shown in fig. 8. Therefore, as shown in fig. 9, the registration method of the present embodiment mainly includes:
1. generating a deformation field of the pancreas pseudo-label image by using the trained organ morphology guide registration network;
2. registering the pancreas CT image by using a trained multipath feature registration network to generate a deformation field;
3. compounding the deformation field generated by the multipath feature network with the pseudo-label deformation field generated by the organ morphology guidance network to produce the final deformation field; the deformation field generated by the multipath feature registration network is the main deformation field, the deformation field generated by the organ morphology guidance network is the morphology-guided deformation field, and the morphology-guided deformation field and the main deformation field are compounded to generate the final deformation field;
4. And applying the final deformation field to the image to be registered to obtain a deformed image, wherein the deformed image is the registered image.
The final deformation field is described as follows:

φ_final = φ_main ∘ φ_morph

wherein φ_morph denotes the pancreas morphology-guided deformation field generated by the organ morphology guidance network, φ_main denotes the main deformation field generated by the multipath feature fusion network, and φ_final denotes the final deformation field obtained after compounding the two deformation fields. Applying this deformation field to the image to be registered yields the registered image:

I_reg = I_mov ∘ φ_final ≈ I_ref

wherein I_mov denotes the image to be registered (the floating image), I_ref denotes the reference image, and the registered image I_reg approximates the reference image I_ref.
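The patent states that the two deformation fields are compounded but does not spell out the rule; a common convention in deformable-registration pipelines composes displacement fields by resampling one at the positions displaced by the other. This is sketched here in 1-D, and the order of composition is an assumption:

```python
import numpy as np

def compose_displacements(u, v, xs):
    """Compose two 1-D displacement fields sampled at positions xs:
    applying the composite w is equivalent to displacing by v first and
    then by u, i.e. w(x) = v(x) + u(x + v(x)). u is resampled at the
    displaced positions with linear interpolation."""
    u_at = np.interp(xs + v, xs, u)    # u evaluated at x + v(x)
    return v + u_at

xs = np.linspace(0.0, 1.0, 11)
u = np.full_like(xs, 0.10)             # stand-in for the main deformation field
v = np.full_like(xs, 0.05)             # stand-in for the morphology-guided field
w = compose_displacements(u, v, xs)
print(np.allclose(w, 0.15))  # True: constant shifts simply add under composition
```

In 3-D the same rule applies per displacement component, with trilinear resampling in place of `np.interp`.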
Pancreatic CT images contain considerable noise, and vascular connections and interactions among organs make the pancreas somewhat difficult to observe; however, the positions of the prominent organs relative to the pancreas are favorable for locating and registering pancreatic images. This embodiment makes full use of this point: during encoding of the image to be registered, the window image best suited to observing the abdominal organs and the abdominal-cavity edge feature image acquired by the traditional method are fused in as two paths to acquire deep features, strengthening the pertinence of the features extracted from the image to be registered in the third path. The feature extraction process can thus focus on the pancreas and its related organs, and the influence of noise on registration is filtered to a certain extent. Under this encoding enhancement, the image registration result improves over the basic VoxelMorph algorithm to a certain extent: the Dice coefficient measuring registration accuracy rises from 0.38 for VoxelMorph to 0.45.
The Dice coefficient is calculated as follows:

Dice = 2|A ∩ B| / (|A| + |B|)

wherein A denotes the real label map of the reference image and B denotes the real label map of the registered image; |A ∩ B| denotes the total number of pixels with the same pixel value at corresponding positions in the two label maps, |A| denotes the total number of non-zero pixels in the real label map of the reference image, and |B| denotes the total number of non-zero pixels in the real label map of the registered image. Dice ranges over [0, 1]; the closer the coefficient is to 1, the better the registration effect, and a value equal to 1 indicates the two images are completely registered.
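The Dice computation can be checked with a small numpy implementation (the convention of returning 1.0 when both label maps are empty is an assumption, not stated in the patent):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary label maps:
    2 * |A intersect B| / (|A| + |B|), ranging over [0, 1]."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                      # both maps empty: treat as perfectly matched
    return 2.0 * np.logical_and(a, b).sum() / denom

ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:3] = 1                       # reference label: 4 pixels
reg = np.zeros((4, 4), dtype=int)
reg[1:3, 1:4] = 1                       # registered label: 6 pixels, 4 overlapping
print(dice(ref, reg))  # 2*4 / (4+6) = 0.8
```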
The method introduces an organ morphology guidance network on the basis of the existing algorithm, trains the network with the obtained pancreas pseudo-labels to generate a deformation field targeted solely at the pancreas, and fuses this deformation field with the deformation field generated by the main network, finally obtaining a more targeted and accurate pancreas registration deformation field; with the aid of the organ morphology guidance network, the registration accuracy (Dice coefficient) improves to 0.52 over the main network alone.
Furthermore, this embodiment presents a specific example in which the applied dataset is derived from the TCIA-Pancreas public dataset, comprising 82 abdominal CT images. The invention takes the first image in the dataset as the reference image and the other 81 images as images to be registered and aligned to it; of the 81 images, 71 are randomly divided into the training set and the remaining 10 form the test set.
Before formal registration, the invention adopts conventional resampling and pre-registration operations: the 81 images are first pre-registered with the reference image using the ANTs registration library functions, so that images from different devices and different body types undergo preliminary coordinate alignment and scaling. The trained segmentation network is then used to obtain the pseudo-label image of the pancreas, and the pseudo-label image and the original image are input into the network proposed by the invention.
After an image is input into the network, the windowing technique is adopted to obtain the window image best suited to observing the abdominal organs, the pancreatic-organ edge-detection window image, and the appropriately denoised window image to be registered, which serve as the three inputs of the multipath feature fusion network. The input images undergo five layers of convolutional encoding to extract image features and obtain stable and comprehensive feature images, and the feature images then undergo five layers of convolutional decoding to obtain the deformation field of the main network. Meanwhile, the pancreas pseudo-label image is input into the organ morphology guidance network, where encoding and decoding operations yield the morphology-guided deformation field of the pancreatic organ deformation; the deformation field of the main network and the constraint field of the organ morphology guidance network are fused to obtain the final deformation field. Finally, the deformation field is applied to the image to be registered in the third path of the main network to generate the registered image. An example flow of the registration of an image is shown in fig. 10.
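The windowing step mentioned above can be sketched as the standard level/width clipping of Hounsfield units; the sample HU values below are illustrative:

```python
import numpy as np

def apply_window(hu, level, width):
    """Clip a CT image (Hounsfield units) to [level - width/2, level + width/2]
    and rescale to [0, 1], as in standard CT windowing."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

hu = np.array([-1000.0, -130.0, 45.0, 220.0, 1000.0])   # air .. soft tissue .. bone
soft = apply_window(hu, level=45, width=350)             # abdominal soft-tissue window
print(soft)  # air and dense bone saturate to 0 and 1; soft tissue spreads across (0, 1)
```

With level 45 and width 350 (the first-path window in the text), the passband is [-130, 220] HU, so abdominal soft tissue occupies the full displayed range while air and bone saturate.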
As shown in fig. 11, in this example it can be observed that the organs of the registered image are clearly distinguishable and no chaotic distortion occurs, mainly owing to the added regularization and to the edge image preserving the integrity of the registered organ contours. More importantly, the pancreas contour of the registered image is essentially matched and aligned with the reference image, and the vessels in the lower-left and middle parts of the pancreas to be registered are rendered more distinctly than in the original image. Vascular rendering is vital in the preoperative planning of pancreatic treatment: registering the pancreas before surgery allows the venous-phase and arterial-phase vessel renderings to be correspondingly registered and fused, which is favorable for determining the surgical plan for cancer removal and to a certain extent avoids the influence of intraoperative vessel wrapping. The registration network proposed by the invention reinforces exactly this vascular rendering, which makes the significance of the invention all the more evident.
As shown in fig. 12, the registration effect of the invention is compared with the VoxelMorph registration algorithm (abbreviated VM) and the diffeomorphic Demons algorithm (abbreviated Demons). The first column shows the same slice of the same three-dimensional image after registration by the three methods; it can be observed that each organ in the result of the invention is rendered more clearly and distinguishably than with the other two methods, and the vessels around the pancreas are more evident than with the other existing methods.
The second column in fig. 12 shows the difference maps obtained by subtracting the reference image from the registered image for the three methods (the grayer the difference map, the more similar the registered image is to the reference image; the brighter or darker it is, the greater the difference). For each method, the left image is the difference map itself and the right image is the difference map overlaid with the pancreas label image in red. It can be seen that the difference maps of the proposed method are essentially gray, while those of the VM and Demons registration methods are mostly too bright or too dark, indicating that the registered images obtained by the proposed registration method are more similar to the reference image than those of the existing VM and Demons methods. More importantly, in the pancreas region the contour of the pancreas is scarcely visible in the difference map after registration by the proposed method, whereas the VM and Demons methods remain quite distinct there; this shows that the acquired registered image matches the reference image almost completely in the pancreas region, a degree the VM and Demons algorithms have not yet reached. This again demonstrates the superiority of the invention over the existing VoxelMorph and diffeomorphic Demons methods in solving the pancreatic CT image registration problem.
In summary, the embodiment of the invention provides a pancreatic CT image registration method integrating multipath features and organ morphology guidance. Features are extracted from the optimal observation window image of the abdominal organs, from the high-contrast, easily distinguished window image extracted from the edges of the abdominal image, and from the appropriately noise-filtered image close to the original, so as to acquire and highlight the edge contours of all core abdominal organs and all abdominal regions; meanwhile, using the windowing technique within the multipath feature registration network, image features are obtained that are rich in detail, low in noise, strongly targeted and clearly distinguishable. In addition, the invention makes full use of the pancreatic organ information: the pseudo-labels generated by an existing segmentation algorithm are used to construct a pancreas morphology registration network and establish the organ morphology-guided registration network, which first ensures that the pancreas, the core organ being registered, can be approximately aligned; it is then fused with the global registration of the multipath feature registration network to perform finer adjustment and correction of the pancreas while globally registering the other regions of the image, thereby obtaining a targeted, comprehensive and smooth registration deformation field.
Specifically, during registration through the multipath feature registration network, the abdominal CT images at three windows acquired by the windowing technique are used respectively as the input images of the three paths. The input of the first path is the window image, according to radiomics theory, best suited to observing each organ of the abdominal cavity; the input of the second path is the window image corresponding to the edge feature image screened out using the adaptive-threshold Canny edge detection technique; the input of the third path is the window image after noise reduction and enhancement on the basis of the original image.
The multipath feature registration combines the feature extraction of traditional methods with the U-net encoding-decoding network of deep learning; during encoding, the image features of the main abdominal organs in the first path and the edge features of the abdominal images in the second path are fused into the features of the image to be registered in the third path, giving the image features a degree of interpretability and improving the accuracy of single-model registration.
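One plausible reading of this fusion step, sketched in numpy: same-resolution feature maps from the three paths are concatenated along the channel axis and mixed back to one path's channel count with a 1x1x1 convolution. The exact fusion convolution used by the patent is not reproduced here:

```python
import numpy as np

def fuse_paths(f1, f2, f3, w):
    """Fuse same-resolution feature maps from the three encoding paths by
    channel concatenation followed by a 1x1x1 convolution back to the
    third path's channel count."""
    cat = np.concatenate([f1, f2, f3], axis=0)          # stack along channels
    return np.maximum(np.einsum('ok,kdhw->odhw', w, cat), 0.0)

rng = np.random.default_rng(3)
c = 16                                                  # channels per path at this level (assumed)
f1, f2, f3 = (rng.standard_normal((c, 2, 2, 2)) for _ in range(3))
w = rng.standard_normal((c, 3 * c)) / np.sqrt(3 * c)
fused = fuse_paths(f1, f2, f3, w)
print(fused.shape)  # (16, 2, 2, 2): three paths fused back to one path's width
```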
In addition, the proposed method also employs the organ morphology-guided registration network, making full use of the image pseudo-labels to guide the deformation of the pancreatic target during registration and ensuring the directionality and pertinence of pancreatic image registration. The main steps are: the pseudo-labels are used to train an organ morphology registration network to obtain the deformation field of the pancreatic organ image; this field is fused with the deformation field of the main network; after the organ morphology registration network's deformation field directionally registers the pancreas, the deformation field generated by the main network performs refinement, adjustment and correction; finally a more robust and smooth deformation field is obtained.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present invention, and these modifications and substitutions should also be considered as being within the scope of the present invention.

Claims (7)

1. A pancreatic CT image registration method combining multipath characteristics and organ morphology guidance is characterized by comprising the following steps:
s1: acquiring a reference image and an image to be registered, wherein the reference image and the image to be registered are pancreatic CT images;
s2: preprocessing the acquired reference image and the image to be registered; comprising the following steps:
s201: resampling the reference image and the image to be registered to keep the same size and pixel spacing of the reference image and the image to be registered;
s202, pre-registering the resampled reference image and the image to be registered to realize the alignment of the coordinates of the reference image and the image to be registered;
s203: the pre-registered reference image and the image to be registered are distributed with pseudo labels, and pseudo label images corresponding to the reference image and the image to be registered are obtained;
s3: registering the preprocessed image to be registered through an organ morphology guiding registration network to obtain a pancreas morphology guiding deformation field; comprising the following steps:
S301, obtaining pancreas target area diagrams of the reference image and the image to be registered through cutting the pseudo tag images corresponding to the reference image and the image to be registered obtained in the step S203;
s302, combining the pancreas target area diagram of the reference image obtained by cutting in the S301 and the pancreas target area diagram of the image to be registered;
s303, inputting the combined images in the step S302 into an organ morphology guiding registration network for encoding, and extracting to obtain a feature map;
s304, decoding the coded feature map, and outputting pancreas morphology guiding deformation fields;
s4: extracting and fusing multipath features of the preprocessed image to be registered through a multipath feature registration network to obtain a multipath feature deformation field; comprising the following steps:
s401, windowing the reference image with aligned coordinates obtained in the step S202 and the image to be registered to obtain images with different window widths;
s402, from the windowed images of the reference image and the image to be registered, selecting the image obtained with window level 45 and window width 350, the image obtained with window level 0 and window width 150, and the image obtained with window level 100 and window width 1000, and splicing the images with the same window settings to obtain three spliced images;
s403, inputting the three spliced images obtained in the step S402 into a multi-path feature registration network, and taking the image of the window level 45 and the window width 350 as the input of a first path; taking an image of window level 0 and window width 150 as input of a second path; taking an image with a window level of 100 and a window width of 1000 as an input of a third path;
S404, respectively carrying out multi-level coding on the images input by the three paths to obtain different-level feature images with different resolutions, and splicing and fusing the feature images with the same resolution of the same level on the three paths to obtain coding feature images with different resolutions;
s405, decoding the coding feature map obtained in the step S404, and outputting a multipath feature deformation field;
s5: compounding the pancreas morphology guide deformation field and the multipath characteristic deformation field;
s6: and (3) applying the composite deformation field obtained in the step (S5) to the preprocessed image to be registered to obtain a deformed image, and realizing the registration of the pancreatic CT image.
2. The pancreatic CT image registration method according to claim 1, wherein in step S3, the organ morphology-guided registration network includes an encoding stage, a decoding stage and a deformation stage, the encoding stage performs feature extraction on the input image to be registered, the decoding stage outputs a registered pancreatic morphology-guided deformation field, the deformation stage uses the registered deformation field output by the decoding network as a network parameter vector of the deformation stage, and the image to be registered is deformed to obtain a registered image.
3. The method for pancreatic CT image registration with fusion of multipath features and organ morphology guidance of claim 2,
encoding: the method comprises a first coding layer and a downsampling layer, wherein the first coding layer comprises 16 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer; the convolution kernel slides in step length 1 to obtain a feature map, the number of image input channels of the first coding layer is 2, and the number of output channels is 16; the downsampling layer carries out mean pooling downsampling on the feature images so that the size of the feature images after downsampling becomes 1/2 of the size of the original image;
decoding: the method comprises a first decoding layer, an up-sampling layer, a second decoding layer, a first convolution layer and a second convolution layer which are sequentially connected, wherein the first decoding layer comprises 32 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the first decoding layer is 16, and the number of output channels of the first decoding layer is 32; the up-sampling layer samples the feature images, the size of the feature images is changed into twice of the original size, and the feature images with the same size in the first encoding stage and the first decoding stage are combined by using jump connection to obtain a 48-channel feature image; the second decoding layer comprises 32 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the second decoding layer is 48, and the number of output channels of the second decoding layer is 32; the first convolution layer comprises 16 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the first convolution layer is 32, and the number of output channels of the first convolution layer is 16; the second convolution layer comprises 3 convolution kernels with the size of 3 multiplied by 3, a regularization layer and an activation function layer, the number of input channels of the second convolution layer is 32, the number of output channels is 3, and the second convolution layer outputs a deformation field with the size of 3 channels and the same size as the input image.
4. The method of pancreatic CT image registration with fusion of multipath features and organ morphology guidance according to claim 3, wherein the deformation stage employs an STN network and the organ morphology guidance registration network has a loss function employing an NCC loss function.
5. The pancreatic CT image registration method according to claim 1, wherein in step S4, the multipath features extracted by the multipath feature registration network include features extracted from the three path input images, wherein the input of the first path is the window image, according to radiomics theory, best suited to observing each organ of the abdominal cavity; the input of the second path is the window image corresponding to the edge feature image screened out using the adaptive-threshold Canny edge detection technique; the input of the third path is the window image after noise reduction and enhancement on the basis of the original image.
6. The pancreatic CT image registration method with fusion of multipath features and organ morphology guidance of claim 1, wherein the multipath feature registration network comprises an encoding stage, a decoding stage, and a deformation stage; the encoding stage is divided into three paths, wherein the first path inputs the image with window level 45 and window width 350, the second path inputs the feature map obtained after adaptive-threshold Canny edge detection of the image with window level 0 and window width 150, and the third path inputs the image with window level 100 and window width 1000, which is close to the original image;
The coding structure of the first path is the same as that of the second path; the coding stage has five layers, each coding layer comprising a layer of convolution blocks fusing the SENet, ResNet and Inception network structures, and between coding layers the feature images acquired by convolution are downsampled by mean pooling, so that the feature image size becomes 1/2 of the previous layer;
the third path is different from the encoding structures of the first path and the second path and comprises encoding and decoding stages, the encoding stages are also divided into five layers, each encoding layer structure is similar to the first two paths, but before each encoding layer, the characteristic images of the corresponding resolution levels acquired by the first path and the second path are subjected to convolution fusion for one time, and then the fused characteristic images and the characteristic images of the same resolution level of the third path are subjected to convolution fusion for one round;
after the encoding stage is finished, obtaining resolution characteristic images corresponding to five levels after three paths are fused, and then entering a decoding stage; five layers are correspondingly arranged in the decoding stage, the input of each decoding layer is a feature map of a resolution layer corresponding to the third path coding stage and a feature map obtained by upsampling a decoding layer on a third path, jump connection is carried out during decoding, and each decoding layer carries out two-round convolution operation on the input feature map, wherein the size of the feature map of the decoding layer after upsampling becomes 2 times of that of the upper layer; and performing convolution operation on the feature map obtained by the last decoding layer to finally obtain the multipath feature deformation field.
7. The method of pancreatic CT image registration with integrated multi-path feature and organ morphology guidance of claim 1, wherein the encoding structures of the first and second paths comprise a first encoding layer, a first downsampling layer, a second encoding layer, a second downsampling layer, a third encoding layer, a third downsampling layer, a fourth encoding layer, a fourth downsampling layer, and a fifth encoding layer that are sequentially connected,
a first coding layer, which performs one round of convolution coding on the characteristic image obtained in the coding stage by using a convolution block of 3 multiplied by 3 to obtain a characteristic image, wherein the channel number of the characteristic image is changed from 2 to 16;
the first downsampling layer downsamples the feature map obtained by the first coding layer by using mean value pooling, and the size of the feature map is changed into 1/2 of that of the original map;
second coding layer: performing one-round convolution coding on the feature map acquired by the first downsampling layer by using a convolution block of 3 multiplied by 3, wherein the number of channels of the feature map is changed from 32 to 64;
second downsampling layer: downsampling the feature map obtained by the second coding layer by using the mean value pooling, wherein the size of the feature map is changed into 1/4 of that of the original map;
third coding layer: performing one-round convolution coding on the feature map acquired by the second downsampling layer by using a convolution block of 3 multiplied by 3, wherein the number of channels of the feature map is changed from 64 to 128;
Third downsampling layer: downsampling the feature map obtained by the third coding layer by using the mean value pooling, wherein the size of the feature map is changed into 1/8 of that of the original map;
fourth coding layer: performing one-round convolutional encoding on the feature map acquired by the third downsampling layer by using a convolutional block of 3 multiplied by 3, wherein the number of channels of the feature map is changed from 128 to 256;
fourth downsampling layer: downsampling the feature map obtained by the fourth coding layer by using mean value pooling, wherein the size of the feature map is changed into 1/16 of that of the original map;
fifth coding layer: performing one-round convolutional encoding on the feature map acquired by the fourth downsampling layer by using a convolutional block of 3 multiplied by 3, wherein the channel number of the feature map is changed from 256 to 512;
The structure of the decoding stage comprises a first decoding layer, a first upsampling layer, a second decoding layer, a second upsampling layer, a third decoding layer, a third upsampling layer, a fourth decoding layer, a fourth upsampling layer, and a fifth decoding layer, connected in sequence.
First decoding layer: comprises 1 convolution process; a 3×3 convolution block performs one round of convolutional decoding on the feature map obtained from the fifth coding layer, yielding a feature map with 256 channels;
First upsampling layer: a deconvolution operation enlarges the feature map obtained from the first decoding layer to 1/8 the size of the original image; a skip connection then combines the upsampled feature map with the feature map obtained from the fourth coding layer, yielding a feature map with 256 channels;
Second decoding layer: comprises 2 convolution processes; the feature map concatenated by the first upsampling layer is first convolved with 256 convolution filters, yielding a feature map with 256 channels, and then convolved again with 128 convolution filters, yielding a feature map with 128 channels;
Second upsampling layer: a deconvolution operation enlarges the feature map obtained from the second decoding layer to 1/4 the size of the original image; a skip connection then combines the upsampled feature map with the feature map obtained from the third coding layer, yielding a feature map with 128 channels;
Third decoding layer: comprises 2 convolution processes; the feature map concatenated by the second upsampling layer is first convolved with 128 convolution filters, yielding a feature map with 128 channels, and then convolved again with 64 convolution filters, yielding a feature map with 64 channels;
Third upsampling layer: a deconvolution operation enlarges the feature map obtained from the third decoding layer to 1/2 the size of the original image; a skip connection then combines the upsampled feature map with the feature map obtained from the second coding layer, yielding a feature map with 64 channels;
Fourth decoding layer: comprises 2 convolution processes; the feature map concatenated by the third upsampling layer is first convolved with 64 convolution filters, yielding a feature map with 64 channels, and then convolved again with 32 convolution filters, yielding a feature map with 32 channels;
Fourth upsampling layer: a deconvolution operation enlarges the feature map obtained from the fourth decoding layer to the full size of the original image; a skip connection then combines the upsampled feature map with the feature map obtained from the first coding layer, yielding a feature map with 32 channels;
Fifth decoding layer: comprises 2 convolution processes; the feature map concatenated by the fourth upsampling layer is first convolved with 32 convolution filters, yielding a feature map with 32 channels, and then convolved again with 16 convolution filters, yielding a feature map with 16 channels.
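The decoding stage's stated progression can likewise be checked with a shape trace. This is a hypothetical sketch that merely restates the figures above: `decoder_trace` is an assumed helper name, the bottleneck is taken to be the 512-channel output of the fifth coding layer, and each deconvolution is modelled as doubling the spatial size before the skip-connection concatenation.

```python
def decoder_trace(bottleneck_size):
    """Return (channels, spatial_size) after each of the five decoding layers,
    starting from the 512-channel bottleneck at 1/16 of the original size."""
    out_channels = [256, 128, 64, 32, 16]   # final channel count of each layer
    skip_channels = [256, 128, 64, 32]      # skips from coding layers 4, 3, 2, 1
    trace, size = [], bottleneck_size
    for level, ch in enumerate(out_channels):
        trace.append((ch, size))            # after the decoding layer's convolutions
        if level < len(skip_channels):
            size *= 2                       # deconvolution doubles each dimension,
                                            # then the skip connection concatenates
    return trace
```

With a bottleneck of size 8 (original image 128), `decoder_trace(8)` yields `[(256, 8), (128, 16), (64, 32), (32, 64), (16, 128)]`, recovering the full original resolution with 16 channels at the fifth decoding layer.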
CN202310755028.5A 2023-06-26 2023-06-26 Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance Active CN116523983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310755028.5A CN116523983B (en) 2023-06-26 2023-06-26 Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance


Publications (2)

Publication Number Publication Date
CN116523983A CN116523983A (en) 2023-08-01
CN116523983B true CN116523983B (en) 2023-10-27

Family

ID=87394454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310755028.5A Active CN116523983B (en) 2023-06-26 2023-06-26 Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance

Country Status (1)

Country Link
CN (1) CN116523983B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260705A (en) * 2020-01-13 2020-06-09 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN115100252A (en) * 2022-05-26 2022-09-23 浙江大学 Four-dimensional CT registration method and device for pancreatic region
CN115393402A (en) * 2022-08-24 2022-11-25 北京医智影科技有限公司 Training method of image registration network model, image registration method and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2019449137B2 (en) * 2019-06-06 2023-03-02 Elekta, Inc. sCT image generation using cyclegan with deformable layers
US11967084B2 (en) * 2021-03-09 2024-04-23 Ping An Technology (Shenzhen) Co., Ltd. PDAC image segmentation method, electronic device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Application study of 64-slice spiral CT three-dimensional reconstruction and pancreas visualization simulated surgery; 方驰华, 刘宇斌, 黄燕鹏, 潘家辉, 彭丰平, 鲁朝敏; Chinese Journal of Practical Surgery (中国实用外科杂志) (09); 73-76 *
Research progress in dual-modality PET/CT image fusion; 魏兴瑜, 陆惠玲, 周涛; Chongqing Medicine (重庆医学) (14); 113-116 *
Lung CT-PET image registration based on parallel computing and multi-level B-splines; 余霞, 葛红, 李彬, 田联房; Journal of Computer Applications (计算机应用) (07); 192-194 *


Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN110176012B (en) Object segmentation method in image, pooling method, device and storage medium
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN112102385B (en) Multi-modal liver magnetic resonance image registration system based on deep learning
US8229189B2 (en) Visual enhancement of interval changes using temporal subtraction, convolving, and non-rigid transformation field mapping
CN113298830B (en) Acute intracranial ICH region image segmentation method based on self-supervision
WO2023063874A1 (en) Method and system for image processing based on convolutional neural network
US20010048758A1 (en) Image position matching method and apparatus
CN115471470A (en) Esophageal cancer CT image segmentation method
CN116152266A (en) Segmentation method, device and system for ultrasonic image of puncture needle
CN115830016A (en) Medical image registration model training method and equipment
CN114998362A (en) Medical image segmentation method based on double segmentation models
CN111383759A (en) Automatic pneumonia diagnosis system
US8229190B2 (en) Visual enhancement of interval changes using temporal subtraction and pattern detector
US8224046B2 (en) Visual enhancement of interval changes using rigid and non-rigid transformations
CN112884792A (en) Lung image segmentation method and device, electronic equipment and storage medium
CN116523983B (en) Pancreas CT image registration method integrating multipath characteristics and organ morphology guidance
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN108447066B (en) Biliary tract image segmentation method, terminal and storage medium
CN116229074A (en) Progressive boundary region optimized medical image small sample segmentation method
CN116091458A (en) Pancreas image segmentation method based on complementary attention
CN115841472A (en) Method, device, equipment and storage medium for identifying high-density characteristics of middle cerebral artery
CN113379691B (en) Breast lesion deep learning segmentation method based on prior guidance
US11625826B2 (en) Retinal OCT data processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant