CN115100306A - Four-dimensional cone-beam CT imaging method and device for pancreatic region - Google Patents

Four-dimensional cone-beam CT imaging method and device for pancreatic region

Info

Publication number
CN115100306A
Authority
CN
China
Prior art keywords
image
projection
pancreas
proj
anchor point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210557390.7A
Other languages
Chinese (zh)
Inventor
牛田野
杨鹏飞
罗辰
王静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202210557390.7A
Publication of CN115100306A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]

Abstract

The invention discloses a four-dimensional cone-beam CT imaging method and device for the pancreatic region. A generative adversarial network is trained with real cone-beam CT projections and virtual cone-beam CT projections of the chest and abdomen as sample pairs, using a loss that measures the similarity between the network's input and output images so that their structures stay consistent, yielding a simulated projection synthesis model capable of projection prediction. On this basis, the simulated projection synthesis model performs projection prediction on the virtual cone-beam CT projections corresponding to pancreas CT images to obtain simulated cone-beam CT projections, and these simulated projections together with the pancreas positions annotated in the pancreas CT images are used to train a segmentation network and construct a pancreas segmentation model. Finally, after the pancreas position is segmented with the pancreas segmentation model, the cone-beam CT projections of the pancreatic region are grouped by pancreas position and reconstructed group by group, which greatly improves the imaging quality of the pancreatic region in cone-beam CT image guidance during pancreatic radiotherapy.

Description

Four-dimensional cone-beam CT imaging method and device for pancreatic region
Technical Field
The invention belongs to the technical field of medical engineering, and particularly relates to a four-dimensional cone beam CT imaging method and device for a pancreatic region.
Background
Radiation therapy is currently one of the main technical means of tumor treatment. It kills or suppresses tumor cells in a target region by physical means such as high-energy X-rays. In current radiotherapy practice, a physician uses medical image information to delineate the patient's tumor region and determine an appropriate treatment region.
Radiotherapy is one of the important therapeutic modalities for pancreatic cancer. At the time of actual radiotherapy, the position of the pancreatic tumor may be different from that at the time of radiotherapy planning, so that a cone-beam CT scan needs to be performed before radiotherapy to correct the tumor position. However, the location of the pancreatic tumor is affected by respiratory motion, and there are also severe motion artifacts in cone beam CT, which affect the identification of the location of the pancreatic tumor.
The four-dimensional cone beam CT imaging technology is a breathing-related cone beam CT scanning mode, and can obtain images of different breathing time phases, so that the motion state of pancreatic tumors in breathing is fully reflected. However, in the current four-dimensional cone-beam CT imaging, the time-phased reconstruction is performed by using the motion information acquired by the markers on the surface of the patient as the respiratory signals. The respiratory motion amplitude captured by the body surface of the patient is different from the motion of the pancreas of the patient. The four-dimensional reconstruction using the signals of the body surface makes it difficult to achieve optimal imaging for the pancreatic region, and thus the accuracy of pancreatic tumor radiotherapy is still insufficient. There is a need for a high-precision cone-beam CT imaging method for pancreatic tumors in clinic to achieve accurate imaging of pancreatic regions before radiotherapy.
Patent document CN112435307A discloses a deep-neural-network-assisted four-dimensional cone-beam CT image reconstruction method, which includes: (1) acquiring projection data and grouping the projections by respiratory phase to obtain phase-wise projection sets; (2) reconstructing each phase projection set to obtain initial cone-beam CT images containing artifacts; (3) removing the artifacts from the initial cone-beam CT images with an artifact-removal model built on a deep neural network to obtain phase reconstructed images; (4) deformably registering the other phase reconstructed images to the initial-phase reconstructed image to obtain the forward and inverse deformation fields of the other phases relative to the initial phase; and (5) performing motion-compensated reconstruction from the phase projections and the corresponding forward and inverse deformation fields to obtain a four-dimensional cone-beam CT image. This reconstruction method must repeat steps (1) to (5) for every reconstruction, so its computational cost is high and its imaging speed is relatively low.
Patent document CN112819911A discloses a four-dimensional cone-beam CT reconstructed-image enhancement algorithm based on the N_net and CycN_net network structures, designed to fully exploit the inherent characteristics of four-dimensional cone-beam CT reconstructed images. The deep neural network CycN-net takes the analytically reconstructed images of five consecutive motion phases and the corresponding prior images as network inputs, accounts for the prior knowledge in the four-dimensional cone-beam CT image sequence and the spatio-temporal correlation between image sequences, and splices and fuses the extracted convolutional feature maps for image restoration. This method uses a complex network and requires a large amount of reconstruction computation.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, it is an object of the present invention to provide a four-dimensional cone-beam CT imaging method and apparatus for pancreatic region to improve the quality and efficiency of the four-dimensional cone-beam CT imaging of pancreatic region.
In order to achieve the above object, an embodiment provides a four-dimensional cone-beam CT imaging method for a pancreatic region, including the following steps:
acquiring a real cone-beam CT projection image and a fan-beam CT image of the chest and abdomen, and performing virtual forward projection on the fan-beam CT image to obtain a virtual cone-beam CT projection image;
training a generative adversarial network by taking the virtual cone-beam CT projection image and the real cone-beam CT projection image as a sample pair, so as to construct a simulated projection synthesis model capable of realizing projection prediction;
acquiring a pancreas CT image of the pancreatic region, labeling the pancreas, binarizing the pancreas CT image labeled with the pancreas position, and performing virtual forward projection to obtain the virtual cone-beam CT projection image and the pancreas labeling information corresponding to the pancreas CT image;
carrying out projection prediction on the virtual cone beam CT projection image corresponding to the pancreas CT image by using a simulated projection synthesis model so as to obtain a simulated cone beam CT projection image corresponding to the pancreas CT image;
training a segmentation network adopting a coding and decoding structure by taking the simulated cone beam CT projection drawing and the pancreas labeling information as samples so as to construct a pancreas segmentation model capable of realizing pancreas tracking;
carrying out pancreas tracking on the input cone beam CT projection image of the pancreas region by using a pancreas segmentation model to obtain a pancreas position;
grouping the cone-beam CT projection images of the pancreatic region according to the pancreas positions, which reflect different respiratory states, and performing four-dimensional cone-beam CT imaging reconstruction on the grouped cone-beam CT projection images.
In one embodiment, the generative adversarial network comprises a generator and a discriminator. The generator consists of an encoding part and a decoding part, used respectively to quantize and compress the information of the input image and to decode and recover the output image from the encoded information; the input image comprises the virtual cone-beam CT projection image and the real cone-beam CT projection image, and the discriminator is used to judge the difference between the output image and the input image.
When the generative adversarial network is trained, the loss function used comprises the adversarial loss of the network itself, and also a contrast loss used to measure the similarity between the input image and the output image and to ensure that their structures are consistent.
After training is finished, the generator with optimized parameters is extracted as the simulated projection synthesis model.
In one embodiment, when the contrast loss is constructed, for an input feature map and an output feature map output by each layer of a coding part corresponding to an input image and an output image, a random image block of the output feature map is taken as an anchor point, a position corresponding to the random image block in the input feature map is taken as a positive sample corresponding to the anchor point, a position corresponding to a non-random image block in the input feature map is taken as a negative sample corresponding to the anchor point, and the similarity between the input image and the output image is measured by calculating the similarity between the positive sample and the anchor point and the similarity between the negative sample and the anchor point, so that the structure consistency between the input image and the output image is ensured.
In one embodiment, when contrast loss is constructed, for an input feature map and an output feature map output by each layer of a coding part corresponding to an input image and an output image, a multi-layer perceptron is utilized to respectively perform feature extraction on the input feature map and the output feature map, so as to output the input perception feature map and the output perception feature map, take a random image block of the output perception feature map as an anchor point, take a position in the input perception feature map corresponding to the random image block as a positive sample corresponding to the anchor point, take a position in the input perception feature map corresponding to a non-random image block as a negative sample corresponding to the anchor point, and measure similarity between the input image and the output image by calculating similarity between the positive sample and the anchor point and similarity between the negative sample and the anchor point, so as to ensure that the input image and the output image have consistent structures.
In one embodiment, the segmentation network adopts U-net, when a sample containing a simulated cone beam CT projection graph and pancreas labeling information is adopted to train the segmentation network, the simulated cone beam CT projection graph is used as the input of the segmentation network, the pancreas labeling information is used as a label, the coincidence degree of a predicted position output by the segmentation network and the position of the label is used as a loss function to optimize network parameters, and after the optimization is finished, the segmentation network of the network parameters is determined to be used as a pancreas segmentation model.
In order to achieve the above object, an embodiment further provides a four-dimensional cone-beam CT imaging apparatus for a pancreatic region, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the memory stores a pancreas segmentation model constructed by the four-dimensional cone-beam CT imaging method, and the processor implements the following steps when executing the computer program:
acquiring a cone beam CT projection image of a pancreas area to be detected;
carrying out pancreas tracking on the input cone beam CT projection image of the pancreas area by using a pancreas segmentation model to obtain a pancreas position;
grouping the cone-beam CT projection images of the pancreatic region according to the pancreas positions, which reflect different respiratory states, and performing four-dimensional cone-beam CT imaging reconstruction on the grouped cone-beam CT projection images.
Compared with the prior art, the invention has the beneficial effects that at least:
By taking the real and virtual cone-beam CT projection images of the chest and abdomen as sample pairs, and by measuring the similarity between the input and output images of the generative adversarial network so that their structures remain consistent, the adversarial network is trained to construct a simulated projection synthesis model capable of projection prediction, and the resulting model improves the accuracy of simulated projection generation. On this basis, the simulated projection synthesis model performs projection prediction on the virtual cone-beam CT projections corresponding to the pancreas CT images to obtain simulated cone-beam CT projections, and these simulated projections, together with the pancreas positions annotated in the pancreas CT images, are used to train the segmentation network and construct the pancreas segmentation model, which improves the segmentation accuracy of the model. Finally, after the pancreas position is segmented with the pancreas segmentation model, the cone-beam CT projections of the pancreatic region are grouped by pancreas position and reconstructed group by group, which greatly improves the imaging quality of the pancreatic region in cone-beam CT image guidance during pancreatic radiotherapy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a four-dimensional cone-beam CT imaging method for a pancreatic region provided by an embodiment;
FIG. 2 is an exemplary projection view provided by an embodiment, wherein (a) is a virtual cone-beam CT projection view, (b) is a simulated cone-beam CT projection view, and (c) is a real cone-beam CT projection view;
fig. 3 is various images provided by the embodiment, wherein (a) is a virtual cone-beam CT projection map corresponding to a pancreas CT image, (b) is labeled pancreas position information, (c) is a simulated projection map corresponding to the pancreas CT image, and (d) is pancreas position information in the simulated projection map.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating the scope of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In order to achieve low-computation, efficient and high-quality four-dimensional cone-beam CT imaging of pancreatic region projection data, embodiments provide a four-dimensional cone-beam CT imaging method and apparatus for a pancreatic region. As shown in fig. 1, the four-dimensional cone-beam CT imaging method for pancreatic region provided by the embodiment includes the following steps:
step 1, acquiring a real cone beam CT projection image and a fan beam CT image of the thoracoabdominal part, and performing virtual orthographic projection on the fan beam CT image to obtain a virtual cone beam CT projection image.
In the embodiment, the real cone beam CT projection image and the fan beam CT image of the thoracoabdominal part are obtained from a hospital, wherein the real cone beam CT projection image is obtained by collecting real projection, and after the fan beam CT image of the thoracoabdominal part is obtained, the fan beam CT image is subjected to virtual orthographic projection to obtain a virtual cone beam CT projection image. When virtual orthographic projection is carried out, the adopted imaging parameters are consistent with those of a cone beam CT scanning machine used clinically.
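To make the virtual projection step concrete, the sketch below approximates forward projection with rotate-and-sum ray integrals (a parallel-beam simplification); the patent's method requires a true cone-beam projector whose geometry (source-axis and source-detector distances, detector size, view angles) matches the clinical CBCT scanner, so this is only an illustrative stand-in and the function names are hypothetical.

```python
# Simplified sketch of generating virtual projections from a fan-beam CT volume.
# NOTE: parallel-beam ray-sum approximation for illustration only; a faithful
# implementation must use the clinical cone-beam geometry.
import numpy as np
from scipy.ndimage import rotate

def virtual_forward_project(volume_hu, angles_deg, mu_water=0.02):
    """volume_hu: CT volume in HU, shape (z, y, x). Returns projections (n_angles, z, x)."""
    # Convert HU to (approximate) linear attenuation coefficients.
    mu = np.clip(mu_water * (1.0 + volume_hu / 1000.0), 0.0, None)
    projections = []
    for ang in angles_deg:
        # Rotate the volume about the patient (z) axis, then integrate along y.
        rotated = rotate(mu, ang, axes=(1, 2), reshape=False, order=1)
        projections.append(rotated.sum(axis=1))  # line integrals (log-domain projection)
    return np.stack(projections, axis=0)

# Example: 360 views over a full rotation
# proj_virtual = virtual_forward_project(fan_beam_ct, np.arange(0, 360, 1.0))
```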
Step 2: train a generative adversarial network with the virtual cone-beam CT projections and the real cone-beam CT projections as sample pairs to construct a simulated projection synthesis model capable of projection prediction.
In the embodiment, the virtual cone-beam CT projection and the real cone-beam CT projection are used as a sample pair to train the generative adversarial network, and the trained network yields a simulated projection synthesis model that realizes projection prediction. Training the adversarial network with the virtual cone-beam CT projection shown in fig. 2(a) and the real cone-beam CT projection shown in fig. 2(c), the resulting simulated projection synthesis model can generate the simulated cone-beam CT projection shown in fig. 2(b).
In an embodiment, the generative adversarial network comprises a generator and a discriminator. The generator performs a style migration and consists of an encoding part and a decoding part, used respectively to quantize and compress the information of the input image and to decode and recover the output image from the encoded information; it focuses on the parts common to the input and output images, such as structural features, while ignoring style differences between the two, such as texture. The discriminator serves as a continuously optimized evaluation criterion and judges the difference between the output image and the input image, where the input image comprises the virtual cone-beam CT projection and the real cone-beam CT projection.
In an embodiment, the virtual cone-beam CT projection in the sample pair is denoted Proj_f, the real cone-beam CT projection is denoted Proj, the generator is denoted G, the discriminator is denoted D, and the generator's encoding and decoding parts are denoted G_enc and G_dec respectively. The virtual cone-beam CT projection Proj_f is input into the generator G, and the output image generated by the encoding and decoding operations is the simulated cone-beam CT projection Proj_s, expressed as: Proj_s = G(Proj_f) = G_dec(G_enc(Proj_f)).
To make the generator ignore the style of the input image, attend only to its content, and avoid any change to the output other than style, the real cone-beam CT projection Proj is also input into the generator G, and the output image generated by the encoding and decoding operations is the identity projection Proj_i, expressed as Proj_i = G(Proj) = G_dec(G_enc(Proj)). Based on this, the adversarial loss of the generative adversarial network itself is expressed as:

$$\mathcal{L}_{GAN}(G,D)=\mathbb{E}_{y\sim Proj}\big[\log D(y)\big]+\mathbb{E}_{x\sim Proj_f}\big[\log\big(1-D(G(x))\big)\big]$$

where $\mathcal{L}_{GAN}(G,D)$ denotes the adversarial loss, G and D denote the generator and the discriminator respectively, Proj_f denotes the virtual cone-beam CT projection obtained by virtual forward projection of the fan-beam CT image, and Proj denotes the real cone-beam CT projection; D(Proj) denotes the discriminator D's judgment of Proj, G(Proj_f) denotes the output image generated by the generator G encoding and decoding Proj_f, and D(G(Proj_f)) denotes the discriminator D's judgment of the output image G(Proj_f); $\mathbb{E}_{y\sim Proj}$ denotes the expectation over y, which follows the distribution of the Proj data, and $\mathbb{E}_{x\sim Proj_f}$ denotes the expectation over x, which follows the distribution of the Proj_f data.
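The adversarial term above can be realized directly from its definition. The following is a minimal sketch, assuming PyTorch and a discriminator whose output is a probability in (0, 1); the function and variable names are illustrative, not taken from the patent.

```python
# Sketch of the adversarial losses for generator G and discriminator D.
import torch

def adversarial_losses(G, D, proj_f, proj_real, eps=1e-8):
    proj_s = G(proj_f)  # simulated CBCT projection generated from the virtual projection
    # Discriminator: maximize log D(real) + log(1 - D(fake))
    d_loss = -(torch.log(D(proj_real) + eps).mean()
               + torch.log(1.0 - D(proj_s.detach()) + eps).mean())
    # Generator: fool the discriminator on the simulated projection
    g_loss = -torch.log(D(proj_s) + eps).mean()
    return d_loss, g_loss
```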
In the embodiment, because the generative adversarial network is trained with unpaired projections, structural differences between the input and output images must be avoided: the similarity between the input image and the output image has to be considered so that their structures remain consistent. Contrastive learning is therefore adopted to evaluate, through the features, the mutual information of corresponding regions of the input and output images. This contrastive mechanism maximizes the mutual information of corresponding regions and minimizes that of non-corresponding regions, which keeps the structures of the input and output images consistent and ensures that, when a real projection is input into the network, the real projection is recovered without introducing extra information.
The key to contrastive learning is constructing positive and negative samples from which mutual information can be obtained. The generative adversarial network constructs positive and negative samples by treating image patches as the objects of comparison. A random image patch in the output image is taken as the anchor V. The patch at the location of the input image corresponding to the anchor V is treated as the positive sample and denoted V+. The remaining patches of the input image, whose locations differ from that of the anchor V, are treated as negative samples and denoted V-. The similarity between V and V+ is labeled 1, the similarity between V and V- is labeled 0, and a cross-entropy loss is computed, expressed as:

$$\ell\left(V,V^{+},V^{-}\right)=-\log\left[\frac{\exp\left(V\cdot V^{+}/\alpha\right)}{\exp\left(V\cdot V^{+}/\alpha\right)+\sum_{n=1}^{N}\exp\left(V\cdot V^{-}_{n}/\alpha\right)}\right]$$

where N is the number of negative samples and α is a temperature hyper-parameter. Minimizing this expression maximizes the mutual information between V and V+ and minimizes the mutual information between V and V-, so that the generated output image is structurally consistent with the input image.
Besides the similarity between the input and output images themselves, the training of the generative adversarial network also considers the similarity of their convolutional features. The convolutional features are produced by the encoder G_enc, in which different spatial positions of different layers represent different image patches. Both the input image and the output image are encoded by G_enc, so high-dimensional feature vectors can be obtained through G_enc inside the generator without introducing an auxiliary network. Training in this way gives the input and output images the same structure and guarantees the structural correspondence between image patches at corresponding positions.
In one embodiment, when contrast loss is constructed, for an input feature map and an output feature map output by each layer of a coding part corresponding to an input image and an output image, a random image block of the output feature map is taken as an anchor point, a position corresponding to the random image block in the input feature map is taken as a positive sample corresponding to the anchor point, a position corresponding to a non-random image block in the input feature map is taken as a negative sample corresponding to the anchor point, and similarity between the input image and the output image is measured by calculating similarity between the positive sample and the anchor point and similarity between the negative sample and the anchor point, so that the input image and the output image are guaranteed to be consistent in structure. Specifically, the contrast loss includes:
$$\mathcal{L}_{contrast}^{Proj_f}=\mathbb{E}_{x\sim Proj_f}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{z}_{l}^{s},\,z_{l}^{s},\,z_{l}^{S\setminus s}\right)$$

where Proj_f denotes the virtual cone-beam CT projection image obtained by virtual forward projection of the fan-beam CT image, $\mathcal{L}_{contrast}^{Proj_f}$ denotes the contrast loss corresponding to Proj_f as the input image, G denotes the generator, l denotes the network-layer index of the generator encoding part, L denotes the total number of network layers of the encoding part, s denotes the feature-point index, and S_l denotes the total number of feature points in the random image blocks of layer l; $\hat{z}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output feature map corresponding to Proj_f, used as the anchor point; $z_{l}^{s}$ denotes the feature at the s-th position of the layer-l input feature map corresponding to Proj_f, used as the positive sample of the anchor point; $z_{l}^{S\setminus s}$ denotes the features of the layer-l input feature map corresponding to Proj_f excluding the s-th position, used as the negative samples of the anchor point; and $\ell(\cdot)$ denotes the cross-entropy loss between the anchor-positive similarity and the anchor-negative similarities, expressed as:

$$\ell\left(\hat{z}_{l}^{s},\,z_{l}^{s},\,z_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{s}/\alpha\right)}{\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{n}/\alpha\right)}\right]$$

where $\hat{z}_{l}^{s}\cdot z_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, $\hat{z}_{l}^{s}\cdot z_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity, n denotes the index of the negative samples, N denotes the total number of negative samples, and α is a hyper-parameter;

$$\mathcal{L}_{contrast}^{Proj}=\mathbb{E}_{y\sim Proj}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{w}_{l}^{s},\,w_{l}^{s},\,w_{l}^{S\setminus s}\right)$$

where Proj denotes the real cone-beam CT projection image, $\mathcal{L}_{contrast}^{Proj}$ denotes the contrast loss corresponding to Proj as the input image, $\hat{w}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output feature map corresponding to Proj, used as the anchor point, $w_{l}^{s}$ denotes the feature at the s-th position of the layer-l input feature map corresponding to Proj, used as the positive sample of the anchor point, $w_{l}^{S\setminus s}$ denotes the features of the layer-l input feature map corresponding to Proj excluding the s-th position, used as the negative samples of the anchor point, and $\ell(\cdot)$ is expressed as:

$$\ell\left(\hat{w}_{l}^{s},\,w_{l}^{s},\,w_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{s}/\alpha\right)}{\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{n}/\alpha\right)}\right]$$

where $\hat{w}_{l}^{s}\cdot w_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, and $\hat{w}_{l}^{s}\cdot w_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity.
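As a concrete illustration of this patch-wise contrastive term, the following sketch computes an InfoNCE-style loss over sampled feature locations, assuming PyTorch; the tensor layout, the sampling of S locations, and the temperature value are assumptions made for the example, not specifics given in the patent.

```python
# feats_out: encoder features of the OUTPUT image at one layer, shape (B, C, S)
# feats_in : encoder features of the INPUT image at the same layer, shape (B, C, S)
# Each of the S sampled locations is an anchor; its positive is the same location in
# feats_in and its negatives are the remaining locations.
import torch
import torch.nn.functional as F

def patch_contrastive_loss(feats_out, feats_in, alpha=0.07):
    B, C, S = feats_out.shape
    q = F.normalize(feats_out, dim=1)                  # anchors (output features)
    k = F.normalize(feats_in, dim=1)                   # positives / negatives (input features)
    logits = torch.bmm(q.transpose(1, 2), k) / alpha   # (B, S, S) anchor-to-location similarities
    targets = torch.arange(S, device=logits.device).expand(B, S)
    # Diagonal entries are positives; off-diagonal entries act as negatives.
    return F.cross_entropy(logits.reshape(B * S, S), targets.reshape(B * S))

# The full contrast loss sums this term over the selected encoder layers l = 1..L,
# once with Proj_f as the input image and once with Proj (the identity branch).
```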
In another embodiment, when contrast loss is constructed, for an input feature map and an output feature map output by each layer of a coding part corresponding to an input image and an output image, respectively performing feature extraction on the input feature map and the output feature map by using a multi-layer perceptron, outputting the input perception feature map and the output perception feature map, taking a random image block of the output perception feature map as an anchor point, taking a position in the input perception feature map corresponding to the random image block as a positive sample corresponding to the anchor point, taking a position in the input perception feature map corresponding to a non-random image block as a negative sample corresponding to the anchor point, and measuring similarity between the input image and the output image by calculating similarity between the positive sample and the anchor point and similarity between the negative sample and the anchor point, so as to ensure that the input image and the output image have consistent structures. Specifically, the contrast loss includes:
$$\mathcal{L}_{contrast}^{Proj_f}=\mathbb{E}_{x\sim Proj_f}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{h}_{l}^{s},\,h_{l}^{s},\,h_{l}^{S\setminus s}\right)$$

where Proj_f denotes the virtual cone-beam CT projection image obtained by virtual forward projection of the fan-beam CT image, $\mathcal{L}_{contrast}^{Proj_f}$ denotes the contrast loss corresponding to Proj_f as the input image, l denotes the network-layer index of the generator encoding part, L denotes the total number of network layers of the encoding part, G denotes the generator, H denotes the multi-layer perceptron, s denotes the feature-point index, and S_l denotes the total number of feature points in the random image blocks of layer l; $\hat{h}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output perceptual feature map corresponding to Proj_f (obtained by applying H to the layer-l encoder features), used as the anchor point; $h_{l}^{s}$ denotes the feature at the s-th position of the layer-l input perceptual feature map corresponding to Proj_f, used as the positive sample of the anchor point; $h_{l}^{S\setminus s}$ denotes the features of the layer-l input perceptual feature map corresponding to Proj_f excluding the s-th position, used as the negative samples of the anchor point; and $\ell(\cdot)$ denotes the cross-entropy loss between the anchor-positive similarity and the anchor-negative similarities, expressed as:

$$\ell\left(\hat{h}_{l}^{s},\,h_{l}^{s},\,h_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{h}_{l}^{s}\cdot h_{l}^{s}/\alpha\right)}{\exp\left(\hat{h}_{l}^{s}\cdot h_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{h}_{l}^{s}\cdot h_{l}^{n}/\alpha\right)}\right]$$

where $\hat{h}_{l}^{s}\cdot h_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, $\hat{h}_{l}^{s}\cdot h_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity, n denotes the index of the negative samples, N denotes the total number of negative samples, and α is a hyper-parameter;

$$\mathcal{L}_{contrast}^{Proj}=\mathbb{E}_{y\sim Proj}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{g}_{l}^{s},\,g_{l}^{s},\,g_{l}^{S\setminus s}\right)$$

where Proj denotes the real cone-beam CT projection image, $\mathcal{L}_{contrast}^{Proj}$ denotes the contrast loss corresponding to Proj as the input image, $\hat{g}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output perceptual feature map corresponding to Proj, used as the anchor point, $g_{l}^{s}$ denotes the feature at the s-th position of the layer-l input perceptual feature map corresponding to Proj, used as the positive sample of the anchor point, $g_{l}^{S\setminus s}$ denotes the features of the layer-l input perceptual feature map corresponding to Proj excluding the s-th position, used as the negative samples of the anchor point, and $\ell(\cdot)$ is expressed as:

$$\ell\left(\hat{g}_{l}^{s},\,g_{l}^{s},\,g_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{g}_{l}^{s}\cdot g_{l}^{s}/\alpha\right)}{\exp\left(\hat{g}_{l}^{s}\cdot g_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{g}_{l}^{s}\cdot g_{l}^{n}/\alpha\right)}\right]$$

where $\hat{g}_{l}^{s}\cdot g_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, and $\hat{g}_{l}^{s}\cdot g_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity.
The above-mentioned contrast loss allows the generator to ignore the style of the input image, and to focus on the content of the input image only, and leave the image unchanged as much as possible.
The total loss function $\mathcal{L}_{total}$ used to train the generative adversarial network is:

$$\mathcal{L}_{total}=\mathcal{L}_{GAN}(G,D)+\lambda_{x}\,\mathcal{L}_{contrast}^{Proj_f}+\lambda_{y}\,\mathcal{L}_{contrast}^{Proj}$$

where the two contrast terms are computed either from the encoder feature maps directly or from the perceptual feature maps output by the multi-layer perceptron, and λ_x and λ_y are weights. After training is finished, the generator with optimized parameters is extracted as the simulated projection synthesis model, which is used to generate the data set for training the projection-domain pancreas segmentation model.
Step 3: acquire pancreas CT images of the pancreatic region, annotate the pancreas, binarize the pancreas CT images annotated with the pancreas position, and perform virtual forward projection to obtain the virtual cone-beam CT projections corresponding to the pancreas CT images and the pancreas annotation information.
In an embodiment, the pancreatic region is annotated on clinically collected pancreas CT images, or the CT images and pancreas position annotations of a public pancreas segmentation CT dataset are used. The pancreas CT image annotated with the pancreas position is binarized and then virtually forward-projected; that is, the pancreas CT image and the pancreas position are each forward-projected to obtain the corresponding virtual cone-beam CT projection and the position annotation in the projection, as exemplified in fig. 3(a) and (b). In the binarization, only the pixels at the pancreas position in the pancreas CT image are set to 1. The binary volume containing the pancreas position generates, likewise by virtual forward projection, an image of the pancreas position in the projection, i.e. the position annotation information.
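To illustrate how the projection-domain pancreas label can be derived from the binarized annotation, here is a minimal sketch using the same simplified parallel-beam ray-sum approximation as the earlier projection sketch; a faithful implementation would project the mask with the same cone-beam geometry as the image projections.

```python
# Sketch: forward-project a binary pancreas mask and threshold to a per-view 2D label.
import numpy as np
from scipy.ndimage import rotate

def project_binary_mask(mask, angles_deg):
    """mask: binary pancreas volume (z, y, x). Returns per-view labels (n_angles, z, x)."""
    labels = []
    for ang in angles_deg:
        rotated = rotate(mask.astype(np.float32), ang, axes=(1, 2), reshape=False, order=1)
        ray_sum = rotated.sum(axis=1)                      # line integrals through the mask
        labels.append((ray_sum > 0.5).astype(np.uint8))    # pixels whose rays cross the pancreas
    return np.stack(labels, axis=0)
```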
Step 4: perform projection prediction on the virtual cone-beam CT projections corresponding to the pancreas CT images using the simulated projection synthesis model to obtain the corresponding simulated cone-beam CT projections.
The constructed simulated projection synthesis model realizes the conversion from virtual cone-beam CT projections to simulated cone-beam CT projections. Specifically, the virtual cone-beam CT projection corresponding to a pancreas CT image is input into the simulated projection synthesis model and processed to yield a simulated cone-beam CT projection closer to the real projection, while the pancreas position annotation in the corresponding projection is obtained by virtual forward projection of the annotation in the pancreas CT image, as exemplified in fig. 3(c) and (d). The generated simulated cone-beam CT projections and the pancreas annotations are used as the data set for training the segmentation network to construct the pancreas segmentation model.
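A minimal sketch of assembling this training set is shown below, assuming PyTorch and the hypothetical helpers used in the earlier sketches; names and tensor shapes are illustrative.

```python
# Sketch: run the trained synthesis model over the virtual projections of annotated
# pancreas CT images to obtain (simulated projection, projected label) training pairs.
import torch

@torch.no_grad()
def build_segmentation_dataset(generator, virtual_projs, label_projs):
    """virtual_projs: (N, 1, H, W) virtual CBCT projections of annotated pancreas CT images.
    label_projs:   (N, 1, H, W) corresponding projected pancreas labels."""
    generator.eval()
    simulated = generator(virtual_projs)   # projection prediction: virtual -> simulated CBCT style
    return simulated, label_projs
```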
Step 5: train a segmentation network with an encoder-decoder structure, using the simulated cone-beam CT projections and the pancreas annotation information as samples, to construct a pancreas segmentation model capable of pancreas tracking.
In the embodiment, the segmentation network uses an encoder-decoder structure, preferably a U-net. During training, the simulated cone-beam CT projection is used as the input of the segmentation network and the pancreas annotation as the label: the encoding module of the U-net quantizes and compresses the information of the projection image, and the decoding module recovers a probability map of each position belonging to the pancreas and converts it into a binary image representing the pancreas position. The degree of coincidence between the position predicted by the segmentation network and the labeled position is used as the loss function to optimize the network parameters; after optimization, the segmentation network with the determined parameters serves as the pancreas segmentation model for identifying the pancreas position in cone-beam CT projections.
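The patent describes the loss only as the degree of coincidence between the predicted and labeled positions; a Dice-style overlap loss is one common way to realize such a criterion and is sketched here under that assumption (PyTorch).

```python
# Sketch of an overlap ("coincidence degree") loss; a Dice loss is one plausible choice.
import torch

def dice_loss(pred_prob, target, eps=1e-6):
    """pred_prob: sigmoid output in [0, 1]; target: binary pancreas label, same shape."""
    intersection = (pred_prob * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred_prob.sum() + target.sum() + eps)
```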
Specifically, the segmentation network is structured as follows. First, features of the input simulated cone-beam CT projection are extracted with 3 × 3 convolution kernels, batch normalization (BatchNorm) and rectified linear units to obtain a feature map; this operation is repeated twice before each down-sampling or up-sampling. Down-sampling is then performed with 2 × 2 max-pooling layers that halve the feature map, for a total of four down-sampling operations. The feature map is subsequently up-sampled by bilinear interpolation, doubling its size each time, and the original image size is gradually recovered through four up-samplings. Cross-layer (skip) connections link the encoder and decoder, splicing feature maps of the same scale so that the rich spatial detail in the encoder and the semantic information in the decoder are fully exploited. Notably, before each down-sampling and up-sampling step, the feature map is recalibrated with attention weights in both the spatial and channel dimensions to improve segmentation performance: a Sigmoid function scales the values of the feature map to 0-1 to produce a spatial attention weight, which is multiplied with the original feature map to obtain a spatially enhanced feature map; meanwhile, a global pooling operation compresses the spatial information of the feature map to produce channel attention weights, which are multiplied with the original input feature map to obtain a channel-enhanced feature map; the spatially enhanced and channel-enhanced feature maps are then fused to obtain a feature map enhanced in both space and channel. Finally, the class probability of each pixel is predicted from the resulting feature map; since only the pancreas and the background need to be separated, pixels with probability ≥ 0.5 are set to 1 as the pancreatic region and pixels with probability < 0.5 are set to 0 as the background region.
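The channel-and-spatial recalibration described above resembles a concurrent squeeze-and-excitation block; the following PyTorch sketch is one plausible realization, with layer sizes chosen for illustration rather than taken from the patent.

```python
# Sketch of a block that enhances a feature map with spatial and channel attention,
# then fuses the two enhanced maps, as described in the text above.
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Spatial attention: 1x1 conv -> sigmoid gives a per-pixel weight map in [0, 1].
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        # Channel attention: global pooling -> bottleneck -> sigmoid gives per-channel weights.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):
        x_spatial = x * self.spatial(x)   # spatially enhanced feature map
        x_channel = x * self.channel(x)   # channel-enhanced feature map
        return x_spatial + x_channel      # fuse both enhancements
```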
Step 6: perform pancreas tracking on the input cone-beam CT projections of the pancreatic region using the pancreas segmentation model to obtain the pancreas position.
In the embodiment, the cone-beam CT projections acquired by real-time scanning, in which the pancreas is to be tracked, are input into the pancreas segmentation model, which outputs the predicted pancreas position. Because the input is a sequence of cone-beam CT projections in which different projections correspond to different times, the identified pancreas positions appear at different locations according to the respiratory state at each time.
Step 7: group the cone-beam CT projections of the pancreatic region according to the pancreas positions, which reflect different respiratory states, and perform four-dimensional cone-beam CT imaging reconstruction on the grouped projections.
In an embodiment, the pancreas position predicted from each cone-beam CT projection by the pancreas segmentation model is a region; the centroid of each pancreas region is computed and taken as the final pancreas position of that projection. The cone-beam CT projection sequence is divided into groups according to the final pancreas positions, typically into 10 groups corresponding to different respiratory states. Each group of projections is then reconstructed with the FDK algorithm; grouped reconstruction reduces motion artifacts, and the reconstruction results are finally combined into a four-dimensional cone-beam CT image of the pancreatic region.
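One plausible way to implement the centroid-based respiratory grouping is sketched below with NumPy; the amplitude binning into 10 groups and the handling of views without a detection are illustrative choices, and the per-group FDK reconstruction itself would be done with a CBCT reconstruction toolkit.

```python
# Sketch: group projection views by the superior-inferior centroid of the tracked pancreas.
import numpy as np

def group_projections_by_pancreas(masks, n_groups=10):
    """masks: (n_views, H, W) binary pancreas masks predicted per projection view.
    Returns a list of view-index arrays, one per respiratory group (assumes at least
    some views contain a detection)."""
    # Superior-inferior (row) centroid of the pancreas region in each projection.
    centroids = np.array([np.argwhere(m).mean(axis=0)[0] if m.any() else np.nan for m in masks])
    # Fill missing detections by interpolation over the view index.
    idx = np.arange(len(centroids))
    valid = ~np.isnan(centroids)
    centroids = np.interp(idx, idx[valid], centroids[valid])
    # Bin views into n_groups amplitude-based respiratory states.
    edges = np.quantile(centroids, np.linspace(0, 1, n_groups + 1))
    bins = np.clip(np.digitize(centroids, edges[1:-1]), 0, n_groups - 1)
    return [np.where(bins == g)[0] for g in range(n_groups)]

# Each group of projections is then reconstructed separately with an FDK algorithm to
# form one respiratory phase of the four-dimensional cone-beam CT image.
```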
The four-dimensional cone beam CT imaging method can improve the dynamic imaging performance of a pancreatic region, can be applied to image guidance during pancreatic cancer radiotherapy, and obtains more accurate pancreatic positioning.
Embodiments further provide a four-dimensional cone-beam CT imaging apparatus for a pancreatic region, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the memory stores a pancreas segmentation model constructed by the four-dimensional cone-beam CT imaging method, and the processor implements the following steps when executing the computer program:
acquiring a cone beam CT projection image of a pancreas region to be detected;
carrying out pancreas tracking on the input cone beam CT projection image of the pancreas region by using a pancreas segmentation model to obtain a pancreas position;
and grouping the cone-beam CT projection images of the pancreatic region according to the pancreas positions, and performing four-dimensional cone-beam CT imaging reconstruction on the grouped cone-beam CT projection images.
In practical applications, the memory may be a volatile memory at the near end, such as RAM, a non-volatile memory, such as ROM, FLASH, a floppy disk, a mechanical hard disk, etc., or a remote storage cloud. The processor may be a Central Processing Unit (CPU), a microprocessor unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA), i.e., the steps of the four-dimensional cone-beam CT imaging method for the pancreatic region may be implemented by these processors.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A four-dimensional cone-beam CT imaging method for a pancreatic region, comprising the steps of:
acquiring a real cone-beam CT projection image and a fan-beam CT image of the chest and abdomen, and performing virtual forward projection on the fan-beam CT image to obtain a virtual cone-beam CT projection image;
training a generative adversarial network by taking the virtual cone-beam CT projection image and the real cone-beam CT projection image as a sample pair, so as to construct a simulated projection synthesis model capable of realizing projection prediction;
acquiring a pancreas CT image of the pancreatic region, labeling the pancreas, binarizing the pancreas CT image labeled with the pancreas position, and performing virtual forward projection to obtain the virtual cone-beam CT projection image and the pancreas labeling information corresponding to the pancreas CT image;
carrying out projection prediction on the virtual cone beam CT projection image corresponding to the pancreas CT image by using a simulated projection synthesis model so as to obtain a simulated cone beam CT projection image corresponding to the pancreas CT image;
training a segmentation network adopting a coding and decoding structure by taking the simulated cone beam CT projection drawing and the pancreas labeling information as samples so as to construct a pancreas segmentation model capable of realizing pancreas tracking;
carrying out pancreas tracking on the input cone beam CT projection image of the pancreas region by using a pancreas segmentation model to obtain a pancreas position;
grouping the cone-beam CT projection images of the pancreatic region according to the pancreas positions, which reflect different respiratory states, and performing four-dimensional cone-beam CT imaging reconstruction on the grouped cone-beam CT projection images.
2. The method of claim 1, wherein the generative adversarial network comprises a generator and a discriminator, the generator consisting of an encoding part and a decoding part used respectively to quantize and compress the information of the input image and to decode and recover an output image from the encoded information, the input image comprising the virtual cone-beam CT projection image and the real cone-beam CT projection image, and the discriminator being used to judge the difference between the output image and the input image;
when the generative adversarial network is trained, the loss function used comprises the adversarial loss of the generative adversarial network itself, and also a contrast loss used to measure the similarity between the input image and the output image and to ensure that the input image and the output image are structurally consistent;
and after training is finished, the generator with optimized parameters is extracted as the simulated projection synthesis model.
3. The four-dimensional cone-beam CT imaging method for the pancreatic region of claim 2, wherein the adversarial loss is:

$$\mathcal{L}_{GAN}(G,D)=\mathbb{E}_{y\sim Proj}\big[\log D(y)\big]+\mathbb{E}_{x\sim Proj_f}\big[\log\big(1-D(G(x))\big)\big]$$

wherein $\mathcal{L}_{GAN}(G,D)$ denotes the adversarial loss, G and D denote the generator and the discriminator respectively, Proj_f denotes the virtual cone-beam CT projection image obtained by virtual forward projection of the fan-beam CT image, and Proj denotes the real cone-beam CT projection image; D(Proj) denotes the result of the discriminator D judging Proj, G(Proj_f) denotes the output image generated by the generator G encoding and decoding Proj_f, and D(G(Proj_f)) denotes the result of the discriminator D judging the output image G(Proj_f); $\mathbb{E}_{y\sim Proj}$ denotes the expectation over y, which follows the distribution of the Proj data, and $\mathbb{E}_{x\sim Proj_f}$ denotes the expectation over x, which follows the distribution of the Proj_f data.
4. The four-dimensional cone beam CT imaging method for the pancreatic region according to claim 2, wherein when constructing the contrast loss, for the input feature map and the output feature map output by each layer of the coding part corresponding to the input image and the output image, the random image block of the output feature map is taken as an anchor point, the position corresponding to the random image block in the input feature map is taken as a positive sample corresponding to the anchor point, the position corresponding to the non-random image block in the input feature map is taken as a negative sample corresponding to the anchor point, and the similarity between the input image and the output image is measured by calculating the similarity between the positive sample and the anchor point and the similarity between the negative sample and the anchor point, so as to ensure that the input image and the output image have the same structure.
5. The method of four-dimensional cone-beam CT imaging for a pancreatic region of claim 4, wherein said contrast loss comprises:
$$\mathcal{L}_{contrast}^{Proj_f}=\mathbb{E}_{x\sim Proj_f}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{z}_{l}^{s},\,z_{l}^{s},\,z_{l}^{S\setminus s}\right)$$

wherein Proj_f denotes the virtual cone-beam CT projection image obtained by virtual forward projection of the fan-beam CT image, $\mathcal{L}_{contrast}^{Proj_f}$ denotes the contrast loss corresponding to Proj_f as the input image, G denotes the generator, l denotes the network-layer index of the generator encoding part, L denotes the total number of network layers of the encoding part, s denotes the feature-point index, and S_l denotes the total number of feature points in the random image blocks of layer l; $\hat{z}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output feature map corresponding to Proj_f, used as the anchor point; $z_{l}^{s}$ denotes the feature at the s-th position of the layer-l input feature map corresponding to Proj_f, used as the positive sample of the anchor point; $z_{l}^{S\setminus s}$ denotes the features of the layer-l input feature map corresponding to Proj_f excluding the s-th position, used as the negative samples of the anchor point; and $\ell(\cdot)$ denotes the cross-entropy loss between the anchor-positive similarity and the anchor-negative similarities, expressed as:

$$\ell\left(\hat{z}_{l}^{s},\,z_{l}^{s},\,z_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{s}/\alpha\right)}{\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{z}_{l}^{s}\cdot z_{l}^{n}/\alpha\right)}\right]$$

wherein $\hat{z}_{l}^{s}\cdot z_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, $\hat{z}_{l}^{s}\cdot z_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity, n denotes the index of the negative samples, N denotes the total number of negative samples, and α is a hyper-parameter;

$$\mathcal{L}_{contrast}^{Proj}=\mathbb{E}_{y\sim Proj}\sum_{l=1}^{L}\sum_{s=1}^{S_l}\ell\left(\hat{w}_{l}^{s},\,w_{l}^{s},\,w_{l}^{S\setminus s}\right)$$

wherein Proj denotes the real cone-beam CT projection image, $\mathcal{L}_{contrast}^{Proj}$ denotes the contrast loss corresponding to Proj as the input image, $\hat{w}_{l}^{s}$ denotes the feature at the s-th position of the layer-l output feature map corresponding to Proj, used as the anchor point, $w_{l}^{s}$ denotes the feature at the s-th position of the layer-l input feature map corresponding to Proj, used as the positive sample of the anchor point, $w_{l}^{S\setminus s}$ denotes the features of the layer-l input feature map corresponding to Proj excluding the s-th position, used as the negative samples of the anchor point, and $\ell(\cdot)$ is expressed as:

$$\ell\left(\hat{w}_{l}^{s},\,w_{l}^{s},\,w_{l}^{S\setminus s}\right)=-\log\left[\frac{\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{s}/\alpha\right)}{\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{s}/\alpha\right)+\sum_{n=1}^{N}\exp\left(\hat{w}_{l}^{s}\cdot w_{l}^{n}/\alpha\right)}\right]$$

wherein $\hat{w}_{l}^{s}\cdot w_{l}^{s}$ denotes the mutual information between the positive sample and the anchor point, used as their similarity, and $\hat{w}_{l}^{s}\cdot w_{l}^{n}$ denotes the mutual information between the n-th negative sample and the anchor point, used as their similarity.
6. The four-dimensional cone-beam CT imaging method for the pancreatic region of claim 2, wherein, when the contrast loss is constructed, for the input feature map and the output feature map output by each layer of the encoding part for the input image and the output image, a multi-layer perceptron is used to perform feature extraction on the input feature map and the output feature map respectively, outputting an input perceptual feature map and an output perceptual feature map; a random image block of the output perceptual feature map is taken as the anchor point, the position in the input perceptual feature map corresponding to the random image block is taken as the positive sample of the anchor point, and the positions in the input perceptual feature map not corresponding to the random image block are taken as the negative samples of the anchor point; the similarity between the input image and the output image is measured by computing the similarity between the positive sample and the anchor point and the similarity between the negative samples and the anchor point, so as to ensure that the input image and the output image are structurally consistent.
7. The method of four-dimensional cone-beam CT imaging for a pancreatic region of claim 6, wherein said contrast loss comprises:
$$L_{contrast}(Proj_{f}) = \sum_{l=1}^{L}\sum_{s=1}^{S_{l}}\ell\!\big(\hat h^{\,l}_{s}(Proj_{f}),\, h^{\,l}_{s}(Proj_{f}),\, h^{\,l}_{S\setminus s}(Proj_{f})\big)$$

wherein Proj_f denotes the virtual cone-beam CT projection image obtained by virtual forward projection of the fan-beam CT image; L_contrast(Proj_f) denotes the corresponding contrast loss when Proj_f is taken as the input image; l is the network-layer index of the encoding part of the generator and L is the total number of network layers of the encoding part of the generator; G denotes the generator and H denotes the multilayer perceptron; s is the feature-point index and S_l is the total number of feature points in the random image block of the l-th layer; ĥ^l_s(Proj_f) denotes the feature at the s-th position of the output perceptual feature map of the l-th layer corresponding to Proj_f and serves as the anchor point; h^l_s(Proj_f) denotes the feature at the s-th position of the input perceptual feature map of the l-th layer corresponding to Proj_f and serves as the positive sample for the anchor point; h^l_{S∖s}(Proj_f) denotes the features at positions other than the s-th position of the input perceptual feature map of the l-th layer corresponding to Proj_f and serve as the negative samples for the anchor point; and ℓ(·) denotes the cross-entropy loss between the similarity of the positive sample to the anchor point and the similarity of the negative samples to the anchor point, expressed as:

$$\ell(\hat h,\, h,\, h^{-}) = -\log\frac{\exp\!\big(I(h,\hat h)/\alpha\big)}{\exp\!\big(I(h,\hat h)/\alpha\big)+\sum_{n=1}^{N}\exp\!\big(I(h^{-}_{n},\hat h)/\alpha\big)}$$

wherein I(h, ĥ) denotes the mutual information between the positive sample h and the anchor point ĥ, used as their similarity; I(h⁻_n, ĥ) denotes the mutual information between the n-th negative sample h⁻_n and the anchor point ĥ, used as their similarity; and α is a hyperparameter;

$$L_{contrast}(Proj) = \sum_{l=1}^{L}\sum_{s=1}^{S_{l}}\ell\!\big(\hat h^{\,l}_{s}(Proj),\, h^{\,l}_{s}(Proj),\, h^{\,l}_{S\setminus s}(Proj)\big)$$

wherein Proj denotes the real cone-beam CT projection image; L_contrast(Proj) denotes the corresponding contrast loss when Proj is taken as the input image; ĥ^l_s(Proj) denotes the feature at the s-th position of the output perceptual feature map of the l-th layer corresponding to Proj and serves as the anchor point; h^l_s(Proj) denotes the feature at the s-th position of the input perceptual feature map of the l-th layer corresponding to Proj and serves as the positive sample for the anchor point; h^l_{S∖s}(Proj) denotes the features at positions other than the s-th position of the input perceptual feature map of the l-th layer corresponding to Proj and serve as the negative samples for the anchor point; and ℓ(·) takes the same form as above, with the mutual information between the positive sample and the anchor point and between each negative sample and the anchor point used as the similarities.
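To illustrate how claims 6 and 7 fit together, the sketch below assembles the layer-wise contrast loss: encoder features of the input and output images are projected by a multilayer perceptron, a random image block of positions is sampled per layer, and each anchor from the output perceptual feature map is contrasted against the matching position (positive) and the remaining sampled positions (negatives) of the input perceptual feature map, reusing contrast_term from the earlier sketch. The helpers encoder_features and mlp_heads are hypothetical stand-ins for the generator's encoding part and the perceptron H, not the patent's API.

```python
# Sketch of the layer-wise contrast loss: anchors come from the output perceptual
# feature map, positives from the same positions of the input perceptual feature map,
# negatives from the other sampled positions. `encoder_features` and `mlp_heads`
# are hypothetical helpers standing in for the generator encoder and the MLP H.
import torch

def patch_contrast_loss(encoder_features, mlp_heads, proj_in, proj_out,
                        num_patches=256, alpha=0.07):
    loss = 0.0
    feats_in = encoder_features(proj_in)    # list of (C_l, H_l, W_l) maps, one per encoder layer l
    feats_out = encoder_features(proj_out)  # same layers computed on the generator's output image
    for l, (f_in, f_out) in enumerate(zip(feats_in, feats_out)):
        c, h, w = f_in.shape
        f_in = mlp_heads[l](f_in.reshape(c, h * w).t())    # (H*W, C') input perceptual features
        f_out = mlp_heads[l](f_out.reshape(c, h * w).t())  # (H*W, C') output perceptual features
        idx = torch.randperm(h * w)[:num_patches]          # random image block (S_l positions)
        for s in idx:
            anchor = f_out[s]                              # anchor from the output perceptual map
            positive = f_in[s]                             # same position in the input perceptual map
            negatives = f_in[idx[idx != s]]                # other sampled positions as negatives
            loss = loss + contrast_term(anchor, positive, negatives, alpha)
    return loss / max(len(feats_in), 1)
```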
8. The four-dimensional cone-beam CT imaging method for the pancreatic region according to claim 1, wherein the segmentation network uses a U-net, and when the segmentation network is trained with samples comprising simulated cone-beam CT projection images and pancreas labeling information, the simulated cone-beam CT projection image is used as the input of the segmentation network, the pancreas labeling information is used as the label, and the degree of coincidence between the predicted position output by the segmentation network and the labeled position is used as the loss function to optimize the network parameters; after the optimization is completed, the segmentation network with the determined network parameters is taken as the pancreas segmentation model.
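As one concrete reading of claim 8, the sketch below trains a U-net on simulated cone-beam CT projections with the pancreas annotation as the label, using a soft Dice coefficient as the "degree of coincidence" between the predicted and labeled positions; Dice is an assumed choice of overlap measure, and unet/optimizer are ordinary PyTorch objects supplied by the caller.

```python
# Sketch of one training step for the pancreas segmentation network.
# Assumption: "degree of coincidence" is read as a soft Dice coefficient.
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred, target: (B, 1, H, W); pred already passed through a sigmoid."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def train_step(unet, optimizer, projection, pancreas_mask):
    optimizer.zero_grad()
    pred = torch.sigmoid(unet(projection))   # simulated CBCT projection as input
    loss = dice_loss(pred, pancreas_mask)    # pancreas labeling information as the label
    loss.backward()
    optimizer.step()
    return loss.item()
```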
9. A four-dimensional cone-beam CT imaging apparatus for a pancreatic region, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the memory stores a pancreas segmentation model constructed by the four-dimensional cone-beam CT imaging method according to any one of claims 1 to 8, and the processor executes the computer program to perform the following steps:
acquiring a cone beam CT projection image of a pancreas region to be detected;
carrying out pancreas tracking on the input cone beam CT projection image of the pancreas region by using a pancreas segmentation model to obtain a pancreas position;
grouping the cone-beam CT projection images of the pancreatic region according to the pancreas positions, and performing four-dimensional cone-beam CT imaging reconstruction on the grouped cone-beam CT projection images.
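A possible realisation of the grouping and reconstruction steps is sketched below: the tracked pancreas position of each projection (here its superior-inferior coordinate) is quantile-binned into respiratory states, and each bin is reconstructed separately to give one phase of the four-dimensional volume. The amplitude-binning strategy and the reconstruct_fdk placeholder are assumptions for illustration only, not the patent's method.

```python
# Sketch of grouping projections by tracked pancreas position and reconstructing
# each group. `reconstruct_fdk` is a placeholder for any cone-beam reconstruction
# routine; pancreas_positions is assumed to hold (lateral, superior-inferior) pairs.
import numpy as np

def group_and_reconstruct(projections, angles, pancreas_positions, num_bins=4,
                          reconstruct_fdk=None):
    si = np.asarray([p[1] for p in pancreas_positions])       # superior-inferior coordinate
    edges = np.quantile(si, np.linspace(0.0, 1.0, num_bins + 1))
    bins = np.clip(np.digitize(si, edges[1:-1]), 0, num_bins - 1)

    volumes = []
    for b in range(num_bins):
        sel = np.where(bins == b)[0]                           # projections sharing a breathing state
        volumes.append(reconstruct_fdk([projections[i] for i in sel],
                                       [angles[i] for i in sel]))
    return volumes                                             # one 3-D volume per respiratory phase
```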
CN202210557390.7A 2022-05-19 2022-05-19 Four-dimensional cone-beam CT imaging method and device for pancreatic region Pending CN115100306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557390.7A CN115100306A (en) 2022-05-19 2022-05-19 Four-dimensional cone-beam CT imaging method and device for pancreatic region

Publications (1)

Publication Number Publication Date
CN115100306A true CN115100306A (en) 2022-09-23

Family

ID=83289133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557390.7A Pending CN115100306A (en) 2022-05-19 2022-05-19 Four-dimensional cone-beam CT imaging method and device for pancreatic region

Country Status (1)

Country Link
CN (1) CN115100306A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372565A (en) * 2023-12-06 2024-01-09 合肥锐视医疗科技有限公司 Respiration gating CT imaging method based on neural network time phase discrimination
CN117372565B (en) * 2023-12-06 2024-03-15 合肥锐视医疗科技有限公司 Respiration gating CT imaging method based on neural network time phase discrimination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination