CN115496659A - Three-dimensional CT image reconstruction method and device based on single projection data - Google Patents


Info

Publication number
CN115496659A
CN115496659A
Authority
CN
China
Prior art keywords: dimensional, reconstructed, reconstruction, feature, training
Prior art date
Legal status
Pending
Application number
CN202211172460.3A
Other languages
Chinese (zh)
Inventor
王庆 (Wang Qing)
姚毅 (Yao Yi)
Current Assignee
Suzhou Linatech Medical Science And Technology
Original Assignee
Suzhou Linatech Medical Science And Technology
Priority date: 2022-09-26
Filing date: 2022-09-26
Publication date: 2022-12-20
Application filed by Suzhou Linatech Medical Science And Technology
Priority to CN202211172460.3A
Publication of CN115496659A
Status: Pending

Classifications

    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20221 Image fusion; Image merging


Abstract

The invention discloses a three-dimensional CT image reconstruction method and device based on single projection data. The method comprises the following steps: acquiring a CT to be reconstructed; generating corresponding single KV projection data from the acquired CT to be reconstructed; performing feature extraction on the single KV projection data using an encoder with an attention mechanism and a multi-scale fusion strategy in a convolutional neural network; performing three-dimensional conversion on the extracted features; performing feature reconstruction using a decoder in the convolutional neural network to obtain a three-dimensionally reconstructed CT; calculating the gray-value difference between the reconstructed CT and the CT to be reconstructed; and judging whether this difference reaches the global optimal solution. If so, reconstruction succeeds and the final three-dimensional CT image is output; otherwise, the steps are repeated until the gray-value difference reaches the global optimal solution. With the invention, a relatively high-quality CT image can be reconstructed from single KV projection data, the reconstruction time is shorter, and the radiation dose delivered to the patient is lower.

Description

Three-dimensional CT image reconstruction method and device based on single projection data
Technical Field
The invention belongs to the technical field of CT image reconstruction, and particularly relates to a three-dimensional CT image reconstruction method and device based on single projection data.
Background
CT imaging is a high-performance, noninvasive diagnostic technology that plays a vital role in assisting clinical diagnosis. In CT, an X-ray beam scans a slice of given thickness at a selected part of the human body; a flat-panel digital detector receives the X-rays penetrating the slice and converts them into visible light, which photoelectric conversion turns into an electrical signal; an analog-to-digital converter then converts this signal into digital form for processing by a computer. However, as the frequency of CT imaging increases, conventional CT scanning takes a long time on one hand, and on the other hand the public-health hazard of the higher radiation dose delivered during scanning draws ever more attention, so shortening scan time and reducing radiation dose have become urgent needs in CT development.
The current mainstream approach is to reconstruct the CT image from sparse projections. Chinese patent application CN201610987138.4 proposes a fast CT reconstruction method for multi-scale sparse projection data, which reconstructs downsampled projection data using convex-optimization iteration and regularization-term constraints. Chinese patent CN201711420601.8 proposes a CT sparse-projection image reconstruction method and apparatus under limited sampling angles, in which a pseudo-inverse matrix of the projection-equation solution is obtained from the projection data, a random solution set is generated, solutions in the set are replaced correspondingly, and an optimal solution is reached through continued iteration, yielding the final reconstruction result.
Compared with traditional reconstruction algorithms, these methods somewhat improve reconstruction speed and reduce radiation dose, but they do not achieve ultra-sparse projection reconstruction, that is, reconstructing a correspondingly high-quality CT image from only single projection data.
Disclosure of Invention
In order to solve the technical problem, the invention provides a three-dimensional CT image reconstruction method and a three-dimensional CT image reconstruction device based on single projection data.
In order to achieve the purpose, the technical scheme of the invention is as follows:
In one aspect, the invention discloses a three-dimensional CT image reconstruction method based on single projection data, which comprises the following steps:
Step 1: acquiring a CT to be reconstructed;
Step 2: generating corresponding single KV projection data for the CT to be reconstructed acquired in Step 1;
Step 3: performing feature extraction on the single KV projection data acquired in Step 2, using an encoder with an attention mechanism and a multi-scale fusion strategy in a convolutional neural network;
Step 4: performing three-dimensional conversion on the features extracted in Step 3;
Step 5: performing feature reconstruction using a decoder in the convolutional neural network to obtain a three-dimensionally reconstructed CT;
Step 6: performing a gray-value difference calculation between the three-dimensionally reconstructed CT obtained in Step 5 and the CT to be reconstructed;
judging whether the gray-value difference between the two reaches the global optimal solution;
if so, reconstruction succeeds and the final three-dimensional CT image is output;
otherwise, repeating Step 3 through Step 6 until the gray-value difference between the two reaches the global optimal solution.
On the basis of the technical scheme, the following improvements can be made:
As a preferred scheme, the attention mechanism specifically comprises the following steps:
a1: passing the input feature map through a global max pooling layer and a global average pooling layer, respectively;
a2: transmitting the two output feature maps to a multilayer perceptron, which performs further feature extraction;
a3: processing the generated primary intermediate feature map to obtain a channel attention map;
a4: multiplying the channel attention map with the input feature map to obtain a secondary intermediate feature map;
a5: passing the secondary intermediate feature map through a global max pooling layer and a global average pooling layer, respectively;
a6: performing a feature-concatenation operation on the two output feature maps;
a7: processing the concatenated feature map to obtain a spatial attention map;
a8: multiplying the spatial attention map with the secondary intermediate feature map to obtain the final output feature map.
With the above preferred scheme, the attention mechanism comprises channel attention and spatial attention: attention maps are inferred sequentially along the channel and spatial dimensions, and each is multiplied with the input feature map for adaptive optimization, so that the region of interest is obtained.
As a preferred scheme, the fused feature map obtained by the multi-scale fusion strategy is specifically represented by the following formula:

$$G_k = C\left( \left\{ D_{sconv@2^{i-k}}\left( U_{2^{i-k}}(F_i) \right) \right\}_{i=k}^{k+2} \right)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $U_{2^{i-k}}$ represents upsampling by a factor of $2^{i-k}$, C represents the concatenation (splicing) operation, and $D_{sconv@2^{i-k}}$ represents a dilated convolution with dilation rate $2^{i-k}$.
By adopting this preferred scheme, the multi-scale fusion strategy effectively combines the deep and shallow features of the encoder, captures finer-grained features, and transmits them to the decoder, so that a higher-quality CT is reconstructed.
As a preferred scheme, in Step 6, the gray-value difference calculation is specifically a mean-square-error loss calculation, with the following formula:

$$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2$$

wherein m is the number of samples, $y_i$ is the three-dimensionally reconstructed CT, and $y_i'$ is the CT to be reconstructed.
By adopting this preferred scheme, the gray-value difference calculation computes the mean-square-error loss between the network-generated three-dimensionally reconstructed CT and the CT to be reconstructed; the loss is back-propagated through the network to obtain the gradients of the network parameters, which are then updated by a gradient-descent optimization method so that the mean-square-error loss is minimized, i.e., the global optimal solution is reached.
As a preferred scheme, before feature extraction is performed by using a convolutional neural network, the convolutional neural network is trained, and the training steps are as follows:
b1: preprocessing CT and KV projection data to be reconstructed;
b2: forming a sample pair by the CT to be reconstructed and KV projection data corresponding to the CT and dividing the sample pair into a training set, a testing set and a verification set according to a certain proportion;
b3: inputting the training set into a convolutional neural network for training and three-dimensional conversion to obtain a three-dimensional reconstruction CT;
b4: performing mean square error loss calculation on the three-dimensional reconstruction CT and the CT to be reconstructed, performing back propagation on the obtained loss in a convolutional neural network to obtain the gradient of the network parameters, and updating the network parameters by a gradient descent optimization method to minimize the mean square error loss;
b5: after each training, verifying by using a verification set, and judging whether the mean square error loss on the verification set is minimum or not;
if yes, saving the weight;
if not, continuing training, and adjusting the learning rate when the mean square error loss on the verification set still does not decrease after the training reaches the threshold times;
b6: and repeating B3-B5 until the training result of the convolutional neural network reaches the expectation and the loss of mean square error on the training set reaches the minimum.
By adopting this preferred scheme, the trained convolutional neural network is obtained.
On the other hand, the invention also discloses a three-dimensional CT image reconstruction device based on single projection data, which comprises:
the projection data acquisition module is used for acquiring the CT to be reconstructed and generating corresponding single KV projection data for the acquired CT to be reconstructed;
the model building module is used for building a CT reconstruction model based on a convolutional neural network, the CT reconstruction model comprising: an encoder with an attention mechanism and a multi-scale fusion strategy, a three-dimensional conversion unit, and a decoder for feature reconstruction;
the three-dimensional reconstruction module is used for inputting the CT to be reconstructed into the CT reconstruction model to obtain a three-dimensional reconstruction CT, performing gray value difference calculation on the three-dimensional reconstruction CT and the CT to be reconstructed and judging whether the gray value difference of the two reaches a global optimal solution;
if so, successfully reconstructing and outputting a final three-dimensional CT image;
otherwise, repeatedly searching the optimal solution by a gradient descent method until the difference of the gray values of the two reaches the global optimal solution.
Preferably, the attention mechanism comprises: a channel attention module and a spatial attention module;
the channel attention module is used for passing the input feature map through a global max pooling layer and a global average pooling layer respectively, transmitting the two output feature maps to a multilayer perceptron for further feature extraction, and processing the generated primary intermediate feature map to obtain a channel attention map;
the spatial attention module is used for multiplying the channel attention map with the input feature map to obtain a secondary intermediate feature map, passing the secondary intermediate feature map through a global max pooling layer and a global average pooling layer respectively, performing a feature-concatenation operation on the two output feature maps, and processing the concatenated feature map to obtain the spatial attention map.
With the above preferred scheme, the attention mechanism comprises channel attention and spatial attention: attention maps are inferred sequentially along the channel and spatial dimensions, and each is multiplied with the input feature map for adaptive optimization, so that the region of interest is obtained.
As a preferred scheme, the fused feature map obtained by the multi-scale fusion strategy is specifically represented by the following formula:

$$G_k = C\left( \left\{ D_{sconv@2^{i-k}}\left( U_{2^{i-k}}(F_i) \right) \right\}_{i=k}^{k+2} \right)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $U_{2^{i-k}}$ represents upsampling by a factor of $2^{i-k}$, C represents the concatenation (splicing) operation, and $D_{sconv@2^{i-k}}$ represents a dilated convolution with dilation rate $2^{i-k}$.
By adopting this preferred scheme, the multi-scale fusion strategy effectively combines the deep and shallow features of the encoder, captures finer-grained features, and transmits them to the decoder, so that a higher-quality CT is reconstructed.
As a preferred scheme, in the three-dimensional reconstruction module, the gray-value difference calculation is specifically a mean-square-error loss calculation, with the following formula:

$$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2$$

wherein m is the number of samples, $y_i$ is the three-dimensionally reconstructed CT, and $y_i'$ is the CT to be reconstructed.
By adopting this preferred scheme, the gray-value difference calculation computes the mean-square-error loss between the network-generated three-dimensionally reconstructed CT and the CT to be reconstructed; the loss is back-propagated through the network to obtain the gradients of the network parameters, which are then updated by a gradient-descent optimization method so that the mean-square-error loss is minimized, i.e., the global optimal solution is reached.
As a preferred scheme, the training of the convolutional neural network specifically comprises the following steps:
b1: preprocessing CT and KV projection data to be reconstructed;
b2: forming a sample pair by the CT to be reconstructed and KV projection data corresponding to the CT and dividing the sample pair into a training set, a testing set and a verification set according to a certain proportion;
b3: inputting the training set into a convolutional neural network for training and three-dimensional conversion to obtain a three-dimensional reconstruction CT;
b4: performing mean square error loss calculation on the three-dimensional reconstruction CT and the CT to be reconstructed, performing back propagation on the obtained loss in a convolutional neural network to obtain the gradient of the network parameters, and updating the network parameters by a gradient descent optimization method to minimize the mean square error loss;
b5: after each training, verifying by using a verification set, and judging whether the mean square error loss on the verification set is minimum or not;
if yes, saving the weight;
if not, continuing training, and adjusting the learning rate when the mean square error loss on the verification set still does not decrease after the training reaches the threshold times;
b6: and repeating B3-B5 until the training result of the convolutional neural network reaches the expectation and the loss of mean square error on the training set reaches the minimum.
By adopting this preferred scheme, the trained convolutional neural network is obtained.
The three-dimensional CT image reconstruction method and the three-dimensional CT image reconstruction device based on single projection data have the following beneficial effects:
First, a deep-learning method is used for training; compared with traditional reconstruction algorithms, the whole training process is faster and more convenient, requires no manual intervention, and is end-to-end.
Second, a relatively high-quality CT image can be reconstructed from single KV projection data. By switching from sparse projections to single projection data for three-dimensional CT reconstruction, the reconstruction time is shorter while the generation quality is preserved, and the radiation dose delivered to the patient is lower.
Third, an attention mechanism is added, so that the network pays more attention to the region of interest during training and extracts features more accurately.
Fourth, a multi-scale fusion strategy is added to better combine the shallow and deep features of the network encoder, obtain fine-grained fused features, and transmit the fused features to the decoder to generate the final reconstruction result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a three-dimensional CT image reconstruction method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a decoder according to an embodiment of the present invention.
Fig. 3 is a flowchart of an attention mechanism provided by an embodiment of the present invention.
FIG. 4 is a flow chart of channel attention provided by an embodiment of the present invention.
FIG. 5 is a flow chart of spatial attention provided by an embodiment of the present invention.
Fig. 6 is a flowchart of a multi-scale fusion policy provided in an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a three-dimensional CT image reconstruction apparatus according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The use of the ordinal adjectives "first," "second," "third," etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Also, the expression that something "comprises" an element is open-ended: it merely indicates that the corresponding component is present, and should not be interpreted as excluding additional components.
In order to achieve the object of the present invention, in some embodiments of a three-dimensional CT image reconstruction method and apparatus based on single projection data, as shown in fig. 1, the three-dimensional CT image reconstruction method includes the following steps:
Step 1: acquiring a CT to be reconstructed;
Step 2: generating corresponding single KV projection data for the CT to be reconstructed acquired in Step 1 (a simplified projection sketch follows these steps);
Step 3: performing feature extraction on the single KV projection data acquired in Step 2, using an encoder with an attention mechanism and a multi-scale fusion strategy in a convolutional neural network;
Step 4: performing three-dimensional conversion on the features extracted in Step 3;
Step 5: performing feature reconstruction using a decoder in the convolutional neural network to obtain a three-dimensionally reconstructed CT;
Step 6: performing a gray-value difference calculation between the three-dimensionally reconstructed CT obtained in Step 5 and the CT to be reconstructed;
judging whether the gray-value difference between the two reaches the global optimal solution;
if so, reconstruction succeeds and the final three-dimensional CT image is output;
otherwise, repeating Step 3 through Step 6 until the gray-value difference between the two reaches the global optimal solution.
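Step 2 presupposes a way to simulate a single KV projection from a CT volume. Purely as an illustration, and under the simplifying assumption of a parallel-beam geometry (a real KV imager uses cone-beam geometry), such a projection can be approximated by a ray-sum along one axis:

```python
import numpy as np

def simulate_kv_projection(ct_volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Hypothetical single-KV-projection generator: approximate a digitally
    reconstructed radiograph by integrating attenuation along one axis.
    The parallel-beam ray-sum is a simplification of real cone-beam geometry."""
    proj = ct_volume.sum(axis=axis)
    # scale to [0, 1], matching the normalization used in preprocessing
    return (proj - proj.min()) / (proj.max() - proj.min() + 1e-8)
```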
The convolutional neural network is a deep-learning model comprising an encoder and a decoder. The encoder learns the features of the projection data, and the decoder reconstructs the features learned by the encoder to generate the final result. To better extract image features, an attention mechanism and a multi-scale fusion strategy are added in the middle layers of the encoder, so that deeper features are obtained and transmitted to the decoder.
The three-dimensional conversion maps the two-dimensional features extracted by the encoder into corresponding three-dimensional features, which are transmitted to the decoder for feature reconstruction to obtain the reconstruction result.
Further, in some embodiments, the three-dimensional transformation step is performed synchronously after the multi-scale fusion strategy.
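The patent gives no code for this mapping. A minimal PyTorch sketch of one common realization, folding part of the channel dimension of the encoder's two-dimensional feature map into a depth axis and refining it with a three-dimensional convolution, is shown below; the module name, channel counts and depth are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class Transform2Dto3D(nn.Module):
    """Hypothetical 2D-to-3D conversion: reshape part of the channel
    dimension into a depth axis, then refine with a 3D convolution."""
    def __init__(self, in_channels: int = 1024, out_channels: int = 256, depth: int = 4):
        super().__init__()
        assert in_channels % depth == 0
        self.depth = depth
        self.refine = nn.Sequential(
            nn.Conv3d(in_channels // depth, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        b, c, h, w = x.shape
        x = x.view(b, c // self.depth, self.depth, h, w)   # (B, C/D, D, H, W)
        return self.refine(x)
```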
Further, in some specific embodiments, the encoder is composed of 10 two-dimensional convolution layers, every two of which form a two-dimensional residual convolution block; each block consists of two-dimensional convolution layers, normalization layers and ReLU activation functions, with convolution-kernel channel counts of 64, 128, 256, 512 and 1024 respectively. Downsampling is performed through pooling layers to extract local image features, and residual connections are introduced to mitigate vanishing gradients.
As shown in fig. 2, the decoder is composed of 5 three-dimensional transposed convolution blocks, each consisting of a three-dimensional transposed convolution layer, a normalization layer and a ReLU activation function; the convolution-kernel channel counts mirror those of the corresponding encoder layers. Upsampling is performed at the same time so as to reconstruct the features extracted in the encoding stage, generate the final result, and compute the mean-square-error loss against the CT to be reconstructed.
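For illustration, the following PyTorch sketch shows building blocks consistent with this description: a two-layer two-dimensional residual convolution block for the encoder and a three-dimensional transposed-convolution block for the decoder. The normalization type (BatchNorm), kernel sizes, and the 1 × 1 projection shortcut are assumptions not fixed by the patent.

```python
import torch.nn as nn

class ResBlock2D(nn.Module):
    """Two-layer 2D residual block: (conv -> norm -> ReLU) x 2 plus a skip
    connection; a 1x1 projection handles channel changes."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class UpBlock3D(nn.Module):
    """3D transposed-convolution block (transposed conv -> norm -> ReLU)
    that doubles the spatial resolution, as described for the decoder."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)
```

Under these assumptions, the encoder stacks five ResBlock2D modules with channel counts 64, 128, 256, 512 and 1024, separated by pooling layers, and the decoder stacks five UpBlock3D modules with mirrored channel counts.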
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that, as shown in fig. 3, the attention mechanism specifically comprises the following steps:
a1: as shown in fig. 4, passing the input feature map through a global max pooling layer and a global average pooling layer, respectively;
a2: transmitting the two output feature maps to a multilayer perceptron, which performs further feature extraction;
a3: passing the generated primary intermediate feature map through a 1 × 1 convolution, a normalization layer and a Sigmoid activation function to obtain the channel attention map;
a4: multiplying the channel attention map with the input feature map to obtain a secondary intermediate feature map;
a5: as shown in fig. 5, taking the secondary intermediate feature map as a new input feature map and passing it through a global max pooling layer and a global average pooling layer, respectively;
a6: performing a feature-concatenation operation on the two output feature maps;
a7: passing the concatenated feature map through a 7 × 7 convolution, a normalization layer and a Sigmoid activation function to obtain the spatial attention map;
a8: multiplying the spatial attention map with the secondary intermediate feature map to obtain the final output feature map.
With the above preferred scheme, the attention mechanism comprises channel attention and spatial attention: attention maps are inferred sequentially along the channel and spatial dimensions, and each is multiplied with the input feature map for adaptive optimization, so that the region of interest is obtained.
The channel attention and spatial attention can be expressed by the following formulas:

$$M_c(F) = \sigma\big( MLP(AvgPool(F)) + MLP(MaxPool(F)) \big)$$

$$M_s(F) = \sigma\big( f^{7 \times 7}\big( [AvgPool(F); MaxPool(F)] \big) \big)$$

wherein $M_*(\cdot)$ represents the output attention map, $\sigma(\cdot)$ represents the Sigmoid function, MLP represents the multilayer perceptron, AvgPool and MaxPool represent average pooling and max pooling respectively, $f^{7 \times 7}(\cdot)$ represents a 7 × 7 convolution, and F is the input feature map.
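These formulas follow the CBAM-style channel/spatial design. The PyTorch sketch below implements them directly; the reduction ratio of the shared multilayer perceptron is an assumption, and the extra 1 × 1 convolution and normalization layers mentioned in steps a3 and a7 are folded away for brevity.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention (steps a1-a4): global max/avg pooling, a shared
    MLP, and a sigmoid gate. Reduction ratio 16 is an assumption."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # MLP(AvgPool(F))
        mx = self.mlp(x.amax(dim=(2, 3)))                  # MLP(MaxPool(F))
        gate = torch.sigmoid(avg + mx).view(b, c, 1, 1)    # M_c(F)
        return x * gate                                    # secondary intermediate map

class SpatialAttention(nn.Module):
    """Spatial attention (steps a5-a8): channel-wise avg/max maps are
    concatenated and passed through a 7x7 convolution and a sigmoid."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s(F)
        return x * gate                                    # final output feature map
```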
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that the fused feature map obtained by the multi-scale fusion strategy is specifically represented by the following formula:

$$G_k = C\left( \left\{ D_{sconv@2^{i-k}}\left( U_{2^{i-k}}(F_i) \right) \right\}_{i=k}^{k+2} \right)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $U_{2^{i-k}}$ represents upsampling by a factor of $2^{i-k}$, C represents the concatenation (splicing) operation, and $D_{sconv@2^{i-k}}$ represents a dilated convolution with dilation rate $2^{i-k}$.
By adopting this preferred scheme, the multi-scale fusion strategy effectively combines the deep and shallow features of the encoder, captures finer-grained features, and transmits them to the decoder, so that a higher-quality CT is reconstructed.
As shown in fig. 6, the implementation flow of the multi-scale fusion strategy is described taking the third layer of the encoder as an example (a code sketch follows this list); the fusion strategies of the other corresponding layers are similar:
1) Acquiring the feature maps of the third, fourth and fifth encoder layers as the input feature maps;
2) Passing each input feature map through a 3 × 3 convolution to obtain the corresponding per-layer features as intermediate feature maps;
3) Upsampling the intermediate feature maps of the fourth and fifth layers by 2 times and 4 times respectively, so that their resolution matches that of the third-layer intermediate feature map;
4) Concatenating the intermediate feature maps along the channel dimension;
5) Feeding the concatenated feature map into dilated convolutions with dilation rates of 1, 2 and 4 respectively to obtain feature maps of different scales;
6) Concatenating the feature maps of the different scales along the channel dimension;
7) Applying a 1 × 1 convolution to the concatenated feature map to obtain the final output feature map of the third encoder layer.
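A minimal PyTorch sketch of this layer-3 fusion path follows. The channel counts (256, 512 and 1024 for encoder layers 3 to 5) and the intermediate width are assumptions consistent with the encoder description, and bilinear interpolation stands in for the unspecified upsampling operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Layer-3 fusion path: 3x3 convs on layers 3-5, upsample the deeper
    maps to the layer-3 resolution, concatenate, apply dilated convs with
    rates 1/2/4, concatenate again, and reduce with a 1x1 conv."""
    def __init__(self, chs=(256, 512, 1024), mid: int = 256):
        super().__init__()
        self.pre = nn.ModuleList([nn.Conv2d(c, mid, 3, padding=1) for c in chs])
        self.dilated = nn.ModuleList(
            [nn.Conv2d(3 * mid, mid, 3, padding=r, dilation=r) for r in (1, 2, 4)]
        )
        self.out = nn.Conv2d(3 * mid, mid, 1)

    def forward(self, f3, f4, f5):
        h, w = f3.shape[2:]
        feats = []
        for conv, f in zip(self.pre, (f3, f4, f5)):
            g = conv(f)                                      # step 2: 3x3 conv
            if g.shape[2:] != (h, w):                        # step 3: upsample 2x / 4x
                g = F.interpolate(g, size=(h, w), mode="bilinear", align_corners=False)
            feats.append(g)
        x = torch.cat(feats, dim=1)                          # step 4: channel concat
        x = torch.cat([d(x) for d in self.dilated], dim=1)   # steps 5-6: dilated convs
        return self.out(x)                                   # step 7: 1x1 conv
```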
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that in Step 6 the gray-value difference calculation is specifically a mean-square-error loss calculation, with the following formula:

$$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2$$

wherein m is the number of samples, $y_i$ is the three-dimensionally reconstructed CT, and $y_i'$ is the CT to be reconstructed.
By adopting this preferred scheme, the gray-value difference calculation computes the mean-square-error loss between the network-generated three-dimensionally reconstructed CT and the CT to be reconstructed; the loss is back-propagated through the network to obtain the gradients of the network parameters, which are then updated by a gradient-descent optimization method so that the mean-square-error loss is minimized, i.e., the global optimal solution is reached.
In order to further optimize the implementation effect of the present invention, in other embodiments, the rest feature technologies are the same, except that before feature extraction by using the convolutional neural network, the convolutional neural network is trained, and the training steps are as follows:
b1: preprocessing the CT to be reconstructed and the KV projection data, wherein the preprocessing normalizes and standardizes the KV projection data to the [0,1] interval and resizes them to 128 × 128 pixels, and the CT to be reconstructed is likewise standardized to the [0,1] interval and resampled to a 128 × 128 × 128 voxel grid;
b2: forming sample pairs from each CT to be reconstructed and its corresponding KV projection data, and dividing the sample pairs into a training set, a test set and a verification set in a fixed proportion;
b3: inputting the training set into a convolutional neural network for training and three-dimensional conversion to obtain a three-dimensional reconstruction CT;
b4: performing mean square error loss calculation on the three-dimensional reconstruction CT and the CT to be reconstructed, performing back propagation on the obtained loss in a convolutional neural network to obtain the gradient of the network parameters, and updating the network parameters by a gradient descent optimization method to minimize the mean square error loss;
b5: after each training, verifying by using a verification set, and judging whether the mean square error loss on the verification set is minimum or not;
if yes, saving the weight;
if not, continuing training; and if the mean-square-error loss on the verification set still does not decrease after a threshold number of training rounds, adjusting the learning rate;
b6: and repeating B3-B5 until the training result of the convolutional neural network reaches the expectation and the loss of mean square error on the training set reaches the minimum.
By adopting this preferred scheme, the trained convolutional neural network is obtained.
Specifically, the global optimal solution is determined by the network's performance on the verification set, and the weights are saved accordingly. After each training round, the network is validated on the verification set; if the mean-square-error loss on the verification set is at its minimum at that point, the weights are saved. If it is not at its minimum and does not decrease over a further threshold number of rounds (for example, 10 rounds), the learning rate is adjusted to 1/2 of its previous value, and this continues until the minimum is reached.
In some embodiments, the network is trained for 100 rounds in total, with an initial learning rate of 1e-5 and the Adam optimizer.
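Putting the training description together, the sketch below shows a training loop under these settings: mean-square-error loss, Adam with an initial learning rate of 1e-5, 100 rounds, weight saving at the best verification-set loss, and halving of the learning rate after a threshold number of stalled rounds. Data loading, device placement and the model itself are assumed to exist elsewhere, and all names are illustrative.

```python
import torch
from torch.optim import Adam

def train(model, train_loader, val_loader, epochs=100, lr=1e-5, patience=10):
    """Training-loop sketch for the described schedule (names illustrative)."""
    opt = Adam(model.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for proj, ct in train_loader:              # single KV projection, target CT
            loss = mse(model(proj), ct)            # gray-value (MSE) difference
            opt.zero_grad()
            loss.backward()                        # back-propagate for gradients
            opt.step()                             # gradient-descent update
        model.eval()
        with torch.no_grad():
            val = sum(mse(model(p), c).item() for p, c in val_loader) / len(val_loader)
        if val < best:                             # new minimum on verification set
            best, stale = val, 0
            torch.save(model.state_dict(), "best_weights.pt")
        else:
            stale += 1
            if stale >= patience:                  # stalled for threshold rounds
                for g in opt.param_groups:         # halve the learning rate
                    g["lr"] *= 0.5
                stale = 0
```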
In some embodiments, to evaluate the performance of the network, additional evaluation indexes may be employed for quantitative analysis. Specifically, the evaluation indexes are root-mean-square error (RMSE), mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and the structural similarity measure (SSIM); lower values are better for the first two indexes, and higher values are better for the latter two.
The formula is as follows:
$$RMSE = \sqrt{ \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2 }$$

$$MAE = \frac{1}{m} \sum_{i=1}^{m} \left| y_i - y_i' \right|$$

$$PSNR = 20 \log_{10} \frac{MaxValue}{RMSE}$$

$$SSIM = \frac{ \left( 2 u_y u_{y'} + C_1 \right) \left( 2 \sigma_{yy'} + C_2 \right) }{ \left( u_y^2 + u_{y'}^2 + C_1 \right) \left( \sigma_y^2 + \sigma_{y'}^2 + C_2 \right) }$$

wherein m is the number of samples, $y_i$ is the reconstructed CT generated by the network, $y_i'$ is the CT to be reconstructed, MaxValue represents the maximum value of the reconstructed CT generated by the network, $u_y$ and $\sigma_y^2$ represent the mean and variance of y, $u_{y'}$ and $\sigma_{y'}^2$ represent the mean and variance of y', $\sigma_{yy'}$ is the covariance between y and y', and $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$ are constants for maintaining stability, generally with $K_1 = 0.01$ and $K_2 = 0.03$, where L is the maximum value in the data.
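For reference, a NumPy sketch of these four indexes follows. It computes SSIM once over the whole volume, a simplification of the usual windowed SSIM, and the PSNR follows the MaxValue/RMSE form given above.

```python
import numpy as np

def evaluate(y: np.ndarray, y_prime: np.ndarray, K1=0.01, K2=0.03):
    """RMSE, MAE, PSNR and a global (single-window) SSIM between the
    network-reconstructed CT y and the reference y_prime."""
    diff = y - y_prime
    rmse = np.sqrt(np.mean(diff ** 2))
    mae = np.mean(np.abs(diff))
    psnr = 20 * np.log10(y.max() / rmse)           # MaxValue of the reconstruction
    L = max(y.max(), y_prime.max())                # maximum value in the data
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    uy, uyp = y.mean(), y_prime.mean()
    cov = np.mean((y - uy) * (y_prime - uyp))
    ssim = ((2 * uy * uyp + C1) * (2 * cov + C2)) / (
        (uy ** 2 + uyp ** 2 + C1) * (y.var() + y_prime.var() + C2)
    )
    return rmse, mae, psnr, ssim
```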
On the other hand, the embodiment of the present invention further discloses a three-dimensional CT image reconstruction apparatus based on single projection data, as shown in fig. 7, including:
the projection data acquisition module is used for acquiring the CT to be reconstructed and generating corresponding single KV projection data for the acquired CT to be reconstructed;
the model building module is used for building a CT reconstruction model based on a convolutional neural network, the CT reconstruction model comprising: an encoder with an attention mechanism and a multi-scale fusion strategy, a three-dimensional conversion unit, and a decoder for feature reconstruction;
the three-dimensional reconstruction module is used for inputting the CT to be reconstructed into the CT reconstruction model to obtain a three-dimensional reconstruction CT, performing gray value difference calculation on the three-dimensional reconstruction CT and the CT to be reconstructed and judging whether the gray value difference of the two reaches a global optimal solution;
if so, successfully reconstructing and outputting a final three-dimensional CT image;
otherwise, repeatedly searching for the optimal solution by a gradient-descent method until the gray-value difference between the two reaches the global optimal solution.
The convolutional neural network is a deep-learning model comprising an encoder and a decoder. The encoder learns the features of the projection data, and the decoder reconstructs the features learned by the encoder to generate the final result. To better extract image features, an attention mechanism and a multi-scale fusion strategy are added in the middle layers of the encoder, so that deeper features are obtained and transmitted to the decoder.
The three-dimensional conversion maps the two-dimensional features extracted by the encoder into corresponding three-dimensional features, which are transmitted to the decoder for feature reconstruction to obtain the reconstruction result.
Further, in some embodiments, the three-dimensional transformation step is performed synchronously after the multi-scale fusion strategy.
Further, in some specific embodiments, the encoder is composed of 10 two-dimensional convolution layers, every two of which form a two-dimensional residual convolution block; each block consists of two-dimensional convolution layers, normalization layers and ReLU activation functions, with convolution-kernel channel counts of 64, 128, 256, 512 and 1024 respectively. Downsampling is performed through pooling layers to extract local image features, and residual connections are introduced to mitigate vanishing gradients.
The decoder is composed of 5 three-dimensional transposed convolution blocks, each consisting of a three-dimensional transposed convolution layer, a normalization layer and a ReLU activation function; the convolution-kernel channel counts mirror those of the corresponding encoder layers. Upsampling is performed at the same time so as to reconstruct the features extracted in the encoding stage, generate the final result, and compute the mean-square-error loss against the CT to be reconstructed.
In order to further optimize the implementation effect of the invention, in other embodiments, the rest of the features are the same, except that the attention mechanism comprises: a channel attention module and a spatial attention module;
the channel attention module is used for passing the input feature map through a global max pooling layer and a global average pooling layer respectively, transmitting the two output feature maps to a multilayer perceptron for further feature extraction, and passing the generated primary intermediate feature map through a 1 × 1 convolution, a normalization layer and a Sigmoid activation function to obtain the channel attention map;
the spatial attention module is used for multiplying the channel attention map with the input feature map to obtain a secondary intermediate feature map, passing the secondary intermediate feature map through a global max pooling layer and a global average pooling layer respectively, performing a feature-concatenation operation on the two output feature maps, and passing the concatenated feature map through a 7 × 7 convolution, a normalization layer and a Sigmoid activation function to obtain the spatial attention map.
With the above preferred scheme, the attention mechanism comprises channel attention and spatial attention: attention maps are inferred sequentially along the channel and spatial dimensions, and each is multiplied with the input feature map for adaptive optimization, so that the region of interest is obtained. The spatial attention map is multiplied with the secondary intermediate feature map to obtain the final output feature map.
The channel attention and spatial attention can be expressed by the following formulas:

$$M_c(F) = \sigma\big( MLP(AvgPool(F)) + MLP(MaxPool(F)) \big)$$

$$M_s(F) = \sigma\big( f^{7 \times 7}\big( [AvgPool(F); MaxPool(F)] \big) \big)$$

wherein $M_*(\cdot)$ represents the output attention map, $\sigma(\cdot)$ represents the Sigmoid function, MLP represents the multilayer perceptron, AvgPool and MaxPool represent average pooling and max pooling respectively, $f^{7 \times 7}(\cdot)$ represents a 7 × 7 convolution, and F is the input feature map.
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that the fused feature map obtained by the multi-scale fusion strategy is specifically represented by the following formula:

$$G_k = C\left( \left\{ D_{sconv@2^{i-k}}\left( U_{2^{i-k}}(F_i) \right) \right\}_{i=k}^{k+2} \right)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $U_{2^{i-k}}$ represents upsampling by a factor of $2^{i-k}$, C represents the concatenation (splicing) operation, and $D_{sconv@2^{i-k}}$ represents a dilated convolution with dilation rate $2^{i-k}$.
By adopting this preferred scheme, the multi-scale fusion strategy effectively combines the deep and shallow features of the encoder, captures finer-grained features, and transmits them to the decoder, so that a higher-quality CT is reconstructed.
Specifically, the implementation flow of the multi-scale fusion strategy is described taking the third layer of the encoder as an example; the fusion strategies of the other corresponding layers are similar:
1) Acquiring the feature maps of the third, fourth and fifth encoder layers as the input feature maps;
2) Passing each input feature map through a 3 × 3 convolution to obtain the corresponding per-layer features as intermediate feature maps;
3) Upsampling the intermediate feature maps of the fourth and fifth layers by 2 times and 4 times respectively, so that their resolution matches that of the third-layer intermediate feature map;
4) Concatenating the intermediate feature maps along the channel dimension;
5) Feeding the concatenated feature map into dilated convolutions with dilation rates of 1, 2 and 4 respectively to obtain feature maps of different scales;
6) Concatenating the feature maps of the different scales along the channel dimension;
7) Applying a 1 × 1 convolution to the concatenated feature map to obtain the final output feature map of the third encoder layer.
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that in the three-dimensional reconstruction module the gray-value difference calculation is specifically a mean-square-error loss calculation, with the following formula:

$$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2$$

wherein m is the number of samples, $y_i$ is the three-dimensionally reconstructed CT, and $y_i'$ is the CT to be reconstructed.
By adopting this preferred scheme, the gray-value difference calculation computes the mean-square-error loss between the network-generated three-dimensionally reconstructed CT and the CT to be reconstructed; the loss is back-propagated through the network to obtain the gradients of the network parameters, which are then updated by a gradient-descent optimization method so that the mean-square-error loss is minimized, i.e., the global optimal solution is reached.
In order to further optimize the implementation effect of the present invention, in other embodiments, the remaining feature technologies are the same, except that the training of the convolutional neural network specifically includes the following steps:
b1: preprocessing the CT to be reconstructed and the KV projection data, wherein the preprocessing normalizes and standardizes the KV projection data to the [0,1] interval and resizes them to 128 × 128 pixels, and the CT to be reconstructed is likewise standardized to the [0,1] interval and resampled to a 128 × 128 × 128 voxel grid;
b2: forming sample pairs from each CT to be reconstructed and its corresponding KV projection data, and dividing the sample pairs into a training set, a test set and a verification set in a fixed proportion;
b3: inputting the training set into a convolutional neural network for training and three-dimensional conversion to obtain a three-dimensional reconstruction CT;
b4: performing mean square error loss calculation on the three-dimensional reconstruction CT and the CT to be reconstructed, performing back propagation on the obtained loss in a convolutional neural network to obtain the gradient of the network parameters, and updating the network parameters by a gradient descent optimization method to minimize the mean square error loss;
b5: after each training, verifying by using a verification set, and judging whether the mean square error loss on the verification set is minimum or not;
if yes, saving the weight;
if not, continuing training, and adjusting the learning rate when the mean square error loss on the verification set still does not decrease after the training reaches the threshold times;
b6: and repeating B3-B5 until the training result of the convolutional neural network reaches the expectation and the loss of mean square error on the training set reaches the minimum.
The three-dimensional CT image reconstruction method and device based on single projection data disclosed by the invention have the following beneficial effects:
First, a deep-learning method is used for training; compared with traditional reconstruction algorithms, the whole training process is faster and more convenient, requires no manual intervention, and is end-to-end.
Second, a relatively high-quality CT image can be reconstructed from single KV projection data. By switching from sparse projections to single projection data for three-dimensional CT reconstruction, the reconstruction time is shorter while the generation quality is preserved, and the radiation dose delivered to the patient is lower.
Third, an attention mechanism is added, so that the network pays more attention to the region of interest during training and extracts features more accurately.
Fourth, a multi-scale fusion strategy is added to better combine the shallow and deep features of the network encoder, obtain fine-grained fused features, and transmit the fused features to the decoder to generate the final reconstruction result.
The above embodiments may be implemented in combination with one another.
It should be understood that the elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described in a functional generic sense in the foregoing description for the purpose of clearly illustrating the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (10)

1. A three-dimensional CT image reconstruction method based on single projection data, characterized by comprising the following steps:
Step 1: acquiring a CT to be reconstructed;
Step 2: generating corresponding single KV projection data for the CT to be reconstructed acquired in Step 1;
Step 3: performing feature extraction on the single KV projection data acquired in Step 2, using an encoder with an attention mechanism and a multi-scale fusion strategy in a convolutional neural network;
Step 4: performing three-dimensional conversion on the features extracted in Step 3;
Step 5: performing feature reconstruction using a decoder in the convolutional neural network to obtain a three-dimensionally reconstructed CT;
Step 6: performing a gray-value difference calculation between the three-dimensionally reconstructed CT obtained in Step 5 and the CT to be reconstructed;
judging whether the gray-value difference between the two reaches the global optimal solution;
if so, reconstruction succeeds and the final three-dimensional CT image is output;
otherwise, repeating Step 3 through Step 6 until the gray-value difference between the two reaches the global optimal solution.
2. The three-dimensional CT image reconstruction method according to claim 1, characterized in that the attention mechanism comprises the following steps:
a1: respectively passing the input feature map through a global maximum pooling layer and a global average pooling layer;
a2: transmitting the two output feature maps to a multilayer perceptron, which further extracts features;
a3: processing the generated primary intermediate feature map to obtain a channel attention map;
a4: multiplying the channel attention diagram with the input feature diagram to obtain a secondary intermediate feature diagram;
a5: respectively passing the secondary intermediate feature graph through a global maximum pooling layer and a global average pooling layer;
a6: carrying out feature splicing operation on the output feature graphs of the two;
a7: processing the spliced characteristic diagram to obtain a space attention diagram;
a8: and multiplying the spatial attention diagram with the quadratic intermediate feature diagram to obtain a final output feature diagram.
3. The three-dimensional CT image reconstruction method according to claim 1, wherein the fused feature map obtained by the multi-scale fusion strategy is specifically represented by the following formula:

$$G_k = C\left( \left\{ D_{sconv@2^{i-k}}\left( U_{2^{i-k}}(F_i) \right) \right\}_{i=k}^{k+2} \right)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $U_{2^{i-k}}$ represents upsampling by a factor of $2^{i-k}$, C represents the concatenation operation, and $D_{sconv@2^{i-k}}$ represents a dilated convolution with dilation rate $2^{i-k}$.
4. The three-dimensional CT image reconstruction method according to claim 2, wherein in Step 6 the gray-value difference calculation is specifically a mean-square-error loss calculation, with the following formula:

$$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - y_i' \right)^2$$

wherein m is the number of samples, $y_i$ is the three-dimensionally reconstructed CT, and $y_i'$ is the CT to be reconstructed.
5. The three-dimensional CT image reconstruction method of claim 1, wherein before feature extraction using the convolutional neural network, the convolutional neural network is trained, the training steps are as follows:
b1: preprocessing CT and KV projection data to be reconstructed;
b2: forming a sample pair by the CT to be reconstructed and KV projection data corresponding to the CT and dividing the sample pair into a training set, a testing set and a verification set according to a certain proportion;
b3: inputting the training set into a convolutional neural network for training and three-dimensional conversion to obtain a three-dimensional reconstruction CT;
b4: performing mean square error loss calculation on the three-dimensional reconstruction CT and the CT to be reconstructed, performing back propagation on the obtained loss in a convolutional neural network to obtain the gradient of the network parameters, and updating the network parameters by a gradient descent optimization method to minimize the mean square error loss;
b5: after each training, verifying by using a verification set, and judging whether the mean square error loss on the verification set is minimum or not;
if yes, saving the weight;
if not, continuing training, and adjusting the learning rate when the mean square error loss on the verification set still does not decrease after the training reaches the threshold times;
b6: and repeating B3-B5 until the training result of the convolutional neural network reaches the expectation and the mean square error loss on the training set reaches the minimum.
6. Three-dimensional CT image reconstruction device based on single projection data, characterized by including:
the projection data acquisition module is used for acquiring the CT to be reconstructed and generating corresponding single KV projection data for the acquired CT to be reconstructed;
a model building module for building a CT reconstruction model based on a convolutional neural network, the CT reconstruction model comprising: an encoder with an attention mechanism and a multi-scale fusion strategy, a three-dimensional conversion unit, and a decoder for feature reconstruction;
the three-dimensional reconstruction module is used for inputting the CT to be reconstructed into the CT reconstruction model to obtain a three-dimensional reconstructed CT, performing gray value difference calculation on the three-dimensional reconstructed CT and the CT to be reconstructed and judging whether the gray value difference between the two gray values reaches a global optimal solution or not;
if yes, successfully reconstructing, and outputting a final three-dimensional CT image;
otherwise, repeatedly searching for the optimal solution by a gradient-descent method until the gray-value difference between the two reaches the global optimal solution.
7. The three-dimensional CT image reconstruction apparatus of claim 6, wherein the attention mechanism comprises: a channel attention module and a spatial attention module;
the channel attention module is used for passing the input feature map through a global max pooling layer and a global average pooling layer respectively, transmitting the two output feature maps to a multilayer perceptron for further feature extraction, and processing the generated primary intermediate feature map to obtain a channel attention map;
the spatial attention module is used for multiplying the channel attention map with the input feature map to obtain a secondary intermediate feature map, passing the secondary intermediate feature map through a global max pooling layer and a global average pooling layer respectively, performing a feature-concatenation operation on the two output feature maps, and processing the concatenated feature map to obtain the spatial attention map.
8. The three-dimensional CT image reconstruction device of claim 6, wherein the fused feature map obtained by the multi-size fusion strategy is specifically expressed by the following formula:

$$G_k = \mathop{C}_{i=k}^{n}\Big( D_{sconv}@2^{\,i-k}\big( \mathrm{Up}_{2^{\,i-k}}(F_i) \big) \Big)$$

wherein $G_k$ represents the fused feature map output by the k-th layer of the encoder, $F_k$ represents the input feature map of the k-th layer of the encoder, $\mathrm{Up}_{2^{\,i-k}}$ represents upsampling by a factor of $2^{\,i-k}$, $C$ represents the splicing operation over encoder layers $i = k, \ldots, n$, and $D_{sconv}@2^{\,i-k}$ represents a dilated convolution with dilation rate $2^{\,i-k}$.
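One plausible rendering of this fusion in code, assuming the dilated convolution follows the upsampling and that encoder feature maps halve in spatial size per layer; the class name and channel handling are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSizeFusion(nn.Module):
    """Upsample each deeper feature map F_i by 2**(i-k), apply a dilated convolution
    with dilation rate 2**(i-k), and splice the results to form G_k."""
    def __init__(self, channels, num_layers):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=2 ** d, dilation=2 ** d)  # Dsconv@2^(i-k), size-preserving
            for d in range(num_layers)
        ])

    def forward(self, feats):
        # feats[d] is F_{k+d}; feats[0] sets the target spatial size of G_k.
        target_size = feats[0].shape[-2:]
        outs = []
        for feat, conv in zip(feats, self.branches):
            up = F.interpolate(feat, size=target_size, mode="bilinear",
                               align_corners=False)     # upsample 2^(i-k) times
            outs.append(conv(up))
        return torch.cat(outs, dim=1)                   # splicing operation C
```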
9. The three-dimensional CT image reconstruction device of claim 6, wherein the gray value difference calculation in the three-dimensional reconstruction module is specifically a mean square error loss calculation, given by the following formula:

$$MSE = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - y'_i\right)^2$$

wherein $m$ is the number of samples, $y_i$ is the three-dimensional reconstructed CT, and $y'_i$ is the CT to be reconstructed.
10. The three-dimensional CT image reconstruction device according to claim 6, wherein the training of the convolutional neural network specifically comprises the following steps:
B1: preprocessing the CT to be reconstructed and the KV projection data;
B2: forming sample pairs from each CT to be reconstructed and its corresponding KV projection data, and dividing the sample pairs into a training set, a test set and a verification set in a certain proportion;
B3: inputting the training set into the convolutional neural network for training and three-dimensional conversion, obtaining a three-dimensional reconstructed CT;
B4: performing a mean square error loss calculation between the three-dimensional reconstructed CT and the CT to be reconstructed, back-propagating the resulting loss through the convolutional neural network to obtain the gradients of the network parameters, and updating the network parameters by a gradient descent optimization method so as to minimize the mean square error loss;
B5: after each round of training, verifying with the verification set and judging whether the mean square error loss on the verification set has reached its minimum;
if yes, saving the weights;
if not, continuing training, and adjusting the learning rate when the mean square error loss on the verification set still fails to decrease after a threshold number of training rounds;
B6: repeating B3-B5 until the training result of the convolutional neural network meets expectations and the mean square error loss on the training set reaches its minimum.
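Steps B5 and B6 amount to validation-driven checkpointing with a learning-rate cut when the validation loss stops improving. A minimal sketch under that reading, reusing the hypothetical `train_one_epoch` from the earlier sketch; the checkpoint path and patience value are illustrative:

```python
import torch

@torch.no_grad()
def validate(model, val_loader, device="cuda"):
    """Mean square error loss over the verification set (step B5)."""
    model.eval()
    criterion = torch.nn.MSELoss()
    total = 0.0
    for kv_projection, ct_target in val_loader:
        pred = model(kv_projection.to(device))
        total += criterion(pred, ct_target.to(device)).item() * kv_projection.size(0)
    return total / len(val_loader.dataset)

def fit(model, train_loader, val_loader, optimizer, epochs, patience=5):
    # Cut the learning rate after `patience` epochs without improvement (step B5).
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=patience)
    best_val = float("inf")
    for _ in range(epochs):                        # B6: repeat B3-B5
        train_one_epoch(model, train_loader, optimizer)
        val_loss = validate(model, val_loader)
        if val_loss < best_val:                    # B5: new minimum on the verification set
            best_val = val_loss
            torch.save(model.state_dict(), "best_weights.pt")  # save the weights
        scheduler.step(val_loss)
```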
CN202211172460.3A 2022-09-26 2022-09-26 Three-dimensional CT image reconstruction method and device based on single projection data Pending CN115496659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211172460.3A CN115496659A (en) 2022-09-26 2022-09-26 Three-dimensional CT image reconstruction method and device based on single projection data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211172460.3A CN115496659A (en) 2022-09-26 2022-09-26 Three-dimensional CT image reconstruction method and device based on single projection data

Publications (1)

Publication Number Publication Date
CN115496659A true CN115496659A (en) 2022-12-20

Family

ID=84469951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211172460.3A Pending CN115496659A (en) 2022-09-26 2022-09-26 Three-dimensional CT image reconstruction method and device based on single projection data

Country Status (1)

Country Link
CN (1) CN115496659A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152825A (en) * 2023-10-27 2023-12-01 中影年年(北京)文化传媒有限公司 Face reconstruction method and system based on single picture
CN117152825B (en) * 2023-10-27 2024-03-08 中影年年(北京)科技有限公司 Face reconstruction method and system based on single picture

Similar Documents

Publication Publication Date Title
Lyu et al. Multi-contrast super-resolution MRI through a progressive network
CN108460726B (en) Magnetic resonance image super-resolution reconstruction method based on enhanced recursive residual network
CN109146988B (en) Incomplete projection CT image reconstruction method based on VAEGAN
Heinrich et al. Residual U-net convolutional neural network architecture for low-dose CT denoising
CN109035142B (en) Satellite image super-resolution method combining countermeasure network with aerial image prior
CN112435164B (en) Simultaneous super-resolution and denoising method for generating low-dose CT lung image based on multiscale countermeasure network
CN112348936B (en) Low-dose cone-beam CT image reconstruction method based on deep learning
CN115953494B (en) Multi-task high-quality CT image reconstruction method based on low dose and super resolution
CN109118487B (en) Bone age assessment method based on non-subsampled contourlet transform and convolutional neural network
US11978146B2 (en) Apparatus and method for reconstructing three-dimensional image
CN111861910A (en) CT image noise reduction system and method
CN115984117B (en) Channel attention-based variation self-coding image super-resolution method and system
CN113052935A (en) Single-view CT reconstruction method for progressive learning
CN111696042B (en) Image super-resolution reconstruction method based on sample learning
CN113487503A (en) PET (positron emission tomography) super-resolution method for generating antagonistic network based on channel attention
CN115496659A (en) Three-dimensional CT image reconstruction method and device based on single projection data
CN108460723A (en) Bilateral full variation image super-resolution rebuilding method based on neighborhood similarity
CN113538616B (en) Magnetic resonance image reconstruction method combining PUGAN with improved U-net
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
CN111968108A (en) CT intelligent imaging method, device and system based on intelligent scanning protocol
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
CN112669400B (en) Dynamic MR reconstruction method based on deep learning prediction and residual error framework
WO2023000244A1 (en) Image processing method and system, and application of image processing method
CN114862982A (en) Hybrid domain unsupervised finite angle CT reconstruction method based on generation countermeasure network
KR20220071554A (en) Medical Image Fusion System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination