CN113870375A - CT image geometric artifact evaluation method based on residual error network - Google Patents

CT image geometric artifact evaluation method based on residual error network

Info

Publication number
CN113870375A
CN113870375A (application CN202111161046.8A)
Authority
CN
China
Prior art keywords
geometric
image
network
artifact
residual error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111161046.8A
Other languages
Chinese (zh)
Inventor
韩玉
朱明婉
李磊
闫镔
席晓琦
朱林林
谭思宇
孙艳敏
亢冠宇
杨双站
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force filed Critical Information Engineering University of PLA Strategic Support Force
Priority to CN202111161046.8A priority Critical patent/CN113870375A/en
Publication of CN113870375A publication Critical patent/CN113870375A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/008Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a CT image geometric artifact evaluation method based on a residual error network. The method comprises the following steps: step 1: based on the characteristics of the CT image, a matched training sample data set is constructed; step 2: taking the residual error network Resnet50 as the basic network, a geometric artifact evaluation network is obtained by reducing the step size of a convolution residual block and adding an attention module; step 3: training the geometric artifact evaluation network with the training sample data set; step 4: inputting the CT image to be evaluated into the trained geometric artifact evaluation network to obtain the geometric artifact level of the CT image. The method can evaluate the degree of geometric artifacts in a CT image with high accuracy.

Description

CT image geometric artifact evaluation method based on residual error network
Technical Field
The invention relates to the technical field of CT image processing, in particular to a CT image geometric artifact evaluation method based on a residual error network.
Background
Deviations of the actual system geometry from the ideal geometry lead to geometric artifacts in CT reconstructed images. Geometric artifacts seriously degrade the quality of CT reconstructed images, blurring the images and producing ghosting at edges, and are one of the key factors hindering the development of CT towards intelligent, high-precision imaging. Geometric artifact correction is therefore a prerequisite for a CT system to obtain high-quality three-dimensional reconstructed images, and the performance of the correction method determines the accuracy of the CT image. Evaluating the severity of geometric artifacts in an image can, on the one hand, characterize the degree of mismatch of the system geometric parameters for quality control of the CT system and, on the other hand, provide a fair and objective measure of the precision of geometric artifact correction methods and a basis for selecting among them.
Disclosure of Invention
In order to characterize the degree of mismatch of the system geometric parameters and support quality control of a CT system, the invention provides a CT image geometric artifact evaluation method based on a residual error network, which uses a deep network to complete CT image geometric artifact evaluation quickly and accurately. The method grades an image according to the degree of the artifacts it contains, which is essentially a multi-class classification problem.
The invention provides a CT image geometric artifact evaluation method based on a residual error network, which comprises the following steps:
step 1: based on the characteristics of the CT image, a matched training sample data set is constructed;
step 2: taking the residual error network Resnet50 as the basic network, a geometric artifact evaluation network is obtained by reducing the step size of a convolution residual block and adding an attention module;
step 3: training the geometric artifact evaluation network with the training sample data set;
step 4: inputting the CT image to be evaluated into the trained geometric artifact evaluation network to obtain the geometric artifact level of the CT image.
Further, the step 1 specifically includes:
by varying Δu_0 and reconstructing, CT images with different degrees of geometric artifacts are obtained; wherein u_0 is the projection abscissa of the X-ray source on the flat panel detector and Δu_0 is the deviation value of u_0;
dividing the CT images with different degrees of geometric artifacts into three categories (no geometric artifacts, slight geometric artifacts, and severe geometric artifacts) and setting a category label for each category of images;
each CT image and its category label form a training sample, and all the training samples form the training sample data set.
Further, the step 2 specifically includes:
taking the residual error network Resnet50 as the basic network, and reducing the step size of the second convolution residual block in the original residual error network Resnet50 from 2 to 1;
adding an attention module in each convolution residual block, wherein the attention module comprises 2 independent sub-modules, namely a channel attention module and a spatial attention module, which attend to the channel dimension and the spatial dimension respectively;
giving an intermediate feature map F as the input of the attention module in the geometric artifact evaluation network, the attention module sequentially calculates a one-dimensional channel attention map M_c(F) and a two-dimensional spatial attention map M_s(F'); the two attention maps are applied in series to perform overall attention, obtaining the output feature map F'':

F' = M_c(F) ⊗ F

F'' = M_s(F') ⊗ F'

wherein F' is an intermediate operation feature map and ⊗ denotes element-by-element multiplication.
Further, the channel attention module aggregates the spatial information of the input feature map F using global average pooling and global max pooling respectively, obtaining the feature maps F_avg^c and F_max^c; the feature maps F_avg^c and F_max^c are then sent to a multi-layer perceptron to obtain the intermediate operation feature maps MLP(F_avg^c) and MLP(F_max^c) respectively; these two feature maps are then combined by an element-by-element summation (⊕) operation and passed through a sigmoid function to output the one-dimensional channel attention map M_c(F); wherein ⊕ denotes element-by-element summation.
Further, the spatial attention module generates the global average pooling feature and the global max pooling feature of the input feature map F' along the channel axis using global average pooling and global max pooling, concatenates the two features along the channel dimension, and then generates the two-dimensional spatial attention map M_s(F') through a 7 × 7 convolutional layer and a sigmoid function in sequence.
The invention has the beneficial effects that:
According to the CT image geometric artifact evaluation method based on the residual error network provided by the invention, a data set containing different degrees of geometric artifacts is first constructed by varying a sensitive geometric parameter, and then a CT image geometric artifact evaluation network is designed. The geometric artifact evaluation network takes the Resnet50 network as its basic architecture and is designed by reducing the step size of the convolution residual block and adding an attention module. The residual block structure is used to extract higher-dimensional image features; an attention module is added to the convolution residual block to mine channel-level and spatial-dimension features, further focusing on image edge features and improving the grading evaluation performance of the geometric artifact evaluation network. The trained geometric artifact evaluation network can evaluate the degree of CT image artifacts with high accuracy.
Drawings
Fig. 1 is a schematic flowchart of a CT image geometric artifact evaluation method based on a residual error network according to an embodiment of the present invention;
Fig. 2 is an overall structural diagram of a geometric artifact evaluation network according to an embodiment of the present invention;
Fig. 3 is a structural diagram of a Conv Block residual block in a geometric artifact evaluation network according to an embodiment of the present invention;
Fig. 4 is a structural diagram of an ID Block residual block in a geometric artifact evaluation network according to an embodiment of the present invention;
Fig. 5 is a block diagram of an attention module according to an embodiment of the present invention;
Fig. 6 is a block diagram of a channel attention module according to an embodiment of the present invention;
Fig. 7 is a structural diagram of a spatial attention module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for evaluating geometric artifacts of a CT image based on a residual error network, including the following steps:
S101: based on the characteristics of the CT image, a matched training sample data set is constructed;
In particular, the magnitude of the system geometric parameter deviation reflects the severity of the geometric artifacts in the reconstructed image. In order to obtain geometric artifacts of different degrees of severity, the embodiment of the invention selects the most sensitive geometric parameter u_0 and varies its deviation value Δu_0 to generate images with different degrees of geometric artifacts. u_0 is the projection abscissa of the X-ray source on the flat panel detector, and a deviation in its value has the largest influence on the reconstructed image.
Theoretically, the larger Δu_0 is, the more severe the geometric artifacts, the lower the resolution of the reconstructed image, and the greater the information loss. The performance of the trained network is closely related to the training data, and high-quality data is important for network optimization. In order to obtain high-quality geometric artifact images with different degrees of severity, the invention uses the Structural Similarity (SSIM) index to evaluate the similarity of two images, so as to further verify that the severity of the geometric artifacts in the reconstructed image deepens as Δu_0 increases. The SSIM index comprehensively measures the similarity of two images in terms of brightness, contrast and structure.
By comparing the SSIM between the geometric-artifact-free image at Δu_0 = 0 and the geometric artifact images at Δu_0 = 0.01 mm, 0.02 mm, 0.03 mm, 0.04 mm, 0.05 mm, 0.06 mm, 0.07 mm and 0.08 mm, it is found that as Δu_0 increases, the SSIM value between the reconstructed image and the artifact-free image gradually decreases, indicating that the image similarity decreases and the severity of the geometric artifacts gradually increases. This demonstrates that varying Δu_0 can generate geometric artifact images of different degrees of severity. However, the SSIM values calculated for the same Δu_0 differ across different phantom data, indicating that under the same parameter deviation the degree of artifact differs between phantoms. Meanwhile, within each phantom data set, the Δu_0 values corresponding to the same SSIM value at different slice positions are different, indicating that different phantom data reach the same degree of artifact at different parameter deviations.
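As a hedged illustration of this similarity check (not part of the patent text), the short Python sketch below uses TensorFlow's tf.image.ssim to compare an artifact-free reconstruction with reconstructions produced under different assumed Δu_0 deviations; the reconstruct() helper and the chosen deviation values are hypothetical placeholders, not the patent's own tooling.

```python
import numpy as np
import tensorflow as tf

def ssim_against_reference(reference: np.ndarray, distorted: np.ndarray) -> float:
    """SSIM between two single-channel CT slices whose values are scaled to [0, 1]."""
    ref = tf.convert_to_tensor(reference[..., np.newaxis], dtype=tf.float32)
    dis = tf.convert_to_tensor(distorted[..., np.newaxis], dtype=tf.float32)
    return float(tf.image.ssim(ref, dis, max_val=1.0))

# Hypothetical usage: reconstruct() stands in for the CT reconstruction routine
# run with a given detector-offset deviation delta_u0 (in mm).
# reference = reconstruct(delta_u0=0.0)
# for delta_u0 in (0.01, 0.02, 0.04, 0.08):
#     print(delta_u0, ssim_against_reference(reference, reconstruct(delta_u0=delta_u0)))
```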
Therefore, when the training sample data set is constructed, the embodiment of the invention mainly comprises the following sub-steps:
S1011: by varying Δu_0 and reconstructing, CT images with different degrees of geometric artifacts are obtained; wherein u_0 is the projection abscissa of the X-ray source on the flat panel detector and Δu_0 is the deviation value of u_0;
S1012: combining manual experience with the Δu_0 deviations of the CT images, the CT images with different degrees of geometric artifacts are divided into three categories (no geometric artifacts, slight geometric artifacts, and severe geometric artifacts), and category labels are set for the three categories of images;
Specifically, an image with no geometric artifacts is sharp, and no geometric artifacts can be observed visually. An image with slight geometric artifacts corresponds to a certain deviation of the geometric parameters, but the artifacts are not visually obvious. An image with severe geometric artifacts has a large geometric parameter deviation, and the artifacts are visually obvious.
In order to preserve the structural information of the CT images, each CT image is stored in matrix form when the training sample data set is constructed.
S1013: each CT image and its category label form a training sample, and all the training samples form the training sample data set.
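A minimal sketch of how such a sample set could be assembled is given below, assuming the reconstructed slices are already available as matrices and that the integer labels 0, 1 and 2 denote no, slight and severe geometric artifacts; this class encoding and the build_dataset() helper are illustrative assumptions rather than details disclosed in the patent.

```python
import numpy as np

# Assumed class encoding: 0 = no artifact, 1 = slight artifact, 2 = severe artifact.
LABELS = {"none": 0, "slight": 1, "severe": 2}

def build_dataset(slices_by_class):
    """Stack CT slices (stored as matrices) and their category labels into arrays."""
    images, labels = [], []
    for class_name, slices in slices_by_class.items():
        for ct_slice in slices:
            images.append(np.asarray(ct_slice, dtype=np.float32))
            labels.append(LABELS[class_name])
    return np.stack(images), np.asarray(labels, dtype=np.int64)

# Example with dummy arrays standing in for real reconstructions:
# x, y = build_dataset({"none": [np.zeros((512, 512))], "severe": [np.ones((512, 512))]})
```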
S102: taking the residual error network Resnet50 as the basic network, a geometric artifact evaluation network is obtained by reducing the step size of a convolution residual block and adding an attention module;
Specifically, reducing the step size of the convolution residual block helps the network extract more comprehensive features, and adding the attention module enables the network to further mine channel-level and spatial-dimension features, so that edge feature information in the CT image is extracted more effectively and the artifact grading accuracy is improved. The overall structure of the geometric artifact evaluation network designed in the embodiment of the invention is shown in fig. 2.
As one possible implementation, the structural design of the geometric artifact evaluation network includes the following processes:
taking the residual error network Resnet50 as the basic network, and reducing the step size of the second convolution residual block in the original residual error network Resnet50 from 2 to 1;
adding an attention module to each convolution residual block, as shown in fig. 5, the attention module comprises 2 independent sub-modules, namely a channel attention module and a spatial attention module, which attend to the channel dimension and the spatial dimension respectively;
giving an intermediate feature map F as the input of the attention module in the geometric artifact evaluation network, the attention module sequentially calculates a one-dimensional channel attention map M_c(F) and a two-dimensional spatial attention map M_s(F'); the two attention maps are applied in series to perform overall attention, obtaining the output feature map F'':

F' = M_c(F) ⊗ F

F'' = M_s(F') ⊗ F'

wherein F' is an intermediate operation feature map and ⊗ denotes element-by-element multiplication;
wherein, as shown in fig. 6, the channel attention module aggregates the spatial information of the input feature map F using global average pooling and global max pooling respectively, obtaining the feature maps F_avg^c and F_max^c; the feature maps F_avg^c and F_max^c are then sent to a multi-layer perceptron (MLP) to obtain the intermediate operation feature maps MLP(F_avg^c) and MLP(F_max^c) respectively; these two feature maps are then combined by an element-by-element summation (⊕) operation and passed through a sigmoid function to output the one-dimensional channel attention map M_c(F); wherein ⊕ denotes element-by-element summation.
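The Keras sketch below shows one possible reading of this channel attention computation (global average and max pooling, a shared two-layer perceptron, element-by-element summation, and a sigmoid). The reduction ratio of 16 and the exact layer arrangement are assumptions borrowed from the common CBAM formulation, not values stated in the patent.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(feature_map: tf.Tensor, reduction: int = 16) -> tf.Tensor:
    """Re-weight the input feature map by a one-dimensional channel attention map M_c."""
    channels = feature_map.shape[-1]
    # Shared two-layer perceptron applied to both pooled descriptors.
    dense_reduce = layers.Dense(max(channels // reduction, 1), activation="relu")
    dense_expand = layers.Dense(channels)

    avg_pool = layers.GlobalAveragePooling2D()(feature_map)   # F_avg^c
    max_pool = layers.GlobalMaxPooling2D()(feature_map)       # F_max^c

    summed = layers.Add()([dense_expand(dense_reduce(avg_pool)),
                           dense_expand(dense_reduce(max_pool))])   # element-by-element summation
    attention = layers.Activation("sigmoid")(summed)                # M_c(F)
    attention = layers.Reshape((1, 1, channels))(attention)         # broadcast over height and width
    return layers.Multiply()([feature_map, attention])              # F' = M_c(F) (x) F
```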
as shown in FIG. 7, the spatial attention module generates the global average pooled feature and the global maximum pool of the input feature map F' using the global average pooling and the global maximum pooling on the channel axisPerforming channel splicing on the global average pooling characteristic and the global maximum pooling characteristic, and generating a two-dimensional space attention map M through a 7 multiplied by 7 convolutional layer and a sigmoid function in sequences(F'); wherein the characteristic diagram F' is obtained by a one-dimensional channel attention diagram Mc(F) And inputting the feature map F
Figure BDA0003289944710000061
After the operation, the product is obtained,
Figure BDA0003289944710000062
representing element-by-element multiplication.
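Continuing the same hedged sketch, the code below pools along the channel axis, concatenates the two maps, and applies a 7 × 7 convolution with a sigmoid; cbam_block then chains the two modules in the order described above (channel first, then spatial). It reuses the channel_attention helper from the previous sketch and is an assumed illustration, not the patent's reference implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def spatial_attention(feature_map: tf.Tensor) -> tf.Tensor:
    """Re-weight the input feature map by a two-dimensional spatial attention map M_s."""
    # Average and max pooling along the channel axis give two H x W x 1 descriptors.
    avg_pool = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(feature_map)
    max_pool = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(feature_map)
    concat = layers.Concatenate(axis=-1)([avg_pool, max_pool])      # channel splicing
    attention = layers.Conv2D(1, kernel_size=7, padding="same",
                              activation="sigmoid")(concat)         # 7 x 7 conv + sigmoid -> M_s(F')
    return layers.Multiply()([feature_map, attention])              # F'' = M_s(F') (x) F'

def cbam_block(feature_map: tf.Tensor) -> tf.Tensor:
    """Apply channel attention first and spatial attention second, as described above."""
    return spatial_attention(channel_attention(feature_map))
```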
As an implementation, the structure of the Conv Block residual Block (also called convolution residual Block) in the geometric artifact evaluation network is shown in fig. 3, and "CBAM" in fig. 3 is an added attention module.
As an implementable manner, the structure of the ID Block residual Block in the geometry artifact evaluation network is shown in fig. 4.
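For orientation only, the sketch below builds a bottleneck convolution residual block of the Resnet50 type with a CBAM module appended on the main path before the shortcut addition, and with the stride exposed as a parameter so that the second stage can use stride 1 instead of 2 as described above. The exact insertion point of CBAM inside each block and the layer hyperparameters are assumptions, since the patent only states that an attention module is added to each convolution residual block; cbam_block is the helper from the previous sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block_with_cbam(x: tf.Tensor, filters: int, stride: int = 2) -> tf.Tensor:
    """Bottleneck Conv Block (projection shortcut) with a CBAM module on the main path."""
    shortcut = layers.Conv2D(4 * filters, kernel_size=1, strides=stride)(x)
    shortcut = layers.BatchNormalization()(shortcut)

    y = layers.Conv2D(filters, kernel_size=1, strides=stride)(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size=3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(4 * filters, kernel_size=1)(y)
    y = layers.BatchNormalization()(y)

    y = cbam_block(y)  # attention module added inside the convolution residual block

    y = layers.Add()([y, shortcut])
    return layers.Activation("relu")(y)

# The second convolution residual block would be built with stride=1 instead of the
# default stride=2, reflecting the step-size reduction described above.
```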
It should be noted that, because the constructed training sample data set is stored in matrix form, the data set is relatively large (more than 10 GB). The geometric artifact evaluation network constructed in the embodiment of the invention fully considers the size of the data set; the selected network has a moderate depth and a relatively good evaluation effect.
S103: training the geometric artifact evaluation network by using the training sample data set;
In particular, training and testing can be carried out under the TensorFlow deep learning framework on an AMAX workstation. The AMAX workstation is equipped with two Intel Xeon E5-2640 v4 CPUs and 64 GB of memory, and four GeForce GTX 1080Ti GPUs, each with 11 GB of video memory.
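A hedged sketch of this training step under the TensorFlow framework might look as follows; the optimizer, learning rate, batch size and number of epochs are illustrative assumptions rather than values disclosed in the patent, and model stands for the geometric artifact evaluation network built elsewhere.

```python
import tensorflow as tf

def train_evaluation_network(model: tf.keras.Model, x_train, y_train,
                             epochs: int = 50, batch_size: int = 16):
    """Compile and train the three-class geometric artifact evaluation network."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="sparse_categorical_crossentropy",   # integer labels 0 / 1 / 2
        metrics=["accuracy"],
    )
    return model.fit(x_train, y_train,
                     epochs=epochs,
                     batch_size=batch_size,
                     validation_split=0.1)
```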
S104: inputting the CT image to be evaluated into the trained geometric artifact evaluation network to obtain the geometric artifact level of the CT image.
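At inference time, a single CT slice stored as a matrix would be passed through the trained network and the class with the highest score taken as the artifact level. The sketch below assumes single-channel input and the 0/1/2 label order used earlier; both are assumptions about the embodiment rather than stated requirements.

```python
import numpy as np
import tensorflow as tf

ARTIFACT_LEVELS = ("no geometric artifact",
                   "slight geometric artifact",
                   "severe geometric artifact")

def evaluate_slice(model: tf.keras.Model, ct_slice: np.ndarray) -> str:
    """Return the predicted geometric artifact level for a single CT slice stored as a matrix."""
    batch = ct_slice.astype(np.float32)[np.newaxis, ..., np.newaxis]  # shape (1, H, W, 1)
    scores = model.predict(batch, verbose=0)
    return ARTIFACT_LEVELS[int(np.argmax(scores[0]))]
```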
According to the CT image geometric artifact evaluation method based on the residual error network provided by the invention, a data set containing different degrees of geometric artifacts is first constructed by varying a sensitive geometric parameter, and then a CT image geometric artifact evaluation network is designed. The geometric artifact evaluation network takes the Resnet50 network as its basic architecture and is designed by reducing the step size of the convolution residual block and adding an attention module. The residual block structure is used to extract higher-dimensional image features; an attention module is added to the convolution residual block to mine channel-level and spatial-dimension features, further focusing on image edge features and improving the grading evaluation performance of the geometric artifact evaluation network. The trained geometric artifact evaluation network can evaluate the degree of CT image artifacts with high accuracy.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. The CT image geometric artifact evaluation method based on the residual error network is characterized by comprising the following steps:
step 1: based on the characteristics of the CT image, a matched training sample data set is constructed;
step 2: taking the residual error network Resnet50 as the basic network, a geometric artifact evaluation network is obtained by reducing the step size of a convolution residual block and adding an attention module;
step 3: training the geometric artifact evaluation network with the training sample data set;
step 4: inputting the CT image to be evaluated into the trained geometric artifact evaluation network to obtain the geometric artifact level of the CT image.
2. The residual error network-based CT image geometric artifact evaluation method according to claim 1, wherein the step 1 specifically comprises:
by varying Δu_0 and reconstructing, CT images with different degrees of geometric artifacts are obtained; wherein u_0 is the projection abscissa of the X-ray source on the flat panel detector and Δu_0 is the deviation value of u_0;
dividing the CT images with different degrees of geometric artifacts into three categories (no geometric artifacts, slight geometric artifacts, and severe geometric artifacts) and setting a category label for each category of images;
each CT image and its category label form a training sample, and all the training samples form the training sample data set.
3. The residual error network-based CT image geometric artifact evaluation method according to claim 1, wherein the step 2 specifically comprises:
taking the residual error network Resnet50 as the basic network, and reducing the step size of the second convolution residual block in the original residual error network Resnet50 from 2 to 1;
adding an attention module in each convolution residual block, wherein the attention module comprises 2 independent sub-modules, namely a channel attention module and a spatial attention module, which attend to the channel dimension and the spatial dimension respectively;
giving an intermediate feature map F as the input of the attention module in the geometric artifact evaluation network, the attention module sequentially calculates a one-dimensional channel attention map M_c(F) and a two-dimensional spatial attention map M_s(F'); the two attention maps are applied in series to perform overall attention, obtaining the output feature map F'':

F' = M_c(F) ⊗ F

F'' = M_s(F') ⊗ F'

wherein F' is an intermediate operation feature map and ⊗ denotes element-by-element multiplication.
4. The residual network-based CT image geometric artifact evaluation method according to claim 3, wherein the channel attention module aggregates the spatial information of the input feature map F using global average pooling and global max pooling respectively to obtain the feature maps F_avg^c and F_max^c; the feature maps F_avg^c and F_max^c are then sent to a multi-layer perceptron to obtain the intermediate operation feature maps MLP(F_avg^c) and MLP(F_max^c) respectively; these two feature maps are then combined by an element-by-element summation (⊕) operation and passed through a sigmoid function to output the one-dimensional channel attention map M_c(F); wherein ⊕ denotes element-by-element summation.
5. The residual error network-based CT image geometric artifact evaluation method according to claim 3 or 4, wherein the spatial attention module generates the global average pooling feature and the global max pooling feature of the input feature map F' along the channel axis using global average pooling and global max pooling, concatenates the two features along the channel dimension, and then generates the two-dimensional spatial attention map M_s(F') through a 7 × 7 convolutional layer and a sigmoid function in sequence.
CN202111161046.8A 2021-09-30 2021-09-30 CT image geometric artifact evaluation method based on residual error network Pending CN113870375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111161046.8A CN113870375A (en) 2021-09-30 2021-09-30 CT image geometric artifact evaluation method based on residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111161046.8A CN113870375A (en) 2021-09-30 2021-09-30 CT image geometric artifact evaluation method based on residual error network

Publications (1)

Publication Number Publication Date
CN113870375A true CN113870375A (en) 2021-12-31

Family

ID=79001228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111161046.8A Pending CN113870375A (en) 2021-09-30 2021-09-30 CT image geometric artifact evaluation method based on residual error network

Country Status (1)

Country Link
CN (1) CN113870375A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797729A (en) * 2023-01-29 2023-03-14 有方(合肥)医疗科技有限公司 Model training method and device, and motion artifact identification and prompting method and device

Similar Documents

Publication Publication Date Title
US11631162B2 (en) Machine learning training method, system, and device
JP6961139B2 (en) An image processing system for reducing an image using a perceptual reduction method
CN110738697A (en) Monocular depth estimation method based on deep learning
CN112862681B (en) Super-resolution method, device, terminal equipment and storage medium
CN108764247B (en) Dense connection-based deep learning object detection method and device
CN112365514A (en) Semantic segmentation method based on improved PSPNet
CN112215755B (en) Image super-resolution reconstruction method based on back projection attention network
US20220392025A1 (en) Restoring degraded digital images through a deep learning framework
US11669943B2 (en) Dual-stage system for computational photography, and technique for training same
CN108921801B (en) Method and apparatus for generating image
GB2579262A (en) Space-time memory network for locating target object in video content
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN111931857A (en) MSCFF-based low-illumination target detection method
CN107169498B (en) A kind of fusion part and global sparse image significance detection method
JP2022101507A (en) Intelligent denoising
CN115439408A (en) Metal surface defect detection method and device and storage medium
CN113870375A (en) CT image geometric artifact evaluation method based on residual error network
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
Akbarzadeh et al. Medical image magnification based on original and estimated pixel selection models
CN116128980A (en) Automatic calibration method and system for camera inner and outer parameters based on self-encoder
CN112085668B (en) Image tone mapping method based on region self-adaptive self-supervision learning
CN107729885B (en) Face enhancement method based on multiple residual error learning
Wang et al. Efficient multi-branch dynamic fusion network for super-resolution of industrial component image
Wang [Retracted] An Old Photo Image Restoration Processing Based on Deep Neural Network Structure
Meng et al. Siamese CNN-based rank learning for quality assessment of inpainted images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination