CN112258423A - Deartifact method, device, equipment and storage medium based on deep learning - Google Patents

Deartifact method, device, equipment and storage medium based on deep learning Download PDF

Info

Publication number
CN112258423A
Authority
CN
China
Prior art keywords
image
projection
artifact
projection image
target
Prior art date
Legal status
Pending
Application number
CN202011278989.4A
Other languages
Chinese (zh)
Inventor
何楠君
谢佳轩
马锴
郑冶枫
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011278989.4A
Publication of CN112258423A

Classifications

    • G06T 5/73
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture: combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks: learning methods
    • G06T 5/70
    • G06T 2207/10081 — Indexing scheme for image analysis or enhancement; image acquisition modality; tomographic images: computed X-ray tomography [CT]
    • G06T 2207/20081 — Indexing scheme for image analysis or enhancement; special algorithmic details: training; learning

Abstract

The disclosure provides a deep learning based artifact removal method, device, equipment and storage medium, relating to the field of artificial intelligence. The method comprises the following steps: acquiring a target projection image corresponding to a target image containing an artifact; determining an artifact-free projection region in the target projection image; performing image reconstruction on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image; and inputting the first reconstructed projection image into a denoised image generation model, and denoising the back-projection image corresponding to the first reconstructed projection image to obtain a first denoised image of the target image. The denoised image generation model is obtained by performing constrained training of denoised-image generation on a generation model based on artifact-free sample images and the corresponding sample noisy projection images. With the technical scheme provided by the disclosure, artifacts in an image can be effectively removed, the image quality is improved, and the accuracy of the image information is ensured.

Description

Deartifact method, device, equipment and storage medium based on deep learning
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a device, and a storage medium for removing artifacts based on deep learning.
Background
Much important information, such as identification information, lesion information, building structure information, and position information, can be obtained from images, so image quality must be ensured to guarantee the accuracy of that information. However, for various reasons many images carry artifacts, which seriously degrade image quality and clarity, so that effective information cannot be obtained from them. For example, medical images often contain metal artifacts or magnetic susceptibility artifacts, making disease diagnosis from such images impossible and posing great challenges for subsequent treatment.
Therefore, there is a need to provide a reliable and efficient solution to the problem of artifacts in existing images.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device and a storage medium for removing artifacts based on deep learning, which can effectively remove artifacts in images, improve image quality and ensure accuracy of image information.
In one aspect, the present disclosure provides a method for removing artifacts based on deep learning, the method comprising:
acquiring a target projection image corresponding to a target image containing an artifact;
determining artifact-free projection regions in the target projection image;
carrying out image reconstruction on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image;
inputting the first reconstruction projection image into a denoising image generation model, and denoising a back projection image corresponding to the first reconstruction projection image to obtain a first denoising image of the target image;
the de-noised image generation model is a model obtained by performing constrained training of de-noised image generation on a generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
Another aspect provides a deep learning based artifact removal apparatus, the apparatus comprising:
an image acquisition module: the method comprises the steps of acquiring a target projection image corresponding to a target image containing an artifact;
an image region determination module: for determining artifact-free projection regions in the target projection image;
a first image reconstruction module: the image reconstruction device is used for carrying out image reconstruction on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image;
a first image generation module: the first reconstruction projection image is input into a denoising image generation model, and denoising processing is carried out on a back projection image corresponding to the first reconstruction projection image to obtain a first denoising image of the target image;
the de-noised image generation model is a model obtained by performing constrained training of de-noised image generation on a generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
Another aspect provides a deep learning based artifact removal device, the device comprising a processor and a memory, the memory having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by the processor to implement the deep learning based artifact removal method described above.
Another aspect provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, the at least one instruction or the at least one program being loaded and executed by a processor to implement the deep learning based artifact removal method described above.
The artifact removal method, device, equipment and storage medium based on deep learning provided by the disclosure have the following technical effects:
The method obtains a target projection image corresponding to a target image containing an artifact, determines the artifact-free projection region in the target projection image, and performs image reconstruction on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image; this removes the interference of the artifact region in the image and converts the artifact-removal problem into an image denoising problem. The first reconstructed projection image is then input into a denoised image generation model, and the back-projection image corresponding to the first reconstructed projection image is denoised to obtain a first denoised image of the target image. A noise-free, artifact-free image is thus obtained while the details of the original image are preserved, effectively improving image quality and ensuring the accuracy of the image information.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a training method for a denoised image generation model according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a process for acquiring a noisy projection image of a CT sample corresponding to a CT sample image according to an embodiment of the present disclosure;
fig. 4 is a schematic process diagram of denoising a CT sample noisy projection image by using a generation model to obtain a noiseless CT sample image according to the embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a deep learning based artifact removing method according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of the deep learning based artifact removal process provided by an embodiment of the present disclosure;
fig. 7 is a graph of experimental results of deep learning based artifact removal provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a deep learning based artifact removing device provided by an embodiment of the present disclosure;
fig. 9 is a block diagram of a hardware structure of a server of a deep learning based artifact removing method according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions used in them are explained; the following explanations apply to those terms and expressions.
An artifact is image content of various forms that appears in an image but does not correspond to any actual scanned object.
CT (Computed Tomography) is a type of medical imaging that uses precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with highly sensitive detectors, to scan cross-sections of the human body one by one around a given body part, taking radiographic projection measurements of the object at different angles to image its cross-sectional information; it can be used in the examination of various diseases.
HU (Hounsfield Unit) is the unit of the CT value, a measure of the density of a local tissue or organ of the human body; air is -1000 HU and dense bone is +1000 HU.
CNNs (Convolutional Neural Networks) are a class of feedforward neural networks that involve convolution computations and have a deep structure; they are among the representative algorithms of deep learning and are widely used in image classification tasks.
Unsupervised learning is a kind of machine learning in which input data is automatically classified or grouped without pre-labeled training examples.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
In recent years, with the research and development of artificial intelligence technology, artificial intelligence has been widely applied in many fields. The scheme provided by the embodiments of the present disclosure relates to artificial intelligence technologies such as machine learning/deep learning, and is specifically described by the following embodiments:
referring to fig. 1, fig. 1 is a schematic diagram of an application environment provided by an embodiment of the present disclosure, and as shown in fig. 1, the application environment may include at least a server 01 and a terminal 02.
In the embodiment of the present disclosure, the server 01 may include a server operating independently, or a distributed server, or a server cluster composed of a plurality of servers. The server 01 may comprise a network communication unit, a processor, a memory, etc. Specifically, the server 01 may be configured to train and learn a denoising image generation model, and in the embodiment of the present disclosure, the denoising image generation model may be configured to perform denoising processing on the reconstructed projection image to generate a denoising image without artifacts.
In the embodiment of the present disclosure, the terminal 02 may include a smart phone, a desktop computer, a tablet computer, a laptop computer, a digital assistant, an Augmented Reality (AR)/Virtual Reality (VR) device, an intelligent wearable device, a medical imaging device (such as a computed tomography (CT) scanner or a magnetic resonance imaging (MRI) device), an AI disease diagnosis device, and other types of physical devices, and may also include software running on the physical devices, such as an application program. The operating system running on terminal 02 in the embodiment of the present disclosure may include, but is not limited to, an Android system, an iOS system, Linux, Windows, and the like. In the embodiment of the disclosure, the terminal 02 may be configured to provide an image reconstruction service that obtains a reconstructed projection image of a target image, and to provide an image denoising service based on the denoised image generation model trained by the server 01, so that the reconstructed and denoised image of the target image retains the detail features of the original image while the artifacts of the target image are removed.
In embodiments of the present disclosure, artifacts may include, but are not limited to, metal artifacts, motion artifacts, aliasing artifacts or wrapping artifacts, chemical shift artifacts, chemical misregistration artifacts, truncation artifacts, magnetic susceptibility artifacts, zipper artifacts, cross-excitation and corduroy artifacts, and the like. Accordingly, the target image may be an image containing artifacts, such as a CT image containing artifacts.
In addition, fig. 1 is only an application environment of the artifact removal processing based on deep learning, and in practical applications, training learning of a denoised image generation model may also be performed on a device providing a denoising processing service for an image.
The artifact-removing modeling method based on deep learning of the present disclosure is introduced below, and in the embodiment of the present disclosure, modeling may be performed based on an image reconstruction model and a deep learning model, and specifically, the method may include:
s101: constructing an initial model based on artifact-removed iterative reconstruction, wherein the initial model comprises a projection image reconstruction item and an image prior constraint item; wherein the projection image reconstruction term is used for estimating an artifact-free reconstructed projection image from an artifact-containing image to be processed, and the image prior constraint term is used for constraining the estimated artifact-free reconstructed projection image.
In the embodiment of the disclosure, artifact-removal iterative reconstruction treats the artifact-removal problem as an image filling (inpainting) problem for data modeling, constructing the projection image reconstruction term; introduces image prior knowledge to construct the image prior constraint term, which constrains the reconstructed projection image; and solves the resulting optimization problem to obtain the artifact-removed image.
In some embodiments, step S101 may include:
s1011: and acquiring a mask corresponding to an artifact area in the image containing the artifact.
S1012: a projection image reconstruction term is constructed based on the mask and the projection function.
S1013: and constructing an initial model based on the projection image reconstruction item and the weighted image prior constraint item.
In one embodiment, the expression of the initial model is:
$$\hat{X} = \arg\min_{X}\ \tfrac{1}{2}\big\|(1-M_t)\odot(AX - Y)\big\|_2^2 + \lambda R(X) \qquad \text{(formula one)}$$

where $\hat{X}$ characterizes the target artifact-removed image; Y is the projection image corresponding to the image containing the artifact; X is the variable to be solved, representing the artifact-removed image obtained by the artifact-removal processing; $M_t$ is the binary mask corresponding to the artifact region (as seen in the projection domain, i.e. the artifact trace), where 1 indicates an artifact at that position and 0 indicates none; A is the projection function; the 1st term of formula one is the projection image reconstruction term; $R(\cdot)$ denotes the image prior, and the 2nd term $R(X)$ is the image prior constraint term; $\lambda$ is a first weighting coefficient that weights $R(X)$ to balance the 1st term against the 2nd term in formula one.
In one embodiment, the image containing the artifact may be a CT image containing an artifact; correspondingly, $\hat{X}$ characterizes the target artifact-removed CT image, Y is the projection image corresponding to the CT image containing the artifact, and X, the quantity to be solved, is the artifact-removed CT image obtained by the artifact-removal processing.
S102: and adding a decoupling constraint term into the initial model to obtain an artifact-removed model, wherein the decoupling constraint term is used for decoupling the artifact-free projection image from the initial model.
In some embodiments, the decoupling constraint term is constructed by introducing alternative variables.
In one embodiment, based on formula one above, a substitute variable Z is introduced and set equal to AX; Z is thus the projection image obtained by projection-transforming the artifact-removed image X, a projection image free of artifact interference, hereinafter characterized as the artifact-free reconstructed projection image. This yields the decoupling constraint condition Z = AX.
Further, for the convenience of subsequent calculation, the first term of formula one is multiplied by a scaling factor $\tfrac{1}{\sigma^2}$ (consistent with the weighting coefficient $\mu\sigma^2$ that appears in formula five below), transforming formula one of the initial model into the following formula two:

$$\hat{X} = \arg\min_{X,Z}\ \tfrac{1}{2\sigma^2}\big\|(1-M_t)\odot(Z - Y)\big\|_2^2 + \lambda R(X), \quad \text{s.t. } Z = AX \qquad \text{(formula two)}$$

where $\text{s.t. } Z = AX$ is the decoupling constraint.
It should be noted that, because the scaling factor is a positive scalar, it does not change the minimizer of the optimization problem in the initial model, so formula one and formula two are equivalent. Formula two effectively decouples the projection image Z free of artifact interference (the artifact-free reconstructed projection image) from the optimization problem of formula one, which facilitates the subsequent model solution.
further, a Lagrange multiplier method is adopted, a decoupling constraint term is obtained based on the decoupling constraint condition, the decoupling constraint term is added into the second formula, and the following third formula is obtained and is an expression of the artifact removing model:
Figure BDA0002780105450000074
wherein μ is a second weight coefficient for balancing the first two terms of equation three with term 3, term 3 of equation three
Figure BDA0002780105450000075
To decouple the constraint terms.
In practical applications, based on step S102, the preliminary modeling of the image artifact-removal problem is complete.
Based on the foregoing embodiment, in the embodiment of the present disclosure, the modeling method further includes a step of solving the artifact-removal model, which may specifically include:
S103: decomposing the artifact-removal model according to a variable separation method or the alternating direction method of multipliers (ADMM) to obtain a first sub-model and a second sub-model; the first sub-model comprises the projection image reconstruction term and the decoupling constraint term and is used for reconstructing the corresponding artifact-free reconstructed projection image from the artifact-containing image, and the second sub-model comprises the image prior constraint term and the decoupling constraint term and is used for generating a denoised, artifact-free image based on the artifact-free reconstructed projection image;
in one embodiment, the artifact-removed model is decomposed by a variable separation method to obtain a first sub-model and a second sub-model.
Specifically, the optimization problem of formula three is decomposed into two sub-problems by a variable separation method, the first sub-problem is to reconstruct a corresponding artifact-free reconstructed projection image from an artifact-containing image, and the second sub-problem is to generate a denoised artifact-free denoised image based on the artifact-free reconstructed projection image.
Further, the essence of the first sub-problem is to compute Z, and only the 1st and 3rd terms of formula three involve Z, so the expression of the first sub-model corresponding to the first sub-problem is obtained from those two terms (the first row of formula four); the essence of the second sub-problem is to compute X, and only the 2nd and 3rd terms of formula three involve X, so the expression of the second sub-model corresponding to the second sub-problem is obtained from those two terms (the second row of formula four).

$$Z_{k+1} = \arg\min_{Z}\ \tfrac{1}{2\sigma^2}\big\|(1-M_t)\odot(Z - Y)\big\|_2^2 + \tfrac{\mu}{2}\big\|Z - AX_k\big\|_2^2$$
$$X_{k+1} = \arg\min_{X}\ \lambda R(X) + \tfrac{\mu}{2}\big\|Z_{k+1} - AX\big\|_2^2 \qquad \text{(formula four)}$$
S104: obtaining an analytic solution of a first sub-model;
in practical application, the first sub-model corresponding to the first sub-problem has an analytic solution.
In one embodiment, based on the formula above, taking the derivative of the first sub-model's expression with respect to Z and setting it to 0 yields the expression for the analytic solution of the first sub-model, shown as formula five below, where k denotes the iteration number and the operations are understood elementwise. As formula five shows, the artifact-free reconstructed projection image Z is a linear combination of the projection image Y corresponding to the artifact-containing image and $AX_k$, the projection image obtained by projection-transforming the artifact-removed image. The detail in the artifact image can therefore be fully preserved, and the image obtained after artifact-removal processing is more realistic and clear.

$$Z_{k+1} = \frac{(1-M_t)\odot Y + \mu\sigma^2\, A X_k}{(1-M_t) + \mu\sigma^2} \qquad \text{(formula five)}$$
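As a concrete illustration of formula five, the following is a minimal numpy sketch of the closed-form update; the function name and the packing of $\mu\sigma^2$ into a single `mu_sigma2` argument are conventions of this sketch, not part of the patent.

```python
import numpy as np

def update_reconstructed_projection(Y, AXk, Mt_proj, mu_sigma2):
    """Formula five: blend the measured projection Y with the re-projection
    AXk of the current artifact-removed image, elementwise.

    Mt_proj: binary artifact trace in the projection domain (1 = artifact).
    mu_sigma2: the linear weighting coefficient mu * sigma^2.
    """
    no_artifact = 1.0 - Mt_proj
    # Where no_artifact == 1 this is a weighted average of Y and AXk;
    # where no_artifact == 0 (inside the artifact trace) it returns AXk.
    return (no_artifact * Y + mu_sigma2 * AXk) / (no_artifact + mu_sigma2)
```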
S105: constructing a denoising image generation model based on deep learning according to the second sub-model;
in practical application, the essence of the second subproblem corresponding to the second submodel is the image denoising problem, and therefore, a denoised image generation model based on deep learning can be constructed based on the second model and the second subproblem.
In an embodiment, based on formula four and formula five above, the expression of the denoised image generation model may be written as formula six below, where G denotes the generation model that replaces the explicit prior-constrained minimization over X; the denoised image generation model may be a model obtained by performing constrained training of denoised-image generation on the generation model based on artifact-free sample images and the corresponding sample noisy projection images.

$$X_{k+1} = G\big(Z_{k+1}\big) \qquad \text{(formula six)}$$
S106: and determining the first sub-model and the de-noised image generation model as a target de-artifact model.
Therefore, training of deep learning models such as the generation model does not need to be performed on paired sample artifact images and sample artifact-removed images, which reduces the difficulty of acquiring training data and of training the denoised image generation model while improving the model's accuracy; moreover, the artifact-removal problem is converted into an unsupervised learning problem, improving the overall accuracy and universality of the target artifact-removal model.
In practical application, in the artifact removing application process of the target artifact removing model, the first sub-model and the denoised image generation model are solved, and a denoised image output by the denoised image generation model is used as a final target artifact removing image.
In some embodiments, in the artifact removing application process of the target artifact removing model, the first sub-model and the denoised image generation model are alternately solved until the denoised image output by the denoised image generation model meets a preset convergence condition, and the denoised image meeting the preset convergence condition is used as a final target artifact removing image.
Therefore, the second sub-model is solved through the de-noised image generation model, the non-artifact reconstruction projection image obtained by the first sub-model is restrained by the priori knowledge which is learned by the de-noised image generation model and has stronger generalization, artifacts in the artifact image can be effectively removed, and the authenticity and the definition of the image are improved.
The following introduces an embodiment of a training process of the deep learning based denoised image generation model of the present disclosure, which may specifically include:
in the embodiment of the disclosure, the denoised image generation model is a model obtained by performing constraint training of denoised image generation on the generation model based on a sample image without an artifact and a sample noisy projection image corresponding to the sample image.
An embodiment of a training method for a deep learning-based denoised image generation model is described below with reference to fig. 2.
S201: and acquiring a sample image without artifacts and a sample noise projection image corresponding to the sample image.
In the embodiments of the present disclosure, each sample image and its sample noisy projection image form a sample training pair used as training data for the generation model. In particular, a large number of sample images may be used.
In one embodiment, assuming the sample images are medical images, they may be a large number of artifact-free CT images; accordingly, a large number of artifact-free CT images may be acquired from a public CT dataset and then preprocessed according to the input requirements of the model in subsequent training (for example, resized by scaling) to obtain the sample images.
Further, the division into a training set and a test set may be based on the HU values in the CT images. Specifically, CT images containing artifacts, for example CT images containing metal artifacts, can be acquired from the internationally published CT dataset SpineWeb as a test set. In a specific embodiment, the training set includes more than 20,000 CT images without metal artifacts and the test set includes more than 200 CT images containing metal artifacts.
In another embodiment, the sample images may also be human images, such as face images; accordingly, artifact-free face images of a large number of users may be collected from internet websites and then preprocessed according to the input requirements of the model in subsequent training (for example, resized by scaling) to serve as the sample images.
It should be noted that, the embodiment of the training method for the deep learning-based denoised image generation model of the present disclosure may be based on the target artifact removal model constructed in the foregoing artifact removal modeling method embodiment.
Therefore, training of deep learning models such as the generation model does not need to be performed on paired sample artifact images and sample artifact-removed images, which reduces the difficulty of acquiring training data and of training the generation model while improving the accuracy of the denoised image generation model; moreover, the artifact-removal problem is converted into an unsupervised learning problem, improving the overall accuracy and universality of the target artifact-removal model.
In practical applications, the acquiring of the sample noisy projection image corresponding to the sample image in step S201 may include:
s2011: carrying out projection transformation on the sample image to obtain a sample projection image;
s2012: and carrying out noise adding processing on the sample projection image to obtain the sample noise added projection image.
In a specific embodiment, the projective transformation establishes a spatial mapping between the pixels of the original image and those of the projection image according to geometric constraints, thereby producing the projection image. Projective transforms include, but are not limited to, fan-beam projection transforms, parallel-beam projection transforms, and the like. In one embodiment, the geometric constraint of the projective transformation may be the projection function (e.g., the projection function A) from the foregoing artifact-removal modeling embodiment.
In some embodiments, step S2012 may specifically be: and carrying out Gaussian noise processing on the sample projection image to obtain a sample noise projection image containing Gaussian noise.
In an embodiment, the sample image may be a CT sample image without artifacts, please refer to fig. 3, and fig. 3 is a schematic diagram of a process of acquiring a noisy projection image of a CT sample corresponding to the CT sample image according to an embodiment, where fig. 3a is the CT sample image, fig. 3b is the CT sample projection image, and fig. 3c is the noisy projection image of the CT sample.
The noise addition processing for the sample projection image is not limited to the gaussian noise addition processing described above, and may include other noise addition processing capable of generating a noise-added projection image.
Therefore, a training sample set is obtained by batch noise-adding processing of the acquired artifact-free images; paired sample pairs do not need to be collected, which reduces the difficulty of acquiring training data and of training the generation model, and improves the accuracy of the denoised image generation model.
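To make the sample-pair construction in steps S2011-S2012 concrete, here is a hedged Python sketch assuming a parallel-beam geometry via scikit-image's Radon transform as the projection function A; the noise level `sigma` and the angle set `theta` are illustrative choices not specified by the patent.

```python
import numpy as np
from skimage.transform import radon

def make_training_pair(sample_image: np.ndarray, sigma: float = 0.05):
    """Build one (sample image, sample noisy projection image) training pair."""
    theta = np.linspace(0.0, 180.0, max(sample_image.shape), endpoint=False)
    # S2011: projection transformation -> sample projection image (sinogram)
    sample_projection = radon(sample_image, theta=theta)
    # S2012: Gaussian noise-adding -> sample noisy projection image
    noise = np.random.normal(0.0, sigma * np.abs(sample_projection).max(),
                             size=sample_projection.shape)
    return sample_image, sample_projection + noise
```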
S202: and performing denoising image generation training on the generation model based on the sample image and the sample denoising projection image until a training denoising image output by the generation model meets a training convergence condition.
In the embodiment of the present disclosure, the generation model may include, but is not limited to, deep learning models such as a convolutional neural network, a recurrent neural network, or a recursive neural network.
In a specific embodiment, the first convolution layer of the generation model may be used to perform a back-projection transformation on the input image: it back-projects the sample noisy projection image to obtain its back-projection image. The remaining layers of the generation model then denoise that back-projection image and output the denoised image.
In one embodiment, the geometric constraint of the first convolution layer's back-projection transformation may be the back-projection function corresponding to the projection function in the foregoing modeling embodiment, such as the back-projection function $A^{-1}$.
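The generator structure just described might be sketched as follows; this is a hedged PyTorch illustration in which the patent's back-projection layer is approximated by a fixed (non-trainable) linear operator standing in for $A^{-1}$, and the denoising CNN depth and channel counts are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DenoisingGenerator(nn.Module):
    """First layer back-projects the sinogram; the rest denoises the result."""
    def __init__(self, backprojection: torch.Tensor, image_size: int):
        super().__init__()
        # Fixed back-projection operator (stand-in for A^{-1}), shape:
        # (image_size * image_size, n_sinogram_values).
        self.backproject = nn.Linear(backprojection.shape[1],
                                     backprojection.shape[0], bias=False)
        self.backproject.weight = nn.Parameter(backprojection,
                                               requires_grad=False)
        self.image_size = image_size
        # Remaining layers: a small CNN that denoises the back-projected image.
        self.denoise = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        x = self.backproject(sinogram.flatten(1))
        x = x.view(-1, 1, self.image_size, self.image_size)
        return x + self.denoise(x)  # residual denoising of the back-projection
```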
In practical applications, step S202 may include:
s2021: inputting the sample noise-added projection image into a generation model, and carrying out denoising processing on a back projection image corresponding to the sample noise-added projection image to obtain a training denoising image;
s2022: acquiring an image error between a training denoising image and a sample image;
s2023: adjusting model parameters of the generated model based on the image error until the obtained image error meets a model convergence condition;
s2024: and determining that the training denoised image corresponding to the image error meeting the model convergence condition meets the training convergence condition.
In a specific embodiment, the sample noisy projection image can be used as an input of a generation model, and the back projection transformation processing is performed on the sample noisy projection image to obtain a back projection image of the sample noisy projection image; training learning of denoising image generation is carried out on the generated model based on the back projection image of the sample denoising projection image and the sample image, so that the generated model learns image prior knowledge of image denoising processing, and a training denoising image is obtained; then, an image error between the training denoised image and the sample image is calculated based on the loss function.
It should be noted that the model structure of the generative model may be set according to the requirement of the denoising process, and the disclosure is not limited herein.
In practical applications, step S2023 may include:
s20231: judging whether the image error meets a model convergence condition;
s20232: if the result of the determination is negative, the model parameters in the generated model are adjusted based on the gradient descent method, and the training learning steps of steps S2021, S2022, and S20231 described above are repeated.
S20233: when the result of the determination is yes, the above-described step S2024 is performed.
In some embodiments, the image error satisfying the model convergence condition may be specifically: the image error is less than or equal to a preset error threshold; specifically, the image error may represent the similarity between the training denoised image and the sample image.
In a specific embodiment, the preset error threshold may be set in combination with a requirement for the definition of a target artifact-removed image obtained by performing artifact-removal processing on a target image containing an artifact in practical application, and generally, the smaller the preset error threshold is, the higher the definition of an image output by a trained denoising image generation model is, but the longer the training time is; on the contrary, the larger the preset error threshold is, the lower the image definition output by the trained denoising image generation model is, but the training time is shorter.
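Putting steps S2021-S2024 together, a hedged PyTorch training-loop sketch might look like the following; the MSE loss, Adam optimizer, and threshold value are assumptions, since the patent only requires an image error and a model convergence condition.

```python
import torch
import torch.nn as nn

def train_denoiser(generator: nn.Module, loader, epochs: int = 50,
                   err_threshold: float = 1e-3) -> nn.Module:
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for sample_image, noisy_projection in loader:
            # S2021: the generator back-projects the noisy projection
            # internally and denoises it -> training denoised image
            denoised = generator(noisy_projection)
            # S2022: image error between denoised output and clean sample
            err = loss_fn(denoised, sample_image)
            # S20232: gradient-descent adjustment of model parameters
            opt.zero_grad()
            err.backward()
            opt.step()
            # S20231/S2024: stop once the error meets the convergence condition
            if err.item() <= err_threshold:
                return generator
    return generator
```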
S203: and taking the generated model meeting the training convergence condition as a de-noised image generated model.
In an embodiment, the sample image may be an artifact-free CT sample image, please refer to fig. 4, where fig. 4 is a schematic diagram of a process of denoising a CT sample noisy projection image by using a generation model to obtain a noise-free CT sample image according to an embodiment, where fig. 4a is the CT sample noisy projection image, fig. 4b is the CT sample image, and a dashed square represents the generation model.
It should be noted that the denoised image generation model and its training in this embodiment may be the same as those in the foregoing deep learning based artifact-removal modeling embodiment. In one embodiment, its expression may be the same as formula six above.
In the embodiment of the disclosure, the training and learning of image denoising are performed on the generated model by combining the sample image and the sample noisy projection image, so that the similarity between the image generated by the generated model and the sample image can be improved, and the denoised image generation model can process any projection image containing noise into an image which is noiseless, clear and complete in information.
The following introduces the deep learning based artifact removal method of the present disclosure, built on the denoised image generation model above; fig. 5 is a schematic flowchart of a deep learning based artifact removal method provided by an embodiment of the present disclosure. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. In practice, the system or server product may execute the illustrated methods sequentially or in parallel (e.g., in a parallel-processor or multi-threaded environment). Specifically, as shown in fig. 5, the method may include:
s301: and acquiring a target projection image corresponding to the target image containing the artifact.
In embodiments of the present disclosure, artifacts may include, but are not limited to, metal artifacts, motion artifacts, aliasing artifacts or wrapping artifacts, chemical shift artifacts, chemical misregistration artifacts, truncation artifacts, magnetic susceptibility artifacts, zipper artifacts, cross-excitation and corduroy artifacts, and the like. Accordingly, the target image may be an image containing artifacts, such as a CT image containing artifacts.
In practical application, the target projection image corresponding to the target image is the projection image obtained by projection-transforming the target image. Specifically, the projective transformation may be the one used in the foregoing artifact-removal modeling and generation-model training embodiments, another type of projective transformation from the prior art, or the projective transformation corresponding to the back-projection transformation used when the target image was generated. For example, if the target image is a CT image, the CT image acquisition process in CT scan reconstruction includes a back-projection transformation with a corresponding geometric constraint, and the projective transformation in step S301 may be the projective transformation corresponding to that back-projection geometric constraint.
S303: an artifact-free projection region in the target projection image is determined.
In the embodiment of the present disclosure, the artifact-free projection region is a region that is not interfered by an artifact in the target projection image.
In practical applications, before step S303, the method may further include:
s302: artifact and/or artifact-free regions in the target image are identified.
Accordingly, in one embodiment, step S303 may include:
s3031: acquiring a mapping relation between a target image and a target projection image;
s3032: determining an artifact projection area corresponding to an artifact area in the target image in the target projection image according to the mapping relation between the target image and the target projection image;
s3033: the region outside the artifact projection region in the target projection image is determined as an artifact-free projection region.
Specifically, an artifact-free projection region in the target projection image is determined based on the artifact region in the target image, and the artifact-free projection region is an image region in the target projection image corresponding to a region outside the artifact region in the target image.
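One plausible realization of the image-to-projection mapping in S3031-S3033 is to forward-project the artifact mask itself: any sinogram position whose ray intersects the artifact region belongs to the artifact projection region. The following is a sketch under that assumption, again using scikit-image's Radon transform.

```python
import numpy as np
from skimage.transform import radon

def artifact_projection_mask(artifact_mask: np.ndarray, theta) -> np.ndarray:
    """Map the image-domain artifact mask to the artifact projection region."""
    trace = radon(artifact_mask.astype(float), theta=theta)
    Mt_proj = (trace > 1e-8).astype(float)  # S3032: artifact projection region
    return Mt_proj                          # S3033: (1 - Mt_proj) is artifact-free
```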
In another embodiment, step S303 may include:
s3031: acquiring a mapping relation between a target image and a target projection image;
s3035: and determining a non-artifact projection area corresponding to the non-artifact area in the target image in the target projection image according to the mapping relation between the target image and the target projection image.
In a specific embodiment, the above artifact region or the artifact-free region may be characterized by a mask, or the above artifact projection region or the artifact-free projection region may be characterized by a mask. When the target image is a two-dimensional image, the mask may be a binary mask.
It should be noted that the artifact projection region and/or the artifact-free projection region in the acquired target projection image may also be identified first, and the artifact-free region in the target image then determined according to the mapping relationship between the target image and the target projection image.
S305: and carrying out image reconstruction on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image.
In the embodiment of the present disclosure, step S305 may include:
s3051: and performing image interpolation on the target image based on the artifact-free region in the target image corresponding to the artifact-free projection region to obtain an initial reconstructed image.
In practical application, image interpolation reconstruction can be performed on an artifact region in a target image based on features (such as pixel features) of the artifact-free region in the target image, so as to obtain an initial reconstructed image. In particular, the interpolated reconstruction may include, but is not limited to, a linear interpolated reconstruction.
S3052: and generating a first reconstruction projection image according to the projection image of the initial reconstruction image and the target sub-projection image corresponding to the artifact-free projection area in the target projection image.
In practical application, the projection image of the initial reconstruction image is an image obtained by performing projection transformation on the initial reconstruction image, and the target sub-projection image is a projection image of a projection area without an artifact in the target projection image.
In some embodiments, step S3052 may specifically be: and linearly combining the projection image of the initial reconstruction image and the target sub-projection image to obtain a first reconstruction projection image. In particular, the linear combination here is a weighted linear combination.
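Combining S3051-S3052 with the helpers above gives the following sketch; the mean fill used for the initial reconstruction is a crude illustrative stand-in for the linear interpolation the patent mentions, and `update_reconstructed_projection` is the hypothetical formula-five helper from the earlier sketch.

```python
import numpy as np
from skimage.transform import radon

def first_reconstructed_projection(target_image, artifact_mask, Y, Mt_proj,
                                   mu_sigma2, theta):
    """artifact_mask: boolean image-domain mask; Y: target projection image."""
    # S3051: fill the artifact region from the artifact-free region to get
    # the initial reconstructed image X0
    X0 = target_image.copy()
    X0[artifact_mask] = target_image[~artifact_mask].mean()
    # S3052: weighted linear combination of the projection of X0 with the
    # target sub-projection, i.e. formula five with k = 0
    return update_reconstructed_projection(Y, radon(X0, theta=theta),
                                           Mt_proj, mu_sigma2)
```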
S307: and inputting the first reconstructed projection image into a de-noising image generation model, and de-noising the back projection image corresponding to the first reconstructed projection image to obtain a first de-noising image of the target image.
The de-noised image generation model is obtained by performing constrained training on de-noised image generation on the generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
In the embodiment of the present disclosure, step S307 may include:
s3071: inputting the first reconstruction projection image into a de-noising image generation model for back projection transformation to obtain a back projection image corresponding to the first reconstruction projection image;
s3072: and denoising the back projection image corresponding to the first reconstruction projection image based on the denoising image generation model to obtain a first denoising image of the target image.
In practical application, the first convolution layer of the denoised image generation model may be used to perform back-projection transformation processing on the input projection image to obtain a back-projection image of the input projection image. Further, other structural layers of the denoised image generation model perform denoising processing on the back projection image of the input projection image.
Based on all or part of the above embodiments, in some embodiments, the first denoised image is used as a target artifact-removed image, and artifact-removal processing of the target image is completed.
In the embodiment of the disclosure, a target projection image corresponding to a target image containing an artifact can be acquired, the artifact-free projection region in the target projection image determined, and image reconstruction performed on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image; this removes the interference of the artifact region in the image and converts the artifact-removal problem into an image denoising problem. The first reconstructed projection image is then input into the denoised image generation model, and the back-projection image corresponding to the first reconstructed projection image is denoised to obtain a first denoised image of the target image; a noise-free, artifact-free image is thus obtained while the details of the original image are preserved, effectively improving image quality and ensuring the accuracy of the image information.
Based on all or part of the foregoing embodiments, in other embodiments, after step S307, the artifact removing method may further include:
s309: and performing projection transformation on the first denoising image to obtain a first denoising projection image.
In the embodiment of the present disclosure, the projective transformation processing performed on the first denoised image may be similar to the projective transformation processing related to step S301.
S311: and generating a second reconstruction projection image according to the first denoising projection image and a target sub-projection image corresponding to the artifact-free area in the target projection image.
S313: and inputting the second reconstruction projection image into a denoising image generation model, and denoising a back projection image corresponding to the second reconstruction projection image to obtain a second denoising image.
In the embodiment of the present disclosure, a generation process of the second reconstructed projection image may be similar to the step S3052, and a generation process of the second denoised image may be similar to the step S307, which is not described herein again.
S315: and judging whether the second denoised image meets a preset convergence condition.
In some embodiments, the preset convergence condition may include: and the similarity between the currently generated denoised image and the previously generated denoised image is more than or equal to the preset similarity. Accordingly, step S315 may include:
1) acquiring the similarity between the second denoised image and the first denoised image;
2) judging whether the similarity between the second denoised image and the first denoised image is greater than or equal to a preset similarity or not;
3) and if so, determining that the second denoised image meets a preset convergence condition.
In a specific embodiment, the preset similarity may be set in combination with a requirement for the definition of a target artifact-removed image obtained by performing artifact-removal processing on a target image including an artifact in practical application, generally, the larger the preset similarity is, the higher the definition of the obtained output image is, but the longer the artifact-removal processing time is; conversely, the smaller the preset similarity is, the lower the definition of the obtained output image is, but the artifact removal processing time is shorter.
In other embodiments, the preset convergence condition may include: and the iteration times corresponding to the currently generated denoised image are preset iteration times. Accordingly, step S315 may include:
1) acquiring iteration times corresponding to the second denoised image;
2) judging whether the iteration times corresponding to the second denoised image are consistent with the preset iteration times or not;
3) and if so, determining that the second denoised image meets a preset convergence condition.
In a specific embodiment, the preset iteration times can be combined with the requirement of definition of a target artifact-removed image obtained by performing artifact-removal processing on a target image containing an artifact in practical application, generally, the larger the preset iteration times is, the higher the definition of the obtained output image is, but the longer the artifact-removal processing time is; conversely, the smaller the preset iteration number is, the lower the definition of the obtained output image is, but the artifact removal processing time is shorter.
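A small sketch of the convergence test in step S315 follows, assuming similarity is measured by normalized correlation between consecutive denoised images; the metric, threshold, and iteration cap are all implementation assumptions the patent leaves open.

```python
import numpy as np

def converged(prev_img, curr_img, k, sim_threshold=0.999, max_iters=5):
    """True if the two denoised images are similar enough or k hit the cap."""
    a = prev_img - prev_img.mean()
    b = curr_img - curr_img.mean()
    similarity = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return similarity >= sim_threshold or k >= max_iters
```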
S317: and if the second denoised image meets the preset convergence condition, taking the second denoised image as a target denoised image of the target image.
S319: if the second denoised image does not meet the preset convergence condition, repeating the steps S309 to S315 until the obtained denoised image meets the preset convergence condition, and taking the denoised image meeting the preset convergence condition as the target denoised image of the target image.
In some embodiments, the linear combination in step S3052 is a weighted linear combination with a corresponding linear weighting coefficient. Correspondingly, when the preset convergence condition is that the iteration number of the currently generated denoised image equals the preset iteration number, each iteration of the artifact-removal process is assigned a preset linear weighting coefficient, which decreases gradually as the iteration number increases.
In practical application, the target de-noised image is determined as a target de-artifact image, and de-artifact processing of the target image is completed.
In one embodiment, the generation process of the reconstructed projection images (including the first reconstructed projection image and the second reconstructed projection image, etc.) is based on formula five in the foregoing artifact-removing modeling method embodiment, and the expression of the denoised image generation model is formula six, as shown below.
$$Z_{k+1} = \frac{(1-M_t)\odot Y + \mu\sigma^2\, A X_k}{(1-M_t) + \mu\sigma^2} \qquad \text{(formula five)}$$

$$X_{k+1} = G\big(Z_{k+1}\big) \qquad \text{(formula six)}$$

where $M_t$ is the binary mask corresponding to the artifact region of the target image, with 1 indicating an artifact at that position and 0 indicating none, so that the inversion $(1-M_t)$ characterizes the artifact-free region; Y is the target projection image corresponding to the target image, and accordingly $(1-M_t)\odot Y$ denotes the target sub-projection image; k is the iteration number; A is the projection function; and $\mu\sigma^2$ characterizes the linear weighting coefficient.
When k = 0, in formula five: $X_0$ characterizes the initial reconstructed image, $AX_0$ the projection image of the initial reconstructed image, and $Z_1$ (i.e. $Z_{k+1}$) the first reconstructed projection image obtained by substituting $X_0$ into formula five; $X_1$ (i.e. $X_{k+1}$) in formula six characterizes the first denoised image obtained by feeding the first reconstructed projection image into formula six for solving.
When k ≥ 1, $X_k$ in formula five represents the denoised image previously obtained from formula six, and $AX_k$ is the projection image obtained by projection-transforming $X_k$; $Z_{k+1}$ in formula five is the reconstructed projection image obtained in the current iteration.
Further, if the first denoised image is taken as a target artifact-removed image, the artifact-removed processing of the target image is completed when k is equal to 1.
Further, if the target de-noised image is determined as the target de-artifact image, the de-artifact processing of the target image is completed when k is equal to n (n is larger than or equal to 1) and meets a preset convergence condition.
Further, if the preset convergence condition is that the iteration number corresponding to the currently generated denoised image equals a preset iteration number (for example, 5), each iteration of the artifact-removal process is given a preset linear weighting coefficient (a preset $\mu\sigma^2$ value), and the preset $\mu\sigma^2$ value decreases gradually as the iteration number increases.
In this case, in step S3052, the first reconstructed projection image may be obtained from formula five above, where Y is the target projection image, X is the initial reconstructed image in the image (back-projection) domain, and AX is the projection image obtained by projection-transforming that initial reconstructed image.
In practical application, when a target image is processed, a target projection image corresponding to the target image containing an artifact is obtained, an artifact-free projection area or an artifact projection area in the target projection image is determined, and/or an artifact area or an artifact-free area in the target image is determined; then, image interpolation is carried out on the target image to obtain an initial reconstruction image.
Further, the initial reconstructed image is substituted into formula five, and formula five is solved according to the projection image of the initial reconstructed image and the target sub-projection image corresponding to the artifact-free projection area in the target projection image, to obtain the first reconstructed projection image; the first reconstructed projection image is then input into formula six (the denoised image generation model) for solving, and the back projection image corresponding to the first reconstructed projection image is denoised in the solving process to obtain the first denoised image of the target image.
If no iterative calculation is needed, the first denoised image is taken as the target de-artifact image.
Alternatively, if iterative calculation is needed, projection transformation is performed on the first denoised image to obtain a first denoised projection image; the first denoised projection image is substituted into formula five, which is solved according to the first denoised projection image and the target sub-projection image corresponding to the artifact-free projection area in the target projection image, to obtain a second reconstructed projection image; the second reconstructed projection image is input into formula six for solving, and the back projection image corresponding to the second reconstructed projection image is denoised in the solving process to obtain a second denoised image. Formula five and formula six are solved alternately in this way until the denoised image output by the denoised image generation model meets the preset convergence condition, and the denoised image meeting the preset convergence condition is taken as the target denoised image, that is, the target de-artifact image. Referring to fig. 6, fig. 6 is a schematic diagram of the deep learning based de-artifact process provided in this embodiment, where fig. 6a is the target image and fig. 6b is the target de-artifact image.
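The alternating solve of formula five and formula six can then be sketched end to end as follows. This is a schematic reading of the embodiment, not the disclosed implementation: scikit-image's parallel-beam radon/iradon pair stands in for the projection function A and its back-projection A^{-1}, `denoise_model` is a placeholder for the trained denoised image generation model, and the initial reconstruction is simplified.

```python
import numpy as np
from skimage.transform import radon, iradon

def deartifact(target_image, artifact_mask, denoise_model, mu_sigma2_schedule,
               theta=None):
    """Alternating solve of formula five and formula six (sketch).

    target_image       -- 2-D target image containing the artifact
    artifact_mask      -- binary mask M_t of the artifact trace in the
                          projection domain (same shape as the sinogram)
    denoise_model      -- callable standing in for the trained denoised
                          image generation model (formula six)
    mu_sigma2_schedule -- preset, decreasing linear weighting coefficients
    """
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(target_image.shape))

    Y = radon(target_image, theta=theta)      # target projection image
    keep = 1.0 - artifact_mask                # artifact-free projection area

    # Initial reconstructed image; the embodiment uses image interpolation,
    # the identity is used here only to keep the sketch short.
    X = target_image.copy()
    for mu_sigma2 in mu_sigma2_schedule:
        AX = radon(X, theta=theta)            # project the current estimate
        # Formula five: fuse the measured artifact-free projections with the
        # projections of the current estimate, weighted by mu*sigma^2.
        Z = (keep * Y + mu_sigma2 * AX) / (keep + mu_sigma2)
        # Formula six: back-project, then denoise with the generation model.
        X = denoise_model(iradon(Z, theta=theta))
    return X                                  # target de-artifact image
```

A one-element schedule corresponds to the non-iterative case above, in which the first denoised image is taken directly as the target de-artifact image.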
Further, please refer to fig. 7, which shows experimental results of the deep learning based de-artifact method provided by the embodiment of the present disclosure: the first row is a CT image containing a metal artifact, and the second row is the de-artifact CT image processed by the de-artifact method of the present disclosure.
When applied to medical image processing, the de-artifact method of the present disclosure can be implemented in an AI disease diagnosis device or other devices as an image preprocessing method, so as to improve the robustness of the AI disease diagnosis device to artifacts. Alternatively, it can be executed in a medical imaging instrument as an imaging algorithm, so as to improve the imaging quality of the medical imaging instrument.
It should be noted that the artifact removing method based on deep learning of the present disclosure may include the foregoing artifact removing modeling method and the training method of the denoised image generation model.
In the embodiment of the present disclosure, a target projection image corresponding to a target image containing an artifact can be acquired, an artifact-free projection area in the target projection image is determined, and image reconstruction is performed on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image; this removes the interference of the artifact-containing area in the image and converts the de-artifact problem of the image into a denoising problem of the image. The first reconstructed projection image is then input into the denoised image generation model, and the back projection image corresponding to the first reconstructed projection image is denoised to obtain the first denoised image of the target image. A noise-free, artifact-free image can thus be obtained while the details of the original image are preserved, which effectively improves the image quality and ensures the accuracy of the image information.
The embodiment of the present disclosure further provides an artifact removing device based on deep learning, as shown in fig. 8, the device includes:
the image acquisition module 10 can be used for acquiring a target projection image corresponding to a target image containing an artifact;
an image region determination module 20 operable to determine an artifact-free projection region in the target projection image;
the first image reconstruction module 30 may be configured to perform image reconstruction on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image;
the first image generation module 40 may be configured to input the first reconstructed projection image into a denoising image generation model, and perform denoising processing on a back projection image corresponding to the first reconstructed projection image to obtain a first denoising image of the target image;
the de-noised image generation model is obtained by performing constrained training on de-noised image generation on the generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
In some embodiments, the apparatus of the present disclosure may further comprise:
the projection transformation module can be used for, after the first reconstructed projection image is input into the denoised image generation model and the back projection image corresponding to the first reconstructed projection image is denoised to obtain the first denoised image of the target image, performing projection transformation on the first denoised image to obtain a first denoised projection image;
the second image reconstruction module can be used for generating a second reconstruction projection image according to the first denoising projection image and a target sub-projection image corresponding to the artifact-free region in the target projection image;
the second image generation module can be used for inputting the second reconstructed projection image into the denoised image generation model and denoising the back projection image corresponding to the second reconstructed projection image to obtain a second denoised image;
and the target de-noising image determining module can be used for taking the second de-noising image as the target de-noising image of the target image if the second de-noising image meets the preset convergence condition.
In some embodiments, the first image reconstruction module 30 may include:
an initial reconstruction unit: can be used for performing image interpolation on the target image based on the artifact-free region in the target image corresponding to the artifact-free projection region, to obtain an initial reconstructed image;
a first reconstruction unit: can be used for generating the first reconstructed projection image from the projection image of the initial reconstructed image and the target sub-projection image corresponding to the artifact-free projection area in the target projection image.
In some embodiments, the preset convergence condition may include: the similarity between the currently generated denoised image and the previously generated denoised image is greater than or equal to a preset similarity; or the number of iterations corresponding to the currently generated denoised image reaches a preset iteration number.
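Either branch of the condition is straightforward to express in code; in the sketch below the similarity measure (normalised correlation) and the thresholds are assumptions, since the disclosure fixes neither:

```python
import numpy as np

def converged(current, previous, iteration,
              sim_threshold=0.999, max_iterations=5):
    """Preset convergence condition: similarity between the currently and
    previously generated denoised images reaches a preset similarity, or
    the preset iteration number has been reached."""
    num = float(np.sum(current * previous))
    den = float(np.linalg.norm(current) * np.linalg.norm(previous)) + 1e-12
    similarity = num / den                    # normalised correlation
    return similarity >= sim_threshold or iteration >= max_iterations
```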
In some embodiments, the first image generation module 40 may include:
a back projection transformation unit: can be used for inputting the first reconstructed projection image into the denoised image generation model for back projection transformation, to obtain the back projection image corresponding to the first reconstructed projection image;
a denoising processing unit: can be used for denoising the back projection image corresponding to the first reconstructed projection image based on the denoised image generation model, to obtain the first denoised image of the target image.
In some embodiments, the apparatus of the present disclosure may further comprise:
the training sample data acquisition module can be used for acquiring a sample image without an artifact and a sample noisy projection image corresponding to the sample image;
the constraint training learning module can be used for carrying out denoising image generation training on the generation model based on the sample image and the sample noise-added projection image until a training denoising image output by the generation model meets a training convergence condition;
and the generative model determining module can be used for taking the generative model meeting the training convergence condition as a de-noised image generative model.
In some embodiments, the training sample data acquisition module may include:
a sample image projection unit: can be used for performing projection transformation on the sample image to obtain a sample projection image;
an image noise adding unit: can be used for performing noise adding processing on the sample projection image to obtain the sample noisy projection image.
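A minimal sketch of this training-pair construction, again with scikit-image's radon as the projection transform; the additive Gaussian noise model and its level are assumptions, as the disclosure does not fix the noise-adding process:

```python
import numpy as np
from skimage.transform import radon

def make_training_pair(sample_image, noise_std=0.05, theta=None, rng=None):
    """Turn one artifact-free sample image into a (sample image,
    sample noisy projection image) pair for constrained training."""
    rng = np.random.default_rng() if rng is None else rng
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(sample_image.shape))
    sample_projection = radon(sample_image, theta=theta)  # projection transform
    noisy_projection = sample_projection + rng.normal(    # noise adding
        0.0, noise_std * float(sample_projection.std()),
        sample_projection.shape)
    return sample_image, noisy_projection
```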
In some embodiments, the constraint training learning module may include:
a training denoised image generation unit: can be used for inputting the sample noisy projection image into the generation model and denoising the back projection image corresponding to the sample noisy projection image, to obtain a training denoised image;
an image error acquisition unit: can be used for acquiring the image error between the training denoised image and the sample image;
a model parameter adjustment unit: can be used for adjusting the model parameters of the generation model based on the image error, until the acquired image error meets the model convergence condition;
a training stop determination unit: can be used for determining that the training denoised image corresponding to the image error meeting the model convergence condition meets the training convergence condition.
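Taken together, these units describe an ordinary supervised denoising loop. The PyTorch sketch below is one possible rendering; the optimiser, loss, learning rate, convergence threshold and the `backproject` placeholder are all assumptions:

```python
import torch
import torch.nn as nn

def train_denoiser(model, pairs, backproject,
                   lr=1e-4, error_threshold=1e-4, max_epochs=100):
    """pairs: list of (sample_image, sample_noisy_projection) tensor pairs.
    Trains the generation model to map the back projection image of each
    noisy projection onto its artifact-free sample image."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                        # image error
    for _ in range(max_epochs):
        epoch_error = 0.0
        for sample_image, noisy_projection in pairs:
            denoised = model(backproject(noisy_projection))
            error = criterion(denoised, sample_image)
            optimiser.zero_grad()
            error.backward()
            optimiser.step()                        # adjust model parameters
            epoch_error += error.item()
        if epoch_error / max(len(pairs), 1) < error_threshold:
            break                                   # model convergence met
    return model                         # the denoised image generation model
```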
The device embodiments described above and the corresponding method embodiments are based on the same inventive concept.
The embodiment of the present disclosure provides a deep learning based de-artifact device, which includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the deep learning based de-artifact method provided by the above method embodiments.
The memory may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by functions, and the like, and the data storage area may store data created according to use of the device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The method embodiments provided by the embodiments of the present disclosure may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 9 is a block diagram of the hardware structure of a server for the deep learning based de-artifact method according to the embodiment of the present disclosure. As shown in fig. 9, the server 800 may vary considerably by configuration or performance, and may include one or more central processing units (CPUs) 810 (the processor 810 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 830 for storing data, and one or more storage media 820 (e.g., one or more mass storage devices) storing applications 823 or data 822. The memory 830 and the storage medium 820 may be transient or persistent storage. The program stored in the storage medium 820 may include one or more modules, each of which may include a series of instruction operations for the server. Still further, the central processing unit 810 may be configured to communicate with the storage medium 820 to execute the series of instruction operations in the storage medium 820 on the server 800. The server 800 may also include one or more power supplies 860, one or more wired or wireless network interfaces 850, one or more input/output interfaces 840, and/or one or more operating systems 821, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The input/output interface 840 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the server 800. In one example, the input/output interface 840 includes a network interface controller (NIC) that may be connected to other network devices via a base station so as to communicate with the internet. In one example, the input/output interface 840 may be a radio frequency (RF) module used to communicate with the internet wirelessly.
It will be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration and is not intended to limit the structure of the electronic device. For example, server 800 may also include more or fewer components than shown in FIG. 9, or have a different configuration than shown in FIG. 9.
Embodiments of the present disclosure also provide a storage medium, which may be disposed in a server to store at least one instruction or at least one program for implementing the deep learning based de-artifact method in the method embodiments; the at least one instruction or the at least one program is loaded and executed by a processor to implement the deep learning based de-artifact method provided by the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
According to an aspect of the present disclosure, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
As can be seen from the above embodiments of the deep learning based de-artifact method, apparatus, device, server, and storage medium provided by the present disclosure, in the present disclosure a target projection image corresponding to a target image containing an artifact is acquired, an artifact-free projection area in the target projection image is determined, and image reconstruction is performed on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image; this removes the interference of the artifact-containing region in the image and converts the de-artifact problem of the image into a denoising problem of the image. The first reconstructed projection image is then input into the denoised image generation model, and the back projection image corresponding to the first reconstructed projection image is denoised to obtain the first denoised image of the target image. A noise-free de-artifact image can thus be obtained while the details of the original image are preserved, which effectively improves the image quality and ensures the accuracy of the image information.
It should be noted that the order of the foregoing embodiments of the present disclosure is merely for description and does not represent the relative merits of the embodiments. Specific embodiments of the disclosure have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The embodiments in the disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device and storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method for removing artifacts based on deep learning, the method comprising:
acquiring a target projection image corresponding to a target image containing an artifact;
determining artifact-free projection regions in the target projection image;
carrying out image reconstruction on the target projection image based on the artifact-free projection area to obtain a first reconstructed projection image;
inputting the first reconstruction projection image into a denoising image generation model, and denoising a back projection image corresponding to the first reconstruction projection image to obtain a first denoising image of the target image;
the de-noised image generation model is a model obtained by performing constrained training of de-noised image generation on a generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
2. The method as claimed in claim 1, wherein after the inputting the first reconstructed projection image into a denoised image generation model and performing denoising processing on a back projection image corresponding to the first reconstructed projection image to obtain a first denoised image of the target image, the method further comprises:
performing projection transformation on the first denoising image to obtain a first denoising projection image;
generating a second reconstruction projection image according to the first denoising projection image and a target sub-projection image corresponding to the artifact-free region in the target projection image;
inputting the second reconstruction projection image into a denoising image generation model, and denoising a back projection image corresponding to the second reconstruction projection image to obtain a second denoising image;
and if the second denoised image meets a preset convergence condition, taking the second denoised image as a target denoised image of the target image.
3. The method of claim 1, wherein the performing image reconstruction on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image comprises:
performing image interpolation on the target image based on the artifact-free region in the target image corresponding to the artifact-free projection region to obtain an initial reconstructed image;
and generating the first reconstruction projection image according to the projection image of the initial reconstruction image and a target sub-projection image corresponding to the artifact-free projection area in the target projection image.
4. The method of claim 1, wherein the inputting the first reconstructed projection image into a denoised image generation model, and performing denoising processing on a back projection image corresponding to the first reconstructed projection image to obtain a first denoised image of the target image comprises:
inputting the first reconstruction projection image into a de-noising image generation model for back projection transformation to obtain a back projection image corresponding to the first reconstruction projection image;
and denoising the back projection image corresponding to the first reconstruction projection image based on a denoising image generation model to obtain a first denoising image of the target image.
5. The method according to any one of claims 1-4, further comprising:
acquiring a sample image without artifacts and a sample noise projection image corresponding to the sample image;
denoising image generation training is carried out on the generation model based on the sample image and the sample denoising projection image until a training denoising image output by the generation model meets a training convergence condition;
and taking the generated model meeting the training convergence condition as the de-noised image generated model.
6. The method of claim 5, wherein the obtaining of the sample noisy projection image corresponding to the sample image comprises:
carrying out projection transformation on the sample image to obtain a sample projection image;
and carrying out noise adding processing on the sample projection image to obtain the sample noise added projection image.
7. The method of claim 5, wherein the denoising image generation training for the generative model based on the sample image and the sample noisy projection image until a training denoising image output by the generative model satisfies a training convergence condition comprises:
inputting the sample noise-added projection image into the generation model, and carrying out denoising processing on a back projection image corresponding to the sample noise-added projection image to obtain a training denoising image;
acquiring an image error between the training denoising image and the sample image;
adjusting model parameters of the generated model based on the image error until the obtained image error meets a model convergence condition;
and determining that the training denoised image corresponding to the image error meeting the model convergence condition meets the training convergence condition.
8. An apparatus for removing artifacts based on deep learning, the apparatus comprising:
an image acquisition module: for acquiring a target projection image corresponding to a target image containing an artifact;
an image region determination module: for determining an artifact-free projection region in the target projection image;
a first image reconstruction module: for performing image reconstruction on the target projection image based on the artifact-free projection region to obtain a first reconstructed projection image;
a first image generation module: for inputting the first reconstructed projection image into a denoising image generation model and performing denoising processing on a back projection image corresponding to the first reconstructed projection image to obtain a first denoised image of the target image;
the de-noised image generation model is a model obtained by performing constrained training of de-noised image generation on a generation model based on a sample image without artifacts and a sample noise-added projection image corresponding to the sample image.
9. A computer-readable storage medium, wherein at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the deep learning based artifact removing method according to any one of claims 1 to 7.
10. A deep learning based artifact removing device, wherein the device comprises a processor and a memory, at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the deep learning based artifact removing method according to any one of claims 1 to 7.
CN202011278989.4A 2020-11-16 2020-11-16 Deartifact method, device, equipment and storage medium based on deep learning Pending CN112258423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011278989.4A CN112258423A (en) 2020-11-16 2020-11-16 Deartifact method, device, equipment and storage medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011278989.4A CN112258423A (en) 2020-11-16 2020-11-16 Deartifact method, device, equipment and storage medium based on deep learning

Publications (1)

Publication Number Publication Date
CN112258423A true CN112258423A (en) 2021-01-22

Family

ID=74266107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011278989.4A Pending CN112258423A (en) 2020-11-16 2020-11-16 Deartifact method, device, equipment and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN112258423A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023087260A1 (en) * 2021-11-19 2023-05-25 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for data processing
CN114241070A (en) * 2021-12-01 2022-03-25 北京长木谷医疗科技有限公司 Method and device for removing metal artifacts from CT image and training model
CN114241070B (en) * 2021-12-01 2022-09-16 北京长木谷医疗科技有限公司 Method and device for removing metal artifacts from CT image and training model
WO2023202265A1 (en) * 2022-04-19 2023-10-26 腾讯科技(深圳)有限公司 Image processing method and apparatus for artifact removal, and device, product and medium
CN116012478A (en) * 2022-12-27 2023-04-25 哈尔滨工业大学 CT metal artifact removal method based on convergence type diffusion model
CN116012478B (en) * 2022-12-27 2023-08-18 哈尔滨工业大学 CT metal artifact removal method based on convergence type diffusion model
CN116309922A (en) * 2023-05-22 2023-06-23 杭州脉流科技有限公司 De-artifact method, device, equipment and storage medium for CT perfusion image
CN116309922B (en) * 2023-05-22 2023-08-11 杭州脉流科技有限公司 De-artifact method, device, equipment and storage medium for CT perfusion image

Similar Documents

Publication Publication Date Title
EP3506209B1 (en) Image processing method, image processing device and storage medium
CN110462689B (en) Tomographic reconstruction based on deep learning
JP7039153B2 (en) Image enhancement using a hostile generation network
CN112258423A (en) Deartifact method, device, equipment and storage medium based on deep learning
US10937206B2 (en) Deep-learning-based scatter estimation and correction for X-ray projection data and computer tomography (CT)
US20200357153A1 (en) System and method for image conversion
US11120582B2 (en) Unified dual-domain network for medical image formation, recovery, and analysis
EP3716214A1 (en) Medical image processing apparatus and method for acquiring training images
Jia et al. GPU-based fast low-dose cone beam CT reconstruction via total variation
US20140363067A1 (en) Methods and systems for tomographic reconstruction
JP6885517B1 (en) Diagnostic support device and model generation device
CN109215094B (en) Phase contrast image generation method and system
CN111881926A (en) Image generation method, image generation model training method, image generation device, image generation equipment and image generation medium
CN110782502B (en) PET scattering estimation system based on deep learning and method for using perception neural network model
Kim et al. Unsupervised training of denoisers for low-dose CT reconstruction without full-dose ground truth
WO2023202265A1 (en) Image processing method and apparatus for artifact removal, and device, product and medium
Bubba et al. Deep neural networks for inverse problems with pseudodifferential operators: An application to limited-angle tomography
US20190328341A1 (en) System and method for motion estimation using artificial intelligence in helical computed tomography
KR20190135618A (en) Method for processing interior computed tomography image using artificial neural network and apparatus therefor
CN114170146A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN117197349A (en) CT image reconstruction method and device
CN113424227A (en) Motion estimation and compensation in Cone Beam Computed Tomography (CBCT)
KR102329938B1 (en) Method for processing conebeam computed tomography image using artificial neural network and apparatus therefor
Kiefer et al. Multi-channel Potts-based reconstruction for multi-spectral computed tomography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination