CN117765118A - artifact correction method, artifact correction device, electronic equipment and computer readable storage medium - Google Patents

artifact correction method, artifact correction device, electronic equipment and computer readable storage medium

Info

Publication number
CN117765118A
Authority
CN
China
Prior art keywords
image
artifact
interference
correction
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410195987.0A
Other languages
Chinese (zh)
Inventor
华树成
宋磊
竭晶
张勇
刘晗
管青天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Hospital Jinlin University
Original Assignee
First Hospital Jinlin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Hospital Jinlin University filed Critical First Hospital Jinlin University
Priority to CN202410195987.0A priority Critical patent/CN117765118A/en
Publication of CN117765118A publication Critical patent/CN117765118A/en
Pending legal-status Critical Current

Abstract

The present disclosure provides an artifact correction method, an artifact correction device, an electronic device, and a computer readable storage medium, and relates to the technical field of medical data processing. The method comprises the following steps: acquiring, for a target object, a first computed tomography image that includes an artifact, a second computed tomography image that does not include an artifact, and an interference computed tomography image that includes interference; masking the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively; correcting the interference-masked image and the artifact-masked image through a target network model to obtain an interference correction image and an artifact correction image, respectively; and determining a target loss value from the consistency losses among the interference correction image, the artifact correction image, and the second computed tomography image, so as to train the target network model. The target network model trained according to the embodiments of the present disclosure can accurately correct artifacts in computed tomography images.

Description

artifact correction method, artifact correction device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of medical data processing technology, and in particular, to an artifact correction method, an artifact correction device, an electronic device, and a computer readable storage medium.
Background
This section is intended to provide a background or context for the embodiments of the disclosure recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Computed tomography (CT) is a medical imaging technology that obtains anatomical information about the interior of the human body in a non-invasive manner. In clinical applications, images reconstructed by CT equipment have high accuracy and definition, which has quickly made CT one of the most widely used modalities in diagnostic imaging. However, CT devices suffer from a number of unavoidable drawbacks, and CT systems are more prone to artifacts than conventional X-ray imaging.
At present, image artifact correction methods have poor robustness, their outputs contain many secondary artifacts, and image details obscured by interference information cannot be effectively recovered.
Therefore, the technical problem to be solved by the present disclosure is how to effectively restore the details in an artifact image.
Disclosure of Invention
An object of the present disclosure is to provide an artifact correction method, apparatus, electronic device, and computer-readable storage medium capable of accurately correcting artifacts in computed tomography images.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
An embodiment of the present disclosure provides an artifact correction method, which comprises the following steps: acquiring a first computed tomography image and a second computed tomography image of a target object, wherein the first computed tomography image includes an artifact and the second computed tomography image does not include an artifact; adding noise interference to the second computed tomography image to generate an interference computed tomography image; performing random masking processing on the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively; correcting the interference-masked image through a first network structure of a target network model to obtain an interference correction image; correcting the artifact-masked image through a second network structure of the target network model to obtain an artifact correction image; determining a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and training the target network model according to the target loss value, so that artifact correction of computed tomography images can be performed with the trained target network model.
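For illustration only, the following is a minimal PyTorch-style sketch of one training step of the method described above. The module and function names (branch_n, branch_a, mask_fn) and the noise level are assumptions for the sketch, not part of the disclosure.

```python
# Hypothetical sketch of one training step (PyTorch). branch_n / branch_a stand
# for the first / second network structures; mask_fn is an assumed helper that
# returns a masked image and its patch mask.
import torch
import torch.nn.functional as F

def training_step(first_ct, second_ct, branch_n, branch_a, mask_fn, optimizer,
                  noise_std=0.05):
    # first_ct: CT image with artifacts; second_ct: artifact-free CT image.
    # Build the interference image by adding noise to the artifact-free image.
    interference_ct = second_ct + noise_std * torch.randn_like(second_ct)

    # Random masking of both inputs.
    artifact_masked, artifact_mask = mask_fn(first_ct)
    interference_masked, interference_mask = mask_fn(interference_ct)

    # Correction through the two network structures.
    interference_corrected = branch_n(interference_masked, interference_mask)
    artifact_corrected = branch_a(artifact_masked, artifact_mask)

    # Target loss: consistency among the two corrected images and the
    # artifact-free image.
    loss = (F.mse_loss(interference_corrected, second_ct)
            + F.mse_loss(artifact_corrected, second_ct)
            + F.mse_loss(artifact_corrected, interference_corrected))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```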
In some embodiments, the method further comprises: taking the image blocks that are masked in the artifact-masked image as artifact image hidden blocks, and taking the image blocks that are not masked in the artifact-masked image as artifact image visible blocks. Correcting the artifact-masked image through the second network structure of the target network model to obtain the artifact correction image comprises: performing encoding and prediction processing on the artifact image visible blocks through the second network structure to predict the artifact image hidden blocks, thereby obtaining the artifact correction image.
In some embodiments, performing the encoding and prediction processing on the artifact image visible blocks through the second network structure to predict the artifact image hidden blocks and obtain the artifact correction image comprises: encoding the artifact image visible blocks through an encoding module of the second network structure to obtain artifact image visible features; merging the artifact image visible features with the artifact image hidden blocks to obtain artifact image merged features; and performing prediction processing on the artifact image merged features through a decoding module of the second network structure to obtain the artifact correction image.
In some embodiments, the method further comprises: taking the image blocks that are masked in the interference-masked image as interference image hidden blocks, and taking the image blocks that are not masked in the interference-masked image as interference image visible blocks. Correcting the interference-masked image through the first network structure of the target network model to obtain the interference correction image comprises: performing encoding and prediction processing on the interference image visible blocks through the first network structure to predict the interference image hidden blocks, thereby obtaining the interference correction image.
In some embodiments, performing the encoding and prediction processing on the interference image visible blocks through the first network structure to predict the interference image hidden blocks and obtain the interference correction image comprises: encoding the interference image visible blocks through an encoding module of the second network structure to obtain interference image visible features; merging the interference image visible features with the interference image hidden blocks to obtain interference image merged features; and performing prediction processing on the interference image merged features through a decoding module of the first network structure to obtain the interference correction image.
In some embodiments, determining the consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as the target loss value comprises: determining a consistency loss between the interference correction image and the second computed tomography image as a first loss value; determining a consistency loss between the artifact correction image and the second computed tomography image as a second loss value; determining a consistency loss between the interference correction image and the artifact correction image as a third loss value; and determining the target loss value according to the first loss value, the second loss value, and the third loss value.
In some embodiments, the method further comprises: encoding the artifact correction image through the encoding module of the second network structure to obtain artifact correction image features; and performing classification prediction on the artifact correction image features through a multi-layer perceptron, so as to classify the artifact correction image.
The embodiment of the disclosure provides an artifact correction device, comprising: an image acquisition module, a noise processing module, a random masking module, a first correction module, a second correction module, a loss value determination module, and a training module.
The image acquisition module is used for acquiring a first computed tomography image and a second computed tomography image of a target object, wherein the first computed tomography image includes an artifact and the second computed tomography image does not include an artifact; the noise processing module may add noise interference to the second computed tomography image to generate an interference computed tomography image; the random masking module may be used to perform random masking processing on the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively; the first correction module may be configured to perform correction processing on the interference-masked image through a first network structure of the target network model to obtain an interference correction image; the second correction module may be configured to perform correction processing on the artifact-masked image through a second network structure of the target network model to obtain an artifact correction image; the loss value determination module may be configured to determine a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and the training module may be configured to train the target network model according to the target loss value, so that artifact correction of computed tomography images can be performed with the trained target network model.
The embodiment of the disclosure provides an electronic device, comprising a memory and a processor; the memory is used for storing computer program instructions, and the processor invokes the computer program instructions stored in the memory to implement any of the artifact correction methods described above.
Embodiments of the present disclosure provide a computer readable storage medium having stored thereon computer program instructions for implementing any of the artifact correction methods described above.
Embodiments of the present disclosure further provide a computer program product or computer program comprising computer program instructions stored in a computer readable storage medium. A processor reads the computer program instructions from the computer readable storage medium and executes them to implement the artifact correction method described above.
According to the artifact correction method, artifact correction device, electronic device, and computer readable storage medium provided by the embodiments of the present disclosure, on the one hand, the target network model is trained with both the interference computed tomography image, which includes interference, and the first computed tomography image, which includes artifacts, improving the generalization capability of the target network model and thus its ability to accurately correct artifacts in computed tomography images; on the other hand, random masking of the first computed tomography image and the interference computed tomography image enables the trained target network model to better predict local image information, further improving its artifact correction capability; in addition, using the consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as the target loss value when training the target network model improves the artifact correction capability of the target network model still further.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a schematic view of a scenario to which the artifact correction method or artifact correction apparatus of an embodiment of the present disclosure may be applied.
Fig. 2 is a flow chart illustrating a method of artifact correction according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating a method of artifact correction according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a method of artifact-corrected image acquisition according to an exemplary embodiment.
Fig. 5 is a schematic diagram of an encoder-decoder for image reconstruction of a target network model, according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a method of artifact correction image determination according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating a method for obtaining an interference corrected image according to an exemplary embodiment.
Fig. 8 is a flowchart illustrating a method for determining an interference corrected image according to an exemplary embodiment.
Fig. 9 is a flowchart illustrating a target loss value determination method according to an exemplary embodiment.
Fig. 10 is a flowchart illustrating a downstream classification task processing method according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating an artifact correction device according to an exemplary embodiment.
Fig. 12 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
One skilled in the art will appreciate that embodiments of the present disclosure may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
In the presently disclosed embodiments, the term "module" or "unit" refers to a computer program or a portion of a computer program having a predetermined function and working with other related portions to achieve a predetermined objective, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and repeated description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities; these functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the description of the present disclosure, unless otherwise indicated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Furthermore, "at least one" means one or more, and "a plurality" means two or more. The terms "first," "second," and the like do not limit the quantity or the order of execution, and objects described as "first" and "second" are not necessarily different. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. in addition to the listed elements/components/etc.
In order that the above-recited objects, features, and advantages of the present disclosure can be more clearly understood, a more particular description of the disclosure is given below with reference to the accompanying drawings and the detailed description, which are to be understood as illustrating embodiments of the disclosure; the features in the examples are not necessarily related to each other.
It should be noted that, in the technical solution of the present disclosure, the collection, updating, analysis, processing, use, transmission, storage, and other handling of users' personal information all comply with the relevant laws and regulations, are performed for lawful purposes, and do not violate public order or good morals. Necessary measures are taken for the personal information of users to prevent illegal access to users' personal information data and to maintain users' personal information security and network security.
After collecting the computed tomography information, the present disclosure desensitizes the data by technical means.
Some of the terms involved in the embodiments of the present disclosure are first explained below to facilitate understanding by those skilled in the art.
An artifact refers to a difference between the reconstructed value and the actual attenuation coefficient of the object when an actual object is scanned. When a patient undergoes CT examination, outer garments containing metal buttons, metal ornaments, and the like are usually removed, so that the influence of external metal factors on CT imaging can be avoided. However, metal objects implanted in the patient, such as internal fixation devices in the femur, tibia, or vertebral body, or metal teeth, cannot be avoided, which leads to metal artifacts. Metal artifacts mainly appear as radial streaks emanating from the periphery of the metal, bright or dark bands between multiple metal objects, and cup-shaped artifacts in the background image caused by X-ray beam hardening. They visually reduce the definition around the metal in the tomographic image and directly affect the doctor's diagnosis and analysis of the tomographic structure; some metal artifacts may even resemble certain symptoms and easily cause misdiagnosis.
Therefore, the technical problem underlying the present application is how to accurately correct a computed tomography image that includes artifacts, so that medical decisions can be made from the corrected computed tomography image.
Having introduced some of the terms and concepts involved in the embodiments of the present disclosure, the technical features of the embodiments of the present disclosure are described below.
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 shows a schematic view of a scenario to which the artifact correction method or artifact correction apparatus of an embodiment of the present disclosure may be applied.
Referring to Fig. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present disclosure is shown.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart homes, etc.
The server 105 may be a server providing various services, for example a background management server providing support for the devices operated by users of the terminal devices 101, 102, 103. The background management server can analyze and process received data such as requests and feed the processing results back to the terminal devices.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms, which is not limited by the present disclosure.
The server 105 may, for example, acquire a first computed tomography image of the target object that includes an artifact and a second computed tomography image that does not include an artifact; the server 105 may, for example, add noise interference to the second computed tomography image to generate an interference computed tomography image; the server 105 may, for example, perform random masking processing on the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively; the server 105 may, for example, perform correction processing on the interference-masked image through the first network structure of the target network model to obtain an interference correction image; the server 105 may, for example, perform correction processing on the artifact-masked image through the second network structure of the target network model to obtain an artifact correction image; the server 105 may, for example, determine a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and the server 105 may, for example, train the target network model according to the target loss value, so as to perform artifact correction of computed tomography images with the trained target network model.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; the server 105 may be a single physical server or may be composed of a plurality of servers, and any number of terminal devices, networks, and servers may be provided according to actual needs. The embodiments of the present disclosure are not limited in this regard.
Under the system architecture described above, an artifact correction method is provided in an embodiment of the present disclosure, and the method may be performed by any electronic device having computing processing capabilities.
Fig. 2 is a flow chart illustrating a method of artifact correction according to an exemplary embodiment. The method provided by the embodiments of the present disclosure may be performed by any electronic device having computing processing capability, for example, the method may be performed by a server or a terminal device in the embodiment of fig. 1, or may be performed by both the server and the terminal device, and in the following embodiments, the server is taken as an example to illustrate an execution subject, but the present disclosure is not limited thereto.
Referring to Fig. 2, the artifact correction method provided by the embodiment of the present disclosure may include the following steps.
In step S202, a first computed tomography image and a second computed tomography image of the target object are acquired, wherein the first computed tomography image includes an artifact and the second computed tomography image does not include an artifact.
In some embodiments, the target object may be an object that performs computed tomography imaging, e.g., a human body, an animal, etc., as the application is not limited in this respect.
The first computed tomography image and the second computed tomography image may be different computed tomography images acquired for the target object, wherein the first computed tomography image includes artifacts (the first computed tomography image may be, for example, the artifact-containing image 303 in Fig. 3) and the second computed tomography image does not include artifacts (the second computed tomography image may be, for example, the artifact-free image 301 in Fig. 3).
In some embodiments, when the target object undergoes computed tomography scanning, it may carry an object, such as metal, that causes artifacts in the computed tomography image, so that the resulting computed tomography image includes artifacts.
In some embodiments, that object may be removed from the target object before computed tomography is performed, so that the resulting computed tomography image does not include artifacts.
In step S204, noise interference is added to the second computed tomography image to generate an interference computed tomography image.
In some embodiments, noise interference may be added to the second computed tomography image (which may be, for example, the artifact-free image 301 in Fig. 3) to generate the interference computed tomography image (which may be, for example, the interference image 302 in Fig. 3). The noise may be Gaussian noise, Poisson noise, salt-and-pepper noise, or other noise; the present application does not limit the type of noise.
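As a rough illustration of this step, the snippet below shows how such interference might be synthesized. The function names and noise parameters are assumptions, and the patent does not prescribe any particular implementation; intensities are assumed to lie in a normalized [0, 1] range.

```python
# Illustrative noise models for constructing the interference image.
import torch

def add_gaussian_noise(img, std=0.03):
    return img + std * torch.randn_like(img)

def add_poisson_noise(img, scale=255.0):
    # Scale to pseudo photon counts, sample, and scale back.
    return torch.poisson(img.clamp(min=0) * scale) / scale

def add_salt_pepper_noise(img, prob=0.02):
    noisy = img.clone()
    r = torch.rand_like(img)
    noisy[r < prob / 2] = 0.0                      # "pepper" pixels
    noisy[(r >= prob / 2) & (r < prob)] = 1.0      # "salt" pixels
    return noisy
```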
In step S206, random masking processing is performed on the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively.
In some embodiments, the artifact-masked image and the interference-masked image may be obtained by performing random masking processing on the first computed tomography image and the interference computed tomography image through the masking structure 310 shown in Fig. 3.
The random masking process may include randomly selecting positions in the image to be masked, where masking may be performed with a mask. As shown in Fig. 5, after random masking is performed on an input image indicated by 501 (e.g., the first computed tomography image or the interference computed tomography image), a masked image indicated by 502 is obtained, in which a portion of the image blocks (or a portion of the pixels in the image) are masked (e.g., the image blocks corresponding to the blank regions represent the masked image blocks).
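A minimal sketch of such a random masking step is given below; the patch size and mask ratio are assumptions, and the zeroed-out patches correspond to the blank regions described for Fig. 5.

```python
# Hypothetical random patch masking for an image batch (B, C, H, W).
import torch

def random_mask(img, patch=16, mask_ratio=0.75):
    B, C, H, W = img.shape
    gh, gw = H // patch, W // patch
    n = gh * gw
    n_masked = int(mask_ratio * n)

    # Randomly choose which patches to hide for each image in the batch.
    ids = torch.rand(B, n, device=img.device).argsort(dim=1)
    mask = torch.zeros(B, n, dtype=torch.bool, device=img.device)
    mask[torch.arange(B).unsqueeze(1), ids[:, :n_masked]] = True  # True = hidden

    # Zero out the hidden patches (the blank regions in Fig. 5).
    mask_map = mask.view(B, 1, gh, gw).float()
    mask_map = mask_map.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return img * (1.0 - mask_map), mask
```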
In step S208, the interference-masked image is corrected through the first network structure of the target network model to obtain an interference correction image.
In some embodiments, the target network model may include a first network structure (e.g., the first network structure 311 in Fig. 3) and a second network structure (e.g., the second network structure 312 in Fig. 3). The first network structure and the second network structure may have the same structure but different parameters, or may have different structures and parameters; the following description takes the case where they have the same structure but different parameters as an example, although the present application is not limited thereto.
The first network structure or the second network structure may be any neural network structure, such as a convolutional network structure, which is not limited in the present application. For example, a Transformer-based backbone network may be used.
In some embodiments, feature extraction may be performed on the interference-masked image to obtain interference-masked image features; the interference-masked image features are then input into the first network structure for prediction and correction processing to obtain the interference correction image. The interference correction image is the image obtained by correcting the interference in the interference computed tomography image.
After the training of the target network model is completed, the interference correction image obtained by correcting the interference in the interference computed tomography image through the target network model should be consistent with the second computed tomography image, which does not include artifacts.
In step S210, the artifact-masked image is corrected through the second network structure of the target network model to obtain an artifact correction image.
In some embodiments, feature extraction may be performed on the artifact-masked image to obtain artifact-masked image features; the artifact-masked image features are then input into the second network structure for prediction and correction processing to obtain the artifact correction image. The artifact correction image is the image obtained by correcting the artifacts in the first computed tomography image.
After the training of the target network model is completed, the artifact correction image obtained by correcting the artifact in the first computed tomography image through the target network model should be consistent with the second computed tomography image which does not include the artifact.
In step S212, a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image is determined as a target loss value.
In step S214, the target network model is trained according to the target loss value, so that artifact correction of computed tomography images can be performed with the trained target network model.
According to the technical solution provided by this embodiment, on the one hand, the target network model is trained with both the interference computed tomography image and the first computed tomography image, which includes artifacts, improving the generalization capability of the target network model and thus its ability to accurately correct artifacts in computed tomography images; on the other hand, random masking of the first computed tomography image and the interference computed tomography image enables the trained target network model to better predict local image information, further improving its artifact correction capability; in addition, using the consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as the target loss value when training the target network model improves the artifact correction capability of the target network model still further.
The medical image artifact correction method provided by this embodiment provides quality assurance for subsequent clinical diagnostic analysis, so that it can be better applied to downstream tasks such as classification, detection, and segmentation, and the influence of artifact-containing CT images on the clinician's diagnosis and treatment process can be reduced.
Fig. 4 is a flowchart illustrating a method of artifact-corrected image acquisition according to an exemplary embodiment.
Referring to fig. 4, the artifact correction image obtaining method may include the following steps.
In step S402, the image blocks that are masked in the artifact-masked image are taken as artifact image hidden blocks.
In some embodiments, the image blocks that are masked in the artifact-masked image may be referred to as artifact image hidden blocks (e.g., the white-filled portions of the image indicated at 502 in Fig. 5).
In step S404, the image blocks that are not masked in the artifact-masked image are taken as artifact image visible blocks.
In some embodiments, the image blocks that are not masked in the artifact-masked image may be referred to as artifact image visible blocks (e.g., the line-filled portions of the image indicated at 502 in Fig. 5).
In step S406, encoding and prediction processing is performed on the artifact image visible blocks through the second network structure to predict the artifact image hidden blocks, thereby obtaining the artifact correction image.
In some embodiments, the artifact image visible blocks may be encoded through the second network structure and used to predict the artifact image hidden blocks. After the artifact image hidden blocks are obtained by prediction, they can be combined with the artifact image visible blocks to obtain the artifact correction image.
According to this embodiment, the masked image blocks can be accurately predicted from the unmasked image blocks, which improves the target network model's ability to predict partial image content and further improves its ability to correct artifacts in computed tomography images.
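The sketch below illustrates one way the predicted hidden blocks and the original visible blocks might be recombined into the corrected image. The helper name, the fixed number of visible patches per sample, and the patch layout (each patch flattened as C x patch x patch) are assumptions for illustration only.

```python
# Hypothetical reassembly of a corrected image from visible and predicted
# hidden patches.
import torch

def reassemble(visible_patches, predicted_hidden, mask, patch, grid_hw):
    # visible_patches:  (B, N_vis, D) original unmasked patches
    # predicted_hidden: (B, N_hid, D) patches predicted by the decoder
    # mask:             (B, N) boolean, True where the patch was hidden
    B, N = mask.shape
    D = visible_patches.shape[-1]
    full = torch.empty(B, N, D, device=visible_patches.device,
                       dtype=visible_patches.dtype)
    full[~mask] = visible_patches.reshape(-1, D)
    full[mask] = predicted_hidden.reshape(-1, D)

    # Fold the patch sequence back into an image grid.
    gh, gw = grid_hw
    C = D // (patch * patch)
    img = full.view(B, gh, gw, C, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return img.reshape(B, C, gh * patch, gw * patch)
```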
Fig. 6 is a flowchart illustrating a method of artifact correction image determination according to an exemplary embodiment.
Referring to Fig. 6, the artifact correction image determination method may include the following steps.
In step S602, the artifact image visible blocks are encoded by the encoding module of the second network structure to obtain artifact image visible features.
In some embodiments, the second network structure may include an encoding module (e.g., the encoder of fig. 5) and a decoding module (e.g., the decoder of fig. 5).
In some embodiments, the artifact image visible block may be encoded by an encoding module of the second network structure to obtain an artifact image visible feature.
In step S604, the artifact image visible features are merged with the artifact image hidden blocks to obtain artifact image merged features.
In step S606, the decoding module of the second network structure performs prediction on the artifact image merged features to obtain the artifact correction image.
When the artifact-masked image is encoded through the second network structure, this embodiment encodes only the unmasked portion of the artifact-masked image, which reduces the amount of data to be encoded and improves encoding efficiency and accuracy.
Fig. 7 is a flowchart illustrating a method for obtaining an interference corrected image according to an exemplary embodiment.
Referring to Fig. 7, the interference correction image acquisition method may include the following steps.
In step S702, the image blocks that are masked in the interference-masked image are taken as interference image hidden blocks.
In step S704, the image blocks that are not masked in the interference-masked image are taken as interference image visible blocks.
In step S706, encoding and prediction processing is performed on the interference image visible blocks through the first network structure to predict the interference image hidden blocks, thereby obtaining the interference correction image.
According to this embodiment, the masked image blocks can be accurately predicted from the unmasked image blocks, which improves the target network model's ability to predict partial image content and further improves its ability to correct interference in computed tomography images.
Fig. 8 is a flowchart illustrating a method for determining an interference corrected image according to an exemplary embodiment.
Referring to Fig. 8, the interference correction image determination method may include the following steps.
In step S802, the interference image visible blocks are encoded by the encoding module of the second network structure to obtain interference image visible features.
In step S804, the interference image visible features are merged with the interference image hidden blocks to obtain interference image merged features.
In step S806, the decoding module of the first network structure performs prediction on the interference image merged features to obtain the interference correction image.
When the interference-masked image is encoded for the first network structure, this embodiment encodes only the unmasked portion of the interference-masked image, which reduces the amount of data to be encoded and improves encoding efficiency and accuracy.
Fig. 9 is a flowchart illustrating a target loss value determination method according to an exemplary embodiment.
Referring to Fig. 9, the target loss value determination method may include the following steps.
In step S902, a consistency loss between the interference correction image and the second computed tomography image is determined as a first loss value.
In step S904, a consistency loss between the artifact correction image and the second computed tomography image is determined as a second loss value.
In step S906, a consistency loss between the interference correction image and the artifact correction image is determined as a third loss value.
In step S908, the target loss value is determined according to the first loss value, the second loss value, and the third loss value.
According to this embodiment, the target loss value is determined from the consistency losses among the interference correction image, the artifact correction image, and the second computed tomography image and is used to train the target network model, so that the finally trained target network model can accurately correct computed tomography images that include artifacts.
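The following sketch illustrates steps S902 through S908 with mean-square-error consistency losses; the equal weighting of the three terms is an assumption, since no specific weights are stated.

```python
# Hypothetical target loss combining the three consistency losses.
import torch.nn.functional as F

def target_loss(interference_corrected, artifact_corrected, artifact_free_ct):
    first_loss = F.mse_loss(interference_corrected, artifact_free_ct)   # S902
    second_loss = F.mse_loss(artifact_corrected, artifact_free_ct)      # S904
    third_loss = F.mse_loss(interference_corrected, artifact_corrected) # S906
    return first_loss + second_loss + third_loss                        # S908
```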
Fig. 10 is a flowchart illustrating a downstream classification task processing method according to an exemplary embodiment.
Referring to Fig. 10, the downstream classification task processing method may include the following steps.
In step S1002, the artifact correction image is encoded by the encoding module of the second network structure to obtain artifact correction image features.
In step S1004, classification prediction is performed on the artifact correction image features by a multi-layer perceptron, so as to classify the artifact correction image.
In the above embodiment, the features extracted from the artifact correction image by the trained second network structure can accurately describe the artifact correction image. Using the second network structure as a pre-trained structure for feature extraction improves feature extraction accuracy, which in turn improves the classification accuracy of the downstream task, and also improves training efficiency.
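As a hedged illustration of this downstream use, the sketch below attaches a multi-layer perceptron head to a pre-trained encoding module. The class name, embedding size, pooling choice, and layer widths are assumptions and do not come from the disclosure.

```python
# Hypothetical downstream classifier built on the pre-trained encoding module.
import torch.nn as nn

class DownstreamClassifier(nn.Module):
    def __init__(self, pretrained_encoder, embed_dim=768, num_classes=2):
        super().__init__()
        self.encoder = pretrained_encoder    # encoding module of the second network structure
        self.head = nn.Sequential(           # multi-layer perceptron head
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, embed_dim // 2),
            nn.GELU(),
            nn.Linear(embed_dim // 2, num_classes),
        )

    def forward(self, image_tokens):
        feats = self.encoder(image_tokens)   # (B, N, D) token features
        return self.head(feats.mean(dim=1))  # mean-pool tokens, then classify
```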
The present disclosure also provides a CT image artifact correction method whose corrected images can be used for downstream tasks; the method mainly comprises the following stages.
(1) Artifact data collection phase.
(2) Interference data construction phase for auxiliary tasks.
(3) A mask is superimposed on the input image.
(4) The masked interference data and artifact data are input into a target network model for multi-task learning to train the model.
(5) A loss function is constructed.
(6) Downstream task migration.
Specifically, the method mainly comprises the following steps.
(1) Data collection stage. High-quality CT images of the same patient with artifacts and without artifacts are collected (patient consent must be obtained), and de-identification of patient information and quality management of the data are then completed.
(2) Interference data construction stage for the auxiliary task. Noise interference is added to the artifact-free image to generate an interference image, and the interference correction task is used as an auxiliary task to improve the generalization capability of the reconstruction model. The original artifact-free and interference-free image (a high-quality CT image containing neither artifacts nor interference) serves as the ground-truth label for both the interference image and the artifact image.
(3) Masks are superimposed on the input images (the interference image and the artifact image). A random mask for global reconstruction is applied to the artifact image and the interference image at a predetermined mask ratio.
(4) The masked interference data and artifact data are input into a multi-task model (such as the target network model described above) for training. The multiple tasks include an artifact image correction task and an interference image correction task; multi-task learning can improve the generalization capability of the target network model, and the multi-task learning process is shown in Fig. 3. In some embodiments, the target network model adopts an asymmetric encoder-decoder architecture. The input to the encoder is the unmasked, visible image blocks of an artifact image or interference image, which the encoder compresses into a latent representation. The output of the encoder and the masked image blocks of the artifact image or interference image are combined, with the original order preserved, and used as the input to the decoder, which may employ a lightweight Transformer for reconstructing the original image from the latent representation.
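A compact sketch of such an asymmetric encoder-decoder is given below. It follows the masked-autoencoder pattern described above; the class name, all dimensions and depths, and the assumption of a fixed number of visible patches per sample are illustrative choices, not values taken from the disclosure.

```python
# Hypothetical asymmetric encoder-decoder: a Transformer encoder that sees only
# visible patches and a lightweight Transformer decoder that reconstructs all
# patches from the latent representation plus a learnable mask token.
import torch
import torch.nn as nn

class MaskedReconstructionModel(nn.Module):
    def __init__(self, num_patches=196, patch_dim=256,
                 enc_dim=768, enc_depth=12, dec_dim=256, dec_depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, enc_dim)
        self.enc_pos = nn.Parameter(torch.zeros(1, num_patches, enc_dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(enc_dim, heads, batch_first=True),
            num_layers=enc_depth)

        self.enc_to_dec = nn.Linear(enc_dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.dec_pos = nn.Parameter(torch.zeros(1, num_patches, dec_dim))
        self.decoder = nn.TransformerEncoder(       # lightweight decoder
            nn.TransformerEncoderLayer(dec_dim, heads, batch_first=True),
            num_layers=dec_depth)
        self.to_pixels = nn.Linear(dec_dim, patch_dim)

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim); mask: (B, N) bool, True = hidden patch.
        B, N, _ = patches.shape
        x = self.patch_embed(patches) + self.enc_pos

        # The encoder compresses only the visible (unmasked) patches
        # (assumes the same number of visible patches per sample).
        visible = x[~mask].view(B, -1, x.shape[-1])
        latent = self.enc_to_dec(self.encoder(visible))

        # Recombine with the hidden positions, preserving the original order:
        # encoded visible tokens plus a shared mask token for hidden patches.
        full = self.mask_token.expand(B, N, -1).clone()
        full[~mask] = latent.reshape(-1, latent.shape[-1])
        full = full + self.dec_pos

        return self.to_pixels(self.decoder(full))   # reconstructed patches
```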
(5) A loss function is constructed. The loss function uses a consistency loss implemented by computing the mean square error between the corrected image and the original image, and consists of three parts: the consistency between the interference correction image and the ground-truth label, the consistency between the artifact correction image and the ground-truth label, and the consistency between the interference correction image and the artifact correction image.
(6) Downstream task migration. Taking the CT image abnormality classification task as an example, after the target network model is pre-trained, only the pre-trained encoder is transferred to the downstream classification task, and a multi-layer perceptron (MLP) head is added after the encoder. The loss function may be computed as the cross-entropy loss between the true classification labels and the predicted labels.
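A short, hypothetical fine-tuning step for the migrated encoder and MLP head (for example, the DownstreamClassifier sketched earlier) might look as follows; the optimizer and calling convention are assumptions.

```python
# Hypothetical fine-tuning step for the downstream classification task.
import torch.nn as nn

def finetune_step(classifier, optimizer, image_tokens, labels):
    criterion = nn.CrossEntropyLoss()       # true labels vs. predicted labels
    logits = classifier(image_tokens)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```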
The technical scheme disclosed above may specifically include the following steps.
(1) Artifact data collection stage. High-quality CT images with artifacts and without artifacts are collected, and de-identification of patient information and data quality management are completed.
(2) Auxiliary-task data construction stage. Noise interference is added to the artifact-free image, and the interference correction task is used as an auxiliary task to improve the generalization capability of the reconstruction model. The original artifact-free and interference-free image is the ground-truth label for both the interference image and the artifact image.
(3) A random mask for global reconstruction is applied to the artifact image and the interference image at a predetermined mask ratio.
(3.1) A mask for global reconstruction is superimposed on the artifact image: all image blocks of the artifact image are marked, and the masking transform is then applied to a predetermined proportion of them.
(3.2) Similarly, all image blocks of the interference image are marked, and the masking transform is applied to a predetermined proportion of them.
(4) The masked interference data and artifact data are input into the multi-task target network model for training. The tasks include an artifact image correction task and an interference image correction task, through which the generalization capability of the target network model can be improved.
(4.1) The target network model adopts an asymmetric encoder-decoder architecture, whose reconstruction process is shown in Fig. 5.
(4.2) The masked image blocks of the interference image and the artifact image serve as the hidden image blocks, and the remaining unmasked, visible image blocks are input to the learnable encoder E of the target network model, which maps its input into a representation space. The encoder is based on the Vision Transformer; a position embedding is added to each visible image block, and the latent representation z of the visible image blocks is generated as follows:

z_a = E(x_a^v)    (1)
z_n = E(x_n^v)    (2)

where x_a^v denotes the visible blocks of the artifact image, z_a the latent representation corresponding to the visible blocks of the artifact image, x_n^v the visible blocks of the interference image, and z_n the latent representation corresponding to the visible blocks of the interference image.
(4.3) The input to the decoder D consists of two parts: the encoded visible image blocks and the invisible hidden image blocks.
The output of the encoder and the masked image blocks of the artifact image or interference image are combined, with the original order preserved, and used as the input to the decoder.
(4.4) The decoder employs a lightweight Transformer for predicting and reconstructing the invisible hidden image blocks from the latent representations of the visible image blocks. Position embeddings are added to all image blocks to provide position information. The outputs of the decoder D can be obtained by formulas (3)-(4):

y_a = D(z_a ⊕ m_a)    (3)
y_n = D(z_n ⊕ m_n)    (4)

where z_a ⊕ m_a denotes the latent representation of the artifact image visible blocks combined with the artifact image hidden blocks (⊕ denotes combining while preserving the original order), y_a is the decoder output corresponding to the artifact image, z_n ⊕ m_n denotes the latent representation of the interference image visible blocks combined with the interference image hidden blocks, and y_n is the decoder output corresponding to the interference image.
(5) A loss function is constructed. The consistency loss function is implemented by computing the mean square error (MSE) between the corrected image and the original image, and, as shown in formula (5), consists of three parts: the consistency L_n between the interference correction image and the ground-truth label, the consistency L_a between the artifact correction image and the ground-truth label, and the consistency L_c between the interference correction image and the artifact correction image:

L = L_n + L_a + L_c = l_mse(y_n, y) + l_mse(y_a, y) + l_mse(y_n, y_a)    (5)

where y_a is the decoder output corresponding to the artifact image, y_n is the decoder output corresponding to the interference image, and y denotes the ground-truth label (the computed tomography image without artifacts). Here l_mse denotes the mean-square-error-based consistency loss, calculated as

l_mse(p, y) = (1/N) * Σ_{i=1}^{N} (p_i − y_i)^2

where p denotes the predicted value (e.g., the interference correction image or the artifact correction image), y denotes the ground-truth label, and the sum runs over the N elements of the image.
(6) Downstream task migration. Taking the CT image abnormality classification task as an example, after the image reconstruction model is trained, only the encoder of the target network model is transferred to the downstream classification task, and a multi-layer perceptron (MLP) head is added after the encoder for classification prediction. The loss function may be computed as the cross-entropy loss between the true classification labels and the predicted labels.
In the related art, when a patient containing a metal implant is scanned, metal artifacts reduce the quality of the resulting image and the associated anatomical structures may be completely covered by the artifacts, increasing the risk of misdiagnosis; because of their uncontrollability, metal artifacts have always been one of the most important factors affecting CT imaging quality. In order to enable further analysis of artifact images, the above embodiments of the disclosure provide a CT image artifact correction method that constructs a multi-task learning target network model to further improve its generalization capability; in addition, multiple consistency constraints are constructed during model training to make the prediction results more accurate. The disclosure thus reconstructs CT images of patients that contain artifacts, so that further diagnosis and treatment processes such as lesion analysis and lesion detection can be performed.
It should be noted that the steps in the embodiments of the artifact correction method described above may be interleaved, replaced, added, or removed; these reasonable permutations, combinations, and transformations shall also fall within the protection scope of the present disclosure, which is not limited to the described embodiments.
Based on the same inventive concept, an artifact correction device is further provided in the embodiments of the present disclosure, as in the following embodiments. Since the principle by which the device embodiment solves the problem is similar to that of the method embodiment, the implementation of the device embodiment may refer to the implementation of the method embodiment, and repeated description is omitted.
Fig. 11 is a block diagram illustrating an artifact correction device according to an exemplary embodiment. Referring to Fig. 11, an artifact correction apparatus 1100 provided by an embodiment of the present disclosure may include: an image acquisition module 1101, a noise processing module 1102, a random masking module 1103, a first correction module 1104, a second correction module 1105, a loss value determination module 1106, and a training module 1107.
The image acquisition module 1101 may be configured to acquire a first computed tomography image and a second computed tomography image of the target object, wherein the first computed tomography image includes an artifact and the second computed tomography image does not include an artifact; the noise processing module 1102 may add noise interference to the second computed tomography image to generate an interference computed tomography image; the random masking module 1103 may be configured to perform random masking processing on the first computed tomography image and the interference computed tomography image to obtain an artifact-masked image and an interference-masked image, respectively; the first correction module 1104 may be configured to perform correction processing on the interference-masked image through the first network structure of the target network model to obtain an interference correction image; the second correction module 1105 may be configured to perform correction processing on the artifact-masked image through the second network structure of the target network model to obtain an artifact correction image; the loss value determination module 1106 may be configured to determine a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and the training module 1107 may be configured to train the target network model according to the target loss value, so that artifact correction of computed tomography images can be performed with the trained target network model.
Here, the image acquisition module 1101, the noise processing module 1102, the random masking module 1103, the first correction module 1104, the second correction module 1105, the loss value determination module 1106, and the training module 1107 correspond to steps S202 to S214 in the method embodiment; the examples and application scenarios implemented by these modules are the same as those of the corresponding steps, but are not limited to the disclosure of the foregoing method embodiment. It should be noted that the modules described above may be implemented as part of an apparatus in a computer system, such as a set of computer-executable instructions.
In some embodiments, the artifact correction apparatus 1100 may further include: an artifact image hidden block determination module and an artifact image visible block determination module.
The artifact image hidden block determining module may be configured to use the image block that is hidden in the artifact shielding image as an artifact image hidden block; the artifact image visible block determining module may be configured to take an image block in the artifact mask image that is not masked as an artifact image visible block.
In some embodiments, the second correction module 1105 may include: an artifact image hidden block prediction sub-module.
The artifact image hidden block prediction sub-module may be configured to perform encoding prediction processing on the artifact image visible block through the second network structure, so as to predict the artifact image hidden block and obtain the artifact correction image.
In some embodiments, the artifact image hidden block prediction sub-module may include: an artifact image visible feature obtaining unit, an artifact image merging feature obtaining unit, and an artifact correction image obtaining unit.
The artifact image visible feature obtaining unit may be configured to perform encoding processing on the artifact image visible block by using an encoding module of the second network structure to obtain an artifact image visible feature; the artifact image merging feature obtaining unit may be configured to merge the artifact image visible feature with the artifact image hidden block to obtain an artifact image merging feature; and the artifact correction image obtaining unit may be configured to perform prediction processing on the artifact image merging feature by using a decoding module of the second network structure to obtain the artifact correction image.
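As a rough illustration of the encode-merge-decode flow just described, the sketch below models the second network structure as a masked-autoencoder-style pair of an encoding module and a decoding module. The class name, layer sizes, and the use of a learned mask token for the hidden blocks are assumptions; position embeddings, Transformer blocks, and the restoration of the original block order are omitted for brevity.

```python
import torch
import torch.nn as nn


class SecondNetworkStructure(nn.Module):
    """Artifact branch sketch: encode the artifact image visible blocks, merge
    the resulting features with placeholders for the artifact image hidden
    blocks, then decode to predict all blocks (the artifact correction image)."""

    def __init__(self, patch_dim=256, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                  # encoding module
            nn.Linear(patch_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, embed_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder = nn.Sequential(                  # decoding module
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, patch_dim))

    def forward(self, visible_blocks, n_hidden):
        # Encode only the visible blocks: (B, N_visible, patch_dim) -> (B, N_visible, embed_dim).
        visible_features = self.encoder(visible_blocks)
        # Merge the visible features with one mask token per hidden block.
        hidden_tokens = self.mask_token.expand(visible_blocks.size(0), n_hidden, -1)
        merged_features = torch.cat([visible_features, hidden_tokens], dim=1)
        # Predict every block from the merged features.
        return self.decoder(merged_features)           # (B, N_visible + n_hidden, patch_dim)
```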
In some embodiments, the artifact correction apparatus 1100 may further include: an interference image hidden block determination module and an interference image visible block determination module.
The interference image hidden block determination module may be configured to take an image block that is masked in the interference shielding image as an interference image hidden block; the interference image visible block determination module may be configured to take an image block that is not masked in the interference shielding image as an interference image visible block.
In some embodiments, the first correction module 1104 may include: an interference image hidden block prediction sub-module.
The interference image hidden block prediction sub-module may be configured to perform encoding prediction processing on the interference image visible block through the first network structure, so as to predict the interference image hidden block and obtain the interference correction image.
In some embodiments, the interference image hidden block prediction sub-module may include: an interference image visible feature determination unit, an interference image merging feature determination unit, and an interference correction image determination unit.
The interference image visible feature determination unit may be configured to perform encoding processing on the interference image visible block by using the encoding module of the second network structure to obtain an interference image visible feature; the interference image merging feature determination unit may be configured to merge the interference image visible feature with the interference image hidden block to obtain an interference image merging feature; and the interference correction image determination unit may be configured to perform prediction processing on the interference image merging feature by using a decoding module of the first network structure to obtain the interference correction image.
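Mirroring the wording above, the interference branch can be read as sharing the encoding module of the second network structure while keeping a decoding module of its own. The sketch below expresses that reading; the sharing arrangement, class name, and layer sizes are assumptions made only for illustration.

```python
import torch
import torch.nn as nn


class FirstNetworkStructure(nn.Module):
    """Interference branch sketch: reuse a shared encoding module and decode
    the merged interference image features with a branch-specific decoder."""

    def __init__(self, shared_encoder: nn.Module, patch_dim=256, embed_dim=128):
        super().__init__()
        self.encoder = shared_encoder                  # encoding module shared with the second branch
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder = nn.Sequential(                  # decoding module of the first network structure
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, patch_dim))

    def forward(self, visible_blocks, n_hidden):
        visible_features = self.encoder(visible_blocks)        # interference image visible features
        hidden_tokens = self.mask_token.expand(visible_blocks.size(0), n_hidden, -1)
        merged_features = torch.cat([visible_features, hidden_tokens], dim=1)
        return self.decoder(merged_features)                   # interference correction image (block form)
```

Under these assumptions the two branches could be constructed as second = SecondNetworkStructure() followed by first = FirstNetworkStructure(second.encoder), so that the shared encoding module receives gradients from both branches.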
In some embodiments, the loss value determination module 1106 may include: a first loss determination sub-module, a second loss determination sub-module, a third loss determination sub-module, and a target loss determination sub-module.
The first loss determination sub-module may be configured to determine a consistency loss between the interference correction image and the second computed tomography image as a first loss value; the second loss determination sub-module may be configured to determine a consistency loss between the artifact correction image and the second computed tomography image as a second loss value; the third loss determination sub-module may be configured to determine a consistency loss between the interference correction image and the artifact correction image as a third loss value; and the target loss determination sub-module may be configured to determine the target loss value from the first loss value, the second loss value, and the third loss value.
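Since the disclosure leaves the concrete form of each consistency loss and the way the three values are combined open, the function below is only one plausible realization, assuming L1 distances and a weighted sum with tunable weights.

```python
import torch.nn.functional as F


def target_loss_value(interference_corrected, artifact_corrected, clean_ct,
                      w1=1.0, w2=1.0, w3=1.0):
    """Combine the three consistency losses into the target loss value (sketch)."""
    first_loss = F.l1_loss(interference_corrected, clean_ct)            # vs. the second CT image
    second_loss = F.l1_loss(artifact_corrected, clean_ct)               # vs. the second CT image
    third_loss = F.l1_loss(interference_corrected, artifact_corrected)  # between the two corrections
    return w1 * first_loss + w2 * second_loss + w3 * third_loss
```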
In some embodiments, the artifact correction apparatus 1100 may further include: an artifact correction image feature determination module and a classification module.
The artifact correction image feature determination module may be configured to perform encoding processing on the artifact correction image through an encoding module of the second network structure to obtain an artifact correction image feature; the classification module may be configured to perform classification prediction processing on the artifact correction image feature through a multi-layer perceptron, so as to classify the artifact correction image.
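One way to realize this classification path, under the assumption that the encoder outputs a sequence of block features that can be average-pooled before the multi-layer perceptron, is sketched below; the class name, feature dimension, and number of classes are illustrative.

```python
import torch.nn as nn


class ArtifactImageClassifier(nn.Module):
    """Classification sketch: encode the artifact correction image with the
    encoding module of the second network structure, then classify the pooled
    artifact correction image feature with a multi-layer perceptron."""

    def __init__(self, encoder: nn.Module, embed_dim=128, num_classes=2):
        super().__init__()
        self.encoder = encoder                         # encoding module of the second network structure
        self.mlp = nn.Sequential(                      # multi-layer perceptron head
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, num_classes))

    def forward(self, artifact_correction_blocks):
        features = self.encoder(artifact_correction_blocks)   # (B, N, embed_dim)
        pooled = features.mean(dim=1)                          # pool over blocks
        return self.mlp(pooled)                                # class logits
```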
Since the functions of the apparatus 1100 have been described in detail in the corresponding method embodiments, they are not repeated here.
The modules and/or sub-modules and/or units referred to in the embodiments of the present disclosure may be implemented in software or in hardware. The described modules, sub-modules, and units may also be provided in a processor. In some cases, the names of these modules, sub-modules, and units do not constitute a limitation on the modules, sub-modules, or units themselves.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module or portion of a program that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer program instructions.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Fig. 12 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure. It should be noted that the electronic device 1200 shown in Fig. 12 is only an example and should not impose any limitation on the functions or the application scope of the embodiments of the present disclosure.
As shown in fig. 12, the electronic apparatus 1200 includes a Central Processing Unit (CPU) 1201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the electronic apparatus 1200 are also stored. The CPU 1201, ROM 1202, and RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
The following components are connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage section 1208 including a hard disk or the like; and a communication section 1209 including a network interface card such as a LAN card or a modem. The communication section 1209 performs communication processing via a network such as the Internet. A drive 1210 is also connected to the I/O interface 1205 as needed. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1210 as needed, so that a computer program read therefrom can be installed into the storage section 1208.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising computer program instructions for performing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network via the communication section 1209 and/or installed from the removable medium 1211. When the computer program is executed by the central processing unit (CPU) 1201, the above-described functions defined in the system of the present disclosure are performed.
It should be noted that the computer readable storage medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program instructions embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Computer program instructions embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, or any suitable combination of the foregoing.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be included in the device described in the above embodiments or may exist separately without being assembled into the device. The computer-readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: acquiring a first computed tomography image and a second computed tomography image of a target object, wherein the first computed tomography image includes an artifact and the second computed tomography image does not include an artifact; adding noise interference to the second computed tomography image to generate an interference computed tomography image; performing random mask masking processing on the first computed tomography image and the interference computed tomography image to obtain an artifact shielding image and an interference shielding image, respectively; performing correction processing on the interference shielding image through a first network structure of a target network model to obtain an interference correction image; performing correction processing on the artifact shielding image through a second network structure of the target network model to obtain an artifact correction image; determining a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and training the target network model according to the target loss value, so as to perform artifact correction of a computed tomography image according to the trained target network model.
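As a usage example of the trained model, the sketch below applies the artifact branch to a new computed tomography image. Feeding the full, unmasked image at inference time is an assumption of this sketch, as is the function name; the disclosure itself only states that artifact correction is performed with the trained target network model.

```python
import torch


@torch.no_grad()
def correct_artifacts(trained_model, ct_image):
    """Apply the trained target network model to a CT image that contains artifacts."""
    trained_model.eval()
    # The whole image is treated as visible; the artifact branch returns
    # the artifact correction image.
    return trained_model.second_branch(ct_image)
```

Batching, intensity normalization, and conversion of the output back to the original image geometry are omitted from this sketch.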
According to one aspect of the present disclosure, there is provided a computer program product or computer program comprising computer program instructions stored in a computer readable storage medium. The computer program instructions are read from a computer-readable storage medium and executed by a processor to implement the methods provided in the various alternative implementations of the above embodiments.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (for example, a CD-ROM, a USB flash drive, or a removable hard disk) and which includes several computer program instructions for causing an electronic device (for example, a server or a terminal device) to perform the methods according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the precise constructions, drawings, or implementations set forth herein, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method of artifact correction, comprising:
acquiring a first computed tomography image and a second computed tomography image of a target object, wherein the first computed tomography image comprises artifacts, and the second computed tomography image does not comprise artifacts;
adding noise interference to the second computed tomography image to generate an interference computed tomography image;
performing random mask shielding processing on the first computed tomography image and the interference computed tomography image to obtain an artifact shielding image and an interference shielding image, respectively;
correcting the interference shielding image through a first network structure of a target network model to obtain an interference correction image;
correcting the artifact shielding image through a second network structure of the target network model to obtain an artifact correction image;
determining a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and
training the target network model according to the target loss value, so as to perform artifact correction of a computed tomography image according to the trained target network model.
2. The method according to claim 1, wherein the method further comprises:
taking an image block which is shielded in the artifact shielding image as an artifact image hidden block;
taking an image block which is not shielded in the artifact shielding image as an artifact image visible block;
wherein correcting the artifact shielding image through the second network structure of the target network model to obtain the artifact correction image comprises:
performing encoding prediction processing on the artifact image visible block through the second network structure, so as to predict the artifact image hidden block and obtain the artifact correction image.
3. The method according to claim 2, wherein performing encoding prediction processing on the artifact image visible block through the second network structure to predict the artifact image hidden block and obtain the artifact correction image comprises:
encoding the artifact image visible block through an encoding module of the second network structure to obtain an artifact image visible feature;
merging the artifact image visible feature with the artifact image hidden block to obtain an artifact image merging feature; and
performing prediction processing on the artifact image merging feature through a decoding module of the second network structure to obtain the artifact correction image.
4. the method according to claim 1, wherein the method further comprises:
taking an image block which is shielded in the interference shielding image as an interference image hidden block;
taking an image block which is not shielded in the interference shielding image as an interference image visible block;
wherein correcting the interference shielding image through the first network structure of the target network model to obtain the interference correction image comprises:
performing encoding prediction processing on the interference image visible block through the first network structure, so as to predict the interference image hidden block and obtain the interference correction image.
5. The method according to claim 4, wherein performing encoding prediction processing on the interference image visible block through the first network structure to predict the interference image hidden block and obtain the interference correction image comprises:
encoding the interference image visible block through the encoding module of the second network structure to obtain an interference image visible feature;
merging the interference image visible feature with the interference image hidden block to obtain an interference image merging feature; and
performing prediction processing on the interference image merging feature through a decoding module of the first network structure to obtain the interference correction image.
6. The method according to claim 1, wherein determining the consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as the target loss value comprises:
determining a consistency loss between the interference correction image and the second computed tomography image as a first loss value;
determining a consistency loss between the artifact correction image and the second computed tomography image as a second loss value;
determining a consistency loss between the interference correction image and the artifact correction image as a third loss value; and
determining the target loss value according to the first loss value, the second loss value, and the third loss value.
7. The method according to claim 3, wherein the method further comprises:
encoding the artifact correction image through an encoding module of the second network structure to obtain an artifact correction image feature; and
performing classification prediction processing on the artifact correction image feature through a multi-layer perceptron, so as to classify the artifact correction image.
8. An artifact correction device, comprising:
the image acquisition module is used for acquiring a first computed tomography image and a second computed tomography image of a target object, wherein the first computed tomography image comprises artifacts, and the second computed tomography image does not comprise artifacts;
the noise processing module is used for adding noise interference to the second computed tomography image to generate an interference computed tomography image;
the random mask shielding module is used for performing random mask shielding processing on the first computed tomography image and the interference computed tomography image to obtain an artifact shielding image and an interference shielding image, respectively;
the first correction module is used for correcting the interference shielding image through a first network structure of the target network model to obtain an interference correction image;
the second correction module is used for correcting the artifact shielding image through a second network structure of the target network model to obtain an artifact correction image;
the loss value determination module is used for determining a consistency loss among the interference correction image, the artifact correction image, and the second computed tomography image as a target loss value; and
the training module is used for training the target network model according to the target loss value, so as to perform artifact correction of a computed tomography image according to the trained target network model.
9. an electronic device, comprising:
a memory; and
A processor coupled to the memory, the processor being configured to perform the artifact correction method according to any of claims 1-7 based on computer program instructions stored in the memory.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the artifact correction method according to any of claims 1-7.
CN202410195987.0A 2024-02-22 2024-02-22 artifact correction method, artifact correction device, electronic equipment and computer readable storage medium Pending CN117765118A (en)

Priority Applications (1)

Application Number | Publication | Title
CN202410195987.0A | CN117765118A (en) | artifact correction method, artifact correction device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number | Publication Date
CN117765118A (en) | 2024-03-26

Family

ID=90326098

Family Applications (1)

Application Number | Status | Publication | Title
CN202410195987.0A | Pending | CN117765118A (en) | artifact correction method, artifact correction device, electronic equipment and computer readable storage medium

Country Status (1)

Country | Link
CN (1) | CN117765118A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106780649A * | 2016-12-16 | 2017-05-31 | Shanghai United Imaging Healthcare Co., Ltd. | The artifact minimizing technology and device of image
US20210082107A1 * | 2019-09-13 | 2021-03-18 | Siemens Healthcare GmbH | Manipulable object synthesis in 3D medical images with structured image decomposition
CN116569207A * | 2020-12-12 | 2023-08-08 | Samsung Electronics Co., Ltd. | Method and electronic device for managing artifacts of images
CN116664710A * | 2023-05-19 | 2023-08-29 | Information Engineering University of the PLA Strategic Support Force | CT image metal artifact unsupervised correction method based on Transformer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Linlin Zhu et al., "Unsupervised metal artifacts reduction network for CT images based on efficient transformer", Biomedical Signal Processing and Control, 17 November 2023 (2023-11-17) *

Similar Documents

Publication | Title
CN109409503B (en) Neural network training method, image conversion method, device, equipment and medium
Cosman et al. Evaluating quality of compressed medical images: SNR, subjective rating, and diagnostic accuracy
Popescu et al. Arrhythmic sudden death survival prediction using deep learning analysis of scarring in the heart
CN110570492A (en) Neural network training method and apparatus, image processing method and apparatus, and medium
Liang et al. Metal artifact reduction for practical dental computed tomography by improving interpolation‐based reconstruction with deep learning
CN111598989B (en) Image rendering parameter setting method and device, electronic equipment and storage medium
CN116664713B (en) Training method of ultrasound contrast image generation model and image generation method
JPWO2013076930A1 (en) Medical image compression apparatus, medical image compression method, and prediction knowledge database creation apparatus
Urbaniak et al. Quality assessment of compressed and resized medical images based on pattern recognition using a convolutional neural network
Janet et al. Lossless compression techniques for medical images in telemedicine
CN114897756A (en) Model training method, medical image fusion method, device, equipment and medium
CN112767505A (en) Image processing method, training method, device, electronic terminal and storage medium
CN115471470A (en) Esophageal cancer CT image segmentation method
Gao et al. CoreDiff: Contextual error-modulated generalized diffusion model for low-dose CT denoising and generalization
CN117152442B (en) Automatic image target area sketching method and device, electronic equipment and readable storage medium
CN106228520B (en) Image enchancing method and device
Finck et al. Uncertainty-aware and lesion-specific image synthesis in multiple sclerosis magnetic resonance imaging: a multicentric validation study
CN117765118A (en) artifact correction method, artifact correction device, electronic equipment and computer readable storage medium
CN113592968B (en) Method and device for reducing metal artifacts in tomographic images
Rizzi et al. Digital watermarking for healthcare: a survey of ECG watermarking methods in telemedicine
CN113096238B (en) X-ray diagram simulation method and device, electronic equipment and storage medium
Zhang et al. Task-based model/human observer evaluation of SPIHT wavelet compression with human visual system-based quantization1
Fonseca et al. X-ray image enhancement: A technique combination approach
CN114820861A (en) MR synthetic CT method, equipment and computer readable storage medium based on cycleGAN
CN113689435A (en) Image segmentation method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination