CN115170424B - Heart ultrasonic image artifact removing method and device
- Publication number: CN115170424B
- Application number: CN202210803955.5A
- Authority: CN (China)
- Prior art keywords: artifact, image, vector, loss, category
- Legal status: Active
Classifications
- G06T5/80: Image enhancement or restoration; geometric correction
- G06N3/08: Neural networks; learning methods
- G06T7/0012: Image analysis; biomedical image inspection
- G06T2207/10081: Image acquisition modality; computed x-ray tomography [CT]
- G06T2207/10088: Image acquisition modality; magnetic resonance imaging [MRI]
- G06T2207/10132: Image acquisition modality; ultrasound image
- G06T2207/20081: Special algorithmic details; training and learning
- G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30048: Subject of image; heart (cardiac)
Abstract
The present disclosure relates to a method and a device for removing cardiac ultrasound image artifacts. The method includes: acquiring an original image and a first artifact category vector corresponding to the original image, where the first artifact category vector indicates the categories of artifacts that need to be removed from the original image; and inputting the original image and the first artifact category vector into a first generator, which outputs an artifact-free target image. The first generator is obtained when a cycle generative adversarial network satisfies a preset condition, the network having been trained on a first artifact image, a second artifact category vector, and a first artifact-free image. The disclosed embodiments can improve the efficiency and flexibility of artifact removal.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for removing cardiac ultrasound image artifacts.
Background
Medical images contain noise and artifacts that hinder image interpretation and clinical diagnosis. In recent years, deep learning, which simulates human perception, has advanced remarkably in fields such as speech recognition and image recognition. In medical imaging, deep learning has been widely studied and applied, assisting the analysis and diagnosis of medical images.
In the related art, checkerboard artifacts may be reduced by sparse deconvolution, and speckle artifacts by non-local low-rank filtering. These methods are computationally intensive and consume significant computing resources. Moreover, because each method removes only one kind of artifact, several methods must be applied in sequence in practice; likewise, a single generative adversarial network model for artifact removal can currently remove only a single category of artifact. Since several categories of artifacts often coexist in an actual medical image, multiple models must be deployed together to handle the different categories, which consumes considerable storage and computing resources and results in low efficiency.
Disclosure of Invention
The present disclosure provides a method and an apparatus for removing artifacts in cardiac ultrasound images, which can improve the efficiency and flexibility of removing artifacts.
According to an aspect of the present disclosure, there is provided an artifact removing method, the method including: acquiring an original image and a first artifact category vector corresponding to the original image, where the first artifact category vector is used to indicate the categories of artifacts that need to be removed from the original image; and inputting the original image and the first artifact category vector into a first generator, and outputting an artifact-free target image. The first generator is obtained when a cycle generative adversarial network satisfies a preset condition. The cycle generative adversarial network is trained on a first artifact image, a second artifact category vector, and a first artifact-free image, where the second artifact category vector indicates the categories of artifacts that need to be removed from the first artifact image, and the first artifact-free image represents the first artifact image with those artifacts removed. The cycle generative adversarial network includes the first generator, which removes artifacts from an image; a second generator, which adds artifacts to an image; a first discriminator, which discriminates the first artifact image from a second artifact image output by the second generator; and a second discriminator, which discriminates the first artifact-free image from a second artifact-free image output by the first generator.
In the embodiments of the disclosure, the artifact categories to be removed from the original image are controlled by the first artifact category vector, so that a single generator can remove artifacts of multiple categories, saving storage resources and cost and improving the efficiency, controllability, and flexibility of artifact removal.
In one possible implementation, the process of training the cycle generative adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image includes:
inputting the first artifact image and the second artifact category vector into the first generator and outputting the second artifact-free image, inputting the second artifact-free image and the second artifact category vector into the second generator and outputting a third artifact image, and determining a first loss between the first artifact image and the third artifact image;

inputting the first artifact-free image and the second artifact category vector into the second generator and outputting the second artifact image, inputting the second artifact image and the second artifact category vector into the first generator and outputting a third artifact-free image, and determining a second loss between the first artifact-free image and the third artifact-free image;
inputting the first artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the first artifact image, where the third loss corresponding to the first artifact image is used to characterize whether the first artifact image is a true artifact image, and the fourth loss corresponding to the first artifact image is used to characterize whether the identified artifact category of the first artifact image is correct;
inputting the second artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the second artifact image, where the third loss corresponding to the second artifact image is used to characterize whether the second artifact image is a true artifact image, and the fourth loss corresponding to the second artifact image is used to characterize whether the identified artifact category of the second artifact image is correct;
inputting the first artifact-free image into the second discriminator to obtain a fifth loss corresponding to the first artifact-free image, wherein the fifth loss corresponding to the first artifact-free image is used for representing whether the first artifact-free image is a real artifact-free image;
inputting the second artifact-free image into the second discriminator to obtain a fifth loss corresponding to the second artifact-free image, wherein the fifth loss corresponding to the second artifact-free image is used for representing whether the second artifact-free image is a real artifact-free image;
adjusting the cycle generative adversarial network according to a cycle consistency loss, a discrimination loss, and a classification loss, where the cycle consistency loss includes the first loss and the second loss; the discrimination loss includes the third loss corresponding to the first artifact image, the third loss corresponding to the second artifact image, the fifth loss corresponding to the first artifact-free image, and the fifth loss corresponding to the second artifact-free image; and the classification loss includes the fourth loss corresponding to the first artifact image and the fourth loss corresponding to the second artifact image.
In an embodiment of the present disclosure, the first generator, the second generator, the first discriminator, and the second discriminator are trained on the first artifact image, the second artifact category vector, and the first artifact-free image, so that the resulting first generator can remove artifacts from an image, transforming an original image with artifacts into an artifact-free target image.
In a possible implementation manner, the preset condition is: a maximum number of iterations is reached, or a loss function falls below a preset value, where the loss function is determined by the cycle consistency loss, the discrimination loss, and the classification loss.
In an embodiment of the present disclosure, the end point of training is determined by capping the number of iterations or by the value of the loss function.
In one possible implementation, the first generator, the second generator, the first discriminator, and the second discriminator employ a convolutional neural network model.
In one possible implementation, obtaining the first artifact category vector includes:
and inputting the original image into a classification model, and outputting the first artifact category vector, wherein the classification model adopts a convolutional neural network model.
In the embodiment of the present disclosure, by using the first artifact category vector, the classified identification of a specific artifact category is realized.
In one possible implementation, the classification model includes a first classification model for artifact category identification based on a single-frame image, and inputting the original image into the classification model and outputting the first artifact category vector includes:
and inputting the original image into the first classification model, and outputting the first artifact category vector.
In the embodiment of the disclosure, the first artifact category vector can be efficiently and accurately obtained by inputting the single-frame original image into the first classification model and thus outputting the first artifact category vector.
In one possible implementation, the classification model includes a second classification model for artifact category identification based on multi-frame images, and inputting the original image into the classification model and outputting the first artifact category vector includes:
acquiring the original image and a plurality of reference images from a video stream, wherein the plurality of reference images comprise a plurality of image frames which are positioned in front of and adjacent to the original image and/or a plurality of image frames which are positioned behind and adjacent to the original image;
and inputting the original image and the plurality of reference images into the second classification model, and outputting the first artifact category vector.
In the embodiment of the disclosure, the single-frame original image and the plurality of reference images are input into the second classification model, so that the first artifact category vector is output, the motion artifact can be accurately identified, and the accuracy of the first artifact category vector is further improved.
In one possible implementation, obtaining the first artifact category vector includes:
inputting the original image into a classification model, and outputting a reference artifact category vector;
feeding back the reference artifact category vector to an operator;
determining the reference artifact category vector as the first artifact category vector if a confirmation instruction of the operator for the reference artifact category vector is received;
and under the condition that an adjusting instruction of the operator for the reference artifact category vector is received, determining an artifact category vector carried in the adjusting instruction as the first artifact category vector.
In one possible implementation, obtaining the first artifact category vector includes:
and determining an artifact category vector provided by an operator as the first artifact category vector.
In the embodiment of the disclosure, the first artifact category vector is determined through the confirmation instruction and the adjustment instruction of the operator, so that calibration of the first artifact category vector can be realized, the accuracy of the first artifact category vector is further improved, and meanwhile, the flexibility of the removed artifact category is increased.
In one possible implementation, the method further includes:
acquiring operation guidance corresponding to the first artifact category vector;

and feeding back the operation guidance to an operator, so that the operator can adjust an image acquisition mode according to the operation guidance in subsequent operations.
In the embodiment of the disclosure, the corresponding operation guidance is obtained through the first artifact category vector, so that an operator can adjust an image acquisition mode according to the operation guidance, and the occurrence probability of the artifact of a subsequently acquired image is reduced.
In one possible implementation, the dimension of the first artifact category vector is N, where each dimension is used to indicate whether an artifact of one category exists, and N is an integer greater than 0.
In the embodiment of the disclosure, accurate representation of artifact categories can be realized through the corresponding relationship between the number of elements of the vector and the number of artifact categories.
According to an aspect of the present disclosure, there is provided an artifact removal system, the system including an image acquisition device, an artifact identification device, and an artifact removal device, wherein:
the image acquisition equipment is used for acquiring an original image;
the artifact identification device is used for identifying a first artifact category vector corresponding to the original image, wherein the first artifact category vector is used for indicating the category of the artifact needing to be removed from the original image;
the artifact removing device is configured to process the original image and the first artifact category vector using the above artifact removing method, to obtain an artifact-free target image.
According to an aspect of the present disclosure, there is provided an artifact removing apparatus, the apparatus including: a first acquisition module, configured to acquire an original image and a first artifact category vector corresponding to the original image, where the first artifact category vector is used to indicate the categories of artifacts that need to be removed from the original image; and an input and output module, configured to input the original image acquired by the first acquisition module and the first artifact category vector into a first generator, and output an artifact-free target image. The first generator is obtained when a cycle generative adversarial network satisfies a preset condition. The cycle generative adversarial network is trained on a first artifact image, a second artifact category vector, and a first artifact-free image, where the second artifact category vector indicates the categories of artifacts that need to be removed from the first artifact image, and the first artifact-free image represents the first artifact image with those artifacts removed. The cycle generative adversarial network includes the first generator, which removes artifacts from an image; a second generator, which adds artifacts to an image; a first discriminator, which discriminates the first artifact image from a second artifact image output by the second generator; and a second discriminator, which discriminates the first artifact-free image from a second artifact-free image output by the first generator.
In one possible implementation, the apparatus further includes a training module configured to train the cycle generative adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image;

where training the cycle generative adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image includes:
inputting the first artifact image and the second artifact category vector into the first generator and outputting the second artifact-free image, inputting the second artifact-free image and the second artifact category vector into the second generator and outputting a third artifact image, and determining a first loss between the first artifact image and the third artifact image;

inputting the first artifact-free image and the second artifact category vector into the second generator and outputting the second artifact image, inputting the second artifact image and the second artifact category vector into the first generator and outputting a third artifact-free image, and determining a second loss between the first artifact-free image and the third artifact-free image;
inputting the first artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the first artifact image, where the third loss corresponding to the first artifact image is used to characterize whether the first artifact image is a true artifact image, and the fourth loss corresponding to the first artifact image is used to characterize whether the identified artifact category of the first artifact image is correct;
inputting the second artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the second artifact image, where the third loss corresponding to the second artifact image is used to characterize whether the second artifact image is a true artifact image, and the fourth loss corresponding to the second artifact image is used to characterize whether the artifact category of the identified second artifact image is correct;
inputting the first artifact-free image into the second discriminator to obtain a fifth loss corresponding to the first artifact-free image, wherein the fifth loss corresponding to the first artifact-free image is used for representing whether the first artifact-free image is a real artifact-free image;
inputting the second artifact-free image into the second discriminator to obtain a fifth loss corresponding to the second artifact-free image, wherein the fifth loss corresponding to the second artifact-free image is used for representing whether the second artifact-free image is a real artifact-free image;
adjusting the cycle generative adversarial network according to a cycle consistency loss, a discrimination loss, and a classification loss, where the cycle consistency loss includes the first loss and the second loss; the discrimination loss includes the third loss corresponding to the first artifact image, the third loss corresponding to the second artifact image, the fifth loss corresponding to the first artifact-free image, and the fifth loss corresponding to the second artifact-free image; and the classification loss includes the fourth loss corresponding to the first artifact image and the fourth loss corresponding to the second artifact image.
In a possible implementation manner, the preset condition is:
a maximum number of iterations is reached, or a loss function falls below a preset value, where the loss function is determined by the cycle consistency loss, the discrimination loss, and the classification loss.
In one possible implementation, the first generator, the second generator, the first discriminator, and the second discriminator employ a convolutional neural network model.
In a possible implementation manner, the first obtaining module is further configured to:
and inputting the original image into a classification model, and outputting the first artifact category vector, wherein the classification model adopts a convolutional neural network model.
In one possible implementation, the classification model includes a first classification model for artifact category identification based on a single-frame image, and inputting the original image into the classification model and outputting the first artifact category vector includes:
and inputting the original image into the first classification model, and outputting the first artifact category vector.
In one possible implementation, the classification model includes a second classification model for artifact category identification based on multi-frame images, and inputting the original image into the classification model and outputting the first artifact category vector includes:
acquiring the original image and a plurality of reference images from a video stream, wherein the plurality of reference images comprise a plurality of image frames which are positioned in front of and adjacent to the original image and/or a plurality of image frames which are positioned in back of and adjacent to the original image;
and inputting the original image and the plurality of reference images into the second classification model, and outputting the first artifact category vector.
In a possible implementation manner, the first obtaining module is further configured to:
inputting the original image into a classification model, and outputting a reference artifact category vector;
feeding back the reference artifact category vector to an operator;
determining the reference artifact category vector as the first artifact category vector if a confirmation instruction of the operator for the reference artifact category vector is received;
and under the condition that an adjusting instruction of the operator for the reference artifact category vector is received, determining an artifact category vector carried in the adjusting instruction as the first artifact category vector.
In a possible implementation manner, the first obtaining module is further configured to:
and determining an artifact category vector provided by an operator as the first artifact category vector.
In one possible implementation, the apparatus further includes:
a second obtaining module, configured to obtain an operation guidance corresponding to the first artifact category vector;
and a feedback module, configured to feed the operation guidance back to an operator, so that the operator can adjust the image acquisition mode according to the operation guidance in subsequent operations.
In one possible implementation, the dimension of the first artifact category vector is N, where each dimension is used to indicate whether an artifact of one category exists, and N is an integer greater than 0.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of an artifact removal method according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a classification model provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a training process of a cycle generative adversarial network provided by an embodiment of the disclosure;
FIG. 4 shows an exemplary network architecture diagram of a generator;
FIG. 5 illustrates an exemplary network structure diagram of a residual block;
FIG. 6 illustrates an exemplary network architecture diagram of a discriminator;
FIG. 7 illustrates an exemplary application diagram of a first generator provided by an embodiment of the disclosure;
fig. 8 is a schematic diagram illustrating an implementation process of an artifact removing method according to an embodiment of the present disclosure;
fig. 9 illustrates an architectural diagram of an artifact removal system provided in accordance with an embodiment of the present disclosure;
fig. 10 shows a block diagram of an artifact removing apparatus provided in an embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
fig. 12 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. Additionally, the term "at least one" herein means any one of a variety or any combination of at least two of a variety, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow diagram of an artifact removal method according to an embodiment of the present disclosure. The artifact removing method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server. As shown in fig. 1, the artifact removing method may include:
in step S11, an original image and a first artifact category vector corresponding to the original image are obtained.
The original image is an image from which artifacts need to be removed. It may be a picture captured by an image acquisition device, an image frame taken from a video stream, or an image obtained in another manner, which is not specifically limited in this disclosure. For example, the original image may be an ultrasound image, a Computed Tomography (CT) image, or a magnetic resonance image; accordingly, the image acquisition device may be an ultrasound acquisition device, a CT scanner, a nuclear magnetic resonance apparatus, or the like. The artifact removing method provided by the embodiments of the disclosure can therefore remove artifacts from images such as ultrasound images, CT images, or nuclear magnetic resonance images, and in particular from cardiac ultrasound images. That is to say, the embodiments of the present disclosure provide a method for removing cardiac ultrasound image artifacts. It should be noted that the above are merely examples: the original image may be any other image from which artifacts need to be removed, and the image acquisition device may be any other device that can capture pictures or video; the embodiments of the present disclosure are not limited in this respect.
A first artifact category vector may be used to indicate the category of artifacts that need to be removed from the original image. The category of the artifact includes, but is not limited to, common artifact categories such as speckle artifact, partial volume effect artifact, side lobe effect artifact, acoustic artifact, and echo enhancement artifact.
In one possible implementation, the first artifact category vector may indicate no artifact, a single category of artifact, or a combination of several categories of artifacts. When the first artifact category vector indicates no artifact, no artifacts need to be removed from the original image; when it indicates a certain category of artifact, artifacts of that category need to be removed from the original image; when it indicates a combination of several categories, artifacts of all those categories need to be removed from the original image simultaneously.
In one example, the dimension of the first artifact category vector is N, where each dimension indicates whether an artifact of one category exists, and N is an integer greater than 0. Take two artifact categories, speckle artifacts and side lobe effect artifacts, as an example: N is 2, and the first artifact category vector has two elements, the first indicating whether speckle artifacts exist and the second indicating whether side lobe effect artifacts exist. A first element of 0 means no speckle artifact, and 1 means speckle artifacts are present; a second element of 0 means no side lobe effect artifact, and 1 means side lobe effect artifacts are present. Thus [0,0] indicates no artifact; [1,0] indicates that speckle artifacts are present; [0,1] indicates that side lobe effect artifacts are present; and [1,1] indicates that both speckle artifacts and side lobe effect artifacts are present, as shown in the sketch below.
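For illustration, a minimal Python sketch of this encoding; the category names "speckle" and "side_lobe" are hypothetical labels standing in for the two categories in the example above:

```python
ARTIFACT_CATEGORIES = ["speckle", "side_lobe"]  # hypothetical names, N = 2 dimensions

def make_category_vector(categories_to_remove):
    """Build an N-dimensional 0/1 artifact category vector."""
    return [1 if name in categories_to_remove else 0 for name in ARTIFACT_CATEGORIES]

def describe(vector):
    """Recover the artifact category names a vector flags."""
    flagged = [name for name, bit in zip(ARTIFACT_CATEGORIES, vector) if bit == 1]
    return flagged if flagged else ["no artifact"]

print(make_category_vector([]))                        # [0, 0]
print(make_category_vector(["speckle"]))               # [1, 0]
print(make_category_vector(["speckle", "side_lobe"]))  # [1, 1]
print(describe([0, 1]))                                # ['side_lobe']
```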
In a possible implementation manner, the first artifact category vector may be obtained by an artifact identification device. The artifact identification device may directly obtain an artifact category vector specified by a user, or it may input the original image into a classification model to obtain the artifact category vector. The classification model may adopt a convolutional neural network model, and its structure may be a Residual Network (ResNet), a Densely Connected Convolutional Network (DenseNet), a Squeeze-and-Excitation Network (SENet), an EfficientNet, or a similar architecture. For the training of the classification model, reference may be made to the related art, and details are not repeated here. The process of obtaining the first artifact category vector is described in detail later.
In step S12, the original image and the first artifact class vector are input to a first generator, and an artifact-free target image is output.
The first generator may be used to remove artifacts from an image. It is obtained when the cycle generative adversarial network satisfies a preset condition: at that point, the first generator contained in the network is the first generator used in step S12, and the original image and the first artifact category vector can be input into it to output the artifact-free target image, as in the sketch below.
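A minimal PyTorch inference sketch for step S12; the generator interface (an image tensor plus a category-vector tensor) and all names are assumptions for illustration, not the patent's implementation:

```python
import torch

@torch.no_grad()
def remove_artifacts(g1, original_image, category_vector):
    """g1: trained first generator (a torch.nn.Module), assumed to take an image
    tensor of shape (1, 3, H, W) and a category tensor of shape (1, N).
    original_image: (1, 3, H, W) float tensor in [0, 1].
    category_vector: length-N list of 0/1 flags (the first artifact category vector)."""
    g1.eval()
    c = torch.tensor(category_vector, dtype=torch.float32).unsqueeze(0)  # (1, N)
    target_image = g1(original_image, c)  # artifact-free target image
    return target_image.clamp(0.0, 1.0)
```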
The cycle generative adversarial network is trained based on the first artifact image, the second artifact category vector, and the first artifact-free image. The first artifact image represents the artifact image fed into the network during training. The first artifact-free image represents the first artifact image with its artifacts removed. The second artifact category vector indicates the categories of artifacts that need to be removed from the first artifact image; its form follows that of the first artifact category vector and is not repeated here. Removing the artifacts of the categories indicated by the second artifact category vector from the first artifact image yields the first artifact-free image. In the disclosed embodiments, the first artifact image, the second artifact category vector, and the first artifact-free image are the inputs to the cycle generative adversarial network.
The cycle generative adversarial network includes a first generator for removing artifacts from an image, a second generator for adding artifacts to an image, a first discriminator for discriminating between the first artifact image and a second artifact image output by the second generator, and a second discriminator for discriminating between the first artifact-free image and a second artifact-free image output by the first generator. The structure of the network and its training process are described in detail later.
In the embodiments of the disclosure, the artifact categories to be removed from the original image are controlled by the first artifact category vector, so that a single generator can remove artifacts of multiple categories, saving storage resources and cost and improving the efficiency, controllability, and flexibility of artifact removal.
The following describes the process of acquiring the first artifact category vector in detail.
In a possible implementation manner, the obtaining of the first artifact category vector in step S11 may include: and inputting the original image into a classification model, and outputting the first artifact category vector.
In an embodiment of the present disclosure, the first artifact category vector may be obtained by a classification model. In one example, the process of training the classification model may include: acquiring a preset training set including a plurality of sample images and the artifact category vector corresponding to each sample image; inputting the sample images into the classification model to obtain sample processing results; determining the loss of the classification model from the loss between each sample processing result and the corresponding artifact category vector; adjusting the network parameters of the classification model by back-propagating this loss; and, after several iterations, obtaining the trained classification model once a training condition (such as network convergence) is met. A minimal training-loop sketch follows.
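A minimal PyTorch sketch of this training loop; the multi-label binary cross-entropy, the optimizer, and all names are assumptions rather than the patent's specification:

```python
import torch
import torch.nn as nn

def train_classifier(model, loader, epochs=10, lr=1e-4):
    """model: maps an image batch (B, 3, H, W) to N per-category logits.
    loader: yields (images, category_vectors), where category_vectors are
    float 0/1 tensors of shape (B, N)."""
    criterion = nn.BCEWithLogitsLoss()  # multi-label: each dimension is an independent 0/1
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, category_vectors in loader:
            logits = model(images)                      # sample processing result
            loss = criterion(logits, category_vectors)  # loss against the label vector
            optimizer.zero_grad()
            loss.backward()                             # back-propagate the loss
            optimizer.step()                            # adjust the network parameters
    return model
```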
Fig. 2 shows a schematic diagram of a classification model provided by an embodiment of the present disclosure. As shown in fig. 2, the classification model in the embodiment of the present disclosure may be divided into a first classification model and a second classification model. Wherein the first classification model can be used for artifact class identification based on a single frame image. The second classification model may be used for artifact class identification based on the multi-frame image.
In one example, the classification model includes the first classification model. As shown in fig. 2, inputting the original image into the classification model and outputting the first artifact category vector includes: inputting the original image into the first classification model and outputting the first artifact category vector.
The input of the first classification model is a single-frame image, and the output is an artifact category vector. In the embodiment of the disclosure, the first artifact category vector can be efficiently and accurately obtained by inputting the single-frame original image into the first classification model and thus outputting the first artifact category vector.
In yet another example, the classification model includes the second classification model. As shown in fig. 2, inputting the original image into the classification model and outputting the first artifact category vector includes: acquiring the original image and a plurality of reference images from a video stream, inputting the original image and the plurality of reference images into the second classification model, and outputting the first artifact category vector.
Wherein the plurality of reference images comprise a plurality of image frames located before and adjacent to the original image and/or a plurality of image frames located after and adjacent to the original image. The number of the reference images may be set according to actual needs, for example, may be set to 10 or 20, and is not specifically limited in the embodiments of the present disclosure.
In a possible implementation manner, the network structure of the second classification model includes convolution layers that convolve along the time dimension, so as to extract time-series features from the input images (the original image and the reference images). The extracted time-series features can effectively improve the accuracy of motion-artifact identification: motion artifacts occur easily in a video stream acquired by an image acquisition device, and the time-series features make it easier to judge whether they are present. A minimal sketch of this idea follows.
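A minimal PyTorch sketch of a classifier that convolves across the time dimension: the original frame and its reference frames are stacked along a time axis and processed with 3D convolutions. The layer sizes are illustrative assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

class TemporalArtifactClassifier(nn.Module):
    """Classify artifact categories from a clip of shape (B, 3, T, H, W):
    the original frame plus its adjacent reference frames."""

    def __init__(self, num_categories):
        super().__init__()
        self.features = nn.Sequential(
            # The 3D kernel covers the time dimension as well as H and W,
            # so it sees motion across adjacent frames.
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, num_categories)  # one logit per artifact category

    def forward(self, clip):
        return self.head(self.features(clip).flatten(1))

# T = 11: one original frame plus 10 reference frames.
logits = TemporalArtifactClassifier(num_categories=2)(torch.rand(1, 3, 11, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```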
In the embodiment of the disclosure, the single-frame original image and the plurality of reference images are input into the second classification model so as to output the first artifact category vector; motion artifacts can thus be accurately identified, further improving the accuracy of the first artifact category vector.
In a possible implementation manner, the obtaining of the first artifact category vector in step S11 may include: inputting the original image into a classification model, and outputting a reference artifact category vector; feeding back the reference artifact category vector to an operator; determining the reference artifact category vector as the first artifact category vector if a confirmation instruction of the operator for the reference artifact category vector is received; and under the condition that an adjusting instruction of the operator for the reference artifact category vector is received, determining an artifact category vector carried in the adjusting instruction as the first artifact category vector.
The confirmation instruction is generated when the operator confirms that the reference artifact category vector is correct. Receiving the confirmation instruction indicates that the reference artifact category vector output by the classification model is correct, so the reference artifact category vector can be determined as the first artifact category vector for subsequent processing.
The adjustment instruction is generated when the operator adjusts the reference artifact category vector. Receiving the adjustment instruction indicates that the reference artifact category vector output by the classification model is wrong, so the artifact category vector carried in the adjustment instruction is determined as the first artifact category vector for subsequent processing. The operator's adjustments to the reference artifact category vector include, but are not limited to: changing "artifact present" to "no artifact", changing "no artifact" to "artifact present", and removing or adding artifact categories when artifacts are present. A minimal sketch of this flow follows.
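A minimal sketch of this operator-in-the-loop flow; the instruction format and all names are hypothetical:

```python
def resolve_category_vector(classify, original_image, ask_operator):
    """classify: maps the original image to a reference artifact category vector.
    ask_operator: feeds the vector back and returns the operator's instruction,
    e.g. {"type": "confirm"} or {"type": "adjust", "vector": [...]}."""
    reference_vector = classify(original_image)
    instruction = ask_operator(reference_vector)
    if instruction["type"] == "confirm":
        return reference_vector       # confirmation instruction received
    if instruction["type"] == "adjust":
        return instruction["vector"]  # vector carried in the adjustment instruction
    raise ValueError("unknown operator instruction")
```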
In the embodiment of the disclosure, the first artifact category vector is determined through the confirmation instruction and the adjustment instruction of the operator, so that the first artifact category vector can be calibrated, the accuracy of the first artifact category vector is further improved, and the flexibility of the removed artifact category is increased.
In a possible implementation manner, the obtaining of the first artifact category vector in step S11 may include: and determining an artifact category vector provided by an operator as the first artifact category vector.
In the embodiment of the present disclosure, the operator may directly provide the first artifact category vector, which may improve accuracy of the first artifact category vector, and may facilitate the operator to flexibly select an artifact to be removed, thereby increasing an applicable scenario.
In a possible implementation manner, the method may further include: after the first artifact category vector is obtained, acquiring operation guidance corresponding to the first artifact category vector; and feeding back the operation guidance to an operator, so that the operator can adjust the image acquisition mode according to the operation guidance in subsequent operations.
In the embodiment of the present disclosure, an experience library may be preset, and an operation guidance corresponding to each category of artifacts is stored in the experience library in advance. The operator can operate the acquisition equipment according to the operation guidance, so that the probability of corresponding category artifacts in the acquired image can be effectively reduced. The operation guidance includes, but is not limited to, acquisition orientation and/or acquisition distance, etc.
After the first artifact category vector is acquired, the categories of artifacts it indicates can be determined; the operation guidance corresponding to each indicated category is then fetched from the experience library and fed back to the operator, who can adjust the image acquisition mode accordingly in subsequent operations, for example by changing the acquisition orientation and/or acquisition distance of the device. In one example, an operator acquired an image at a distance of 20 centimeters and the image shows speckle artifacts, while the experience library's guidance for removing speckle artifacts specifies an acquisition distance of 10 centimeters; the operator can therefore reduce the acquisition distance in subsequent operations to eliminate speckle artifacts in subsequently acquired images. A minimal lookup sketch follows.
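A minimal sketch of such an experience-library lookup; the guidance strings and category names are hypothetical:

```python
# Hypothetical experience library: one guidance entry per artifact category.
EXPERIENCE_LIBRARY = {
    "speckle": "reduce the acquisition distance (e.g. from 20 cm to 10 cm)",
    "side_lobe": "adjust the acquisition orientation of the probe",
}
ARTIFACT_CATEGORIES = ["speckle", "side_lobe"]  # order matches the vector dimensions

def operation_guidance(category_vector):
    """Return the stored guidance for every category the vector flags."""
    flagged = [name for name, bit in zip(ARTIFACT_CATEGORIES, category_vector) if bit == 1]
    return [EXPERIENCE_LIBRARY[name] for name in flagged if name in EXPERIENCE_LIBRARY]

print(operation_guidance([1, 0]))  # guidance for removing speckle artifacts
```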
In the embodiment of the disclosure, various artifacts can be identified for the image, and inexperienced operators can reduce the artifacts in the subsequently acquired image according to operation guidance, thereby improving the quality of the subsequently acquired image.
The training process of the cycle generative adversarial network is explained below. Fig. 3 shows a schematic diagram of the training process of the cycle generative adversarial network provided by the embodiment of the disclosure. As shown in fig. 3, the cycle generative adversarial network includes: a first generator, a second generator, a first discriminator, and a second discriminator.
The first generator may be used to remove artifacts from an image: given the first artifact image and the second artifact category vector, it outputs the second artifact-free image; given the second artifact image and the second artifact category vector, it outputs the third artifact-free image. The second generator may be used to add artifacts to an image: given the first artifact-free image and the second artifact category vector, it outputs the second artifact image; given the second artifact-free image and the second artifact category vector, it outputs the third artifact image. The first discriminator may be used to discriminate whether an image is a real artifact image, and which categories of artifacts it contains. Specifically, the first artifact image and the second artifact image are input into the first discriminator, which determines the third loss corresponding to the first artifact image from whether the first artifact image is a real artifact image and its image label, determines the third loss corresponding to the second artifact image from whether the second artifact image is a real artifact image and its image label, and determines the corresponding fourth losses from the artifact categories and image labels of the first and second artifact images. The second discriminator may be used to discriminate whether an image is a real artifact-free image. Specifically, the first artifact-free image and the second artifact-free image are input into the second discriminator, which determines the fifth loss corresponding to the first artifact-free image from whether the first artifact-free image is a real artifact-free image and its image label, and determines the fifth loss corresponding to the second artifact-free image from whether the second artifact-free image is a real artifact-free image and its image label.
As shown in fig. 3, the process of training the cycle generative adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image includes: inputting the first artifact image and the second artifact category vector into the first generator and outputting the second artifact-free image, inputting the second artifact-free image and the second artifact category vector into the second generator and outputting the third artifact image, and determining the first loss between the first artifact image and the third artifact image; inputting the first artifact-free image and the second artifact category vector into the second generator and outputting the second artifact image, inputting the second artifact image and the second artifact category vector into the first generator and outputting the third artifact-free image, and determining the second loss between the first artifact-free image and the third artifact-free image; inputting the first artifact image and the second artifact image into the first discriminator, obtaining the third loss and the fourth loss corresponding to the first artifact image from the first artifact image and its image label, and obtaining the third loss and the fourth loss corresponding to the second artifact image from the second artifact image and its image label, where the third losses characterize whether the first and second artifact images are real artifact images and the fourth losses characterize the loss over their artifact categories; inputting the first artifact-free image and the second artifact-free image into the second discriminator, obtaining the fifth loss corresponding to the first artifact-free image from the first artifact-free image and its image label, and obtaining the fifth loss corresponding to the second artifact-free image from the second artifact-free image and its image label, where the fifth losses characterize whether the first and second artifact-free images are real artifact-free images; and adjusting the cycle generative adversarial network according to the cycle consistency loss, the discrimination loss, and the classification loss. A minimal sketch of one such training pass follows.
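A minimal PyTorch sketch of one training pass under these assumptions: g1/g2 are the two generators; d1 returns a real/fake score plus per-category logits; d2 returns a real/fake score; c is the second artifact category vector as a float 0/1 tensor of shape (B, N). The L1/BCE choices and the single combined pass are illustrative; a real implementation alternates generator and discriminator updates (detaching generator outputs):

```python
import torch
import torch.nn.functional as F

def cycle_gan_losses(g1, g2, d1, d2, artifact_img, clean_img, c):
    bce = F.binary_cross_entropy_with_logits

    # Cycle 1: remove artifacts, then add them back (first loss).
    fake_clean = g1(artifact_img, c)           # second artifact-free image
    rec_artifact = g2(fake_clean, c)           # third artifact image
    first_loss = F.l1_loss(rec_artifact, artifact_img)

    # Cycle 2: add artifacts, then remove them (second loss).
    fake_artifact = g2(clean_img, c)           # second artifact image
    rec_clean = g1(fake_artifact, c)           # third artifact-free image
    second_loss = F.l1_loss(rec_clean, clean_img)
    cycle_consistency = first_loss + second_loss

    # Discrimination loss: third losses from d1, fifth losses from d2.
    real_score, real_cls = d1(artifact_img)
    fake_score, fake_cls = d1(fake_artifact)
    clean_score = d2(clean_img)
    fake_clean_score = d2(fake_clean)
    discrimination = (bce(real_score, torch.ones_like(real_score))
                      + bce(fake_score, torch.zeros_like(fake_score))
                      + bce(clean_score, torch.ones_like(clean_score))
                      + bce(fake_clean_score, torch.zeros_like(fake_clean_score)))

    # Classification loss: fourth losses from d1's per-category head.
    classification = bce(real_cls, c) + bce(fake_cls, c)

    return cycle_consistency, discrimination, classification
```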
In a possible implementation manner, the first artifact image and the first artifact-free image are in a one-to-one correspondence: removing the artifacts of the categories indicated by the second artifact category vector from the first artifact image yields the first artifact-free image. The first artifact-free image may be obtained by removing the artifacts from the first artifact image with conventional methods. For example, if the second artifact category vector indicates a chessboard artifact and a speckle artifact, the first artifact-free image may be obtained by removing the chessboard artifact from the first artifact image with a sparse deconvolution method and removing the speckle artifact with a non-local low-rank filtering method.
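A short sketch of how such a paired training sample might be assembled; the remover callables are hypothetical placeholders standing in for the sparse deconvolution and non-local low-rank methods named above, whose implementations are not specified here.

```python
def build_training_pair(artifact_image, class_vec, removers):
    # class_vec: iterable of 0/1 flags, one per artifact category.
    # removers: one classical artifact-removal callable per category,
    # in the same order as the dimensions of class_vec (placeholders).
    clean = artifact_image
    for flag, remove in zip(class_vec, removers):
        if flag:  # only remove the categories marked by the second artifact category vector
            clean = remove(clean)
    # (first artifact image, second artifact category vector, first artifact-free image)
    return artifact_image, class_vec, clean
```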
The second artifact category vector can be obtained through the first artifact image and the classification model: the first artifact image is input into the classification model, which outputs the second artifact category vector. Alternatively, the second artifact category vector can be specified by a user during training.
In a possible implementation manner, the first generator may process the input artifact image and the second artifact category vector and output a corresponding artifact-free image; the second generator may process the input artifact-free image and the second artifact category vector and output a corresponding artifact image.
In one example, the number of input channels of a generator (either the first generator or the second generator) is determined by the number of artifact categories it handles. When a generator receives an image and the second artifact category vector, the image itself occupies three channels, and the generator adds one channel per dimension of the input second artifact category vector; that is, for an N-dimensional vector, the number of input channels is N + 3.
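One plausible way to realize the N + 3 input channels is sketched below in PyTorch; the broadcast-and-concatenate scheme is an assumption, since the disclosure only fixes the channel count.

```python
import torch

def concat_class_vector(image, class_vec):
    # image: (B, 3, H, W) tensor; class_vec: (B, N) tensor of 0/1 flags.
    b, _, h, w = image.shape
    # Broadcast each flag to a constant H x W feature map.
    maps = class_vec.view(b, -1, 1, 1).expand(b, class_vec.shape[1], h, w)
    # Result has N + 3 channels, matching the generator's input layer.
    return torch.cat([image, maps.to(image.dtype)], dim=1)
```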
In one possible implementation, the cycle generation adversarial network is adjusted during training according to a cycle consistency loss, a resolution loss, and a classification loss.
The cycle consistency loss may include the first loss and the second loss and is used to constrain the features of pairs of images. In embodiments of the present disclosure, it constrains the features of the first artifact image and the third artifact image, and the features of the first artifact-free image and the third artifact-free image. A smaller cycle consistency loss indicates greater similarity between the first and third artifact images and between the first and third artifact-free images. The cycle consistency loss is used to adjust the first generator and the second generator simultaneously, and it constrains the approximate contours of the generated images.
The resolution loss (the adversarial discrimination loss) includes the third loss corresponding to the first artifact image, the third loss corresponding to the second artifact image, the fifth loss corresponding to the first artifact-free image, and the fifth loss corresponding to the second artifact-free image. It mainly provides the standard and constraint in the adversarial training of the generators and discriminators, namely the adversarial game between the first generator and the second discriminator and between the second generator and the first discriminator. The generators and discriminators may be trained continuously by changing the resolution loss weights.
The classification loss comprises the fourth loss corresponding to the first artifact image and the fourth loss corresponding to the second artifact image, and is used for characterizing the loss of the artifact categories of these two images. The classification loss is used to adjust the parameters of the first generator, the second generator, the first discriminator, and the second discriminator.
In one possible implementation, the cycle generation adversarial network stops training when a preset condition is met.
Wherein the preset condition includes: the maximum number of iterations is reached, or a loss function falls to a preset value, wherein the loss function is determined by a weighted summation of the cycle consistency loss, the resolution loss, and the classification loss. The maximum number of iterations is not restricted and can be set as required.
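A sketch of the stopping rule follows; the loss weights, iteration cap, and preset value are all illustrative assumptions, not values fixed by the disclosure.

```python
def total_loss(cyc_loss, res_loss, cls_loss, w_cyc=10.0, w_res=1.0, w_cls=1.0):
    # Weighted summation of the cycle consistency, resolution, and
    # classification losses; the weights here are assumptions.
    return w_cyc * cyc_loss + w_res * res_loss + w_cls * cls_loss

def should_stop(iteration, loss_value, max_iters=100_000, preset_value=0.05):
    # Stop at the maximum number of iterations, or once the loss function
    # falls to the preset value.
    return iteration >= max_iters or loss_value <= preset_value
```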
It should be noted that, during training, if the loss function does not decrease significantly before the maximum number of iterations is reached, the cause should be identified, the parameters changed, and training continued until the best effect is reached.
In one possible implementation, the first generator, the second generator, the first discriminator, and the second discriminator employ a convolutional neural network model.
Wherein, the generator mainly comprises three parts: an encoding section, a converter, and a decoding section. Fig. 4 shows an exemplary network structure diagram of the generator; the generator shown in fig. 4 may be the first generator or the second generator. It should be noted that the network structure shown in fig. 4 is only exemplary and does not limit the network structure of the generator.
As shown in fig. 4, the encoding portion of the generator may include three downsampling network layers. Each downsampling network layer consists of a convolutional layer (the Conv Layer in fig. 4), a normalization step (the InstNorm function in fig. 4), and an activation layer (the Relu function in fig. 4). So that the encoding stage can extract multi-scale features, the convolutional layers may use one 7 × 7 convolution and two 3 × 3 convolutions; to suit image generation tasks over two domains, the normalization step may use instance normalization, and the activation layer uses the Relu function.
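Read literally, the encoding portion could be sketched in PyTorch as below; the channel widths (64/128/256) and stride choices are assumptions, since fig. 4 fixes only the kernel sizes, instance normalization, and ReLU.

```python
import torch.nn as nn

def make_encoder(in_ch):
    # in_ch is N + 3 when the artifact category vector is concatenated as
    # extra channels. Three downsampling layers: one 7x7 convolution and two
    # 3x3 convolutions, each followed by instance normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, kernel_size=7, stride=1, padding=3),
        nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
        nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
        nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
        nn.InstanceNorm2d(256), nn.ReLU(inplace=True),
    )
```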
As shown in fig. 4, the converter of the generator may comprise six residual blocks (the Resnet Block layers in fig. 4). Fig. 5 shows an exemplary network structure diagram of such a residual block. The residual block can perform a stitching (concatenation) operation on the generated feature map; in this way, deep and shallow information are fused, so more comprehensive image features can be extracted.
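A sketch of a concatenating residual block consistent with that description; the 1x1 fusing convolution is an assumption added so the channel count stays constant across the six blocks.

```python
import torch
import torch.nn as nn

class ConcatResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(ch),
        )
        # Merge the stitched (concatenated) deep and shallow feature maps.
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([x, self.body(x)], dim=1))
```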
As shown in fig. 4, the decoding portion of the generator may include upsampling network layers. Each upsampling network layer includes a deconvolution layer (the Upsample Layer in fig. 4), a normalization step, and an activation layer.
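A matching sketch of the decoding portion; the final 7x7 convolution and Tanh output layer are assumptions, as fig. 4 specifies only deconvolution, normalization, and activation.

```python
import torch.nn as nn

def make_decoder(out_ch=3):
    # Two upsampling layers (deconvolution + instance normalization + ReLU),
    # mirroring the encoder, then a projection back to image channels.
    return nn.Sequential(
        nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
        nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
        nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
        nn.Conv2d(64, out_ch, kernel_size=7, padding=3), nn.Tanh(),
    )
```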
Fig. 6 shows an exemplary network structure diagram of the discriminator. The discriminator mainly comprises convolutional layers and fully connected layers.
As shown in fig. 6, the first discriminator includes six convolutional layers and two fully connected layers (Fully Connected Layer1 and Fully Connected Layer2 in fig. 6). The output of the convolutional layers feeds two branches: through one fully connected layer the discriminator outputs whether the image is real or fake (i.e., whether it is a real artifact image), and through the other it outputs the artifact category loss. The second discriminator includes six convolutional layers and one fully connected layer (Fully Connected Layer1 in fig. 6) and outputs whether the image is real or fake (i.e., whether it is a real artifact-free image). At this point, the training of the cycle generation adversarial network is complete. The first generator included in the trained network may be used to remove artifacts from images.
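A sketch of the two-headed first discriminator consistent with fig. 6; the channel widths, LeakyReLU activations, and global average pooling before the fully connected heads are assumptions.

```python
import torch.nn as nn

class ArtifactDiscriminator(nn.Module):
    def __init__(self, n_classes, in_ch=3):
        super().__init__()
        layers, ch = [], in_ch
        for out in (64, 128, 256, 256, 512, 512):   # six convolutional layers
            layers += [nn.Conv2d(ch, out, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out
        self.trunk = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc_real_fake = nn.Linear(ch, 1)          # Fully Connected Layer1
        self.fc_class = nn.Linear(ch, n_classes)      # Fully Connected Layer2

    def forward(self, x):
        h = self.trunk(x)
        # Real/fake logit and artifact-category logits.
        return self.fc_real_fake(h), self.fc_class(h)
```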
The following explains the artifact removal method provided by the embodiments of the present disclosure through two application examples. These examples are given only to describe the artifact removal process more clearly and do not limit the artifact removal method provided by the embodiments of the present disclosure.
Fig. 7 shows an exemplary application diagram of the first generator provided in the embodiment of the present disclosure. The first artifact category vector is input into the first generator together with the original image, and the first generator outputs the target image with the artifact or artifacts corresponding to the first artifact category vector removed. By controlling the first artifact category vector, the output of the first generator can be changed and the type of artifact removed can be controlled, improving the accuracy and flexibility of artifact removal.
The original image shown in fig. 7 may be obtained by an image acquisition device or input by an operator; the first artifact category vector may be provided by the operator and/or by a classification model (the first classification model or the second classification model described above); and the first generator may be obtained by training the cycle generation adversarial network.
In one example, the first generator can remove two kinds of artifacts, namely speckle artifacts and side lobe effect artifacts. If the input original image is an acquired ultrasonic image containing both artifacts and the first artifact category vector is [0,0], [1,0], [0,1], or [1,1], respectively, then after the original image and the first artifact category vector are input into the first generator, the output target image is, in turn: the ultrasonic image unchanged, the image with the speckle artifact removed, the image with the side lobe effect artifact removed, and the image with both artifacts removed.
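That example can be phrased as a short usage sketch; `generator` and `original` are placeholders for the trained first generator and the acquired ultrasonic image.

```python
import torch

def demo_four_requests(generator, original):
    # The four first artifact category vectors from the example above:
    # [0,0] keeps both artifacts, [1,0] removes the speckle artifact,
    # [0,1] removes the side lobe effect artifact, [1,1] removes both.
    vectors = torch.tensor([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    return [generator(original, v.unsqueeze(0)) for v in vectors]
```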
Fig. 8 shows a schematic diagram of an implementation process of an artifact removal method provided by the embodiment of the present disclosure. As shown in fig. 8, after an original image is acquired by an image acquisition device, it is input into a classification model to obtain an artifact category vector. The obtained artifact category vector can then be used directly as the first artifact category vector and input into the first generator together with the original image, which outputs the artifact-free target image. Alternatively, the artifact category vector can be output to an operator as a reference artifact category vector; after confirmation or adjustment by the operator, the resulting first artifact category vector is input into the first generator together with the original image, which outputs the artifact-free image. In addition, the original image and/or the first artifact category vector may also be input manually by an operator.
In a possible implementation manner, after the classification model outputs the artifact category vector, operation guidance corresponding to the artifact category vector may be provided to the operator, and the operator adjusts the image acquisition manner according to the guidance.
When removing artifacts, the method and the device can control the type of artifact removed by adjusting the artifact category vector. This gives strong controllability, suits different conditions, and realizes automatic end-to-end ultrasound image artifact removal.
In one example, as shown in fig. 8, after a single frame original image is acquired by the ultrasound acquisition device, it is input into a pre-trained classification model (specifically, the first classification model), which outputs an artifact category vector. This artifact category vector is input, as the first artifact category vector, into the trained first generator together with the original image, and the artifact-free target image is output. Meanwhile, after the classification model outputs the artifact category vector, operation guidance corresponding to the vector can be output to an operator, who adjusts the image acquisition manner accordingly.
In yet another example, as shown in fig. 8, an ultrasound acquisition device acquires a video stream. A frame is extracted from the video stream as the original image, and several frames before and after it are selected as reference images (not shown). The original image and the reference images are input together into a pre-trained classification model (specifically, the second classification model), which outputs an artifact category vector. This vector is output to the operator as a reference artifact category vector; the operator changes its elements as needed, and the adjusted vector is input, as the first artifact category vector, into the trained first generator together with the ultrasound image to obtain the artifact-free target image.
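The two fig. 8 variants share the shape sketched below, where `classifier`, `generator`, and `confirm_with_operator` are hypothetical callables standing in for the classification model, the first generator, and the operator confirmation step.

```python
def remove_artifacts(original, classifier, generator,
                     reference_frames=None, confirm_with_operator=None):
    # Single-frame path uses the first classification model; multi-frame path
    # passes neighboring reference frames to the second classification model.
    inputs = (original,) if reference_frames is None else (original, reference_frames)
    ref_vector = classifier(*inputs)             # reference artifact category vector
    # The operator may confirm or adjust the vector before removal.
    vector = confirm_with_operator(ref_vector) if confirm_with_operator else ref_vector
    return generator(original, vector)           # artifact-free target image
```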
Fig. 9 illustrates an architecture diagram of an artifact removal system provided according to an embodiment of the present disclosure. As shown in fig. 9, the system includes: an image acquisition device 91, an artifact identification device 92 and an artifact removal device 93.
Wherein the image acquisition device 91 may be used to acquire the original image.
The artifact identification device 92 may be configured to identify a first artifact class vector of the original image, the first artifact class vector indicating a class of artifacts that need to be removed from the original image.
The artifact removal device 93 may be configured to obtain an artifact-free target image from the original image and the first artifact class vector.
After the image acquisition device 91 acquires an original image, the original image is input into the artifact identification device 92 and the artifact removal device 93. The artifact identification device 92 performs artifact category identification on the received original image to obtain its artifact category vector (i.e., the first artifact category vector), which is then input into the artifact removal device 93. The artifact removal device 93 performs artifact removal according to the original image input by the image acquisition device 91 and the first artifact category vector input by the artifact identification device 92, and outputs the artifact-free target image. Specifically, the artifact removal device 93 may input the original image and the first artifact category vector into the first generator and output the artifact-free target image.
In one possible implementation, a human-computer interaction device 94 may also be included in the artifact removal system shown in fig. 9. The operator may confirm or adjust the artifact category vector output by the artifact identification device 92 through the human-computer interaction device 94, or may adjust the subsequent image acquisition mode according to the operation guidance displayed by the human-computer interaction device 94.
In the embodiment of the disclosure, the artifact removing system can flexibly and accurately remove the artifact in the original image.
It is understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, the details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an artifact removal apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any artifact removal method provided by the present disclosure; the corresponding technical solutions and descriptions may refer to the corresponding descriptions in the method section and are not repeated here.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and for specific implementation, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
Fig. 10 shows a block diagram of an artifact removing apparatus provided by an embodiment of the present disclosure. As shown in fig. 10, the apparatus 20 may include:
a first obtaining module 21, configured to obtain an original image and a first artifact category vector corresponding to the original image, where the first artifact category vector is used to indicate a category of an artifact that needs to be removed from the original image;
an input/output module 22, configured to input the original image and the first artifact category vector acquired by the first acquisition module 21 into a first generator, and output an artifact-free target image;
the first generator is obtained when a cycle generation adversarial network satisfies a preset condition, the cycle generation adversarial network being trained on a first artifact image, a second artifact category vector, and a first artifact-free image, wherein the second artifact category vector is used for indicating the category of the artifact that needs to be removed from the first artifact image, and the first artifact-free image represents the first artifact image after the artifact is removed; the cycle generation adversarial network comprises the first generator used for removing artifacts from an image, a second generator used for adding artifacts to an image, a first discriminator used for discriminating the first artifact image from a second artifact image output by the second generator, and a second discriminator used for discriminating the first artifact-free image from a second artifact-free image output by the first generator.
In one possible implementation, the apparatus further includes a training module configured to train the cycle generation adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image;
wherein training the cycle generation adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image comprises: inputting the first artifact image and the second artifact category vector into the first generator and outputting the second artifact-free image, inputting the second artifact-free image and the second artifact category vector into the second generator and outputting a third artifact image, and determining a first loss between the first artifact image and the third artifact image; inputting the first artifact-free image and the second artifact category vector into the second generator and outputting the second artifact image, inputting the second artifact image and the second artifact category vector into the first generator and outputting a third artifact-free image, and determining a second loss between the first artifact-free image and the third artifact-free image; inputting the first artifact image and the second artifact image into the first discriminator, obtaining a third loss and a fourth loss corresponding to the first artifact image according to the first artifact image and an image label, and obtaining a third loss and a fourth loss corresponding to the second artifact image according to the second artifact image and the image label, wherein the third losses are used for characterizing whether the first artifact image and the second artifact image are real artifact images, and the fourth losses are used for characterizing the loss of the artifact categories of the first artifact image and the second artifact image; inputting the first artifact-free image and the second artifact-free image into the second discriminator, obtaining a fifth loss corresponding to the first artifact-free image according to the first artifact-free image and the image label, and obtaining a fifth loss corresponding to the second artifact-free image according to the second artifact-free image and the image label, wherein the fifth losses are used for characterizing whether the first artifact-free image and the second artifact-free image are real artifact-free images; and adjusting the cycle generation adversarial network according to a cycle consistency loss, a resolution loss, and a classification loss, wherein the cycle consistency loss comprises the first loss and the second loss, the resolution loss comprises the third losses and the fifth losses, and the classification loss comprises the fourth losses.
In a possible implementation manner, the preset condition is: a maximum number of iterations is reached or a loss function falls to a preset value, wherein the loss function is determined by the cycle consistency loss, the resolution loss, and the classification loss.
In one possible implementation, the first generator, the second generator, the first discriminator, and the second discriminator employ a convolutional neural network model.
In a possible implementation manner, the first obtaining module is further configured to: and inputting the original image into a classification model, and outputting the first artifact category vector, wherein the classification model adopts a convolutional neural network model.
In one possible implementation, the classification model includes a first classification model for artifact class identification based on a single frame image, and the inputting the original image into the classification model and outputting the first artifact class vector includes: and inputting the original image into the first classification model, and outputting the first artifact category vector.
In one possible implementation, the classification model includes a second classification model for artifact class identification based on a multi-frame image, and the inputting the original image into the classification model and the outputting the first artifact class vector include: acquiring the original image and a plurality of reference images from a video stream, wherein the plurality of reference images comprise a plurality of image frames which are positioned in front of and adjacent to the original image and/or a plurality of image frames which are positioned behind and adjacent to the original image; and inputting the original image and the plurality of reference images into the second classification model, and outputting the first artifact category vector.
In a possible implementation manner, the first obtaining module is further configured to: inputting the original image into a classification model, and outputting a reference artifact category vector; feeding back the reference artifact category vector to an operator; determining the reference artifact category vector as the first artifact category vector if a confirmation instruction of the operator for the reference artifact category vector is received; and under the condition that an adjusting instruction of the operator for the reference artifact category vector is received, determining an artifact category vector carried in the adjusting instruction as the first artifact category vector.
In a possible implementation manner, the first obtaining module is further configured to: and determining an artifact category vector provided by an operator as the first artifact category vector.
In one possible implementation, the apparatus further includes: a second obtaining module, configured to obtain an operation guidance corresponding to the first artifact category vector; and the feedback module is used for feeding the operation instruction back to an operator so that the operator can adjust the image acquisition mode in subsequent operation according to the operation instruction.
In one possible implementation, the dimension of the first artifact category vector is N, where each dimension is used to indicate whether an artifact of one category exists, and N is an integer greater than 0.
The embodiment of the present disclosure further provides an artifact removal system, which includes an image acquisition device, an artifact identification device, and an artifact removal device, wherein the image acquisition device is configured to acquire an original image; the artifact identification device is configured to identify a first artifact category vector of the original image, the first artifact category vector indicating the category of the artifact that needs to be removed from the original image; and the artifact removal device is configured to process the original image and the first artifact category vector with the above artifact removal method to obtain an artifact-free target image.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and when executed by a processor, the computer program instructions implement the above method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 11 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 11, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 12 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server or terminal device. Referring to fig. 12, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, that are executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) offered by Apple Inc., the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions and implement aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK) or the like.
The descriptions of the various embodiments above each have their own emphasis; for parts that are the same or similar across embodiments, the embodiments may be referenced against one another, and the details are not repeated here for brevity.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation process; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
If the technical solution disclosed herein involves personal information, a product applying this technical solution clearly informs the user of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains the individual's separate consent before processing the sensitive personal information and additionally satisfies the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign is set to inform people that they are entering the collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of their personal information. Alternatively, on a device that processes personal information, personal authorization is obtained, with the processing rules announced by clear signs or notices, through pop-up messages or by asking the person to upload their personal information voluntarily. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A method of artifact removal, the method comprising:
acquiring an original image and a first artifact category vector corresponding to the original image, wherein the first artifact category vector is used for indicating the category of an artifact needing to be removed from the original image;
inputting the original image and the first artifact category vector into a first generator, and outputting an artifact-free target image;
the first generator is obtained when a cycle generation adversarial network satisfies a preset condition, the cycle generation adversarial network being trained on a first artifact image, a second artifact category vector, and a first artifact-free image, wherein the second artifact category vector is used for indicating the category of the artifact that needs to be removed from the first artifact image, and the first artifact-free image represents the first artifact image after the artifact is removed; the cycle generation adversarial network comprises the first generator used for removing artifacts from an image, a second generator used for adding artifacts to an image, a first discriminator used for discriminating the first artifact image from a second artifact image output by the second generator, and a second discriminator used for discriminating the first artifact-free image from a second artifact-free image output by the first generator.
2. The method of claim 1, wherein training the cycle generation adversarial network based on the first artifact image, the second artifact category vector, and the first artifact-free image comprises:
inputting said first artifact image and said second artifact category vector into said first generator, outputting said second artifact-free image, inputting said second artifact-free image and said second artifact category vector into said second generator, outputting a third artifact image, determining a first loss between said first artifact image and said third artifact image;
inputting said first artifact-free image and said second artifact category vector into said second generator, outputting said second artifact image, inputting said second artifact image and said second artifact category vector into said first generator, outputting a third artifact-free image, determining a second loss between said first artifact-free image and said third artifact-free image;
inputting the first artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the first artifact image, where the third loss corresponding to the first artifact image is used to characterize whether the first artifact image is a true artifact image, and the fourth loss corresponding to the first artifact image is used to characterize whether the identified artifact category of the first artifact image is correct;
inputting the second artifact image into the first discriminator to obtain a third loss and a fourth loss corresponding to the second artifact image, where the third loss corresponding to the second artifact image is used to characterize whether the second artifact image is a true artifact image, and the fourth loss corresponding to the second artifact image is used to characterize whether the identified artifact category of the second artifact image is correct;
inputting the first artifact-free image into the second discriminator to obtain a fifth loss corresponding to the first artifact-free image, wherein the fifth loss corresponding to the first artifact-free image is used for representing whether the first artifact-free image is a real artifact-free image;
inputting the second artifact-free image into the second discriminator to obtain a fifth loss corresponding to the second artifact-free image, wherein the fifth loss corresponding to the second artifact-free image is used for representing whether the second artifact-free image is a real artifact-free image;
adjusting the cycle generation adversarial network according to a cycle consistency loss, a resolution loss, and a classification loss, wherein the cycle consistency loss includes the first loss and the second loss, the resolution loss includes the third loss corresponding to the first artifact image, the third loss corresponding to the second artifact image, the fifth loss corresponding to the first artifact-free image, and the fifth loss corresponding to the second artifact-free image, and the classification loss includes the fourth loss corresponding to the first artifact image and the fourth loss corresponding to the second artifact image.
3. The method of claim 2, wherein obtaining the first artifact class vector comprises:
and inputting the original image into a classification model, and outputting the first artifact category vector, wherein the classification model adopts a convolutional neural network model.
4. The method of claim 3, wherein the classification model comprises a first classification model for artifact class identification based on a single frame image, and wherein inputting the original image into the classification model and outputting the first artifact class vector comprises:
and inputting the original image into the first classification model, and outputting the first artifact category vector.
5. The method of claim 3, wherein the classification model comprises a second classification model for artifact class identification based on multi-frame images, and wherein inputting the original image into the classification model and outputting the first artifact class vector comprises:
acquiring the original image and a plurality of reference images from a video stream, wherein the plurality of reference images comprise a plurality of image frames which are positioned in front of and adjacent to the original image and/or a plurality of image frames which are positioned behind and adjacent to the original image;
and inputting the original image and the plurality of reference images into the second classification model, and outputting the first artifact category vector.
6. The method of claim 2, wherein obtaining the first artifact class vector comprises:
inputting the original image into a classification model, and outputting a reference artifact category vector;
feeding back the reference artifact category vector to an operator;
determining the reference artifact category vector as the first artifact category vector if a confirmation instruction of the operator for the reference artifact category vector is received;
and under the condition that an adjusting instruction of the operator for the reference artifact category vector is received, determining an artifact category vector carried in the adjusting instruction as the first artifact category vector.
7. The method of claim 2, wherein obtaining the first artifact class vector comprises:
and determining an artifact category vector provided by an operator as the first artifact category vector.
8. The method according to any one of claims 3 to 7, further comprising:
acquiring an operation guide corresponding to the first artifact category vector;
and feeding back the operation instruction to an operator so that the operator can adjust an image acquisition mode according to the operation instruction in subsequent operation.
9. An artifact removal system, characterized in that the system comprises an image acquisition device, an artifact identification device, an artifact removal device, wherein,
the image acquisition equipment is used for acquiring an original image;
the artifact identification device is used for identifying a first artifact category vector corresponding to the original image, wherein the first artifact category vector is used for indicating the category of the artifact that needs to be removed from the original image;
the artifact removing device is configured to process the original image and the first artifact class vector by the method according to any one of claims 1 to 8, so as to obtain an artifact-free target image.
10. An artifact removal device, the device comprising:
the image processing device comprises a first acquisition module, a second acquisition module and a processing module, wherein the first acquisition module is used for acquiring an original image and a first artifact category vector corresponding to the original image, and the first artifact category vector is used for indicating the category of an artifact needing to be removed from the original image;
the input-output module is used for inputting the original image acquired by the first acquisition module and the first artifact category vector into a first generator and outputting a target image without artifacts;
the first generator is obtained when a cycle generation adversarial network satisfies a preset condition, the cycle generation adversarial network being trained on a first artifact image, a second artifact category vector, and a first artifact-free image, wherein the second artifact category vector is used for indicating the category of the artifact that needs to be removed from the first artifact image, and the first artifact-free image represents the first artifact image after the artifact is removed; the cycle generation adversarial network comprises the first generator used for removing artifacts from an image, a second generator used for adding artifacts to an image, a first discriminator used for discriminating the first artifact image from a second artifact image output by the second generator, and a second discriminator used for discriminating the first artifact-free image from a second artifact-free image output by the first generator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210803955.5A CN115170424B (en) | 2022-07-07 | 2022-07-07 | Heart ultrasonic image artifact removing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115170424A CN115170424A (en) | 2022-10-11 |
CN115170424B true CN115170424B (en) | 2023-04-07 |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||