CN111325675A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN111325675A
CN111325675A (application CN201811541513.8A)
Authority
CN
China
Prior art keywords
image
artifact
sample data
pure
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811541513.8A
Other languages
Chinese (zh)
Other versions
CN111325675B (en)
Inventor
葛永帅
陈剑威
朱炯滔
梁栋
刘新
郑海荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201811541513.8A priority Critical patent/CN111325675B/en
Priority to PCT/CN2018/125629 priority patent/WO2020124682A1/en
Publication of CN111325675A publication Critical patent/CN111325675A/en
Application granted granted Critical
Publication of CN111325675B publication Critical patent/CN111325675B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/73 Deblurring; Sharpening
                    • G06T 5/77 Retouching; Inpainting; Scratch removal
                • G06T 11/00 2D [Two Dimensional] image generation
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20081 Training; Learning
                        • G06T 2207/20084 Artificial neural networks [ANN]
                        • G06T 2207/20212 Image combination
                        • G06T 2207/20221 Image fusion; Image merging
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention disclose an image processing method, apparatus, device, and storage medium. The method includes: acquiring an artifact image to be processed; and obtaining an image with the artifact removed by using a pre-constructed artifact removal model, where the sample data of the artifact removal model are obtained by mathematically fusing pure artifact images with natural images. Because a large amount of sample data is generated by mathematical fusion and the artifact removal model is trained on those data, the model removes image artifacts more effectively and, compared with prior-art methods such as the Fourier transform or wavelet transform, improves both the accuracy and the processing efficiency of artifact removal.

Description

Image processing method, device, equipment and storage medium
Technical Field
The embodiments of the present invention relate to the fields of computer technology and image processing, and in particular to an image processing method, apparatus, device, and storage medium.
Background
X-ray grating phase-contrast imaging is a grating-based imaging method built on the Talbot effect and the Lau effect. With this technique, an object under examination is exposed to X-rays so that the absorption, scattering, and refraction signals within the object can be obtained. Research shows that the refraction signal can effectively improve the contrast of soft-tissue detection, and the scattering signal can greatly improve the detection sensitivity for particle or void structures inside an object. X-ray grating phase-contrast imaging has therefore attracted wide attention from researchers and is expected to be applied in clinical examination.
Currently, the absorption, scattering, and refraction signals are extracted from multiple phase-stepping projection images. However, because of limits on the stepping displacement accuracy of the grating and on the output stability of the X-ray source, the traditional signal extraction method leaves residual moire artifacts in the resulting absorption, scattering, and refraction images. These residual moire artifacts greatly reduce the readability of the images and thereby indirectly reduce the radiation dose utilization efficiency of X-ray grating phase-contrast imaging. To reduce the residual moire artifacts and improve dose utilization, artifact removal methods based on the Fourier transform have been tried: by suppressing the frequency components corresponding to the moire artifact in frequency space, such methods can reduce the residual artifacts to some extent. Other studies have attempted to reduce this type of artifact with wavelet transform techniques, the fundamental motivation being to cope with the different spatial frequency distributions of moire artifacts. It has further been proposed to estimate the small offset of each phase step by a mathematical method, such as maximum-likelihood estimation, and then to correct the stepping with the estimated offsets.
The main limitation of Fourier-transform- and wavelet-transform-based methods is that they apply only when the moire artifacts are regularly distributed and uniformly oriented; both have significant limitations for moire artifacts of arbitrary shape. In addition, both methods cause a loss of image resolution to varying degrees, which hinders their practical application. The optimization-correction approach based on the maximum-likelihood method has its own drawbacks: X-ray grating phase-contrast theory assumes precise stepping, and the maximum-likelihood method can only estimate the offset of each step, so after step correction the artifacts are only partially alleviated rather than completely removed; the effect is also weak when the number of steps is small. A further disadvantage of this method is that it does not account for the instability of the X-ray source output. During long exposures the focal spot of the source drifts, and the drift is often larger than the offset of the mechanically stepped grating, which indirectly makes the stepping inaccurate. Although the resulting light-intensity inconsistency can in principle be corrected afterwards, for some experiments the data are difficult to correct, and intensity-correction post-processing becomes impractical. In summary, residual moire artifacts occur in different distribution patterns that make the above methods difficult to apply, so each of them has clear limitations.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, image processing equipment and a storage medium, which can effectively remove artifacts of an image.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an artifact image to be processed;
and obtaining an image of the artifact image after removing the artifact by using a pre-constructed artifact removing model, wherein sample data of the artifact removing model is obtained by mathematically fusing a pure artifact image and a natural image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the image acquisition module is used for acquiring an artifact image to be processed;
and the artifact removing module is used for obtaining an image of the artifact image after the artifact is removed by using a pre-constructed artifact removing model, and sample data of the artifact removing model is obtained by mathematically fusing a pure artifact image and a natural image.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method as described above.
Embodiments of the present invention acquire an artifact image to be processed and use a pre-constructed artifact removal model to obtain an image with the artifact removed, where the sample data of the artifact removal model are obtained by mathematically fusing pure artifact images with natural images. Because a large amount of sample data is generated by mathematical fusion and the artifact removal model is trained on those data, the model removes image artifacts more effectively and, compared with prior-art methods such as the Fourier transform or wavelet transform, improves both the accuracy and the processing efficiency of artifact removal.
Drawings
FIG. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present invention;
FIG. 2 is a diagram illustrating an image processing method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a pure artifact image according to a first embodiment of the present invention;
FIG. 4 is a flowchart of an image processing method according to a second embodiment of the present invention;
FIG. 5 is a diagram illustrating an artifact removal model according to a second embodiment of the present invention;
FIG. 6 is a diagram illustrating an image processing procedure according to a second embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention, where the present embodiment is applicable to a case of implementing image processing, the method may be executed by an image processing apparatus, and the apparatus may be implemented in software and/or hardware, for example, the apparatus may be configured in a device.
The artifact removal model in the embodiment of the present invention is constructed based on a deep learning model. Fig. 2 is a schematic diagram of the image processing method in the first embodiment and illustrates the specific processing of an artifact image: sample data obtained through mathematical fusion are input into the artifact removal model for training to obtain a trained artifact removal model; a real artifact image obtained in an experiment (the dashed lines in the figure represent artifacts) is then input into the trained model, and the result is an image with the artifacts removed.
As shown in fig. 1, the method may specifically include:
and S110, acquiring an artifact image to be processed.
Artifacts are features of various forms that appear in an image but have no counterpart in the scanned object; an artifact image is an image containing artifacts to some degree. The artifacts in this embodiment are illustrated with the moire artifacts of X-ray grating phase-contrast imaging: in such a system, inaccurate phase stepping and unstable output intensity of the X-ray source can leave moire artifacts of varying severity in the final images.
Specifically, in this embodiment an image containing moire artifacts may be acquired in real time on a laboratory platform, or an existing image containing moire artifacts may be obtained from the internet.
And S120, obtaining an image of the artifact image after the artifact is removed by using a pre-constructed artifact removal model, wherein sample data of the artifact removal model is obtained by mathematically fusing the pure artifact image and the natural image.
Obtaining the sample data of the artifact removal model by mathematically fusing a pure artifact image and a natural image includes the following steps:
acquiring a pure artifact image and a natural image, and carrying out normalization processing on the pure artifact image and the natural image;
and carrying out image fusion on the pure artifact image and the natural image after the normalization processing by adopting a preset fusion formula to obtain sample data.
The pure artifact image may be an image acquired on a laboratory platform without any object present, so that it contains only moire artifacts. In this embodiment, phase-stepping projection images are acquired with no object on the laboratory platform, and pure artifact images of the absorption, scattering, and refraction images are obtained through the conventional signal extraction process. For example, fig. 3 is a schematic diagram of a pure artifact image in the first embodiment of the present invention; the pure artifact image shown is an absorption image, and the dashed lines represent artifacts. The natural image is an artifact-free image; its specific source is not limited in this embodiment, and it may, for example, be an artifact-free image obtained from the internet.
The normalization processing of the pure artifact image and the natural image may include: converting the collected natural image to grayscale with Matlab (Matrix Laboratory) and normalizing it to the range 0 to 1; and normalizing the acquired pure artifact image with a preset normalization formula. The preset normalization formula is not limited in this embodiment, as long as it achieves normalization. For example, it may be Mn = [Yn - mean(Yn)] / [max(Yn) - min(Yn)], where Mn is the pixel value of the pure artifact image after normalization, Yn is the initial pixel value of the pure artifact image, and mean(Yn), max(Yn), and min(Yn) are the mean, maximum, and minimum of Yn, respectively. After normalization, the pixel mean of the artifact fringes in the pure artifact image is 0 and their amplitude is 0.5.
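As a sketch of this normalization step (the use of NumPy and the array shapes are illustrative assumptions; the text fixes only the formula Mn = [Yn - mean(Yn)] / [max(Yn) - min(Yn)] for the pure artifact image and a 0-to-1 range for the natural image):

```python
import numpy as np

def normalize_natural(gray):
    """Scale a grayscale natural image into [0, 1] (min-max scaling assumed)."""
    gray = gray.astype(np.float64)
    return (gray - gray.min()) / (gray.max() - gray.min())

def normalize_artifact(yn):
    """Preset normalization formula from the text:
    Mn = [Yn - mean(Yn)] / [max(Yn) - min(Yn)]."""
    yn = yn.astype(np.float64)
    return (yn - yn.mean()) / (yn.max() - yn.min())
```

With this formula the normalized fringe pattern has zero mean and its values span a range of width 1, i.e. an amplitude of roughly 0.5, consistent with the description above.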
The pixel values of the normalized pure artifact image and natural image are substituted into a preset fusion formula, and the resulting fused image constitutes a sample data item. The preset fusion formula may be linear or nonlinear; the linear case is taken as an example in this embodiment.
For example, image pairs are drawn at random from 256 pure artifact images and 100,000 natural images of various types; after normalization and application of the preset fusion formula, 100,000 fused images are generated, giving 100,000 sample data items.
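A minimal sketch of this sample-generation step, assuming a simple additive linear fusion (the text says only that the preset fusion formula may be linear; the additive form, the weight `alpha`, and the pairing logic are assumptions):

```python
import numpy as np

def fuse(natural, artifact, alpha=1.0):
    """Linearly superimpose a normalized pure artifact pattern onto a
    normalized natural image (assumed form of the preset fusion formula)."""
    return natural + alpha * artifact

def make_samples(naturals, artifacts, n, rng=None):
    """Randomly pair natural images with pure artifact images to build
    n fused training samples, each paired with its clean target."""
    rng = rng or np.random.default_rng(0)
    samples = []
    for _ in range(n):
        nat = naturals[rng.integers(len(naturals))]
        art = artifacts[rng.integers(len(artifacts))]
        samples.append((fuse(nat, art), nat))  # (model input, supervision target)
    return samples
```

In this way a small pool of pure artifact images (256 in the example above) combined with a large pool of natural images yields as many fused samples as needed.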
Owing to the limitations of laboratory platforms and other practical constraints, it is difficult to acquire a large number of artifact-free images in practice. In this embodiment, a large amount of sample data meeting the requirements is therefore obtained through mathematical fusion, which makes sample acquisition much simpler.
Specifically, the artifact removal model is trained on the sample data obtained by mathematical fusion to obtain a trained artifact removal model; the artifact image to be processed is then input into the trained model to obtain the image with the artifact removed. The artifact removal model is constructed based on a convolutional neural network; its specific structure is not limited in this embodiment, as long as it can achieve artifact removal.
In this embodiment, an artifact image to be processed is acquired, and a pre-constructed artifact removal model is used to obtain an image with the artifact removed, where the sample data of the artifact removal model are obtained by mathematically fusing pure artifact images with natural images. Because a large amount of sample data is generated by mathematical fusion and the artifact removal model is trained on those data, the model removes image artifacts more effectively and, compared with prior-art methods such as the Fourier transform or wavelet transform, improves both the accuracy and the processing efficiency of artifact removal.
Example two
Fig. 4 is a flowchart of an image processing method according to a second embodiment of the invention. The present embodiment further optimizes the image processing method on the basis of the above embodiments. Correspondingly, the method of the embodiment specifically includes:
s210, acquiring a pure artifact image and a natural image, and normalizing the pure artifact image and the natural image.
In this embodiment, phase-stepping projection images are acquired with no object on the laboratory platform, and pure artifact images of the absorption, scattering, and refraction images are obtained through the conventional signal extraction process. The collected natural image is converted to grayscale with Matlab (Matrix Laboratory) and normalized to the range 0 to 1, and the acquired pure artifact image is normalized with a preset normalization formula.
And S220, carrying out image fusion on the pure artifact image and the natural image after the normalization processing by adopting a preset fusion formula to obtain sample data.
The pixel values of the normalized pure artifact image and natural image are substituted into a preset fusion formula, and the resulting fused image constitutes a sample data item. The preset fusion formula may be linear or nonlinear; the linear case is taken as an example in this embodiment.
For example, image pairs are drawn at random from 256 pure artifact images and 100,000 natural images of various types; after normalization and application of the preset fusion formula, 100,000 fused images are generated, giving 100,000 sample data items.
And S230, taking the sample data as the input of the convolutional neural network, taking the natural image corresponding to the sample data as the output of the convolutional neural network, and training the convolutional neural network to obtain an artifact removal model.
Specifically, fig. 5 is a schematic diagram of the artifact removal model in the second embodiment of the present invention. The down-sampling module in the figure includes D convolutional layers. The input image of convolutional layer 1 has size M × N; the down-sampling convolution uses a stride of 2, and each convolutional layer contains a plurality of cascaded convolution units, so the output feature image of layer 1 has size (M/2) × (N/2). The input of convolutional layer 2 then has size (M/2) × (N/2), and so on, until the feature image output by convolutional layer D has size (M/2^D) × (N/2^D). The up-sampling module in the figure likewise includes D convolutional layers, and the up-sampling process is the reverse of the down-sampling process: the input of convolutional layer D+1 has size (M/2^D) × (N/2^D), each up-sampling layer doubles the feature image size, and the final layer restores the original size M × N.
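The feature-image sizes traced above can be checked with a little bookkeeping (no actual convolutions are performed; the sketch assumes each of the D stride-2 layers exactly halves each dimension and each up-sampling layer doubles it):

```python
def downsample_sizes(m, n, d):
    """Feature-image sizes after each of d stride-2 convolutional layers:
    layer k outputs (M / 2**k) x (N / 2**k)."""
    sizes = [(m, n)]
    for _ in range(d):
        m, n = m // 2, n // 2
        sizes.append((m, n))
    return sizes

def upsample_sizes(m, n, d):
    """The reverse path: each of d up-sampling layers doubles both dimensions."""
    sizes = [(m, n)]
    for _ in range(d):
        m, n = m * 2, n * 2
        sizes.append((m, n))
    return sizes
```

For a 224 × 224 input and D = 5 (the configuration of the second embodiment), the bottleneck feature image is 7 × 7 and the up-sampling path restores 224 × 224.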
The convolution kernels used in each convolutional layer in fig. 5 may be of size 3 × 3, 5 × 5, 7 × 7, etc., and the number of input/output feature images of each convolutional layer may be 8, 16, 32, 64, etc. The activation function of each convolutional layer may be a rectified linear unit (ReLU), a leaky rectified linear unit (Leaky ReLU), a hyperbolic tangent (Tanh) function, a Sigmoid function, etc. The corresponding convolutional layers of the down-sampling module and the up-sampling module in fig. 5 are connected by residual connecting layers: the output of each convolutional layer of the down-sampling module is connected to the output of the corresponding convolutional layer of the up-sampling module.
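The candidate activation functions listed above can be written out directly; a minimal scalar sketch (any deep learning framework provides vectorized versions, and the Leaky ReLU slope of 0.01 is an assumed default):

```python
import math

def relu(x):
    # rectified linear unit: max(0, x)
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    # leaky rectified linear unit: small slope a for negative inputs
    return x if x > 0 else a * x

def tanh(x):
    # hyperbolic tangent, output in (-1, 1)
    return math.tanh(x)

def sigmoid(x):
    # logistic sigmoid, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```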
Specifically, the sample data are used as the input of the convolutional neural network shown in fig. 5, and the natural image corresponding to each sample is used as the supervision target of the output image; the convolutional neural network is trained in this way to obtain the trained artifact removal model.
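The supervision described above, with the fused sample as network input and its natural image as target, implies a pixel-wise training criterion; a mean-squared-error loss is a common choice and is assumed here (the text does not name the loss function):

```python
import numpy as np

def mse_loss(predicted, target):
    """Mean squared error between the network output and the clean
    natural image used as supervision (assumed criterion)."""
    return float(np.mean((predicted - target) ** 2))
```

During training, each fused image would be passed through the network of fig. 5 and this loss minimized by back-propagation; a perfect artifact removal drives the loss to zero.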
And S240, acquiring an artifact image to be processed.
The artifacts in this embodiment are again illustrated with the moire artifacts of X-ray grating phase-contrast imaging. An image containing moire artifacts may be acquired in real time on a laboratory platform, or an existing image containing moire artifacts may be obtained from the internet.
And S250, obtaining an image of the artifact image after artifact removal by using a pre-constructed artifact removal model.
The sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image.
Fig. 6 is a schematic diagram of the image processing procedure in the second embodiment of the present invention. In the figure, the down-sampling module and the up-sampling module of the artifact removal model each consist of 5 convolutional layers, and 5 residual connecting layers connect the outputs of the down-sampling module to the outputs of the up-sampling module. The convolutional layers in fig. 6 use 7 × 7 convolution kernels with the ReLU activation function; the input artifact image to be processed has size 224 × 224 (the dashed lines in the figure represent artifacts), and the output is the image with the artifacts removed.
In this embodiment, an artifact image to be processed is acquired, and a pre-constructed artifact removal model is used to obtain an image with the artifact removed, where the sample data of the artifact removal model are obtained by mathematically fusing pure artifact images with natural images. Because a large amount of sample data is generated by mathematical fusion and the artifact removal model is trained on those data, the model removes image artifacts more effectively and, unlike prior-art methods such as the Fourier transform or wavelet transform, is applicable to artifacts of various distributions, improving both the accuracy and the processing efficiency of artifact removal. In addition, the artifact removal model in this embodiment adopts a deep convolutional neural network that combines down-sampling and up-sampling, which removes artifacts without losing image resolution, and the structure of the network can be flexibly configured and extended.
Example three
Fig. 7 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention, which is applicable to an image processing situation. The image processing device provided by the embodiment of the invention can execute the image processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. The apparatus specifically includes an image acquisition module 310 and an artifact removal module 320, wherein:
an image obtaining module 310, configured to obtain an artifact image to be processed;
and the artifact removing module 320 is configured to obtain an image after the artifact of the artifact image is removed by using a pre-constructed artifact removing model, and sample data of the artifact removing model is obtained by mathematically fusing a pure artifact image and a natural image.
Embodiments of the present invention acquire an artifact image to be processed and use a pre-constructed artifact removal model to obtain an image with the artifact removed, where the sample data of the artifact removal model are obtained by mathematically fusing pure artifact images with natural images. Because a large amount of sample data is generated by mathematical fusion and the artifact removal model is trained on those data, the model removes image artifacts more effectively and, compared with prior-art methods such as the Fourier transform or wavelet transform, improves both the accuracy and the processing efficiency of artifact removal.
Optionally, the artifact removal module 320 comprises:
the processing unit is used for acquiring a pure artifact image and a natural image and carrying out normalization processing on the pure artifact image and the natural image;
and the sample unit is used for carrying out image fusion on the pure artifact image and the natural image after the normalization processing by adopting a preset fusion formula to obtain sample data.
Optionally, the apparatus further comprises a model module, the model module being specifically configured to:
and taking the sample data as the input of the convolutional neural network, and taking the natural image corresponding to the sample data as the output of the convolutional neural network to train the convolutional neural network to obtain an artifact removal model.
Optionally, the artifact removal model comprises a down-sampling module and an up-sampling module, the down-sampling module and the up-sampling module being connected by a residual connection layer.
The image processing device provided by the embodiment of the invention can execute the image processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 8 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention. FIG. 8 illustrates a block diagram of an exemplary device 412 suitable for use in implementing embodiments of the present invention. The device 412 shown in fig. 8 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 8, the device 412 is in the form of a general purpose device. The components of device 412 may include, but are not limited to: one or more processors 416, a storage device 428, and a bus 418 that couples the various system components including the storage device 428 and the processors 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory-device bus or memory-device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Device 412 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk such as a Compact Disc Read-Only Memory (CD-ROM), Digital Video Disc Read-Only Memory (DVD-ROM) or other optical media may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Storage 428 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in storage 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The device 412 may also communicate with one or more external devices 414 (e.g., a keyboard, a Graphics Processing Unit (GPU), a pointing device, a display 424, etc.), with one or more devices that enable a user to interact with the device 412, and/or with any devices (e.g., a network card, a modem, etc.) that enable the device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Further, the device 412 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 420. As shown in FIG. 8, network adapter 420 communicates with the other modules of device 412 via bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Array of Independent Disks) systems, tape drives, and data backup storage systems.
The processor 416 executes various functional applications and data processing by executing programs stored in the storage device 428, for example, implementing an image processing method provided by an embodiment of the present invention, the method including:
acquiring an artifact image to be processed;
and obtaining, by using a pre-constructed artifact removal model, an image in which the artifact has been removed from the artifact image, wherein sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image.
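The sample-data construction the method relies on — mathematically fusing a pure artifact image with a natural image — might be sketched as follows. The min-max normalization follows the description elsewhere in the disclosure; the weighted-addition fusion and the `alpha` weight are assumptions for illustration, since the disclosure's preset fusion formula is not reproduced here.

```python
import numpy as np

def normalize(img):
    """Min-max normalization of an image to the range [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def fuse(natural, pure_artifact, alpha=0.5):
    """Hypothetical fusion: weighted addition of the normalized natural
    image and the normalized pure-artifact image. The weighting scheme is
    an assumption standing in for the disclosure's preset fusion formula."""
    return normalize(natural) + alpha * normalize(pure_artifact)

rng = np.random.default_rng(1)
sample = fuse(rng.random((32, 32)), rng.random((32, 32)))
```

Each fused `sample`, paired with the natural image it was built from, gives one supervised training example without requiring registered clean/artifact acquisitions of the same scene.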
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements an image processing method provided in an embodiment of the present invention, where the method includes:
acquiring an artifact image to be processed;
and obtaining, by using a pre-constructed artifact removal model, an image in which the artifact has been removed from the artifact image, wherein sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or Python (and the various open-source deep learning toolkits built on Python), and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an artifact image to be processed;
and obtaining, by using a pre-constructed artifact removal model, an image in which the artifact has been removed from the artifact image, wherein sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image.
2. The method of claim 1, wherein the sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image, and comprises:
acquiring a pure artifact image and a natural image, and carrying out normalization processing on the pure artifact image and the natural image;
and carrying out image fusion on the pure artifact image and the natural image after the normalization processing by adopting a preset fusion formula to obtain sample data.
3. The method of claim 1, further comprising, prior to acquiring an artifact image to be processed:
taking sample data as the input of a convolutional neural network, and taking a natural image corresponding to the sample data as the output of the convolutional neural network, to train the convolutional neural network and obtain an artifact removal model.
4. The method of claim 1, wherein the artifact removal model comprises a down-sampling module and an up-sampling module, the down-sampling module and the up-sampling module being connected by a residual connection layer.
5. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an artifact image to be processed;
and the artifact removal module is used for obtaining, by using a pre-constructed artifact removal model, an image in which the artifact has been removed from the artifact image, wherein sample data of the artifact removal model is obtained by mathematically fusing a pure artifact image and a natural image.
6. The apparatus of claim 5, wherein the artifact removal module comprises:
the processing unit is used for acquiring a pure artifact image and a natural image and carrying out normalization processing on the pure artifact image and the natural image;
and the sample unit is used for carrying out image fusion on the pure artifact image and the natural image after the normalization processing by adopting a preset fusion formula to obtain sample data.
7. The apparatus of claim 5, further comprising a model module, the model module specifically configured to:
taking sample data as the input of a convolutional neural network, and taking a natural image corresponding to the sample data as the output of the convolutional neural network, to train the convolutional neural network and obtain an artifact removal model.
8. The apparatus of claim 5, wherein the artifact removal model comprises a down-sampling module and an up-sampling module, the down-sampling module and the up-sampling module being connected by a residual connection layer.
9. An apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 4.
CN201811541513.8A 2018-12-17 2018-12-17 Image processing method, device, equipment and storage medium Active CN111325675B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811541513.8A CN111325675B (en) 2018-12-17 2018-12-17 Image processing method, device, equipment and storage medium
PCT/CN2018/125629 WO2020124682A1 (en) 2018-12-17 2018-12-29 Image processing method, device and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811541513.8A CN111325675B (en) 2018-12-17 2018-12-17 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111325675A true CN111325675A (en) 2020-06-23
CN111325675B CN111325675B (en) 2023-12-26

Family

ID=71102448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811541513.8A Active CN111325675B (en) 2018-12-17 2018-12-17 Image processing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111325675B (en)
WO (1) WO2020124682A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255756A (en) * 2021-05-20 2021-08-13 联仁健康医疗大数据科技股份有限公司 Image fusion method and device, electronic equipment and storage medium
CN117689980A (en) * 2024-02-04 2024-03-12 青岛海尔科技有限公司 Method for constructing environment recognition model, method, device and equipment for recognizing environment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815692B (en) * 2020-07-15 2023-12-01 东软教育科技集团有限公司 Artifact-free data, method and system for generating artifact-free data, and storage medium
CN112419372B (en) * 2020-11-11 2024-05-17 广东拓斯达科技股份有限公司 Image processing method, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730479A (en) * 2017-08-30 2018-02-23 中山大学 Artifact-removing fusion method for high dynamic range images based on compressed sensing
US20180082115A1 (en) * 2015-07-17 2018-03-22 Hp Indigo B.V. Methods of detecting moire artifacts
CN107945132A (en) * 2017-11-29 2018-04-20 深圳安科高技术股份有限公司 Artifact correction method and device for CT images based on a neural network
CN107958472A (en) * 2017-10-30 2018-04-24 深圳先进技术研究院 PET imaging methods, device, equipment and storage medium based on sparse projection data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337682A1 (en) * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent
US10387765B2 (en) * 2016-06-23 2019-08-20 Siemens Healthcare Gmbh Image correction using a deep generative machine-learning model
US11832969B2 (en) * 2016-12-22 2023-12-05 The Johns Hopkins University Machine learning approach to beamforming

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180082115A1 (en) * 2015-07-17 2018-03-22 Hp Indigo B.V. Methods of detecting moire artifacts
CN107730479A (en) * 2017-08-30 2018-02-23 中山大学 Artifact-removing fusion method for high dynamic range images based on compressed sensing
CN107958472A (en) * 2017-10-30 2018-04-24 深圳先进技术研究院 PET imaging methods, device, equipment and storage medium based on sparse projection data
CN107945132A (en) * 2017-11-29 2018-04-20 深圳安科高技术股份有限公司 Artifact correction method and device for CT images based on a neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANSELM GRUNDHÖFER ET AL.: "CAMERA-SPECIFIC IMAGE QUALITY ENHANCEMENT USING A CONVOLUTIONAL NEURAL NETWORK", 2017 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), pages 1 - 5 *
GE YONGSHUAI ET AL.: "A novel 2-dimensional cosmic ray position detector based on a CsI(Na) pixel array and an ICCD camera", CHINESE PHYSICS C, vol. 36, no. 11, pages 1101 - 1105 *
毕浩宇 et al.: "Application of wavelet transform to artifact removal in laser medical images", 激光杂志 (Laser Journal), vol. 37, no. 06, pages 94 - 97 *
郑海荣 et al.: "Improvement of medical ultrasound probes based on ultrafast plane-wave ultrasound imaging", 北京生物医学工程 (Beijing Biomedical Engineering), vol. 35, no. 04, pages 353 - 359 *
陈剑威 et al.: "Ultrasound screening of 2100 young infants with high-risk factors for developmental dysplasia of the hip", 北京医学 (Beijing Medicine), vol. 35, no. 04, pages 263 - 266 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255756A (en) * 2021-05-20 2021-08-13 联仁健康医疗大数据科技股份有限公司 Image fusion method and device, electronic equipment and storage medium
CN113255756B (en) * 2021-05-20 2024-05-24 联仁健康医疗大数据科技股份有限公司 Image fusion method and device, electronic equipment and storage medium
CN117689980A (en) * 2024-02-04 2024-03-12 青岛海尔科技有限公司 Method for constructing environment recognition model, method, device and equipment for recognizing environment
CN117689980B (en) * 2024-02-04 2024-05-24 青岛海尔科技有限公司 Method for constructing environment recognition model, method, device and equipment for recognizing environment

Also Published As

Publication number Publication date
CN111325675B (en) 2023-12-26
WO2020124682A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
Yap et al. Deep learning in diabetic foot ulcers detection: A comprehensive evaluation
CN111325675B (en) Image processing method, device, equipment and storage medium
CN110766769B (en) Magnetic resonance image reconstruction method, device, equipment and medium
CN110766768B (en) Magnetic resonance image reconstruction method, device, equipment and medium
JP2013514854A (en) Bone suppression in X-ray radiographs
CN110728673A (en) Target part analysis method and device, computer equipment and storage medium
CN111709897B (en) Domain transformation-based positron emission tomography image reconstruction method
CN112419378B (en) Medical image registration method, electronic device and storage medium
CN111145160A (en) Method, device, server and medium for determining coronary artery branch where calcified area is located
WO2024087359A1 (en) Lesion detection method and apparatus for endoscope, and electronic device and storage medium
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN112150571A (en) Image motion artifact eliminating method, device, equipment and storage medium
US8989462B2 (en) Systems, methods and computer readable storage mediums storing instructions for applying multiscale bilateral filtering to magnetic resonance (RI) images
CN116721045B (en) Method and device for fusing multiple CT images
Li et al. Unsupervised data fidelity enhancement network for spectral CT reconstruction
CN112101396A (en) Classification method, classification device, classification equipment and storage medium
CN113255756B (en) Image fusion method and device, electronic equipment and storage medium
WO2022127318A1 (en) Scanning positioning method and apparatus, storage medium and electronic device
CN115115736A (en) Image artifact removing method, device and equipment and storage medium
CN111627081B (en) CT image reconstruction method, device, equipment and medium
US20210406681A1 (en) Learning loss functions using deep learning networks
CN114511666A (en) Model generation method, image reconstruction method, device, equipment and medium
CN112580680B (en) Training sample generation method and device, storage medium and electronic equipment
CN112991266A (en) Semantic segmentation method and system for small sample medical image
CN114627521A (en) Method, system, equipment and storage medium for judging living human face based on speckle pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant