CN112862728B - Artifact removal method, device, electronic equipment and storage medium

Info

Publication number
CN112862728B
Authority
CN
China
Prior art keywords
artifact removal
model
image
artifact
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110302949.7A
Other languages
Chinese (zh)
Other versions
CN112862728A (en)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bi Ren Technology Co ltd
Original Assignee
Shanghai Biren Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Biren Intelligent Technology Co Ltd
Priority to CN202110302949.7A
Publication of CN112862728A
Application granted
Publication of CN112862728B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an artifact removal method, an artifact removal device, electronic equipment and a storage medium, wherein the artifact removal method comprises the following steps: determining an initial image and an artifact removal model; and inputting the initial image into the artifact removal model to obtain a de-artifact image of the initial image output by the artifact removal model. The artifact removal model is obtained by training a network architecture containing a discarding (dropout) layer on a sample image and its de-artifact image. The method, the device, the electronic equipment and the storage medium provided by the invention effectively solve the problem of low accuracy caused by the differing distributions of training and test images when a conventional deep learning model is applied to artifact removal, reduce the difficulty of implementing artifact removal, and improve the reliability and accuracy of artifact removal. In addition, the artifact removal model is independent of image content, has universality, is not limited by motion estimation, can be used to remove artifacts from various low-illumination images, and effectively ensures the usability of artifact removal.

Description

Artifact removal method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an artifact removal method, an artifact removal device, an electronic device, and a storage medium.
Background
Taking pictures at low illumination often requires an extended exposure time to achieve adequate exposure, which can lead to ghosting or artifacts in the captured scene caused by camera movement.
To reduce artifacts, compensation is currently performed mainly by estimating the motion trajectories of moving objects in the scene. However, under low illumination the brightness of a moving object is often insufficient, and its motion trajectory is difficult to estimate.
Although deep learning models are widely applied in the field of computer vision and offer a new approach to artifact removal, in practical applications factors such as blurring of the camera lens and degradation of sensor performance mean that the distribution of the images a deep learning model is tested on is likely to deviate from the distribution of its training images, which greatly reduces the accuracy of deep-learning-based artifact removal.
Disclosure of Invention
The invention provides an artifact removal method, an artifact removal device, electronic equipment and a storage medium, which are used to solve the problems of high implementation difficulty and poor accuracy of existing artifact removal methods.
The invention provides an artifact removal method, which comprises the following steps:
determining an initial image;
determining a pre-trained artifact removal model, wherein the artifact removal model is obtained by training a network architecture comprising a discarding layer on sample images and their de-artifact images;
and inputting the initial image into the artifact removal model to obtain an artifact removal image of the initial image output by the artifact removal model.
According to the artifact removal method provided by the invention, the artifact removal model is determined by the following steps:
determining an initial artifact removal model, wherein the initial artifact removal model is obtained by training a network architecture comprising a discarding layer based on the sample image and its de-artifact image;
performing model inference on the initial artifact removal model based on a preset operation number to obtain the preset operation number of artifact removal sub-models;
and constructing the artifact removal model based on the preset operation number of artifact removal sub-models.
According to the artifact removal method provided by the invention, the determining an initial artifact removal model comprises the following steps:
obtaining a pre-training artifact removal model, wherein the pre-training artifact removal model is obtained by selecting from the existing artifact removal models;
if the pre-training artifact removal model comprises the discarding layer, determining the pre-training artifact removal model as the initial artifact removal model;
otherwise, adding a discarding layer to the pre-training artifact removal model, and performing transfer learning on the pre-training artifact removal model with the added discarding layer based on the sample image and its de-artifact image, so as to obtain the initial artifact removal model.
According to the artifact removal method provided by the invention, the artifact removal model comprises a preset operation number of parallel artifact removal sub-models and an output layer connected with each artifact removal sub-model.
According to the artifact removal method provided by the invention, the inputting the initial image into the artifact removal model to obtain the de-artifact image of the initial image output by the artifact removal model comprises the following steps:
inputting the initial image into each artifact removal sub-model respectively to obtain the candidate image output by each artifact removal sub-model;
and inputting each candidate image into the output layer to obtain the artifact-removed image output by the output layer.
According to the artifact removing method provided by the invention, the step of inputting each candidate image to the output layer to obtain the artifact removed image output by the output layer comprises the following steps:
based on the output layer, determining the maximum value and the minimum value of the same pixel point in each candidate image;
if the difference between the maximum value and the minimum value of any pixel point is greater than a preset threshold value, taking the median value of that pixel point across the candidate images as the value of that pixel point in the artifact-removed image;
otherwise, taking the mean value of that pixel point across the candidate images as the value of that pixel point in the artifact-removed image.
The present invention provides an artifact removal device, comprising:
an image determining unit configured to determine an initial image;
a model determining unit, configured to determine a pre-trained artifact removal model, where the artifact removal model is obtained by training a network architecture including a discarding layer on a sample image and its de-artifact image;
and the artifact removing unit is used for inputting the initial image into the artifact removing model to obtain a de-artifact image of the initial image output by the artifact removing model.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any one of the artifact removal methods described above when executing the computer program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the artifact removal method according to any of the preceding claims.
According to the artifact removal method, the device, the electronic equipment and the storage medium provided by the invention, artifact removal is performed by applying an artifact removal model trained on a network architecture comprising a discarding layer. This effectively solves the problem of low accuracy caused by the differing distributions of training and test images when a conventional deep learning model is applied to artifact removal, reduces the difficulty of implementing artifact removal, and improves the reliability and accuracy of artifact removal. In addition, the artifact removal model is independent of image content, has universality, is not limited by motion estimation, can be used to remove artifacts from various low-illumination images, and effectively ensures the usability of artifact removal.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an artifact removal method according to the present invention;
FIG. 2 is a flow chart of an artifact removal model determination method provided by the present invention;
FIG. 3 is a schematic diagram of an artifact removal device according to the present invention;
FIG. 4 is a second schematic diagram of an artifact removal device according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, deep learning models achieve high accuracy in the field of computer vision, and on certain tasks their performance even exceeds the baseline of human experts. However, these models are often built on an unrealistic assumption that the training data and the test data obey the same distribution, an assumption that frequently does not hold in real-world environments. In particular, when a deep learning model is applied to the artifact removal stage for low-illumination images, the camera lens used for image acquisition may become blurred during use and the performance of the sensor may change, so the distribution of the images used in final testing may deviate from the distribution of the training images. Although such a deviation is extremely difficult for a person to notice, it can substantially reduce the accuracy of the deep learning model.
In order to solve the problem, the embodiment of the invention provides an artifact removal method. Fig. 1 is a flow chart of an artifact removal method according to the present invention, as shown in fig. 1, the method includes:
Step 110, an initial image is determined.
Here, the initial image is an image that needs artifact removal, and the initial image may be a low-illumination image acquired directly by a camera or a low-illumination image downloaded through network transmission.
Step 120, a pre-trained artifact removal model is determined, where the artifact removal model is obtained by training a network architecture that includes a discarding layer on the sample image and its de-artifact image.
Step 130, the initial image is input into the artifact removal model to obtain the de-artifact image of the initial image output by the artifact removal model.
Specifically, the artifact removal model is a deep learning model with artifact removal capability. In the model application stage, the initial image to be processed can be directly input into the artifact removal model; the artifact removal model removes the artifacts from the initial image and outputs the artifact removal result, namely the de-artifact image.
Considering that a deep learning model often yields predictions of low accuracy because the image distributions encountered during training and testing deviate from each other, the embodiment of the invention adds a discarding layer (dropout) to the deep learning network architecture of the artifact removal model. During training of the deep learning network architecture, each neural network unit therefore has a certain probability of being temporarily dropped from the network, so that each branch is effectively trained on a different network architecture. The artifact removal model obtained by training thus actually contains a plurality of sub-models, each of which can perform artifact removal. Because the network architectures of the sub-models differ, the artifact removal results they output for the same initial image may also differ, and when a plurality of artifact removal results are obtained, the final de-artifact image can be determined according to the confidence space computed over all of the results.
Further, in the execution of step 120, the pre-trained artifact removal model may be obtained directly, or it may be obtained through model training, which can be implemented as follows. First, sample images in low-illumination scenes are collected, together with the de-artifact images obtained by performing an artifact removal operation on those sample images; the low-illumination scenes referred to here may be night scenes, indoor scenes, and the like. Then, model training can be performed on the network architecture comprising the discarding layer using the sample images and their de-artifact images, thereby obtaining an artifact removal model comprising a plurality of sub-models with different structural parameters.
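As an illustration only, and not the patent's specified implementation, the following sketch shows how such a dropout-containing artifact removal network might be trained on pairs of sample images and their de-artifact images. PyTorch, the layer sizes, the dropout rate p, the L1 loss and the data loader are all assumptions made for this example.

    import torch
    import torch.nn as nn

    class DropoutDeartifactNet(nn.Module):
        """A small convolutional network whose architecture contains discarding (dropout) layers."""
        def __init__(self, p: float = 0.2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Dropout2d(p),   # discarding layer: drops whole feature maps with probability p
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Dropout2d(p),   # a second discarding layer
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.body(x)

    def train_deartifact_model(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> nn.Module:
        """Train on (sample image, de-artifact image) pairs yielded by `loader`."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.L1Loss()
        model.train()              # dropout is active during training
        for _ in range(epochs):
            for sample, deartifact in loader:   # tensors of shape (B, 3, H, W)
                optimizer.zero_grad()
                loss = loss_fn(model(sample), deartifact)
                loss.backward()
                optimizer.step()
        return model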
It should be noted that, in the embodiment of the present invention, the execution sequence of the step 110 and the step 120 is not specifically limited, and the step 110 may be performed before or after the step 120, or may be performed synchronously with the step 120.
According to the method provided by the embodiment of the invention, artifact removal is performed by applying an artifact removal model trained on a network architecture comprising a discarding layer. This effectively solves the problem of low accuracy caused by the differing distributions of training and test images when a conventional deep learning model is applied to artifact removal, reduces the difficulty of implementing artifact removal, and improves the reliability and accuracy of artifact removal. In addition, the artifact removal model is independent of image content, has universality, is not limited by motion estimation, can be used to remove artifacts from various low-illumination images, and effectively ensures the usability of artifact removal.
Based on the above embodiment, fig. 2 is a schematic flow chart of an artifact removal model determining method according to the present invention, as shown in fig. 2, where the method includes:
step 210, determining an initial deghosting model, wherein the initial deghosting model is obtained by training a network architecture including a discard layer based on the sample image and the deghosting image thereof.
Specifically, a model obtained by training a network architecture that includes a discarding layer differs from a model trained on an architecture without one: a dropout probability is associated with each neural network unit in the initial artifact removal model, i.e., the trained initial artifact removal model includes both the parameters of each neural network unit and the probability of running each neural network unit.
Further, the initial artifact removal model may be understood as a Bayesian deep model. Unlike a deterministic model with fixed model parameters, the initial artifact removal model in the embodiment of the present invention regards the network parameters as hidden variables and the data as visible variables, assigns prior knowledge to the hidden variables, and, in combination with the data, recomputes the distribution of the network parameters, i.e., the posterior distribution.
Step 220, performing model inference on the initial artifact removal model based on a preset operation number, to obtain the preset operation number of artifact removal sub-models.
Step 230, constructing an artifact removal model based on the preset operation number of artifact removal sub-models.
The preset operation number referred to here may be set manually and is used in the model inference stage to specify how many times the initial artifact removal model is run. Assuming the preset operation number is N, each time the initial artifact removal model is run in the inference stage, a random subset of its parameters is set to zero, and the resulting partially zeroed model can be regarded as the artifact removal sub-model generated by that run. Accordingly, running the initial artifact removal model N times yields N artifact removal sub-models. Further, when the model is constructed based on Monte Carlo dropout (MC dropout), N can be taken directly as the number of MC dropout samples.
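A minimal sketch of this inference step, under the same PyTorch assumptions as the training sketch above: the discarding (dropout) layers are kept active at test time, so each of the N forward passes zeroes a different random subset of units and plays the role of one artifact removal sub-model.

    import torch
    import torch.nn as nn

    def mc_dropout_candidates(model: nn.Module, image: torch.Tensor, n_runs: int = 10) -> torch.Tensor:
        """Run the trained initial model n_runs times with its dropout layers still active;
        each run behaves as one artifact removal sub-model and yields one candidate image."""
        model.eval()                                   # inference mode for the rest of the network
        for module in model.modules():                 # ...but keep the discarding layers switched on
            if isinstance(module, (nn.Dropout, nn.Dropout2d)):
                module.train()
        with torch.no_grad():
            return torch.stack([model(image) for _ in range(n_runs)])   # shape (n_runs, B, 3, H, W)

For example, candidates = mc_dropout_candidates(trained_model, batch, n_runs=N) produces the N sub-model outputs that are subsequently combined.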
After the N artifact removal sub-models are obtained, the combination of the N artifact removal sub-models may itself be used as the artifact removal model, or an output layer that combines the outputs of the sub-models may be connected after the N artifact removal sub-models to construct the artifact removal model; the embodiment of the present invention is not particularly limited in this respect.
In the model inference process corresponding to step 220, the model parameters are obtained by sampling from the posterior distribution, so the result of each run may differ; running the model multiple times therefore yields multiple results from which the confidence space of the prediction can be computed.
Based on any of the above embodiments, step 210 includes:
obtaining a pre-training artifact removal model, wherein the pre-training artifact removal model is obtained by selecting from the existing artifact removal models;
if the pre-training artifact removal model comprises a discarding layer, determining the pre-training artifact removal model as an initial artifact removal model;
otherwise, a discarding layer is added to the pre-training artifact removal model, and transfer learning is performed on the pre-training artifact removal model with the added discarding layer based on the sample image and its de-artifact image, so as to obtain the initial artifact removal model.
Specifically, the pre-training artifact removal model is a deep learning model which is already trained and can be used for artifact removal of a low-illumination image, and one of the existing artifact removal models can be selected as the pre-training artifact removal model. After the pre-training artifact removal model is obtained, it may be first determined whether a discard layer is included in the network structure of the pre-training artifact removal model:
If the pre-training artifact removal model contains a discarding layer, it already meets the requirements of the initial artifact removal model and can be used directly as the initial artifact removal model, saving the time and resource cost of constructing the artifact removal model from scratch.
If no discarding layer is included, a discarding layer needs to be added to the current network architecture of the pre-training artifact removal model; for example, the discarding layer may be added after a fully connected (Dense) layer or after a convolution (Conv) layer, which is not particularly limited in the embodiment of the present invention. After the discarding layer is added, transfer learning also needs to be performed on the pre-training artifact removal model with the added discarding layer, and the model obtained after transfer learning is used as the initial artifact removal model.
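A sketch of this adaptation step, again assuming PyTorch and a pre-training artifact removal model expressed as an nn.Sequential; the insertion rule (a dropout layer after every Conv or Dense layer) and the dropout rate are illustrative choices rather than requirements of the patent.

    import torch.nn as nn

    def add_discarding_layers(pretrained: nn.Sequential, p: float = 0.2) -> nn.Sequential:
        """Return a new network in which a dropout ('discarding') layer follows every
        convolution (Conv) layer and every fully connected (Dense) layer."""
        layers = []
        for module in pretrained:
            layers.append(module)
            if isinstance(module, nn.Conv2d):
                layers.append(nn.Dropout2d(p))     # discarding layer after a Conv layer
            elif isinstance(module, nn.Linear):
                layers.append(nn.Dropout(p))       # discarding layer after a Dense layer
        return nn.Sequential(*layers)

The dropout-augmented model is then fine-tuned (transfer learning) on the sample images and their de-artifact images, for example with the same training loop as in the earlier sketch, and the fine-tuned model is used as the initial artifact removal model.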
The method provided by the embodiment of the invention realizes the training and construction of the artifact removal model by adding a discarding layer, can be combined directly with existing artifact removal models, avoids extra model modification work to the greatest extent, and ensures the portability and practicability of the artifact removal method. Meanwhile, the confidence space of the model prediction results can be estimated, which improves the reliability and robustness of the artifact removal results.
Based on any of the above embodiments, the artifact removal model includes a preset number of parallel artifact removal sub-models, and an output layer connected to each of the artifact removal sub-models.
Here, each sub-model may include a different network architecture, and may also carry different parameters. In the operation process of the artifact removal model, each sub-model can execute the artifact removal task of the input initial image in parallel and output respective artifact removal results. In the process, the parallel computation of a plurality of sub-models can effectively ensure the execution efficiency of artifact removal while improving the accuracy of artifact removal.
Based on any of the above embodiments, step 130 includes:
inputting the initial image into each artifact removal sub-model respectively to obtain the candidate image output by each artifact removal sub-model;
and inputting each candidate image into an output layer to obtain an artifact-removed image output by the output layer.
Specifically, in the operation of the artifact removal model, the initial image is first input into each artifact removal sub-model; each artifact removal sub-model performs the artifact removal task on the initial image and outputs its own artifact removal result, namely a candidate image. It should be noted that, due to the differences in network architecture and model parameters among the artifact removal sub-models, the candidate images they output for the same initial image may also differ.
And then inputting the candidate images respectively output by the artifact removal submodels into an output layer, and integrating the candidate images by the output layer to generate and output a final artifact removal image.
Based on any of the foregoing embodiments, in step 130, the inputting each candidate image to the output layer to obtain the de-artifact image output by the output layer includes:
based on the output layer, determining the maximum value and the minimum value of the same pixel point in each candidate image;
if the difference between the maximum value and the minimum value of any pixel point is greater than a preset threshold value, taking the median value of that pixel point across the candidate images as the value of that pixel point in the de-artifact image;
otherwise, taking the mean value of that pixel point across the candidate images as the value of that pixel point in the de-artifact image.
Specifically, the output layer integrates the candidate images pixel by pixel. For the same pixel point, different candidate images give different values; these values can be collected to determine the maximum and the minimum for that pixel, and the difference between them computed. If the difference exceeds the preset threshold, the sub-models disagree considerably on that pixel, and the median of the values is taken as the value of the pixel in the de-artifact image; if the difference does not exceed the preset threshold, the sub-models largely agree, and the mean of the values can be taken directly as the value of the pixel in the de-artifact image.
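The fusion rule described above can be sketched with NumPy as follows; stacking the candidate images into an array of shape (number of sub-models, H, W, C) and the threshold value of 0.1 are assumptions made for the example.

    import numpy as np

    def fuse_candidates(candidates: np.ndarray, threshold: float = 0.1) -> np.ndarray:
        """Per-pixel fusion of candidate de-artifact images, shape (N, H, W, C) -> (H, W, C)."""
        spread = candidates.max(axis=0) - candidates.min(axis=0)   # max minus min for each pixel
        median = np.median(candidates, axis=0)                     # used where the sub-models disagree
        mean = candidates.mean(axis=0)                             # used where the sub-models agree
        return np.where(spread > threshold, median, mean)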
Based on any of the above embodiments, after step 130, the epistemic (model) uncertainty and the aleatoric (data) uncertainty of the artifact removal model may also be calculated based on the candidate images respectively output by the artifact removal sub-models.
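As a brief, hedged illustration of this point: in MC-dropout settings the per-pixel variance across the candidate images is a common estimate of the epistemic uncertainty, while estimating the aleatoric uncertainty generally also requires the network to predict a per-pixel noise variance, which the patent text does not detail and the sketch below does not attempt.

    import numpy as np

    def epistemic_uncertainty(candidates: np.ndarray) -> np.ndarray:
        """Per-pixel variance of the candidate images: shape (N, H, W, C) -> (H, W, C)."""
        return candidates.var(axis=0)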
Based on any of the above embodiments, fig. 3 is a schematic structural diagram of an artifact removing device according to the present invention, and as shown in fig. 3, the device includes:
an image determining unit 310 for determining an initial image;
a model determining unit 320, configured to determine a pre-trained artifact removal model, where the artifact removal model is obtained by training a network architecture including a discarding layer on a sample image and its de-artifact image;
and an artifact removal unit 330, configured to input the initial image to an artifact removal model, and obtain a de-artifact image of the initial image output by the artifact removal model.
According to the device provided by the embodiment of the invention, artifact removal is performed by applying an artifact removal model trained on a network architecture comprising a discarding layer. This effectively solves the problem of low accuracy caused by the differing distributions of training and test images when a conventional deep learning model is applied to artifact removal, reduces the difficulty of implementing artifact removal, and improves the reliability and accuracy of artifact removal. In addition, the artifact removal model is independent of image content, has universality, is not limited by motion estimation, can be used to remove artifacts from various low-illumination images, and effectively ensures the usability of artifact removal.
Based on any of the above embodiments, fig. 4 is a second schematic structural diagram of the artifact removing apparatus according to the present invention, as shown in fig. 4, where the apparatus further includes a model building unit 300 configured to:
determining an initial artifact removal model, wherein the initial artifact removal model is obtained by training a network architecture comprising a discarding layer based on the sample image and its de-artifact image;
the model determining unit 320 is specifically configured to:
performing model inference on the initial artifact removal model based on a preset operation number to obtain the preset operation number of artifact removal sub-models;
and constructing the artifact removal model based on the preset operation number of artifact removal sub-models.
Based on any of the above embodiments, the model building unit is configured to:
obtaining a pre-training artifact removal model, wherein the pre-training artifact removal model is obtained by selecting from the existing artifact removal models;
if the pre-training artifact removal model comprises the discarding layer, determining the pre-training artifact removal model as the initial artifact removal model;
otherwise, adding a discarding layer to the pre-training artifact removal model, and performing transfer learning on the pre-training artifact removal model with the added discarding layer based on the sample image and its de-artifact image, so as to obtain the initial artifact removal model.
Based on any of the above embodiments, the artifact removal model includes a preset number of parallel artifact removal sub-models, and an output layer connected to each of the artifact removal sub-models.
Based on any of the above embodiments, the artifact removal unit 330 is configured to:
inputting the initial image into each artifact removal sub-model respectively to obtain the candidate image output by each artifact removal sub-model;
and inputting each candidate image into the output layer to obtain the artifact-removed image output by the output layer.
Based on any of the above embodiments, the artifact removal unit 330 is configured to:
based on the output layer, determining the maximum value and the minimum value of the same pixel point in each candidate image;
if the difference between the maximum value and the minimum value of any pixel point is greater than a preset threshold value, taking the median value of that pixel point across the candidate images as the value of that pixel point in the artifact-removed image;
otherwise, taking the mean value of that pixel point across the candidate images as the value of that pixel point in the artifact-removed image.
Fig. 5 illustrates a physical schematic diagram of an electronic device. As shown in fig. 5, the electronic device may include: a processor (Processor) 510, a communication interface (Communications Interface) 520, a memory (Memory) 530, and a communication bus 540, wherein the processor 510, the communication interface 520 and the memory 530 communicate with one another through the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform an artifact removal method comprising: determining an initial image; determining a pre-trained artifact removal model, wherein the artifact removal model is obtained by training a network architecture comprising a discarding layer on sample images and their de-artifact images; and inputting the initial image into the artifact removal model to obtain the de-artifact image of the initial image output by the artifact removal model.
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the artifact removal method provided above, the method comprising: determining an initial image; determining a pre-trained artifact removal model, wherein the artifact removal model is obtained by training a network architecture comprising a discarding layer on sample images and their de-artifact images; and inputting the initial image into the artifact removal model to obtain the de-artifact image of the initial image output by the artifact removal model.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the artifact removal method provided above, the method comprising: determining an initial image; determining a pre-trained artifact removal model, wherein the artifact removal model is obtained by training a network architecture comprising a discarding layer on sample images and their de-artifact images; and inputting the initial image into the artifact removal model to obtain the de-artifact image of the initial image output by the artifact removal model.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A method of artifact removal, comprising:
determining an initial image;
determining a pre-trained artifact removal model, wherein the artifact removal model is obtained by training a network architecture comprising a discarding layer on sample images and their de-artifact images;
inputting the initial image into the artifact removal model to obtain an artifact removal image of the initial image output by the artifact removal model;
the artifact removal model comprises a plurality of sub-models which can realize artifact removal, and the network architecture of each sub-model is different; the artifact removal image of the initial image is obtained based on confidence spaces of artifact removal results respectively output by each sub-model for the initial image;
the artifact removal model is determined by:
determining an initial artifact removal model, wherein the initial artifact removal model is obtained by training a network architecture comprising a discarding layer based on the sample image and its de-artifact image;
performing model inference on the initial artifact removal model based on a preset operation number to obtain the preset operation number of artifact removal sub-models;
constructing an artifact removal model based on the preset operation number of artifact removal sub-models;
the artifact removal model comprises a preset running number of parallel artifact removal sub-models and an output layer connected with each artifact removal sub-model;
the step of inputting the initial image to the artifact removal model to obtain an artifact removal image of the initial image output by the artifact removal model comprises the following steps:
inputting the initial image into each artifact removal sub-model respectively to obtain the candidate image output by each artifact removal sub-model;
inputting each candidate image to the output layer to obtain the artifact-removed image output by the output layer; the output layer is configured to determine a maximum value and a minimum value of the same pixel point in each candidate image, if a difference between the maximum value and the minimum value of any pixel point in each candidate image is greater than a preset threshold, take a median value of any pixel point in each candidate image as a value of any pixel point in the artifact removal image, and otherwise take a mean value of any pixel point in each candidate image as a value of any pixel point in the artifact removal image.
2. The artifact removal method of claim 1, wherein said determining an initial artifact removal model comprises:
obtaining a pre-training artifact removal model, wherein the pre-training artifact removal model is obtained by selecting from the existing artifact removal models;
if the pre-training artifact removal model comprises the discarding layer, determining the pre-training artifact removal model as the initial artifact removal model;
otherwise, adding a discarding layer to the pre-training artifact removal model, and performing transfer learning on the pre-training artifact removal model with the added discarding layer based on the sample image and its de-artifact image, so as to obtain the initial artifact removal model.
3. An artifact removal device, comprising:
an image determining unit configured to determine an initial image;
a model determining unit, configured to determine a pre-trained artifact removal model, where the artifact removal model is obtained by training a network architecture including a discarding layer on a sample image and its de-artifact image;
the artifact removing unit is used for inputting the initial image into the artifact removing model to obtain a de-artifact image of the initial image output by the artifact removing model;
the artifact removal model comprises a plurality of sub-models which can realize artifact removal, and the network architecture of each sub-model is different; the artifact removal image of the initial image is obtained based on confidence spaces of artifact removal results respectively output by each sub-model for the initial image; the model building unit is used for:
determining an initial artifact removal model, wherein the initial artifact removal model is obtained by training a network architecture comprising a discarding layer based on the sample image and its de-artifact image;
the model determining unit is specifically configured to:
performing model inference on the initial artifact removal model based on a preset operation number to obtain the preset operation number of artifact removal sub-models;
constructing an artifact removal model based on the preset operation number of artifact removal sub-models;
the artifact removal model comprises a preset operation number of parallel artifact removal sub-models and an output layer connected with each artifact removal sub-model;
the artifact removal unit is specifically configured to:
inputting the initial image into each artifact removal sub-model respectively to obtain the candidate image output by each artifact removal sub-model;
inputting each candidate image to the output layer to obtain the artifact-removed image output by the output layer; the output layer is configured to determine a maximum value and a minimum value of the same pixel point in each candidate image, if a difference between the maximum value and the minimum value of any pixel point in each candidate image is greater than a preset threshold, take a median value of any pixel point in each candidate image as a value of any pixel point in the artifact removal image, and otherwise take a mean value of any pixel point in each candidate image as a value of any pixel point in the artifact removal image.
4. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the artifact removal method according to claim 1 or 2 when the program is executed.
5. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the artifact removal method according to claim 1 or 2.
CN202110302949.7A 2021-03-22 2021-03-22 Artifact removal method, device, electronic equipment and storage medium Active CN112862728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110302949.7A CN112862728B (en) 2021-03-22 2021-03-22 Artifact removal method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110302949.7A CN112862728B (en) 2021-03-22 2021-03-22 Artifact removal method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112862728A (en) 2021-05-28
CN112862728B (en) 2023-06-27

Family

ID=75991895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110302949.7A Active CN112862728B (en) 2021-03-22 2021-03-22 Artifact removal method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112862728B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706419A (en) * 2021-09-13 2021-11-26 上海联影医疗科技股份有限公司 Image processing method and system
CN114241070B (en) * 2021-12-01 2022-09-16 北京长木谷医疗科技有限公司 Method and device for removing metal artifacts from CT image and training model
CN115330615A (en) * 2022-08-09 2022-11-11 腾讯医疗健康(深圳)有限公司 Method, apparatus, device, medium, and program product for training artifact removal model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960073A (en) * 2018-06-05 2018-12-07 大连理工大学 Cross-module state image steganalysis method towards Biomedical literature
CN111297327A (en) * 2020-02-20 2020-06-19 京东方科技集团股份有限公司 Sleep analysis method, system, electronic equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507148B (en) * 2017-08-30 2018-12-18 南方医科大学 Method based on the convolutional neural networks removal down-sampled artifact of magnetic resonance image
CN107958471B (en) * 2017-10-30 2020-12-18 深圳先进技术研究院 CT imaging method and device based on undersampled data, CT equipment and storage medium
JP7288628B2 (en) * 2018-03-23 2023-06-08 アイテック株式会社 dental imaging system
CN112368715A (en) * 2018-05-15 2021-02-12 蒙纳士大学 Method and system for motion correction for magnetic resonance imaging
CN109816742B (en) * 2018-12-14 2022-10-28 中国人民解放军战略支援部队信息工程大学 Cone beam CT geometric artifact removing method based on fully-connected convolutional neural network
CN109885378A (en) * 2019-01-04 2019-06-14 平安科技(深圳)有限公司 Model training method, device, computer equipment and computer readable storage medium
CN111223066B (en) * 2020-01-17 2024-06-11 上海联影医疗科技股份有限公司 Motion artifact correction method, motion artifact correction device, computer equipment and readable storage medium
CN111462010A (en) * 2020-03-31 2020-07-28 腾讯科技(深圳)有限公司 Training method of image processing model, image processing method, device and equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960073A (en) * 2018-06-05 2018-12-07 大连理工大学 Cross-module state image steganalysis method towards Biomedical literature
CN111297327A (en) * 2020-02-20 2020-06-19 京东方科技集团股份有限公司 Sleep analysis method, system, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on forest fire recognition based on the Sparse-DenseNet model; Zhou Lang, Fan Kun, Qu Hua, Lyu Yuanyuan, Zhang Zhengyi; Journal of Beijing Forestry University (No. 10); full text *

Also Published As

Publication number Publication date
CN112862728A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112862728B (en) Artifact removal method, device, electronic equipment and storage medium
KR101967089B1 (en) Convergence Neural Network based complete reference image quality evaluation
JP7078139B2 (en) Video stabilization methods and equipment, as well as non-temporary computer-readable media
CN111861925B (en) Image rain removing method based on attention mechanism and door control circulation unit
CN109003234B (en) For the fuzzy core calculation method of motion blur image restoration
CN111462019A (en) Image deblurring method and system based on deep neural network parameter estimation
Nair et al. At-ddpm: Restoring faces degraded by atmospheric turbulence using denoising diffusion probabilistic models
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN112053308B (en) Image deblurring method and device, computer equipment and storage medium
CN106204636B (en) Video foreground extracting method based on monitor video
CN110557633B (en) Compression transmission method, system and computer readable storage medium for image data
CN112614072B (en) Image restoration method and device, image restoration equipment and storage medium
CN108564546B (en) Model training method and device and photographing terminal
CN113570516A (en) Image blind motion deblurring method based on CNN-Transformer hybrid self-encoder
CN111028166A (en) Video deblurring method based on iterative neural network
CN115359334A (en) Training method of multi-task learning deep network and target detection method and device
CN113689348B (en) Method, system, electronic device and storage medium for restoring multi-task image
CN112801890B (en) Video processing method, device and equipment
Li et al. Deep image quality assessment driven single image deblurring
CN116309158A (en) Training method, three-dimensional reconstruction method, device, equipment and medium of network model
CN116129239A (en) Small target detection method, device, equipment and storage medium
EP4030383A1 (en) Probabilistic sampling acceleration and corner feature extraction for vehicle systems
CN110415190B (en) Method, device and processor for removing image compression noise based on deep learning
CN111932514A (en) Image noise level estimation and suppression method and device and electronic equipment
CN116091364B (en) Image blurring processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 201114 room 1302, 13 / F, building 16, 2388 Chenhang Road, Minhang District, Shanghai

Patentee after: Shanghai Bi Ren Technology Co.,Ltd.

Country or region after: China

Address before: 201114 room 1302, 13 / F, building 16, 2388 Chenhang Road, Minhang District, Shanghai

Patentee before: Shanghai Bilin Intelligent Technology Co.,Ltd.

Country or region before: China