WO2021109867A1 - Image processing method and apparatus, computer-readable storage medium, and electronic device - Google Patents


Info

Publication number
WO2021109867A1
WO2021109867A1 · PCT/CN2020/129437 · CN2020129437W
Authority
WO
WIPO (PCT)
Prior art keywords
image
intermediate image
processed
noise
image processing
Prior art date
Application number
PCT/CN2020/129437
Other languages
English (en)
Chinese (zh)
Inventor
陈曦
Original Assignee
RealMe重庆移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RealMe重庆移动通信有限公司
Publication of WO2021109867A1

Classifications

    • G06T5/70

Definitions

  • The present disclosure relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device.
  • Integrating high-pixel sensors on mobile terminals has become a trend in the development of mobile terminals.
  • Although the total number of pixels of the sensor has doubled, the increase in the actual photosensitive size of the sensor is limited. This causes pixel density to increase, the signal received by each pixel to weaken, and electronic crosstalk to become more serious.
  • As a result, the output image has more noise and a low signal-to-noise ratio, which severely limits the use of high-pixel sensors.
  • An image processing method is provided, including: acquiring an image to be processed, and performing an iterative process using the image to be processed until the similarity between a first intermediate image and a second intermediate image is greater than a similarity threshold, the first intermediate image and the second intermediate image both being images generated while denoising the image to be processed; and, after the iterative process ends, outputting the first intermediate image or the second intermediate image as the processed image corresponding to the image to be processed. The iterative process includes: determining the second intermediate image from the image to be processed and the first intermediate image based on an objective function; determining a third intermediate image using a noise estimation model and the second intermediate image; and taking the third intermediate image as the first intermediate image.
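The claimed iteration can be sketched in Python as below. The closed-form x-update assumes H is the identity matrix (as in the degradation model described later), and the penalty weight `mu`, the similarity measure, and the local-mean `toy_noise_estimator` standing in for the trained CNN are all illustrative assumptions, not specified by the disclosure:

```python
import numpy as np

def iterative_denoise(y, noise_estimator, mu=0.5, sim_threshold=0.99, max_iters=100):
    """Sketch of the claimed iteration; mu, the similarity measure, and
    noise_estimator are illustrative assumptions (max_iters is a safety guard)."""
    z = y.copy()  # first intermediate image (here initialized as a plain copy)
    for _ in range(max_iters):
        # Determine the second intermediate image from the objective function;
        # with H = I the quadratic sub-problem has this closed form.
        x = (y + mu * z) / (1.0 + mu)
        # Determine the third intermediate image with the noise estimation
        # model, then take it as the (new) first intermediate image.
        z = x - noise_estimator(x)
        # End the iteration once the first and second intermediate images
        # are similar enough (an assumed similarity measure).
        sim = 1.0 - np.abs(z - x).mean() / (np.abs(x).mean() + 1e-12)
        if sim > sim_threshold:
            break
    return x  # output an intermediate image as the processed image

# Toy stand-in for the trained CNN: treats deviation from a 3x3 local mean
# as the estimated noise.
def toy_noise_estimator(img):
    pad = np.pad(img, 1, mode="edge")
    smooth = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            smooth[i, j] = pad[i:i + 3, j:j + 3].mean()
    return img - smooth

rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((16, 16))
result = iterative_denoise(noisy, toy_noise_estimator)
```

Because the update averages the noisy observation with a smoothed auxiliary image, every non-constant frequency component of the output is attenuated relative to the input, so the noise level drops even with this crude stand-in estimator.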
  • An image processing device is provided, including: an image denoising module for acquiring an image to be processed and performing an iterative process using the image to be processed until the similarity between a first intermediate image and a second intermediate image is greater than a similarity threshold, the first intermediate image and the second intermediate image both being images generated while denoising the image to be processed; and an image output module for outputting, after the iterative process ends, the first intermediate image or the second intermediate image as the processed image corresponding to the image to be processed. The iterative process includes: determining the second intermediate image from the image to be processed and the first intermediate image based on an objective function; determining a third intermediate image using a noise estimation model and the second intermediate image; and taking the third intermediate image as the first intermediate image.
  • A computer-readable medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the above-mentioned image processing method.
  • An electronic device is provided, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to implement the above-mentioned image processing method.
  • FIG. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or an image processing apparatus of an embodiment of the present disclosure can be applied;
  • FIG. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of a process of determining an optimal solution after introducing an auxiliary variable according to an exemplary embodiment of the present disclosure;
  • FIG. 4 schematically shows a flowchart of an image processing method according to an exemplary embodiment of the present disclosure;
  • FIG. 5 schematically shows a flowchart of an iterative process according to an exemplary embodiment of the present disclosure;
  • FIG. 6 shows a schematic structural diagram of a noise estimation model according to an exemplary embodiment of the present disclosure;
  • FIG. 7 shows a schematic diagram of visualized iterative processing according to an exemplary embodiment of the present disclosure;
  • FIG. 8 schematically shows a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure;
  • FIG. 9 schematically shows a block diagram of an image processing apparatus according to another exemplary embodiment of the present disclosure;
  • FIG. 10 schematically shows a block diagram of an image processing apparatus according to another exemplary embodiment of the present disclosure;
  • FIG. 11 schematically shows a block diagram of an image processing apparatus according to still another exemplary embodiment of the present disclosure.
  • FIG. 1 shows a schematic diagram of an exemplary system architecture of an image processing method or image processing apparatus to which an embodiment of the present disclosure can be applied.
  • the system architecture 1000 may include one or more of terminal devices 1001, 1002, 1003, a network 1004 and a server 1005.
  • the network 1004 is used to provide a medium for communication links between the terminal devices 1001, 1002, 1003 and the server 1005.
  • The network 1004 may include various connection types, such as wired or wireless communication links, fiber optic cables, and so on.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks, and servers according to implementation needs.
  • the server 1005 may be a server cluster composed of multiple servers.
  • the user can use the terminal devices 1001, 1002, 1003 to interact with the server 1005 through the network 1004 to receive or send messages and so on.
  • the terminal devices 1001, 1002, 1003 may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and so on.
  • The terminal devices 1001, 1002, 1003 may obtain the image to be processed; specifically, an image captured by the terminal device 1001, 1002, 1003 through its camera module may be used as the image to be processed. Next, the terminal device 1001, 1002, 1003 may perform the following iterative process until the similarity between the first intermediate image and the second intermediate image associated with the image to be processed is greater than the similarity threshold; after the iterative process ends, the first intermediate image or the second intermediate image is used as the processed image.
  • The iterative process may include: in the first step, substituting the image to be processed and the first intermediate image into a pre-configured objective function to determine the second intermediate image; in the second step, determining the third intermediate image using the noise estimation model and the second intermediate image, and taking the third intermediate image as the first intermediate image so as to update it. The two steps above are repeated continuously to realize the iterative process.
  • The noise estimation model may be a machine learning model, such as a convolutional neural network.
  • The training process of the noise estimation model can be performed by the server 1005, which transmits the trained model parameters to the terminal devices 1001, 1002, 1003 through the network 1004, thus better addressing the problem of insufficient processing capacity of the terminal devices 1001, 1002, 1003.
  • the main steps of the image processing method involved in the present disclosure may also be executed by the server 1005.
  • the terminal devices 1001, 1002, and 1003 send the image taken by the camera module to the server 1005 via the network 1004, and the image is the image to be processed.
  • The server 1005 uses the image to be processed to perform the above iterative process until the similarity between the first intermediate image and the second intermediate image is greater than the similarity threshold. After the iterative process is over, the first intermediate image or the second intermediate image is used as the processed image, and the determined processed image is sent to the terminal devices 1001, 1002, 1003 through the network 1004, so that the user can view the denoised image.
  • the image processing method of the exemplary embodiment of the present disclosure is generally executed by the terminal device 1001, 1002, 1003, and specifically, is usually executed by a mobile terminal such as a mobile phone.
  • the image processing apparatus described below is generally configured in the terminal equipment 1001, 1002, 1003.
  • Fig. 2 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an exemplary embodiment of the present disclosure.
  • This electronic device corresponds to a terminal device that executes the image processing method of the exemplary embodiment of the present disclosure.
  • The computer system 200 includes a central processing unit (CPU) 201, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage section 208 into a random access memory (RAM) 203.
  • In the RAM 203, various programs and data required for system operation are also stored.
  • the CPU 201, the ROM 202, and the RAM 203 are connected to each other through a bus 204.
  • An input/output (I/O) interface 205 is also connected to the bus 204.
  • The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, a touch screen, etc.; an output section 207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, etc.; a storage section 208 including a hard disk, etc.; and a communication section 209.
  • the communication section 209 performs communication processing via a network such as the Internet.
  • the drive 210 is also connected to the I/O interface 205 as needed.
  • a removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 210 as needed, so that the computer program read from it is installed into the storage section 208 as needed.
  • the system structure may also include a camera module. Specifically, it may include dual-camera, triple-camera, quad-camera, etc., to enrich shooting modes to meet the needs of different shooting scenes.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication section 209, and/or installed from the removable medium 211.
  • When the computer program is executed by the central processing unit (CPU) 201, the various functions defined in the system of the present application are executed.
  • the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; such a computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • Each block in the flowchart or block diagram may represent a module, program segment, or part of the code, and the above-mentioned module, program segment, or part of the code contains one or more executable instructions for realizing the specified logical function. It should also be noted that the functions marked in the blocks may occur in a different order from the order marked in the drawings; for example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram or flowchart, and any combination of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present disclosure may be implemented in software or hardware, and the described units may also be provided in a processor; in certain circumstances, the names of these units do not constitute a limitation on the units themselves.
  • this application also provides a computer-readable medium.
  • The computer-readable medium may be included in the electronic device described in the above-mentioned embodiments, or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by an electronic device, the electronic device realizes the method described in the following embodiments.
  • In some techniques, machine learning models are used to estimate noise.
  • Such a method can fit a more complex noise model to achieve a better processing effect, with a short processing time.
  • However, the processing effect of such a method depends heavily on the sample size and conditions in the model training process.
  • the image denoising problem can be considered as the main branch of the image restoration field.
  • The image denoising can be represented by a degradation model, which can be expressed as Formula 1:
  • y = Hx + n    (Formula 1)
  • where y represents the image before denoising, x represents the image after denoising, H represents the degradation matrix (for denoising, the identity matrix), and n represents additive white Gaussian noise with a standard deviation of σ.
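The degradation model can be illustrated with a minimal NumPy sketch; the array size and noise level are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.1                      # standard deviation of the Gaussian noise n
x = np.linspace(0.0, 1.0, 64)    # stand-in for the noise-free image x
H = np.eye(64)                   # H is the identity matrix for pure denoising
n = sigma * rng.standard_normal(64)
y = H @ x + n                    # Formula 1: y = Hx + n

# When H = I, the observed image equals the clean image plus noise.
print(np.allclose(y, x + n))     # True
```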
  • λ represents the regularization parameter, which weighs the importance of the former constraint (the fidelity term) against the latter constraint (the regularization term): if λ is larger, the latter term in the overall objective is more important; if λ is smaller, the former constraint is more important.
  • Φ(x) is a general representation of the prior distribution of the signal; it represents a pre-judged constraint on the signal distribution. For example, it can be a constraint on the gradient, a constraint in the spatial domain, or a constraint in the frequency domain, which is not limited in this disclosure.
  • These terms are combined in Formula 3, whose corresponding function may be referred to as the intermediate function: x̂ = argmin_x ½‖y − Hx‖² + λΦ(x)    (Formula 3)
  • Here ‖y − Hx‖² is called the fidelity term, and Φ(x) can be called the regularization term.
  • For Formula 3, some time-consuming iterative optimization algorithms can be used to approximate the optimal solution.
  • For the model learning method, the solution process is to obtain a set of prior parameters Θ, which are the parameters of the loss function to be optimized: using a large-capacity training set with a one-to-one correspondence between noise images and noiseless images, the best parameters satisfying that correspondence are determined, and the resulting function is used to estimate the noise-free image corresponding to a noise image. Therefore, for the model learning method, Formula 3 can be rewritten as Formula 4, in which the prior is parameterized by Θ: x̂ = argmin_x ½‖y − Hx‖² + λΦ(x; Θ)    (Formula 4)
  • To approximate the optimal solution efficiently, the HQS (Half Quadratic Splitting) algorithm can be used.
  • The present disclosure introduces an auxiliary variable z (that is, a direction different from the direction of x), and approaches the optimal solution from two directions by continuously iterating over the auxiliary variable and x; it should be understood that these two directions are kept similar to each other. The split objective can be written as: x̂ = argmin_{x,z} ½‖y − Hx‖² + λΦ(z) + (μ/2)‖z − x‖²
  • where μ represents a regularization parameter used to represent the importance of the constraint term (μ/2)‖z − x‖², which ensures that x and z are similar.
  • This processing strategy can be understood as a process of exploring "downhill": as shown in FIG. 3, although it is not known in which direction from the initial point the optimal solution lies, it is known that there are two directions from which to approach the optimal solution (the minimum of the objective function).
  • The split objective is minimized by alternating between the two sub-problems of Formula 7: (i) x_{k+1} = argmin_x ½‖y − Hx‖² + (μ/2)‖x − z_k‖², and (ii) z_{k+1} = argmin_z (μ/2)‖z − x_{k+1}‖² + λΦ(z). Sub-problem (i) can be solved by finding the extreme value of its quadratic terms. Sub-problem (ii) returns to the solution of a standard statistical model, and its solution depends on the prior. Previous methods for solving this problem assume that some transform-domain representation of z (frequency domain, difference domain, etc.) has certain sparse characteristics. However, noise is not sparse, so (ii) of Formula 7 can instead be transformed into Formula 8, in which the prior term is replaced by a noise estimate learned from data.
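Setting the gradient of the quadratic terms in sub-problem (i) to zero gives x_{k+1} = (HᵀH + μI)⁻¹(Hᵀy + μz_k); when H is the identity matrix this reduces to a per-pixel weighted average, sketched below (the input values are arbitrary):

```python
import numpy as np

def x_update(y, z_k, mu):
    """Closed-form solution of sub-problem (i) when H is the identity matrix:
    x_{k+1} = (H^T H + mu*I)^(-1) (H^T y + mu*z_k) = (y + mu*z_k) / (1 + mu)."""
    return (y + mu * z_k) / (1.0 + mu)

y = np.array([1.0, 0.0, 0.5])   # noisy observation
z = np.array([0.8, 0.2, 0.5])   # current auxiliary (first intermediate) image
x_next = x_update(y, z, mu=1.0)
print(x_next)  # [0.9 0.1 0.5] -- midway between y and z_k when mu = 1
```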
  • exemplary embodiments of the present disclosure provide a new image processing method.
  • FIG. 4 schematically shows a flowchart of an image processing method of an exemplary embodiment of the present disclosure.
  • the image processing method may include the following steps:
  • the image to be processed may be an image taken by a camera module of a terminal device, or may be an image obtained from another terminal device or the network.
  • the image to be processed may also be any image to be denoised in the video.
  • the present disclosure does not limit the source, size, shooting scene, etc. of the image to be processed.
  • After acquiring the image to be processed, the terminal device can use the image to be processed to perform an iterative process.
  • the iterative process involved in the present disclosure will be described below with reference to steps S52 to S56 in FIG. 5.
  • In step S52, the second intermediate image is determined using the image to be processed and the first intermediate image, based on the objective function.
  • steps S52 to S56 only describe one iteration process.
  • Before the first iteration, a process of initializing the first intermediate image is included.
  • the image to be processed may be filtered to obtain the initialized first intermediate image, which is used as the first intermediate image to perform the iterative process for the first time.
  • A high-pass filter, a low-pass filter, or a combination thereof may be used to achieve the above filtering process.
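A minimal sketch of such an initialization, using a simple box (low-pass) filter as one of the allowed choices; the 3×3 kernel size is an illustrative assumption:

```python
import numpy as np

def low_pass_init(image, k=3):
    """Initialize the first intermediate image with a k x k box (low-pass)
    filter; high-pass or combined filtering would be equally permissible."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
z0 = low_pass_init(img)  # smoothed copy used for the first pass of the iteration
```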
  • the second intermediate image can be determined using the image to be processed and the first intermediate image.
  • the second intermediate image may be determined based on an objective function.
  • The objective function of the exemplary embodiment of the present disclosure corresponds to (i) in Formula 7 above. That is to say, according to the exemplary embodiment of the present disclosure, an intermediate function (see Formula 3) can first be constructed based on the degradation model of image restoration (see Formula 1); next, the fidelity term of the intermediate function can be decoupled from the regularization term by means of the auxiliary variable z, yielding (i) in Formula 7.
  • the auxiliary variable corresponds to the first intermediate image, that is, the auxiliary variable z can reflect all the information of the first intermediate image.
  • Solving (i) of Formula 7 by setting the derivative of its quadratic terms to zero yields x_{k+1} = (HᵀH + μI)⁻¹(Hᵀy + μz_k), which can be used as the second intermediate image in the exemplary embodiment of the present disclosure.
  • Here, I is the identity matrix.
  • Thus, the second intermediate image x_{k+1} can be determined if the first intermediate image z_k is known.
  • In step S54, a third intermediate image is determined using the noise estimation model and the second intermediate image.
  • the noise estimation model may be a model based on a convolutional neural network.
  • Figure 6 schematically shows the network structure of the model.
  • The model can be a 7-layer convolutional neural network, including a first layer 61, a second layer 62, a third layer 63, a fourth layer 64, a fifth layer 65, a sixth layer 66, and a seventh layer 67.
  • The network structure can be constructed based on dilated convolution: the first layer 61 is composed of a dilated convolution unit and a rectified linear unit (ReLU); the second layer 62, the third layer 63, the fourth layer 64, the fifth layer 65, and the sixth layer 66 are each composed of a dilated convolution unit, a batch normalization unit (BN), and a rectified linear unit (ReLU); and the seventh layer 67 is composed of a dilated convolution unit.
  • The receptive field of the dilated convolution unit in the first layer 61 is 3×3; that is, the size of the convolution kernel is 3×3.
  • More generally, the receptive field of each layer is (2s+1)×(2s+1), where s is the dilation coefficient. From this, the receptive fields of the seven layers can be determined to be 3×3, 5×5, 7×7, 9×9, 7×7, 5×5, and 3×3, respectively.
  • The dimension of each layer can be set to 64; that is, the number of feature maps of each layer is set to 64.
  • Using a convolutional neural network based on dilated convolution as the noise estimation model in the present disclosure can obtain semantic information more effectively, thereby ensuring the accuracy of the denoising result.
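The listed per-layer sizes can be reproduced from the (2s+1)×(2s+1) rule; the dilation schedule 1, 2, 3, 4, 3, 2, 1 below is inferred from those sizes:

```python
# Per-layer receptive footprint of a 3x3 kernel with dilation s is (2s+1)x(2s+1).
dilations = [1, 2, 3, 4, 3, 2, 1]  # dilation schedule inferred for the 7 layers
footprints = [2 * s + 1 for s in dilations]
print(footprints)  # [3, 5, 7, 9, 7, 5, 3] -- matches the sizes listed above

# With stride 1, each layer enlarges the network's cumulative receptive
# field by 2*s pixels per side dimension.
receptive = 1 + sum(2 * s for s in dilations)
print(receptive)  # 33
```

The symmetric schedule widens then narrows the footprint, letting the middle layers gather broad semantic context while the outer layers stay local.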
  • the model training process can be performed on the server in advance.
  • the server can obtain the training set.
  • The training set may include multiple noise images and a denoising image corresponding to each noise image, where the difference in noise intensity between any two noise images is within a difference threshold, which can be set by the developer according to pre-conducted experiments.
  • In this way, the noise level of each noise image in the training set is consistent, which helps improve the training effect.
  • the images in the training set can be used to train the noise estimation model to obtain the trained model.
  • During training, a noise image is input into the convolutional neural network, and the network outputs a training output image corresponding to the noise image.
  • The training output image and the denoising image corresponding to the noise image can then be used to calculate the loss function; samples are continuously input and the loss function is minimized to complete the training process of the convolutional neural network.
  • the server can send the parameter information of the model to the terminal device so that the terminal device can use the noise estimation model to perform an iterative process.
  • server for model training solves the problem of insufficient processing capacity of terminal equipment.
  • the training process of the model can also be performed in the above-mentioned terminal device, which is not limited in the present disclosure.
  • the terminal device may input the second intermediate image determined in step S52 into the trained noise estimation model to determine the noise estimation value corresponding to the second intermediate image.
  • the third intermediate image can be determined based on the second intermediate image and its noise estimate.
  • For example, Formula 11 may be used to determine the third intermediate image: z_{k+1} = x_{k+1} − f(x_{k+1}; Θ)    (Formula 11)
  • where f(x_{k+1}; Θ) represents the noise estimation value for the second intermediate image x_{k+1};
  • Θ here represents the model parameters.
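Assuming the residual form z_{k+1} = x_{k+1} − f(x_{k+1}; Θ), this step can be sketched as follows; the moving-average `toy_model` is a hypothetical stand-in for the trained noise estimation model:

```python
import numpy as np

def third_intermediate(x_k1, noise_model):
    """Subtract the model's noise estimate f(x_{k+1}; Theta) from the
    second intermediate image, as described for Formula 11."""
    return x_k1 - noise_model(x_k1)

# Hypothetical stand-in for the trained CNN: estimates zero-mean noise
# as the deviation of the signal from its 3-point moving average.
def toy_model(x):
    smooth = np.convolve(x, np.ones(3) / 3.0, mode="same")
    return x - smooth

x2 = np.array([0.5, 0.9, 0.4, 0.6, 0.5])   # second intermediate image (1-D toy)
z3 = third_intermediate(x2, toy_model)      # third intermediate image
```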
  • In step S56, the third intermediate image is used as the first intermediate image, realizing the update of the first intermediate image.
  • Steps S52 to S56 are repeatedly executed in this way, and during execution the similarity between the first intermediate image and the second intermediate image is continuously determined; the iterative process ends once this similarity is greater than the similarity threshold.
  • The similarity threshold can be set by the developer according to experimental results, which is not limited in the present disclosure. When the similarity between the first intermediate image and the second intermediate image is greater than the similarity threshold, it can be considered that the optimal solution has been found, and the optimal solution is the denoised image.
  • For steps S52 to S56 performed by the terminal device, the model parameters are updated each time the iterative process is executed, and the updated parameters are used in the next iteration. That is to say, during the iterative process, the parameters of the noise estimation model change to ensure that the iteration over (i) of Formula 7 and Formula 11 continuously approaches the optimal solution.
  • The foregoing determines whether the iterative process is over by the similarity between the first intermediate image and the second intermediate image; it is easy to understand that when the difference between the first intermediate image and the second intermediate image is small, the iterative process ends.
  • An index of image difference can also be used to determine whether the iterative process is over; for example, when the image difference between the first intermediate image and the second intermediate image is less than a preset threshold, it can be determined that the iterative process is over.
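Both stopping rules (similarity above a threshold, or image difference below a preset threshold) can be sketched together; the particular similarity measure below is an assumption, since the disclosure does not fix one:

```python
import numpy as np

def iteration_done(z, x, sim_threshold=0.99, diff_threshold=None):
    """Check the stopping rule: mean image difference below a preset
    threshold if given, otherwise an (assumed) similarity above a threshold."""
    diff = np.abs(z - x).mean()
    if diff_threshold is not None:
        return diff < diff_threshold
    sim = 1.0 - diff / (np.abs(x).mean() + 1e-12)
    return sim > sim_threshold

a = np.full((4, 4), 0.5)
b = a + 0.001
print(iteration_done(b, a))                             # True  -- nearly identical
print(iteration_done(a + 0.3, a, diff_threshold=0.01))  # False -- still too different
```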
  • After the iterative process ends, the terminal device may output the first intermediate image or the second intermediate image as the processed image corresponding to the image to be processed.
  • Each time the first intermediate image or the second intermediate image is updated, a similarity determination process is performed. For example, after the first intermediate image is updated, if the similarity between the first intermediate image and the second intermediate image is greater than the similarity threshold, the first intermediate image is output as the processed image. For another example, after the second intermediate image is updated, if the similarity between the first intermediate image and the second intermediate image is greater than the similarity threshold, the second intermediate image is output as the processed image.
  • the processed image output can be directly saved to the terminal, and can also be displayed for the user to view.
  • The above process of implementing image denoising can be understood as "walking downhill" from the starting point: walking on one foot (solving the noise-free image directly) is difficult, and a local optimum is prone to occur. Therefore, another foot is introduced (the auxiliary variable z, which corresponds to the first intermediate image above), and the whole process becomes a two-step solution.
  • In the exemplary embodiment of the present disclosure, one of the two steps may be solved by using a convolutional neural network. It should also be noted that the whole process is constrained by the requirement that x and z remain similar.
  • Based on the image processing method of the exemplary embodiment of the present disclosure, on the one hand, the present disclosure combines a noise estimation model to complete the iterative process; compared with some technologies that use only regularization constraints for continuous optimization and iteration, the complexity is greatly reduced, a better denoising effect can be obtained, and the time consumed is short. On the other hand, the solution of the present disclosure can effectively remove image noise, so that a high-pixel camera module can be used in low-light environments, greatly expanding the application scenarios of high-pixel camera modules. On yet another hand, the disclosed solution requires no auxiliary tools or hardware changes, and is easy to implement.
  • an image processing device is also provided in this exemplary embodiment.
  • FIG. 8 schematically shows a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure.
  • the image processing device 8 may include an image denoising module 81 and an image output module 83.
  • The image denoising module 81 may be used to obtain the image to be processed and use it to perform an iterative process until the similarity between the first intermediate image and the second intermediate image is greater than the similarity threshold, the first intermediate image and the second intermediate image both being images generated during the denoising of the image to be processed. The iterative process includes: determining the second intermediate image from the image to be processed and the first intermediate image based on the objective function; determining the third intermediate image using the noise estimation model and the second intermediate image; and taking the third intermediate image as the first intermediate image.
  • the image output module 83 may be used to output the first intermediate image or the second intermediate image as the processed image corresponding to the image to be processed after the iterative process is ended.
  • The present disclosure incorporates a noise estimation model to complete the iterative processing; compared with the continuous optimization and iteration that relies only on regularization constraints in some technologies, the complexity is greatly reduced, a better denoising effect can be obtained, and the processing time is short. On another hand, the solution of the present disclosure can effectively remove image noise, so that a high-pixel camera module can be used in a low-light environment, greatly expanding the application scenarios of the high-pixel camera module. On yet another hand, the disclosed solution requires no auxiliary tools or hardware changes and is easy to implement.
  • The process in which the image denoising module 81 determines the third intermediate image using the noise estimation model and the second intermediate image may be configured to execute: inputting the second intermediate image into the noise estimation model to determine the noise estimation value corresponding to the second intermediate image; and determining the third intermediate image according to the second intermediate image and the noise estimation value.
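One plausible reading of this step, sketched below: treat the model output as a predicted noise map and subtract it from the second intermediate image (residual denoising). The subtraction and the [0, 1] clipping are assumptions; the disclosure only states that the third intermediate image is determined from the second intermediate image and the noise estimation value.

```python
import numpy as np

def third_intermediate(x2, noise_model):
    """Form the third intermediate image from the second intermediate
    image and the noise estimation value. Residual subtraction is one
    common choice, assumed here; pixel values are assumed normalised
    to [0, 1]."""
    noise_hat = noise_model(x2)                 # noise estimation value for x2
    x3 = np.clip(x2 - noise_hat, 0.0, 1.0)      # keep pixel values in range
    return x3
```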
  • the image processing device 9 may further include a model training module 91.
  • The model training module 91 may be configured to execute: obtaining a training set, where the training set includes multiple noise images and a denoised image corresponding to each noise image, and the noise intensity difference between the noise images is within a difference threshold; inputting a noise image in the training set into a convolutional neural network, which outputs the training output image corresponding to the noise image; calculating the loss function of the convolutional neural network using the training output image and the denoised image corresponding to the noise image, so as to train the convolutional neural network; and determining the trained convolutional neural network as the noise estimation model.
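The loss computation in the training step above might look like the following. Mean squared error between the training output image and the denoised image is an assumption, since the disclosure does not name a specific loss function; the network's parameters would then be updated by backpropagation on this value.

```python
import numpy as np

def training_loss(train_output, denoised_target):
    """Assumed MSE loss between the convolutional neural network's
    training output image and the corresponding denoised image."""
    diff = train_output - denoised_target
    return float(np.mean(diff ** 2))
```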
  • the image denoising module 81 may also be configured to execute: each time the iterative process is executed, the parameters of the convolutional neural network are updated, and the next iterative process is executed using the updated parameters.
  • The convolutional neural network includes a plurality of cascaded convolutional layers, and each convolutional layer includes a dilated (expanded) convolution unit.
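The dilated ("expanded") convolution unit can be illustrated in one dimension: the kernel taps are spaced `dilation` samples apart, so the receptive field grows without adding weights. A toy sketch, not the network's actual layer:

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """Minimal 1-D dilated convolution (no padding, stride 1):
    kernel taps are spaced `dilation` samples apart, enlarging the
    effective receptive field without extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1               # effective kernel span
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        taps = signal[i:i + span:dilation]      # samples `dilation` apart
        out[i] = np.dot(taps, kernel)
    return out
```

With dilation 2, a 2-tap kernel spans 3 samples, so stacking such layers widens context quickly, which is one reason dilated units are popular in denoising networks.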
  • the image processing device 10 may further include an initialization module 101.
  • the initialization module 101 may be configured to perform: filter processing on the image to be processed to obtain the initialized first intermediate image, which is used as the first intermediate image for the first execution of the iterative process.
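A sketch of this initialisation; a simple mean filter with edge padding is assumed, since the disclosure says only that the image to be processed is filtered:

```python
import numpy as np

def init_first_intermediate(y, size=3):
    """Filter the image to be processed to obtain the initialised first
    intermediate image. A mean filter is an assumption; any smoothing
    filter would fit the description."""
    pad = size // 2
    padded = np.pad(y, pad, mode="edge")        # replicate border pixels
    out = np.empty_like(y, dtype=float)
    h, w = y.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out
```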
  • the image processing device 11 may further include an objective function determining module 111.
  • The objective function determination module 111 may be configured to execute: constructing an intermediate function based on the degradation model of image restoration, the intermediate function including a fidelity term and a regularization term; decoupling the fidelity term and the regularization term of the intermediate function using an auxiliary variable; and determining the objective function according to the decoupling result, where the auxiliary variable corresponds to the first intermediate image.
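This construction matches the standard half-quadratic splitting formulation; a sketch under that assumption, with $y$ the image to be processed, $x$ the restored image, $z$ the auxiliary variable corresponding to the first intermediate image, and $\lambda, \mu$ weighting parameters:

```latex
% Degradation model of image restoration (noise-only case): y = x + n.
% Intermediate function: fidelity term + regularization term
\min_{x} \; \tfrac{1}{2}\lVert y - x \rVert^{2} + \lambda\,\Phi(x)
% Introducing the auxiliary variable z decouples the two terms:
\min_{x,\,z} \; \tfrac{1}{2}\lVert y - x \rVert^{2}
  + \tfrac{\mu}{2}\lVert x - z \rVert^{2} + \lambda\,\Phi(z)
% Alternating over x (a data-fidelity step) and z (a denoising step)
% yields the objective-function step and the noise-estimation step of
% the iterative process.
```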
  • The example embodiments described here can be implemented by software, or by combining software with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or portable hard disk) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • Although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.

Abstract

The present invention relates to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, in the technical field of image processing. The image processing method comprises: obtaining an image to be processed and using it to perform an iterative process until the similarity between a first intermediate image and a second intermediate image is greater than the similarity threshold (S42), both the first intermediate image and the second intermediate image being images generated during the denoising process of the image to be processed; at the end of the iterative process, outputting the first intermediate image or the second intermediate image as the processed image corresponding to the image to be processed (S44); the iterative process comprising: determining the second intermediate image using the image to be processed and the first intermediate image, based on the objective function; determining a third intermediate image using the noise estimation model and the second intermediate image; and taking the third intermediate image as the first intermediate image. Noise in the image can be reduced.
PCT/CN2020/129437 2019-12-04 2020-11-17 Image processing method and apparatus, computer-readable storage medium, and electronic device WO2021109867A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911228475.5A CN111062883B (zh) 2019-12-04 2019-12-04 Image processing method and apparatus, computer-readable medium, and electronic device
CN201911228475.5 2019-12-04

Publications (1)

Publication Number Publication Date
WO2021109867A1 true WO2021109867A1 (fr) 2021-06-10

Family

ID=70299697

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129437 WO2021109867A1 (fr) 2019-12-04 2020-11-17 Image processing method and apparatus, computer-readable storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN111062883B (fr)
WO (1) WO2021109867A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823994A (zh) * 2023-02-20 2023-09-29 阿里巴巴达摩院(杭州)科技有限公司 Image generation and model training method, apparatus, device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062883B (zh) * 2019-12-04 2022-10-18 RealMe重庆移动通信有限公司 图像处理方法及装置、计算机可读介质和电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376568A (zh) * 2014-11-28 2015-02-25 成都影泰科技有限公司 Format-based DICOM medical image processing method
US20190188510A1 (en) * 2017-12-15 2019-06-20 Samsung Electronics Co., Ltd. Object recognition method and apparatus
CN110009052A (zh) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 Image recognition method, and method and apparatus for training an image recognition model
CN111062883A (zh) * 2019-12-04 2020-04-24 RealMe重庆移动通信有限公司 Image processing method and apparatus, computer-readable medium, and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101237524B (zh) * 2008-03-03 2010-06-02 中国科学院光电技术研究所 Image noise removal method preserving high-frequency information
CN104156994B (zh) * 2014-08-14 2017-03-22 厦门大学 Reconstruction method for compressed-sensing magnetic resonance imaging
CN106897971B (zh) * 2016-12-26 2019-07-26 浙江工业大学 Non-local TV image denoising method based on independent component analysis and singular value decomposition
CN109658348A (zh) * 2018-11-16 2019-04-19 天津大学 Joint noise estimation and image denoising method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376568A (zh) * 2014-11-28 2015-02-25 成都影泰科技有限公司 Format-based DICOM medical image processing method
US20190188510A1 (en) * 2017-12-15 2019-06-20 Samsung Electronics Co., Ltd. Object recognition method and apparatus
CN110009052A (zh) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 Image recognition method, and method and apparatus for training an image recognition model
CN111062883A (zh) * 2019-12-04 2020-04-24 RealMe重庆移动通信有限公司 Image processing method and apparatus, computer-readable medium, and electronic device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823994A (zh) * 2023-02-20 2023-09-29 阿里巴巴达摩院(杭州)科技有限公司 Image generation and model training method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN111062883B (zh) 2022-10-18
CN111062883A (zh) 2020-04-24

Similar Documents

Publication Publication Date Title
WO2020156009A1 Video repair method and device, electronic device, and storage medium
US20190042935A1 Dynamic quantization of neural networks
CN112001914A Depth image completion method and apparatus
WO2021109867A1 Image processing method and apparatus, computer-readable storage medium, and electronic device
US9953400B2 Adaptive path smoothing for video stabilization
CN111915480B Method, apparatus, device, and computer-readable medium for generating a feature extraction network
WO2021164269A1 Attention mechanism-based disparity map acquisition method and apparatus
WO2020001222A1 Image processing method and apparatus, computer-readable medium, and electronic device
WO2022143812A1 Image restoration method, apparatus, and device, and storage medium
WO2023005386A1 Model training method and apparatus
US11741579B2 Methods and systems for deblurring blurry images
CN111325792A Method, apparatus, device, and medium for determining camera pose
US20170185900A1 Reconstruction of signals using a Gramian Matrix
CN114463223A Image enhancement processing method, apparatus, computer device, and medium
CN112418249A Mask image generation method, apparatus, electronic device, and computer-readable medium
CN110211017B Image processing method, apparatus, and electronic device
CN114792355A Virtual avatar generation method, apparatus, electronic device, and storage medium
Liu et al. Image inpainting algorithm based on tensor decomposition and weighted nuclear norm
Zha et al. Simultaneous nonlocal low-rank and deep priors for Poisson denoising
CN113409307A Image denoising method, device, and medium based on heterogeneous noise characteristics
Li et al. A mixed noise removal algorithm based on multi-fidelity modeling with nonsmooth and nonconvex regularization
CN110069195B Image drag deformation method and apparatus
CN111784726A Portrait matting method and apparatus
WO2021217653A1 Video frame insertion method and apparatus, and computer-readable storage medium
CN115375909A Image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896788

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20896788

Country of ref document: EP

Kind code of ref document: A1