US20210398252A1 - Image denoising method and apparatus - Google Patents
Image denoising method and apparatus
- Publication number: US20210398252A1 (application US 17/462,176)
- Authority: US (United States)
- Prior art keywords: image, feature, processed, resolution, processed image
- Legal status: Granted
Classifications
- G06T 5/70 — Image enhancement or restoration: denoising; smoothing (formerly G06T 5/002)
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 5/60 — Image enhancement or restoration using machine learning, e.g. neural networks
- G06N 3/044 — Neural network architectures: recurrent networks, e.g. Hopfield networks
- G06N 3/045 — Neural network architectures: combinations of networks (formerly G06N 3/0454)
- G06N 3/063 — Physical realisation, i.e. hardware implementation, of neural networks, neurons or parts of neurons using electronic means
- G06N 3/084 — Learning methods: backpropagation, e.g. using gradient descent
- G06T 2207/10016 — Image acquisition modality: video; image sequence
- G06T 2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
Definitions
- This application relates to the computer vision field, and more specifically, to an image denoising method and apparatus.
- Computer vision is an indispensable part of various intelligent/autonomous systems in diversified application fields, such as manufacturing, inspection, document analysis, medical diagnosis, and military affairs.
- Computer vision is a branch of knowledge about using a camera/video camera and a computer to obtain required data and information about a photographed object. Figuratively, the camera/video camera plays the role of the eyes, and the computer running an algorithm plays the role of the brain.
- Sensing may be considered as extraction of information from a sensory signal, and therefore computer vision may be considered as a science of research on how an artificial system "senses" an image or multi-dimensional data.
- In general, computer vision means using various imaging systems in place of an organ of vision to obtain input information, and using a computer in place of a brain to process and interpret the input information.
- An ultimate research objective of computer vision is to enable a computer to observe and understand the world through vision in the same way as humans do, and to have a capability of autonomously adapting to an environment.
- In the computer vision field, an imaging device usually needs to be used to obtain a digital image, and the digital image needs to be recognized or analyzed. During digitalization and transmission, the digital image is usually corrupted by noise from the imaging device and from the external environment; the result is referred to as a noise-corrupted image or a noise image.
- A noise-corrupted image affects an image displaying effect as well as image analysis and recognition. Therefore, how to better denoise an image is an issue to be resolved.
- This application provides an image denoising method and apparatus, to improve an image denoising effect.
- an image denoising method includes: obtaining K images based on a to-be-processed image; obtaining an image feature of the to-be-processed image based on the K images; and performing denoising processing on the to-be-processed image based on the image feature of the to-be-processed image to obtain a denoised image.
- the K images are images obtained by reducing a resolution of the to-be-processed image, and K is a positive integer.
- the K images include a first image to a K th image, and the to-be-processed image is a (K+1) th image.
- the first image to the (K+1) th image include an i th image and an (i+1) th image, an image feature of the (i+1) th image is extracted based on an image feature of the i th image, a resolution of the (i+1) th image is higher than that of the i th image, and i is a positive integer less than or equal to K.
- image resolutions of the first image to the (K+1) th image are in ascending order, where a resolution of the first image is the lowest and a resolution of the (K+1) th image is the highest.
- the obtaining an image feature of the to-be-processed image based on the K images may be extracting a high-resolution image feature based on a low-resolution image feature, and finally obtaining the image feature of the to-be-processed image (the resolution of the to-be-processed image is the highest resolution among the (K+1) images).
- the obtaining an image feature of the to-be-processed image based on the K images includes: step 1: obtaining the image feature of the i th image; step 2: extracting the image feature of the (i+1) th image based on the image feature of the i th image; and repeating step 1 and step 2 to obtain the image feature of the (K+1) th image.
- an image feature of the first image may be obtained directly by performing convolution processing on the first image; and for each of a second image to the (K+1) th image, an image feature of the image may be extracted based on an image feature of a previous image.
- an image feature of a low-resolution image is used to provide guidance on extraction of an image feature of a high-resolution image, and therefore global information of the to-be-processed image can be sensed as much as possible in a process of extracting the image feature of the high-resolution image.
- the extracted image feature of the high-resolution image is more accurate, so that a better denoising effect is achieved during image denoising performed based on the image feature of the to-be-processed image.
- the extracting an image feature of the (i+1) th image based on an image feature of the i th image includes: performing convolution processing on the (i+1) th image by using a first convolutional layer to an n th convolutional layer in N convolutional layers, to obtain an initial image feature of the (i+1) th image; fusing the initial image feature of the (i+1) th image with the image feature of the i th image to obtain a fused image feature; and performing convolution processing on the fused image feature by using an (n+1) th convolutional layer to an N th convolutional layer in the N convolutional layers, to obtain the image feature of the (i+1) th image.
- n and N are positive integers, n is less than or equal to N, and N is a total quantity of convolutional layers used when the image feature of the (i+1) th image is extracted.
- a smaller n means that the initial image feature of the (i+1) th image can be obtained sooner and fused with the image feature of the i th image sooner, so that the finally obtained image feature of the (i+1) th image is more accurate.
- for example, n is equal to 1.
- the obtained initial image feature of the (i+1) th image may be fused with the image feature of the i th image, so that the finally obtained image feature of the (i+1) th image is more accurate.
- the obtaining K images based on a to-be-processed image includes: performing downsampling operations on the to-be-processed image for K times, to obtain the first image to the K th image.
- the resolution of the to-be-processed image can be reduced, and image information included in the first image to the K th image can be reduced. This can reduce an amount of computation during feature extraction.
- shuffle operations are performed on the to-be-processed image for K times, to obtain the first image to the K th image whose resolutions and channel quantities are different from those of the to-be-processed image.
- shuffle operations herein are equivalent to adjustments of the resolution and the channel quantity of the to-be-processed image, so as to obtain images whose resolutions and channel quantities are different from those of the original to-be-processed image.
- a resolution and a channel quantity of any image in the first image to the K th image are different from those of the to-be-processed image.
- the resolution of the i th image in the K images obtained by performing the shuffle operations is lower than that of the to-be-processed image, and a channel quantity of the i th image is determined based on the channel quantity of the to-be-processed image, the resolution of the i th image, and the resolution of the to-be-processed image.
- the channel quantity of the i th image may be determined based on the channel quantity of the to-be-processed image and a ratio of the resolution of the i th image to the resolution of the to-be-processed image.
- the channel quantity of the i th image may be determined based on the channel quantity of the to-be-processed image and a ratio of the resolution of the to-be-processed image to the resolution of the i th image.
- a ratio of A to B refers to a value of A/B. Therefore, the ratio of the resolution of the i th image to the resolution of the to-be-processed image is a value obtained by dividing a value of the resolution of the i th image by a value of the resolution of the to-be-processed image.
- the first image to the K th image are obtained by performing the shuffle operations, and the image information can be retained when a low-resolution image is obtained based on the to-be-processed image, so that a relatively accurate image feature can be extracted during feature extraction.
- the resolutions of the first image to the K th image may be preset.
- for example, the resolution of the to-be-processed image is M×N, and two images with relatively low resolutions need to be obtained by performing shuffle operations.
- in this case, the resolutions of the two images may be M/2×N/2 and M/4×N/4.
- a ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is less than or equal to the ratio of the resolution of the i th image to the resolution of the to-be-processed image.
- for example, the channel quantity of the i th image is Ci, the channel quantity of the to-be-processed image is C, the resolution of the i th image is Mi×Ni, and the resolution of the to-be-processed image is M×N.
- in this case, the ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is Ci/C, and the ratio of the resolution of the i th image to the resolution of the to-be-processed image is (Mi×Ni)/(M×N).
- the obtaining the image features of the first image to the (K+1) th image includes: obtaining the image features of the first image to the (K+1) th image by using a neural network.
- the neural network may be a convolutional neural network, a deep convolutional neural network, or a recurrent neural network.
- the neural network includes a top sub-network, a middle sub-network, and a bottom sub-network.
- the obtaining the image features of the first image to the (K+1) th image by using a neural network includes: obtaining the image feature of the first image by using the top sub-network; obtaining the image features of the second image to the K th image by using the middle sub-network; and obtaining the image feature of the (K+1) th image by using the bottom sub-network.
- the top sub-network is used to process a lowest-resolution image
- the middle sub-network is used to process a medium-resolution image
- the bottom sub-network is used to process a highest-resolution image.
- There are (K−1) middle sub-networks, and the (K−1) middle sub-networks are used to process the second image to the K th image.
- Each middle sub-network is used to process a corresponding image to obtain an image feature of the image.
- the performing denoising processing on the to-be-processed image based on the image feature of the to-be-processed image to obtain a denoised image includes: performing convolution processing on the image feature of the to-be-processed image to obtain a residual estimated value of the to-be-processed image; and superimposing the residual estimated value of the to-be-processed image on the to-be-processed image to obtain the denoised image.
- an image denoising apparatus includes modules configured to perform the method in the first aspect.
- an image denoising apparatus includes: a memory, configured to store a program; and a processor, configured to execute the program stored in the memory, where when the program stored in the memory is being executed, the processor is configured to perform the method in the first aspect.
- a computer-readable medium stores program code to be executed by a device, and the program code is used for performing the method in the first aspect.
- a computer program product including an instruction is provided.
- when the computer program product runs on a computer, the computer is enabled to perform the method in the first aspect.
- a chip includes a processor and a data interface, and the processor reads, through the data interface, an instruction stored in a memory, to perform the method in the first aspect.
- the chip may further include a memory.
- the memory stores an instruction.
- the processor is configured to execute the instruction stored in the memory, where when the instruction is being executed, the processor is configured to perform the method in the first aspect.
- an electronic device includes the image denoising apparatus in any one of the foregoing aspects.
- FIG. 1 is a schematic structural diagram of a system architecture according to an embodiment of this application;
- FIG. 2 is a schematic diagram of image denoising performed based on a CNN model according to an embodiment of this application;
- FIG. 3 is a schematic diagram of a hardware structure of a chip according to an embodiment of this application;
- FIG. 4 is a schematic flowchart of an image denoising method according to an embodiment of this application;
- FIG. 5 is a schematic diagram of extracting image features by using sub-networks in a neural network;
- FIG. 6 is a schematic diagram of a structure of a residual network;
- FIG. 7 is a schematic block diagram of an image denoising apparatus according to an embodiment of this application;
- FIG. 8 is a schematic block diagram of an image denoising apparatus according to an embodiment of this application;
- FIG. 9 is a schematic diagram of a process of performing image denoising by an image denoising apparatus according to an embodiment of this application;
- FIG. 10 is a schematic diagram of a hardware structure of a neural network training apparatus according to an embodiment of this application; and
- FIG. 11 is a schematic diagram of a hardware structure of an image denoising apparatus according to an embodiment of this application.
- An image denoising method provided in the embodiments of this application can be applied to photographing, video recording, smart city, self-driving, human-computer interaction, and scenarios in which image processing, image displaying, and low-layer or high-layer image visual processing need to be performed, such as image recognition, image classification, semantic segmentation, video semantic analysis, and video action recognition.
- the image denoising method in the embodiments of this application can be applied to a photographing scenario and a scenario of image and video-based visual computing.
- the image denoising method in the embodiments of this application may be used during the photographing or after the photographing, to denoise a photographed image.
- image quality can be improved, thereby improving an image displaying effect and improving accuracy of an image-based vision algorithm.
- for example, in an image recognition application, image content needs to be recognized, and image noise affects the image recognition effect to some extent; denoising the image therefore helps subsequent recognition.
- a neural network may include neurons.
- the neuron may be an arithmetic unit with inputs x_s (s = 1, 2, . . . , n) and an intercept of 1. An output of the arithmetic unit may be:
- h_{W,b}(x) = f(W^T x) = f(Σ_{s=1}^{n} W_s·x_s + b), where n is a natural number greater than 1, W_s is a weight of x_s, and b is an offset of the neuron.
- f is an activation function of the neuron, and is used to introduce a nonlinear characteristic into the neural network to transform an input signal in the neuron into an output signal.
- the output signal of the activation function may be used as an input of a next convolutional layer.
- the activation function may be a sigmoid function.
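- as a toy illustration of the neuron above (all numbers below are made up for the example), the computation f(Σ_s W_s·x_s + b) with a sigmoid activation can be written directly in Python:

```python
import math

def neuron(xs, ws, b):
    z = sum(w * x for w, x in zip(ws, xs)) + b   # weighted sum plus offset: W^T x + b
    return 1.0 / (1.0 + math.exp(-z))            # sigmoid activation f

print(neuron(xs=[0.5, -1.0, 2.0], ws=[0.1, 0.4, -0.2], b=0.3))  # ~0.389
```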
- the neural network is a network formed by combining a plurality of the foregoing individual neurons. In other words, an output of a neuron may be an input of another neuron.
- An input of each neuron may be connected to a partial receptive field of a previous layer, to extract a feature of the partial receptive field.
- the partial receptive field may be a region including several neurons.
- a deep neural network is also referred to as a multilayer neural network, and may be understood as a neural network having many hidden layers.
- “many” is not measured by a particular standard.
- based on locations of different layers, layers inside the DNN may be classified into three types: an input layer, hidden layers, and an output layer. Generally, the first layer is the input layer, the last layer is the output layer, and the middle layers are all hidden layers. The layers are fully connected to each other; to be specific, any neuron at an i th layer is connected to every neuron at the (i+1) th layer.
- the work at each layer of the DNN may be expressed as the linear relational expression y⃗ = a(W·x⃗ + b⃗), where x⃗ is an input vector, y⃗ is an output vector, b⃗ is an offset vector, W is a weight matrix (which is also referred to as a coefficient), and a(·) is an activation function.
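- written out in NumPy (a sketch; the shapes and the tanh activation are illustrative choices, not specified by this application), one such layer computes:

```python
import numpy as np

def layer(x, W, b, a=np.tanh):
    # one fully connected layer: y = a(Wx + b)
    return a(W @ x + b)

x = np.array([1.0, -0.5, 2.0])   # input vector (3 features)
W = np.random.randn(4, 3)        # weight matrix: 4 outputs, 3 inputs
b = np.zeros(4)                  # offset vector
print(layer(x, W, b))            # output vector of length 4
```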
- the coefficient W is used as an example. It is assumed that in a three-layer DNN, a linear coefficient from a fourth neuron at a second layer to a second neuron at a third layer is defined as W^3_24.
- the superscript 3 represents the layer number of the coefficient W, and the subscript corresponds to the output third-layer index 2 and the input second-layer index 4.
- in summary, a coefficient from a k th neuron at an (L−1) th layer to a j th neuron at an L th layer is defined as W^L_jk.
- a convolutional neural network (convolutional neural network, CNN) is a deep neural network of a convolutional structure.
- the convolutional neural network includes a feature extractor constituted by a convolutional layer and a sub-sampling layer.
- the feature extractor may be considered as a filter, and a convolution process may be considered as performing convolution on an input image or a convolution feature map (feature map) by using a trainable filter.
- the convolutional layer is a neuron layer, in the convolutional neural network, that performs convolution processing on an input signal. At the convolutional layer in the convolutional neural network, one neuron may be connected to only some neurons at an adjacent layer.
- the convolutional layer usually includes several feature maps, and each feature map may include some neurons arranged in a shape of a rectangle. Neurons on a same feature map share a weight, and the shared weight herein is a convolution kernel.
- the shared weight may be understood as being unrelated to a manner and a location for extracting image information.
- An implicit principle thereof is: Statistical information of a part of an image is the same as that of other parts of the image. In other words, it means that image information learnt from a part can also be applied to another part. Therefore, image information obtained through same learning can be used for all locations in the image.
- a plurality of convolution kernels may be used to extract different image information. Generally, more convolution kernels mean richer image information reflected by a convolution operation.
- a convolution kernel may be initialized in a form of a matrix of a random size.
- in a process of training the convolutional neural network, a rational weight can be obtained for the convolution kernel through learning.
- in addition, direct advantages of the shared weight are reducing a quantity of connections between layers in the convolutional neural network, and reducing an overfitting risk.
- a recurrent neural network (recurrent neural network, RNN) is used for processing series data.
- in a conventional neural network model, an execution sequence is: an input layer, a hidden layer, and an output layer; the layers are fully connected to each other, and nodes within each layer are not connected.
- such a common neural network has resolved many difficulties, but is still helpless to deal with many problems. For example, if a next word in a sentence is to be predicted, a previous word usually needs to be used, because words in the sentence are not independent of each other.
- the RNN is referred to as a recurrent neural network for the reason that a current output is also related to a previous output in a series.
- a specific representation form is that the network memorizes prior information and applies the prior information to computation of a current output.
- in theory, the RNN can process series data of any length. Training the RNN is the same as training a conventional CNN or DNN: they all use an error back propagation learning algorithm. A difference lies in that, if network unfolding is performed on the RNN, a parameter such as W is shared across steps; as shown in the foregoing example, this is not the case with the conventional neural network.
- in addition, an output of each step depends not only on a network of a current step, but also on network statuses of several previous steps. This learning algorithm is referred to as the back propagation through time (back propagation through time, BPTT) algorithm.
- in a training process of a deep neural network, a predicted value of the current network and a really expected target value may be compared, and a weight vector of each layer of the neural network is updated based on a difference between the predicted value and the target value (certainly, an initialization process is usually performed before the first update, that is, a parameter is preconfigured for each layer in the deep neural network).
- for example, if the predicted value of the network is excessively high, the weight vector is adjusted to make the predicted value smaller, and adjustments are made continually until the deep neural network can predict the really expected target value or a value that is quite close to it. Therefore, "how to determine a difference between a predicted value and a target value" needs to be predefined.
- this leads to a loss function (loss function) or an objective function (objective function): the functions are important equations for measuring the difference between a predicted value and a target value. Using the loss function as an example, a larger output value (loss) of the loss function indicates a larger difference, so training the deep neural network is a process of reducing the loss as much as possible.
- An error back propagation (back propagation, BP) learning algorithm may be used in a convolutional neural network to modify a value of a parameter in an initial super-resolution model in a training process, so that a reconstruction error loss for the super-resolution model becomes smaller.
- an error loss occurs during forward propagation and output of an input signal.
- error loss information is back-propagated to update the parameter in the initial super-resolution model, so that the error loss converges.
- the back propagation algorithm is an error loss-oriented back propagation process with an objective of obtaining an optimal parameter for the super-resolution model, such as a weight matrix.
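- the update loop described above (predict, measure the loss, back-propagate, update) can be sketched generically; the model, data, and learning rate below are placeholders, not values from this application:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                            # stands in for the deep network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()                             # loss function measuring the difference

x, target = torch.randn(16, 8), torch.randn(16, 1)
for _ in range(100):
    predicted = model(x)                           # forward propagation
    loss = loss_fn(predicted, target)              # difference between predicted and target values
    optimizer.zero_grad()
    loss.backward()                                # error back propagation (BP)
    optimizer.step()                               # update weights to reduce the loss
```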
- a pixel value of an image may be a red-green-blue (RGB) color value, and the pixel value may be a long integer indicative of a color.
- for example, a pixel value is 256×Red+100×Green+76×Blue, where Blue represents a blue component, Green represents a green component, and Red represents a red component. For the color components, a smaller value indicates higher brightness, and a larger value indicates lower brightness.
- the pixel value may be a grayscale value.
- an embodiment of this application provides a system architecture 100 .
- a data collection device 160 is configured to collect training data.
- in this embodiment of this application, the training data includes an original image (the original image herein may be an image containing only a small amount of noise) and a noise image obtained after noise is added to the original image.
- the data collection device 160 stores the training data into a database 130 , and a training device 120 performs training based on the training data maintained in the database 130 , to obtain a target model/rule 101 .
- the training device 120 processes the input noise image, and compares an output image with the original image, until a difference between the image output by the training device 120 and the original image is less than a specific threshold. In this way, training for the target model/rule 101 is completed.
- the target model/rule 101 can be used to implement the image denoising method in the embodiments of this application. To be specific, related processing is performed on a to-be-processed image, and then a processed image is input to the target model/rule 101 , so as to obtain a denoised image.
- the target model/rule 101 in this embodiment of this application may specifically be a neural network.
- the training data maintained in the database 130 is not necessarily all collected by the data collection device 160 , and may be received from another device.
- the training device 120 does not necessarily perform training to obtain the target model/rule 101 completely based on the training data maintained in the database 130 , and may perform model training by obtaining training data from a cloud client or another place. The foregoing description shall not constitute any limitation on this embodiment of this application.
- the target model/rule 101 obtained through training by the training device 120 can be applied to different systems or devices, for example, applied to an execution device 110 shown in FIG. 1 .
- the execution device 110 may be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, or a vehicle-mounted terminal; or may be a server, a cloud device, or the like.
- an input/output (input/output, I/O) interface 112 is configured for the execution device 110 , and is configured to exchange data with an external device.
- a user may enter data to the I/O interface 112 by using customer equipment 140 .
- the input data in this embodiment of this application may include a to-be-processed image input by the customer equipment.
- a preprocessing module 113 and a preprocessing module 114 are configured to preprocess the input data (for example, the to-be-processed image) received by the I/O interface 112.
- when the preprocessing module 113 and the preprocessing module 114 do not exist (or only one of them exists), a computation module 111 is directly used to process the input data instead.
- the execution device 110 may invoke data, code, and the like in a data storage system 150 to perform corresponding processing, or may store data, an instruction, and the like that are obtained through corresponding processing into the data storage system 150 .
- the I/O interface 112 returns a processing result such as the obtained denoised image to the customer equipment 140 , to provide the processing result for the user.
- the training device 120 may generate corresponding target model/rules 101 for different objectives or different tasks based on different training data. Then, the corresponding target model/rules 101 may be used to achieve the objectives or accomplish the tasks, to provide a required result for the user.
- the user may enter data manually, where the manual operation may be performed on a screen provided by the I/O interface 112.
- the customer equipment 140 may automatically send input data to the I/O interface 112 . If it is required that user permission should be obtained for the customer equipment 140 to automatically send input data, the user may set corresponding permission in the customer equipment 140 .
- the user may view, in the customer equipment 140 , a result output by the execution device 110 .
- a specific presentation form may be displaying, sound, action, or another specific manner.
- it should be noted that the customer equipment 140 may also be used as a data collection end to collect, as new sample data, the input data entering the I/O interface 112 and the output result returned by the I/O interface 112 that are shown in the figure, and store the new sample data into the database 130.
- certainly, the customer equipment 140 may alternatively not perform collection; instead, the I/O interface 112 directly uses, as new sample data, the input data entering the I/O interface 112 and the output result returned by the I/O interface 112 that are shown in the figure, and stores the new sample data into the database 130.
- FIG. 1 is merely a schematic diagram of a system architecture according to this embodiment of this application, and a location relationship between a device, a component, and a module that are shown in the figure does not constitute any limitation.
- the data storage system 150 is an external memory relative to the execution device 110 . In other cases, the data storage system 150 may alternatively be disposed in the execution device 110 .
- the target model/rule 101 is obtained through training by the training device 120 .
- the target model/rule 101 may be a neural network in this application.
- the neural network provided in this embodiment of this application may be a CNN, a deep convolutional neural network (deep convolutional neural network, DCNN), a recurrent neural network (recurrent neural network, RNN), or the like.
- the convolutional neural network is a deep neural network of a convolutional structure, and is a deep learning (deep learning) architecture.
- the deep learning architecture means that multi-level learning is conducted at different abstraction layers by using a machine learning algorithm.
- the CNN is a feed-forward (feed-forward) artificial neural network. Each neuron in the feed-forward artificial neural network can respond to an image input to the neuron.
- the convolutional neural network (CNN) 200 may include an input layer 210 , a convolutional layer/pooling layer 220 (the pooling layer is optional), and a neural network layer 230 .
- the convolutional layer/pooling layer 220 shown in FIG. 2 may include layers 221 to 226 used as examples.
- the layer 221 is a convolutional layer
- the layer 222 is a pooling layer
- the layer 223 is a convolutional layer
- the layer 224 is a pooling layer
- the layer 225 is a convolutional layer
- the layer 226 is a pooling layer.
- the layers 221 and 222 are convolutional layers
- the layer 223 is a pooling layer
- the layers 224 and 225 are convolutional layers
- the layer 226 is a pooling layer.
- an output of a convolutional layer may be used as an input of its subsequent pooling layer, or may be used as an input of another convolutional layer, to continue performing a convolution operation.
- the following uses the convolutional layer 221 as an example to describe an internal working principle of one convolutional layer.
- the convolutional layer 221 may include many convolution operators.
- the convolution operator is also referred to as a kernel, and a function of the convolution operator in image processing is equivalent to a filter that extracts specific information from an input image matrix.
- the convolution operator may be a weight matrix in essence, and the weight matrix is usually predefined. In a process of performing a convolution operation on an image, the weight matrix usually slides over the input image one pixel at a time (or two pixels at a time, and so on, depending on a value of a stride) in a horizontal direction, so as to complete extraction of a specific feature from the image. A size of the weight matrix needs to be related to a size of the image.
- a depth dimension (depth dimension) of the weight matrix is the same as that of the input image.
- the weight matrix is extended to an entire depth of the input image. Therefore, convolution with a single weight matrix generates a convolutional output of a single depth dimension.
- in most cases, a single weight matrix is not used; instead, a plurality of weight matrices of a same size (row×column), that is, a plurality of matrices of the same type, are applied.
- outputs of the weight matrices are stacked to form a depth dimension of a convolution image, where the dimension herein may be understood as being determined by the "plurality of" described above.
- Different weight matrices may be used to extract different features in an image. For example, one weight matrix is used to extract edge information of the image, one weight matrix is used to extract a specific color of the image, and another weight matrix is used to obscure unwanted noise in the image, etc.
- the plurality of weight matrices are equal in size (row×column), so feature maps extracted by the plurality of weight matrices of the same size are also equal in size, and the plurality of extracted feature maps of the same size are concatenated to form an output of the convolution computation.
- weight values in these weight matrices need to be obtained through a large amount of training in actual application.
- each weight matrix formed by the weight values obtained through training may be used to extract information from the input image, so that the convolutional neural network 200 makes a correct prediction.
- an initial convolutional layer usually extracts a relatively large quantity of common features, where the common features may also be referred to as low-level features.
- a feature extracted by a deeper convolutional layer is more complex, for example, a feature such as high-level semantics.
- a higher-level semantic feature is more suitable for resolving problems.
- a quantity of training parameters usually needs to be reduced, and therefore a pooling layer usually needs to be periodically used after a convolutional layer.
- one convolutional layer may be followed by one pooling layer, or a plurality of convolutional layers may be followed by one or more pooling layers.
- during image processing, the only function of a pooling layer is to reduce a spatial size of an image.
- the pooling layer may include a mean pooling operator and/or a maximum pooling operator for performing sampling on an input image, to obtain an image of a smaller size.
- the mean pooling operator may be used to compute a pixel value in an image within a specific range, to obtain a mean value as a mean pooling result.
- the maximum pooling operator may be used to select, within a specific range, a maximum pixel in the range as a maximum pooling result.
- a size of a weight matrix in a convolutional layer needs to be related to an image size, and similarly, an operator in a pooling layer also needs to be related to the image size.
- a size of an image that is output after being processed by a pooling layer may be less than a size of the image that is input to the pooling layer.
- Each pixel in the image that is output from the pooling layer indicates a mean value or a maximum value of a corresponding sub-region of the image that is input to the pooling layer.
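- the two pooling operators can be demonstrated in a couple of lines (the 4×4 input below is made up for the example):

```python
import torch
import torch.nn.functional as F

img = torch.arange(16.0).reshape(1, 1, 4, 4)   # a 4x4 single-channel "image"
print(F.avg_pool2d(img, kernel_size=2))        # mean pooling: each output = mean of a 2x2 region
print(F.max_pool2d(img, kernel_size=2))        # max pooling: each output = max of a 2x2 region
```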
- the convolutional neural network 200 After processing is performed by the convolutional layer/pooling layer 220 , the convolutional neural network 200 is insufficient to output required output information. This is because, as described above, the convolutional layer/pooling layer 220 can only perform feature extraction and reduce a quantity of parameters caused by the input image. However, to generate final output information (required class information or other related information), the convolutional neural network 200 needs to use the neural network layer 230 to generate one or one group of outputs whose quantity is equal to a quantity of required classes. Therefore, the neural network layer 230 may include a plurality of hidden layers ( 231 , 232 , . . . 23 n shown in FIG. 2 ) and an output layer 240 . Parameters included in the plurality of hidden layers may be obtained through pre-training based on related training data of specific task types. For example, the task types may include image recognition, image classification, and image super-resolution reconstruction.
- the plurality of hidden layers in the neural network layer 230 is followed by the output layer 240 , that is, a last layer in the entire convolutional neural network 200 .
- the output layer 240 has a loss function similar to a classification cross entropy, which is specifically used to compute a prediction error.
- the convolutional neural network 200 shown in FIG. 2 is merely used as an example of the convolutional neural network.
- the convolutional neural network may alternatively exist in a form of another network model.
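- as a concrete, purely illustrative rendering of the FIG. 2 layout, the stack below follows the convolutional/pooling stages 221 to 226, the hidden layers 231 to 23n, and the output layer 240; every channel count, the 32×32 input size, and the 10-class output are assumptions for the sketch:

```python
import torch.nn as nn

cnn_200 = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # 221 convolutional layer
    nn.MaxPool2d(2),                                          # 222 pooling layer
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # 223 convolutional layer
    nn.MaxPool2d(2),                                          # 224 pooling layer
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # 225 convolutional layer
    nn.MaxPool2d(2),                                          # 226 pooling layer
    nn.Flatten(),                                             # into the neural network layer 230
    nn.Linear(64 * 4 * 4, 128), nn.ReLU(),                    # hidden layers 231..23n
    nn.Linear(128, 10),                                       # output layer 240 (10 classes)
)
```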
- FIG. 3 shows a hardware structure of a chip according to an embodiment of this application.
- the chip includes a neural network processing unit 50 .
- the chip may be disposed in the execution device 110 shown in FIG. 1 , and is configured to complete computation work of the computation module 111 .
- the chip may be disposed in the training device 120 shown in FIG. 1 , and is configured to complete training work of the training device 120 and output the target model/rule 101 .
- An algorithm for each layer in the convolutional neural network shown in FIG. 2 may be implemented by the chip shown in FIG. 3 .
- the neural network processing unit (NPU) 50 serves as a coprocessor mounted on a host CPU (host CPU), and the host CPU allocates tasks.
- a core part of the NPU is an operation circuit 503, and a controller 504 controls the operation circuit 503 to extract data from a memory (a weight memory or an input memory) and perform an operation.
- the operation circuit 503 includes a plurality of processing engines (process engine, PE).
- the operation circuit 503 is a two-dimensional systolic array.
- the operation circuit 503 may be a one-dimensional systolic array, or another electronic circuit that can perform mathematical operations such as multiplication and addition.
- the operation circuit 503 is a general-purpose matrix processor.
- for example, it is assumed that there is an input matrix A, a weight matrix B, and an output matrix C. The operation circuit extracts corresponding data of the matrix B from the weight memory 502, and buffers the corresponding data into each PE in the operation circuit.
- the operation circuit extracts data of the matrix A from the input memory 501, performs a matrix operation between the matrix A and the matrix B to obtain a partial matrix result or a final matrix result, and stores the result into an accumulator (accumulator) 508.
- a vector computation unit 507 may perform further processing on the output of the operation circuit, for example, perform vector multiplication, vector addition, an exponential operation, a logarithmic operation, and value comparison.
- the vector computation unit 507 may be used for non-convolutional layer/non-FC layer network computation in a neural network, such as pooling (pooling), batch normalization (batch normalization), and local response normalization (local response normalization).
- the vector computation unit 507 can store a processed output vector into a unified buffer 506 .
- the vector computation unit 507 may apply a nonlinear function to the output of the operation circuit 503 , for example, a vector of an accumulated value, to generate an activation value.
- the vector computation unit 507 generates a normalized value, a combined value, or a normalized value and a combined value.
- the processed output vector can be used as an activation input to the operation circuit 503 , for example, used by a subsequent layer in the neural network.
- the unified memory 506 is configured to store input data and output data.
- Input data in an external memory is migrated to the input memory 501 and/or the unified memory 506 directly by using a direct memory access controller (direct memory access controller, DMAC) 505 , weight data in the external memory is stored into the weight memory 502 , and data in the unified memory 506 is stored into the external memory.
- a bus interface unit (bus interface unit, BIU) 510 is configured to implement interaction between the host CPU, the DMAC, and an instruction fetch buffer 509 through a bus.
- the instruction fetch buffer (instruction fetch buffer) 509 connected to the controller 504 is configured to store an instruction to be used by the controller 504.
- the controller 504 is configured to invoke the instruction buffered in the instruction fetch buffer 509 , to control a working process of an operation accelerator.
- the unified memory 506 , the input memory 501 , the weight memory 502 , and the instruction fetch buffer 509 are all on-chip (On-Chip) memories, and the external memory is a memory outside the NPU.
- the external memory may be a double data rate synchronous dynamic random access memory (double data rate synchronous dynamic random access memory, DDR SDRAM for short), a high bandwidth memory (high bandwidth memory, HBM), or another readable and writable memory.
- An operation for each layer in the convolutional neural network shown in FIG. 2 may be performed by the operation circuit 503 or the vector computation unit 507.
- the execution device 110 in FIG. 1 described above can perform the steps of the image denoising method in the embodiments of this application, and the CNN model shown in FIG. 2 and the chip shown in FIG. 3 may also be configured to perform the steps of the image denoising method in the embodiments of this application.
- FIG. 4 is a schematic flowchart of an image denoising method according to an embodiment of this application.
- the method shown in FIG. 4 may be performed by an image denoising apparatus.
- the image denoising apparatus herein may be an electronic device having an image processing function.
- the electronic device may specifically be a mobile terminal (for example, a smartphone), a computer, a personal digital assistant, a wearable device, a vehicle-mounted device, an Internet-of-Things device, or another device capable of performing image processing.
- the method shown in FIG. 4 includes steps 101 to 103 (step 101: obtain K images based on a to-be-processed image; step 102: obtain an image feature of the to-be-processed image based on the K images; step 103: perform denoising processing on the to-be-processed image based on the image feature of the to-be-processed image to obtain a denoised image). The following separately details these steps.
- the K images are images obtained by reducing a resolution of the to-be-processed image, or the K images are images obtained by performing resolution reduction processing on the to-be-processed image, where K is a positive integer.
- the resolution reduction processing may specifically be a shuffle operation or a downsampling operation.
- the K images and the to-be-processed image may be numbered. Specifically, the K images may be numbered as a first image to a K th image, and the to-be-processed image may be numbered as a (K+1) th image. The K images and the to-be-processed image compose (K+1) images.
- image resolutions of the first image to the (K+1) th image are in ascending order (in the (K+1) images, a larger image number indicates a higher resolution), where the first image is an image with a lowest resolution among the (K+1) images, and the (K+1) th image is an image with a highest resolution among the (K+1) images.
- the (K+1) images include an i th image and an (i+1) th image
- a resolution of the i th image is lower than that of the (i+1) th image, where i is a positive integer less than or equal to K.
- the K images are the images obtained by reducing the resolution of the to-be-processed image
- the (K+1) images include the K images and the to-be-processed image.
- an image feature of the (i+1) th image in the (K+1) images is extracted based on an image feature of the i th image in the (K+1) images.
- the obtaining an image feature of the to-be-processed image based on the K images includes: step 1: obtaining the image feature of the i th image; and step 2: extracting the image feature of the (i+1) th image based on the image feature of the i th image.
- step 1 and step 2 are repeated to obtain the image feature of the (K+1) th image, that is, the image feature of the to-be-processed image.
- an image feature of the first image may be first obtained by performing convolution processing on the first image, and then an image feature of a second image may be extracted based on the image feature of the first image.
- a similar process is continually performed until the image feature of the (K+1) th image is extracted based on an image feature of the K th image.
- an image feature of a lower-resolution image is used to provide guidance on extraction of an image feature of a higher-resolution image, and therefore global information of the to-be-processed image can be sensed as much as possible in the process of extracting the image feature of the higher-resolution image. In this way, the image feature extracted for the higher-resolution image is more accurate.
- after an image feature of a higher-resolution image is extracted, the image feature of the lower-resolution image that guided the extraction may not be stored any longer. This can reduce storage overheads to some extent.
- for example, after the image feature of the second image is extracted, the image feature of the first image may not be stored any longer, and only the image feature of the second image needs to be stored.
- likewise, after the image feature of the third image is extracted, the image feature of the second image may not be stored any longer, and only the image feature of the third image needs to be stored.
- a similar process is performed until the image feature of the to-be-processed image is obtained. In other words, at any moment in the process of obtaining the image feature of the to-be-processed image based on the K images, only the image feature of the image currently with the highest processed resolution needs to be stored. This can reduce storage overheads to some extent.
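- a minimal sketch of this coarse-to-fine loop (steps 1 and 2 above); `first_extractor` and `extractors` are hypothetical callables standing in for the sub-networks, and only the feature of the current scale is kept alive, matching the storage note above:

```python
def extract_pyramid_feature(images, first_extractor, extractors):
    """images: list of K+1 tensors ordered from the first (lowest-resolution)
    image to the (K+1)-th image, i.e. the to-be-processed image."""
    feature = first_extractor(images[0])          # image feature of the first image
    for i, extractor in enumerate(extractors):    # one extractor per scale 2..K+1
        # the lower-resolution feature guides extraction at this scale; the old
        # feature is no longer referenced afterwards and can be freed
        feature = extractor(images[i + 1], feature)
    return feature                                # image feature of the to-be-processed image
```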
- the performing denoising processing on the to-be-processed image based on the image feature of the to-be-processed image to obtain a denoised image includes: performing convolution processing on the image feature of the to-be-processed image to obtain a residual estimated value of the to-be-processed image; and superimposing the residual estimated value of the to-be-processed image on the to-be-processed image to obtain the denoised image.
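- the residual step can be sketched as a small module (a sketch under assumed channel counts; the 3×3 kernel is an illustrative choice):

```python
import torch.nn as nn

class ResidualHead(nn.Module):
    def __init__(self, feat_channels=64, img_channels=3):
        super().__init__()
        # convolution that projects the final image feature back to image channels
        self.to_residual = nn.Conv2d(feat_channels, img_channels, kernel_size=3, padding=1)

    def forward(self, noisy_image, image_feature):
        residual = self.to_residual(image_feature)   # residual estimated value
        return noisy_image + residual                # superimpose to obtain the denoised image
```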
- an image feature of a low-resolution image is used to provide guidance on extraction of an image feature of a high-resolution image, and therefore global information of the to-be-processed image can be sensed as much as possible in a process of extracting the image feature of the high-resolution image.
- the extracted image feature of the high-resolution image is more accurate, so that a better denoising effect is achieved during image denoising performed based on the image feature of the to-be-processed image.
- the image feature of the first image needs to be first obtained, the image feature of the second image is obtained based on the image feature of the first image, the image feature of the third image is obtained based on the image feature of the second image, and so on.
- This process is equivalent to obtaining the image features of the first image to the (K+1) th image (the to-be-processed image).
- the following uses the (i+1) th image as an example to detail a process of obtaining the image feature of the (i+1) th image.
- the obtaining the image feature of the (i+1) th image includes: extracting the image feature of the (i+1) th image based on the image feature of the i th image.
- the extracting the image feature of the (i+1) th image based on the image feature of the i th image includes: performing convolution processing on the (i+1) th image by using a first convolutional layer to an n th convolutional layer in N convolutional layers, to obtain an initial image feature of the (i+1) th image; fusing the initial image feature of the (i+1) th image with the image feature of the i th image to obtain a fused image feature; and performing convolution processing on the fused image feature by using an (n+1) th convolutional layer to an N th convolutional layer in the N convolutional layers, to obtain the image feature of the (i+1) th image.
- n and N are positive integers, n is less than or equal to N, and N is a total quantity of convolutional layers used when the image feature of the (i+1) th image is extracted.
- a smaller n means that the initial image feature of the (i+1) th image can be obtained sooner and fused with the image feature of the i th image sooner, so that the finally obtained image feature of the (i+1) th image is more accurate.
- for example, n is equal to 1.
- the obtained initial image feature of the (i+1) th image may be fused with the image feature of the i th image, so that the finally obtained image feature of the (i+1) th image is more accurate.
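- the per-scale extraction can be sketched as follows with n = 1 and N = 3; fusing by concatenation followed by a 1×1 convolution is one plausible choice (this application does not fix the fusion operator), and all channel counts are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleFeatureExtractor(nn.Module):
    def __init__(self, in_channels, feat_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, feat_channels, 3, padding=1)    # layers 1..n (n = 1)
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, 1)          # fuses the two features
        self.conv2 = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)  # layers n+1..N (N = 3)
        self.conv3 = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)

    def forward(self, image, prev_feature):
        initial = F.relu(self.conv1(image))                           # initial image feature
        prev = F.interpolate(prev_feature, size=initial.shape[-2:])   # match spatial resolutions
        fused = F.relu(self.fuse(torch.cat([initial, prev], dim=1)))  # fused image feature
        return F.relu(self.conv3(F.relu(self.conv2(fused))))          # feature of the (i+1)-th image
```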
- when resolution reduction processing is being performed on the to-be-processed image, a downsampling operation, a shuffle operation, or another manner may be used to obtain the K images.
- the obtaining K images based on a to-be-processed image includes: performing downsampling operations on the to-be-processed image for K times, to obtain the first image to the K th image.
- the resolution of the to-be-processed image can be reduced, and image content of the first image to the K th image can be reduced. This can reduce an amount of computation during feature extraction.
- specifically, one downsampling operation may first be performed on the to-be-processed image to obtain the K th image; the K th image is then duplicated, and a downsampling operation is performed on the duplicated K th image to obtain the (K−1) th image. A similar process is performed until the first image is obtained.
- a loss of image information occurs when a low-resolution image is obtained by performing a downsampling operation. Therefore, to reduce a loss of image content, a shuffle operation manner may further be used to obtain a higher-resolution image.
- the obtaining K images based on a to-be-processed image includes: performing shuffle operations on the to-be-processed image for K times, to obtain the first image to the K th image.
- the shuffle operations herein are equivalent to adjustments of the resolution and a channel quantity of the to-be-processed image, so as to obtain images whose resolutions and channel quantities are different from those of the original to-be-processed image.
- a resolution and a channel quantity of any image in the first image to the K th image are different from those of the to-be-processed image.
- the resolution of the i th image in the K images obtained by performing the shuffle operations is lower than that of the to-be-processed image, and a channel quantity of the i th image is determined based on the channel quantity of the to-be-processed image, the resolution of the i th image, and the resolution of the to-be-processed image.
- the channel quantity of the i th image may be determined based on the channel quantity of the to-be-processed image and a ratio of the resolution of the i th image to the resolution of the to-be-processed image.
- because the first image to the K th image are obtained by performing the shuffle operations, the image information can be retained when a low-resolution image is obtained based on the to-be-processed image, so that a relatively accurate image feature can be extracted during feature extraction.
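- as a concrete illustration (an assumption on our part; the claims do not mandate a specific operator), the shuffle operation behaves like the pixel-unshuffle rearrangement found in common deep-learning frameworks:

```python
import torch
import torch.nn as nn

# Halve the resolution while quadrupling the channel quantity, so that no
# pixel values are discarded when the lower-resolution image is produced.
unshuffle = nn.PixelUnshuffle(downscale_factor=2)

x = torch.randn(1, 3, 128, 128)  # to-be-processed image: resolution 128 x 128, 3 channels
y = unshuffle(x)                 # shape (1, 12, 64, 64): half resolution, 4x channels
assert y.shape == (1, 12, 64, 64)
assert y.numel() == x.numel()    # the image information is fully retained
```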
- a value of K and a resolution of each of the K images may be preset.
- the K images whose resolutions are the preset resolutions can be obtained by performing resolution reduction processing on the to-be-processed image.
- for example, if the resolution of the to-be-processed image is M×N, two images may be generated based on the to-be-processed image, with resolutions of M/2×N/2 and M/4×N/4.
- a ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is less than or equal to a ratio of the resolution of the to-be-processed image to the resolution of the i th image.
- assume that the channel quantity of the i th image is Ci, the channel quantity of the to-be-processed image is C, the resolution of the i th image is Mi×Ni, and the resolution of the to-be-processed image is M×N.
- in this case, the ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is Ci/C, and the ratio of the resolution of the to-be-processed image to the resolution of the i th image is (M×N)/(Mi×Ni).
- when the ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is equal to the ratio of the resolution of the to-be-processed image to the resolution of the i th image, the image content can remain unchanged during obtaining of the i th image based on the to-be-processed image, so that the extracted image feature of the i th image is more accurate (in comparison with a case in which image content is lost).
- for example, the specifications of the to-be-processed image are M×N×C (where the resolution is M×N and the channel quantity is C), and the specifications of the i th image may be M/2^i×N/2^i×4^i·C (where the resolution is M/2^i×N/2^i and the channel quantity is 4^i·C).
- in this case, the ratio of the resolution of the to-be-processed image to the resolution of the i th image is 4^i, and the ratio of the channel quantity of the i th image to the channel quantity of the to-be-processed image is also 4^i. Because the two ratios are exactly the same, the image content of the i th image keeps consistent with that of the to-be-processed image. This avoids a loss of the image information, making the extracted image feature relatively accurate.
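- as a quick arithmetic check (ours, not recited in the claims), these specifications preserve the data volume exactly:

```latex
\frac{M}{2^i}\cdot\frac{N}{2^i}\cdot 4^i C
  \;=\; \frac{M N}{4^i}\cdot 4^i C
  \;=\; M\,N\,C,
```

which equals the M×N×C data volume of the to-be-processed image, so no image information is lost.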
- an image resolution may alternatively vary by another multiple (for example, a multiple of 3 or 4). This is not limited herein.
- the obtaining the image features of the first image to the (K+1) th image includes: obtaining the image features of the first image to the (K+1) th image by using a neural network.
- the neural network may be a CNN, a DCNN, an RNN, or the like.
- the neural network may include a top sub-network, middle sub-networks, and a bottom sub-network.
- the image feature of the to-be-processed image can be obtained by using the neural network, and denoising processing can further be performed on the to-be-processed image based on the image feature of the to-be-processed image, to obtain the denoised image.
- the neural network herein may be a neural network obtained through training by using training data
- the training data herein may include an original image and a noise image that is obtained after noise is added to the original image.
- during training, the noise image is input to the neural network, and denoising processing is performed on the noise image to obtain an output image; the output image is compared with the original image; the neural network parameter in effect when the difference between the output image and the original image falls below a preset threshold is determined as the final parameter of the neural network; and the trained neural network may then be used to perform the image denoising method in this embodiment of this application.
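- a minimal training-loop sketch under these assumptions (the synthetic noise model and the MSE choice of "difference" are ours; the patent does not fix them):

```python
import torch
import torch.nn as nn

def train(net: nn.Module, clean_images, epochs: int = 10, threshold: float = 1e-3):
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # one common measure of the output/original difference
    for _ in range(epochs):
        for clean in clean_images:
            noisy = clean + 0.1 * torch.randn_like(clean)  # noise image = original + noise
            output = net(noisy)                            # denoise the noise image
            loss = loss_fn(output, clean)                  # compare output with original
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < threshold:  # difference below the preset threshold:
                return net               # keep the current parameters as final
    return net
```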
- the obtaining the image features of the first image to the (K+1) th image by using a neural network includes: obtaining the image feature of the first image by using the top sub-network; obtaining the image features of the second image to the K th image by using the middle sub-network; and obtaining the image feature of the (K+1) th image by using the bottom sub-network.
- the top sub-network may be denoted as f1(·), and the top sub-network is used to process the first image, to obtain the image feature of the first image.
- each middle sub-network is used to process a corresponding image to obtain an image feature of the image.
- the top sub-network needs to extract only the image feature of the first image, whereas the middle sub-networks and the bottom sub-network need not only to extract the image features of their respective images but also to fuse a lower-resolution image feature with a higher-resolution image feature, to finally obtain the image features of those images.
- a neural network in FIG. 5 includes one top sub-network, one bottom sub-network, and two middle sub-networks. Structures or compositions of these sub-networks are as follows:
- each convolutional activation layer includes a convolutional layer whose channel quantity is C, followed by an activation layer. There are two middle sub-networks, and their structures are the same.
- the top sub-network, the middle sub-networks, and the bottom sub-network may each be a relatively complete neural network including an input layer, an output layer, and the like.
- a specific structure of each of the top sub-network, the middle sub-networks, and the bottom sub-network may be similar to that of the convolutional neural network (CNN) 200 in FIG. 2 .
- the residual network may be considered as a special deep neural network.
- the residual network may be as follows: A plurality of hidden layers in the deep neural network are connected to each other layer by layer. For example, a first hidden layer is connected to a second hidden layer, the second hidden layer is connected to a third hidden layer, and the third hidden layer is connected to a fourth hidden layer (this is a data operation path of the neural network, and may also be vividly referred to as neural network transmission).
- the residual network also includes a direct-connect branch.
- the direct-connect branch directly connects the first hidden layer to the fourth hidden layer. To be specific, data of the first hidden layer is directly transmitted to the fourth hidden layer for computation, without being processed by the second hidden layer and the third hidden layer.
- the bottom sub-network processes image features that come from the middle sub-networks.
- denoising processing may be performed on the to-be-processed image based on the image feature of the fourth image to obtain a denoised image.
- an image feature of a lower-resolution image may be considered at an initial stage of extracting an image feature of a higher-resolution image. In this way, a more accurate image feature is extracted for a high-resolution image.
- for example, after the bottom sub-network obtains the initial image feature of the fourth image, if the initial image feature corresponds to four feature maps and the image feature of the third image also corresponds to four feature maps, the concatenated image feature obtained by concatenating the two corresponds to eight feature maps.
- when the concatenated image feature is processed to obtain the image feature of the fourth image, the quantity of feature maps therefore needs to be adjusted, so that the image feature of the fourth image also corresponds to four feature maps.
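- a minimal sketch of this concatenation and channel adjustment (the 3×3 kernel and the feature sizes are assumptions):

```python
import torch
import torch.nn as nn

initial_feat_4th = torch.randn(1, 4, 64, 64)  # initial feature of the fourth image
feat_3rd = torch.randn(1, 4, 64, 64)          # image feature of the third image

concatenated = torch.cat([initial_feat_4th, feat_3rd], dim=1)  # 8 feature maps
adjust = nn.Conv2d(in_channels=8, out_channels=4, kernel_size=3, padding=1)
feat_4th = adjust(concatenated)               # adjusted back to 4 feature maps
assert feat_4th.shape == (1, 4, 64, 64)
```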
- convolution processing may be first performed on the image feature of the to-be-processed image to obtain a residual estimated value of the to-be-processed image, and then the residual estimated value of the to-be-processed image may be superimposed on the to-be-processed image to obtain the denoised image.
- the following describes a process of obtaining the denoised image based on the residual estimated value of the to-be-processed image.
- the bottom sub-network includes four convolutional activation layers and one convolutional layer ( FIG. 5 shows a convolutional layer whose channel quantity is 1).
- a specific process of obtaining the denoised image by the bottom sub-network is as follows:
- the fourth image is the to-be-processed image described above; the image feature of the to-be-processed image can be obtained according to the procedures (1) to (3); the residual estimated value of the to-be-processed image can be obtained according to the procedure (4); and the denoised image can be obtained according to the procedure (5).
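- in shorthand, procedures (4) and (5) amount to the following (a sketch assuming a grayscale image and a single final convolutional layer with channel quantity 1):

```python
import torch
import torch.nn as nn

to_residual = nn.Conv2d(64, 1, 3, padding=1)  # final convolutional layer, 1 channel

x = torch.randn(1, 1, 64, 64)       # to-be-processed image
feat = torch.randn(1, 64, 64, 64)   # image feature from procedures (1) to (3)
residual = to_residual(feat)        # procedure (4): residual estimated value
denoised = x + residual             # procedure (5): superimpose on the input
```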
- the residual networks in the top sub-network and the middle sub-networks in FIG. 5 may be residual networks in which g (where g may be an integer greater than or equal to 1) convolutional layers are skipped, while no residual network is used in the bottom sub-network.
- residual networks with skip connections are used in the top sub-network and the middle sub-networks, so that denoising can be better implemented during convolution processing (which means that denoising is performed in the convolution process).
- the residual network includes three convolutional layers (CONV-C, where CONV-C represents a convolutional layer whose channel quantity is C) and two activation function layers (ReLU).
- the following describes a main workflow of the residual network in FIG. 6 .
- assume that an input image feature is a first intermediate image feature.
- the first intermediate image feature may be an image feature that is obtained after the top sub-network or the middle sub-networks in FIG. 5 perform convolution processing on the fused image feature
- on the direct-connect branch, the first intermediate image feature skips the three convolutional layers and the two activation function layers in FIG. 6 (which means that the first intermediate image feature is not processed), so the output of this branch is still the first intermediate image feature.
- the first intermediate image feature is input to the three convolutional layers and the two activation function layers in FIG. 6 for processing, to obtain a residual estimated value of a first intermediate image.
- the first intermediate image feature and the residual estimated value of the first intermediate image that are obtained through the two processing branches are superimposed, to obtain a second intermediate image feature.
- convolution processing may continue to be performed on the second intermediate image feature in the top sub-network or the middle sub-networks in FIG. 5 , to obtain an image feature of an image corresponding to a corresponding sub-network.
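- a minimal sketch of the FIG. 6 residual block just described (assuming 3×3 CONV-C layers with padding so that feature sizes are preserved):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """FIG. 6-style block: three CONV-C layers with two ReLU layers between
    them, plus a direct-connect branch that skips all of them."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # x: first intermediate image feature; self.body(x): its residual estimate.
        return x + self.body(x)  # superimposed -> second intermediate image feature
```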
- FIG. 7 is a schematic block diagram of an image denoising apparatus according to an embodiment of this application.
- the image denoising apparatus 600 shown in FIG. 7 includes an obtaining module 601 and a denoising module 602.
- an image feature of a low-resolution image is used to provide guidance on extraction of an image feature of a high-resolution image, and therefore global information of the to-be-processed image can be sensed as much as possible in a process of extracting the image feature of the high-resolution image.
- the extracted image feature of the high-resolution image is more accurate, so that a better denoising effect is achieved during image denoising performed based on the image feature of the to-be-processed image.
- the obtaining module 601 and the denoising module 602 in the image denoising apparatus 600 are modules obtained through division based on logical functions. Actually, the image denoising apparatus 600 may alternatively be divided into other functional modules based on a specific processing process of performing image denoising by the image denoising apparatus 600 .
- FIG. 8 is a schematic block diagram of an image denoising apparatus according to an embodiment of this application.
- the image denoising apparatus 700 includes a shuffle module 701 (which is configured to perform shuffle operations to obtain images with different resolutions), an image feature extraction module 702 a, several image feature extraction and fusion modules 702 b, and a feature application module 703 .
- the shuffle module 701 , the image feature extraction module 702 a, and the several image feature extraction and fusion modules 702 b in the image denoising apparatus 700 are equivalent to the obtaining module 601 in the image denoising apparatus 600
- the feature application module 703 is equivalent to the denoising module 602 in the image denoising apparatus 600 .
- the image denoising apparatus 700 obtains images with different resolutions by performing shuffle operations, and therefore the image denoising apparatus 700 includes the shuffle module 701 . However, if the image denoising apparatus 700 obtains images with different resolutions by performing downsampling operations, the image denoising apparatus 700 may include a downsampling module 701 .
- the image denoising apparatus 700 performs shuffle operations on an input image to obtain the images with different resolutions, extracts an image feature of a low-resolution image, and transfers the extracted image feature of the low-resolution image layer by layer, so as to provide guidance on extraction of a feature of a high-resolution image.
- an input-image bootstrapping method is used to fully utilize a large amount of context information and implement efficient multilayer image information concatenation. This can achieve a better denoising effect.
- the following describes functions of the modules in the image denoising apparatus in FIG. 8 in the whole denoising process.
- the shuffle module performs shuffle operations to reconstruct the structure of the input image, so as to obtain several tensors with different resolutions and different channel quantities.
- for example, an input image with a size of M×N and a channel quantity of C may be shuffled into an image with a size of M/2×N/2 and a channel quantity of 4C.
- feature extraction module: for the lowest-resolution image, only an image feature of the image needs to be extracted, and feature fusion does not need to be performed. Therefore, the feature extraction module may be configured to extract the image feature of the lowest-resolution image.
- the feature extraction and fusion module is configured to concatenate, by using a concatenation operation, an image feature extracted by the previous-layer network (which processes a lower-resolution image) with an initial feature extracted by the current-layer network for a higher-resolution image, to finally obtain an image feature of the current-layer image.
- the feature application module applies the finally obtained feature to the input image, to obtain a final output.
- the finally obtained output image is an image obtained after denoising processing is performed on the input image.
- FIG. 9 is a schematic diagram of a process of performing image denoising by an image denoising apparatus according to an embodiment of this application.
- the shuffle module performs the shuffle operations on the input image, to obtain a first image to a (K+1) th image.
- a resolution of the first image is the lowest, a resolution of the (K+1) th image is the highest, and the (K+1) th image is the input image.
- a module corresponding to each image may be used to process the image, to obtain an image feature of the image.
- a feature extraction module corresponding to the first image is configured to extract an image feature of the first image
- a feature extraction and fusion module corresponding to a second image is configured to extract an image feature of the second image based on the image feature of the first image
- a feature extraction and fusion module corresponding to a K th image is configured to extract an image feature of the K th image based on the image feature of the (K ⁇ 1) th image
- a feature extraction and fusion module corresponding to a (K+1) th image is configured to extract an image feature of the (K+1) th image based on the image feature of the K th image.
- the feature application module may be configured to: obtain a residual estimated value of the (K+1) th image, and superimpose the residual estimated value on the input image to obtain an output image.
- the output image is a denoised image.
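- putting the modules together, the following is a compact end-to-end sketch for K = 2 (three images in total); the single-layer sub-networks, the pixel shuffle/unshuffle operators, and the feature width are our assumptions, and the patent's sub-networks are deeper:

```python
import torch
import torch.nn as nn

def conv_relu(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class DenoisePipeline(nn.Module):
    """Sketch of the FIG. 9 flow: shuffle the input into lower-resolution
    images, extract features from low to high resolution with guidance from
    the previous level, then apply the residual estimate to the input."""
    def __init__(self, c: int = 3, feat: int = 64):
        super().__init__()
        self.unshuffle2 = nn.PixelUnshuffle(2)     # second image: M/2 x N/2, 4C
        self.unshuffle4 = nn.PixelUnshuffle(4)     # first image:  M/4 x N/4, 16C
        self.upshuffle = nn.PixelShuffle(2)        # carries a feature up one level
        self.top = conv_relu(16 * c, feat)         # top sub-network (first image)
        self.mid_init = conv_relu(4 * c, feat)     # middle sub-network, initial feature
        self.mid_fuse = conv_relu(feat + feat // 4, feat)
        self.bot_init = conv_relu(c, feat)         # bottom sub-network, initial feature
        self.bot_fuse = conv_relu(feat + feat // 4, feat)
        self.residual = nn.Conv2d(feat, c, 3, padding=1)

    def forward(self, x):
        f1 = self.top(self.unshuffle4(x))                           # first image feature
        f2 = self.mid_fuse(torch.cat([self.mid_init(self.unshuffle2(x)),
                                      self.upshuffle(f1)], dim=1))  # guided by f1
        f3 = self.bot_fuse(torch.cat([self.bot_init(x),
                                      self.upshuffle(f2)], dim=1))  # guided by f2
        return x + self.residual(f3)                                # residual superposition

out = DenoisePipeline()(torch.randn(1, 3, 64, 64))
assert out.shape == (1, 3, 64, 64)  # output image = denoised input
```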
- FIG. 10 is a schematic diagram of a hardware structure of a neural network training apparatus according to an embodiment of this application.
- the neural network training apparatus 800 (the apparatus 800 may specifically be a computer device) shown in FIG. 10 includes a memory 801, a processor 802, a communications interface 803, and a bus 804.
- the memory 801 , the processor 802 , and the communications interface 803 implement mutual communication connections through the bus 804 .
- the memory 801 may be a read only memory (read only memory, ROM), a static storage device, a dynamic storage device, or a random access memory (random access memory, RAM).
- the memory 801 may store a program, and when the program stored in the memory 801 is being executed by the processor 802 , the processor 802 and the communications interface 803 are configured to perform the steps in a neural network training method in the embodiments of this application.
- the processor 802 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (graphics processing unit, GPU), or one or more integrated circuits, and is configured to execute a related program to implement functions that need to be performed by units in the neural network training apparatus in this embodiment of this application, or to perform the neural network training method in the method embodiment of this application.
- the processor 802 may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps of the neural network training method in this application may be completed by using an integrated logic circuit of hardware in the processor 802 or an instruction in a software form.
- the foregoing processor 802 may be a general purpose processor, a digital signal processor (digital signal processing, DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logical device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or perform the methods, the steps, and logical block diagrams that are disclosed in the embodiments of this application.
- the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Steps of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in the decoding processor.
- a software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 801 .
- the processor 802 reads information from the memory 801 , and in combination with the hardware of the processor 802 , implements the functions that need to be performed by the units included in the neural network training apparatus in this embodiment of this application, or performs the neural network training method in the method embodiment of this application.
- the communications interface 803 implements communication between the apparatus 800 and another device or a communications network by using a transceiver apparatus, for example, but not limited to, a transceiver.
- the communications interface 803 may obtain training data (for example, an original image and a noise image that is obtained after noise is added to the original image in the embodiments of this application).
- the bus 804 may include a path for transmitting information between the components (for example, the memory 801 , the processor 802 , and the communications interface 803 ) of the apparatus 800 .
- FIG. 11 is a schematic diagram of a hardware structure of an image denoising apparatus according to an embodiment of this application.
- the image denoising apparatus 900 (the apparatus 900 may specifically be a computer device) shown in FIG. 11 includes a memory 901, a processor 902, a communications interface 903, and a bus 904.
- the memory 901 , the processor 902 , and the communications interface 903 implement mutual communication connections through the bus 904 .
- the memory 901 may be a ROM, a static storage device, or a RAM.
- the memory 901 may store a program, and when the program stored in the memory 901 is being executed by the processor 902 , the processor 902 and the communications interface 903 are configured to perform the steps in the image denoising method in this embodiment of this application.
- the processor 902 may be configured to execute a related program by using a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, to implement functions that need to be performed by units in the image denoising apparatus in this embodiment of this application, or to perform the image denoising method in the method embodiment of this application.
- the processor 902 may alternatively be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps of the image denoising method in this application may be completed by using an integrated logic circuit of hardware in the processor 902 or an instruction in a software form.
- the foregoing processor 902 may alternatively be a general purpose processor, a DSP, an ASIC, an FPGA or another programmable logical device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or perform the methods, the steps, and logical block diagrams that are disclosed in the embodiments of this application.
- the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- Steps of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in the decoding processor.
- a software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory 901 .
- the processor 902 reads information from the memory 901 , and in combination with the hardware of the processor 902 , implements the functions that need to be performed by the units included in the image denoising apparatus in this embodiment of this application, or performs the image denoising method in the method embodiment of this application.
- the communications interface 903 implements communication between the apparatus 900 and another device or a communications network by using a transceiver apparatus, for example, but not limited to, a transceiver.
- the communications interface 903 may obtain training data.
- the bus 904 may include a path for transmitting information between the components (for example, the memory 901 , the processor 902 , and the communications interface 903 ) of the apparatus 900 .
- the obtaining module 601 and the denoising module 602 in the image denoising apparatus 600 are equivalent to the processor 902 .
- although the apparatuses 800 and 900 shown in FIG. 10 and FIG. 11 show merely the memory, the processor, and the communications interface, in a specific implementation process, a person skilled in the art should understand that the apparatuses 800 and 900 further include other components necessary for normal running. In addition, according to a specific requirement, a person skilled in the art should understand that the apparatuses 800 and 900 may further include hardware components for implementing other additional functions. Moreover, a person skilled in the art should understand that the apparatuses 800 and 900 may alternatively include only the components necessary to implement the embodiments of this application, rather than all the components shown in FIG. 10 or FIG. 11.
- the apparatus 800 is equivalent to the training device 120 in FIG. 1
- the apparatus 900 is equivalent to the execution device 110 in FIG. 1 .
- the disclosed system, apparatus, and method may be implemented in other manners.
- the described apparatus embodiment is merely an example.
- the unit division is merely logical function division and may be other division in actual implementation.
- a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces.
- the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
- the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments.
- when the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or some of the technical solutions, may be implemented in a form of a software product.
- the software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application.
- the foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.