CN113168673A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113168673A
CN113168673A (application number CN201980079484.4A)
Authority
CN
China
Prior art keywords
image
color
processing
type
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980079484.4A
Other languages
Chinese (zh)
Inventor
李蒙
胡慧
陈海
郑成林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN113168673A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Abstract

The present application provides an image processing method and apparatus, and an electronic device. In the technical solution of the present application, an image to be processed is obtained; the image to be processed is processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, where the first type of parameter is used for performing global color processing on the image; the image to be processed is processed through a second branch of the neural network model to obtain a second type of parameter, where the second type of parameter is used for performing local color processing on the image; and color processing is performed on the image to be processed according to the first type of parameter and the second type of parameter to obtain a color-processed image. In this technical solution, the input of the neural network model is the image to be processed and different branches output different types of parameters, so the problem of error accumulation in the parameter calculation process can be avoided and more accurate parameters can be obtained. Furthermore, because the image to be processed is processed according to the obtained parameters, the color correction effect of the image can be improved.

Description

Image processing method and device and electronic equipment

Technical Field
The present application relates to the field of image processing, and more particularly, to an image processing method, apparatus, and electronic device.
Background
Image signal processing (ISP) mainly performs post-processing on the image signal output by a front-end image sensor. With ISP processing, images captured under different optical conditions can better restore scene details.
The ISP processing flow is shown in fig. 1: a natural scene 101 passes through a lens (lens) 102 to produce a Bayer (bayer) image, photoelectric conversion 104 then yields an analog electrical signal 105, and noise reduction and analog-to-digital processing 106 yield a digital image signal (i.e., a raw image) 107, which then enters the digital signal processing chip 100. The steps performed in the digital signal processing chip 100 are the core steps of ISP processing; the chip generally includes Black Level Correction (BLC) 108, lens shading correction (lens shading correction) 109, Bad Pixel Correction (BPC) 110, demosaic (demosaic) 111, Bayer-domain noise reduction 112, Auto White Balance (AWB) 113, Ygamma 114, Auto Exposure (AE) 115, Auto Focus (AF) (not shown in fig. 1), Color Correction (CC) 116, gamma correction 117, color gamut conversion 118, denoising/detail enhancement 119, Color Enhancement (CE) 120, an encoder (coder) 121, input/output (I/O) 122, and other modules.
The color-related modules in ISP processing mainly include AWB 113, CC 116, and CE 120. The AWB and CC modules perform global color processing, while CE performs local color processing. The color-related modules in the ISP flow may be implemented by a neural network model, but because ISP processing is serial, i.e., the output of one module serves as the input of the next, there is a problem of error accumulation.
Disclosure of Invention
The application provides an image processing method and device, which can avoid the problem of error accumulation in a serial ISP image color processing flow and improve the image color processing effect.
In a first aspect, the present application provides an image processing method, including: acquiring an image to be processed; processing the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, wherein the first type of parameter is used for carrying out global color processing on the image; processing the image to be processed through a second branch of the neural network model to obtain a second type of parameters, wherein the second type of parameters are used for carrying out local color processing on the image; and carrying out color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain the image after color processing.
In this technical solution, the input of the neural network model is the image to be processed and different branches output different types of parameters, so the problem of error accumulation in the parameter calculation process can be avoided and more accurate parameters can be obtained. Furthermore, because the image to be processed is processed according to the obtained parameters, the color correction effect of the image can be improved.
In a possible implementation manner, the image to be processed is a raw image.
In this technical solution, the input of the neural network model is a raw image, so the information of the image is retained to the maximum extent, the obtained first type of parameters and second type of parameters are more accurate, and the image color correction effect is better.
In one possible implementation, the global color processing includes automatic white balancing and/or color correction, and the local color processing includes color rendering and/or color enhancement.
In this technical solution, the first type of parameters and the second type of parameters can correspond to conventional ISP modules, so that when the image color correction effect is not ideal, the first type of parameters and the second type of parameters can be adjusted based on conventional ISP debugging experience, solving the inherent problem that the subjective effect of a neural network is not adjustable.
In one possible implementation, the first branch and the second branch share a shared parameter layer of the neural network model.
It can be understood that, in the case that obtaining both the first type of parameter and the second type of parameter requires obtaining intermediate feature layer data, the layer of the first branch obtaining the intermediate feature layer data and the layer of the second branch obtaining the intermediate feature layer data may share structural parameters. For example, the image to be processed is processed through a shared parameter layer of the pre-trained neural network model to obtain intermediate feature layer data; processing the intermediate characteristic layer data through a first branch (except a shared parameter layer) of the pre-trained neural network model to obtain the first type of parameters; and processing the intermediate characteristic layer data through a second branch (except for a shared parameter layer) of the neural network model to obtain the second type of parameters.
In this technical solution, the second branch of the neural network model can directly use the shared parameter layer to obtain the intermediate feature layer data; that is, the first branch and the second branch can reuse part of the computation, which reduces computational complexity and storage-space occupation.
In one possible implementation, the first type of parameter is in a matrix form; the color processing of the image to be processed according to the first kind of parameters and the second kind of parameters comprises: performing matrix multiplication on the image to be processed and the first type of parameters to obtain a first image; and according to the second type of parameters, carrying out local color processing on the first image to obtain a second image.
In a possible implementation manner, the performing, according to the second type of parameter, local color processing on the first image includes: calculating a difference value between a numerical value of a color channel and a numerical value of a brightness channel of the first image; adjusting the difference value according to the second type of parameter; and adding the adjusted difference value to the value of the brightness channel of the first image.
In a possible implementation, the image format of the first image is a color RGB format, and the second type of parameters includes a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; the local color processing on the first image according to the second type of parameters includes: performing local color adjustment on the first image according to the formulas

R‴ = Y″ + beta1 × (R″ − Y″)
G‴ = Y″ + beta2 × (G″ − Y″)
B‴ = Y″ + beta3 × (B″ − Y″)

where Y″ is the value of the luminance channel of the first image, R‴, G‴, and B‴ are respectively the values of color channels R, G, and B of the second image, and R″, G″, and B″ are respectively the values of color channels R, G, and B of the first image.
In a possible implementation manner, before performing color processing on the image to be processed according to the first class parameter and the second class parameter, the method further includes: and performing demosaicing processing and denoising processing on the image to be processed.
In a second aspect, the present application provides an image processing apparatus comprising means for performing the first aspect or any one of the implementations of the first aspect.
In a third aspect, the present application provides an image processing apparatus, comprising a memory and a processor, configured to perform the method of the first aspect or any one of the implementation manners of the first aspect.
In a fourth aspect, the present application provides a chip, where the chip is connected to a memory, and is configured to read and execute a software program stored in the memory, so as to implement the method according to the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present application provides an electronic device, comprising a processor and a memory, configured to perform the method of the first aspect or any one of the implementation manners of the first aspect.
In a sixth aspect, the present application provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect or any one of the implementations of the first aspect.
In a seventh aspect, the present application provides a computer program product, which when run on an electronic device, causes the electronic device to perform the method of the first aspect or any one of the implementation manners of the first aspect.
Drawings
Fig. 1 is a schematic flow chart of ISP processing.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is a specific example of an image processing method according to an embodiment of the present application.
Fig. 4 is a network framework of a neural network model of an embodiment of the present application.
FIG. 5 is a network framework of a neural network model of another embodiment of the present application.
FIG. 6 is a network framework of a neural network model of another embodiment of the present application.
Fig. 7 is a schematic configuration diagram of an image processing apparatus provided in an embodiment of the present application.
Fig. 8 is a schematic configuration diagram of an image processing apparatus according to another embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The technical solution of the present application can be applied to any scenario requiring color processing of images, such as safe city, remote driving, and human-computer interaction scenarios that involve taking pictures, recording video, or displaying images.
It should be understood that the terms "system" and "network" are often used interchangeably in this application. The term "and/or" in this application describes only an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. Method 200 may be performed by any device having image processing capabilities. As shown in fig. 2, method 200 may include at least some of the following.
At 210, a to-be-processed image is acquired.
At 220, the image to be processed is processed through a first branch of the pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used for performing global color processing on the image.
At 230, the image to be processed is processed through a second branch of the neural network model to obtain a second type of parameters, and the second type of parameters are used for performing local color processing on the image.
At 240, color processing is performed on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
In the method 200, the inputs of the different branches of the neural network model are all the image to be processed, and different types of parameters are output for the same input; that is, the first branch and the second branch of the neural network model have the same input, so the first branch and the second branch process the image to be processed in parallel. Compared with a serial processing flow, this avoids the problem of error accumulation in the parameter calculation process, so more accurate parameters can be obtained. Furthermore, processing the image to be processed according to the obtained parameters improves the color correction effect of the image. In addition, the outputs of the neural network model are color processing parameters that can correspond to the parameters of conventional ISP processing modules; when the color processing effect is not ideal, these parameters can be fine-tuned based on conventional ISP debugging experience, further improving the color correction effect and solving the problem that the subjective effect of a neural network is otherwise not tunable.
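To make these four steps concrete, the following minimal Python/NumPy sketch is offered purely for illustration; the function names, the luminance weights, and the parameter shapes are assumptions of this sketch, not definitions from the patent:

```python
import numpy as np

def method_200(image, first_branch, second_branch):
    """Illustrative sketch of steps 210-240; first_branch and second_branch
    stand in for the two branches of the pre-trained neural network model."""
    # 210: acquire the image to be processed (here an H x W x 3 float array)
    x = image.astype(np.float32)

    # 220: first branch -> first-type parameters for global color processing
    M = first_branch(x)        # assumed here to be a 3 x 3 global color matrix

    # 230: second branch -> second-type parameters for local color processing
    beta = second_branch(x)    # assumed per-pixel coefficients, shape H x W x 3

    # 240: color processing with both parameter types
    first = x @ M.T                                  # global: matrix multiplication
    Y = first @ np.array([0.299, 0.587, 0.114])      # example luminance weights a, b, c
    second = Y[..., None] + beta * (first - Y[..., None])   # local adjustment
    return second
```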
The image to be processed in the embodiments of the present application may be captured image data, an image signal, or the like. In practical applications, the image to be processed may be acquired by an image acquisition device (e.g., a lens and a sensor) or received from another device; the embodiments of the present application are not particularly limited in this respect.
The image to be processed may be a raw image. Since a raw image is the raw data obtained by converting the captured light signal into a digital signal by a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, it is lossless and thus contains the original information of the scene. When the input of the neural network model is a raw image, the information of the image is retained to the maximum extent, so the obtained first type of parameters and second type of parameters can better reflect the information of the real scene, and the image color correction effect is better. Of course, the image to be processed may also be an image that has undergone image processing other than color processing, such as any one or combination of black level correction, lens shading correction, bad pixel correction, demosaicing, Bayer-domain noise reduction, auto exposure, auto focus, and the like.
The pre-trained neural network model of the embodiments of the present application may be stored on an image processing apparatus, for example, a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted terminal, or the like. It may also be stored on a server, in the cloud, or the like. The pre-trained neural network model may be a corresponding target model/rule generated based on different training data for different targets (or different tasks); the corresponding target model/rule can be used to achieve the target or complete the task, thereby providing the desired result for the user. For example, in the embodiments of the present application, it outputs the first type of parameters and the second type of parameters for image color processing.
Different branches of the neural network model can be understood as different processes of the neural network model. In an embodiment of the application, the first branch of the neural network model corresponds to generating a first type of parameters (i.e. parameters for global color processing of the image) and the second branch of the neural network model corresponds to generating a second type of parameters (i.e. parameters for local color processing of the image).
The first branch and the second branch may be two independent neural network models. That is, the neural network model of the embodiment of the present application is a set of a first neural network model corresponding to a first branch and a second neural network model corresponding to a second branch, and inputs of the first neural network model and the second neural network model are to-be-processed images; the first branch and the second branch may also be two parts of the same neural network model, and the embodiment of the present application is not particularly limited.
Alternatively, the first branch and the second branch may multiplex or share part of the processing. For example, the first branch and the second branch share a shared parameter layer of the neural network model. It can be understood that, in the case that obtaining both the first type of parameter and the second type of parameter requires obtaining intermediate feature layer data, the layer of the first branch obtaining the intermediate feature layer data and the layer of the second branch obtaining the intermediate feature layer data may share structural parameters. For example, the image to be processed is processed through a shared parameter layer of the pre-trained neural network model to obtain intermediate feature layer data; processing the intermediate characteristic layer data through a first branch (except a shared parameter layer) of the pre-trained neural network model to obtain the first type of parameters; and processing the intermediate characteristic layer data through a second branch (except for a shared parameter layer) of the neural network model to obtain the second type of parameters. In this way, the second branch of the neural network model can directly use the shared parameter layer to obtain the intermediate feature layer data, that is, the first branch and the second branch can multiplex part of the calculation process, so that the calculation complexity can be reduced, and the occupation of the storage space can be reduced.
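For illustration only, the shared parameter layer can be pictured as a common backbone feeding two heads. The following PyTorch sketch assumes channel counts and layer choices that the patent does not specify:

```python
import torch
import torch.nn as nn

class TwoBranchColorNet(nn.Module):
    """Illustrative sketch: a shared parameter layer feeding two branches."""
    def __init__(self):
        super().__init__()
        # shared parameter layer: produces the intermediate feature layer data once
        self.shared = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # first branch (excluding the shared layer): global pooling + FC -> 3 x 3 matrix
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 9),
        )
        # second branch (excluding the shared layer): per-pixel coefficients, 3 channels
        self.local_head = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        feats = self.shared(x)                        # intermediate feature layer data
        M = self.global_head(feats).view(-1, 3, 3)    # first-type parameters
        beta = self.local_head(feats)                 # second-type parameters
        return M, beta
```

Here the intermediate feature layer data feats is computed once and reused by both heads, which is precisely the saving in computational complexity and storage described above.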
The first type of parameters is used for global color processing of the image, where the global color processing includes at least one kind of global color processing. For example, the first type of parameters is used for automatic white balance processing and/or color correction processing of the image. Global color processing is processing of the entire image. Optionally, the first type of parameters may include M parameters corresponding to N kinds of global color processing, M and N being integers greater than or equal to 1. The relationship between the M parameters and the N kinds of global color processing may be one-to-one, one-to-many, or many-to-one; the embodiments of the present application are not particularly limited. Taking the first type of parameters being used for automatic white balance processing and/or color correction processing as an example, the first type of parameters may include a first parameter corresponding to automatic white balance processing and/or a second parameter corresponding to color correction processing; it may also include a first parameter corresponding to automatic white balance processing and a third parameter corresponding to color correction processing; it may also include a fourth parameter and a fifth parameter corresponding to automatic white balance processing and a sixth parameter corresponding to color correction processing, and so on. When the at least one kind of global color processing corresponds to a plurality of parameters, the execution order of the at least one kind of global color processing is not limited in the embodiments of the present application. Optionally, the first type of parameters may be in matrix form. Taking the first type of parameters being used for automatic white balance processing and color correction processing as an example, the first type of parameters then includes a matrix for automatic white balance processing and a matrix for color correction processing.
When the first type of parameter is in a matrix form, color processing is performed on the image to be processed according to the first type of parameter, which may be matrix multiplication between the first type of parameter and the image to be processed.
For example, automatic white balance processing may be performed on the image to be processed according to the following formula:

(R′, G′, B′)^T = diag(a, b, c) × (R, G, B)^T, i.e., R′ = a × R, G′ = b × G, B′ = c × B

where a, b, and c can be determined by the neural network model, R′, G′, and B′ are respectively the values of color channels R, G, and B of the image after automatic white balance processing, and R, G, B are respectively the values of color channels R, G, and B of the image before processing. In the embodiments of the present application, R′, G′, and B′ denote the values of color channels R, G, and B of an image after one or more kinds of global color processing, and R, G, B denote the values of color channels R, G, and B of the image before such processing; this convention is not explained again below.
For example, color correction may be performed on the image to be processed according to the following formula:

(R′, G′, B′)^T = M × (R, G, B)^T

where M is a 3 × 3 color correction matrix, R′, G′, and B′ are respectively the values of color channels R, G, and B of the image after color correction, and R, G, B are respectively the values of color channels R, G, and B of the image before color correction.
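As a numeric illustration of the two global formulas above (the gain values and matrix entries below are invented for the example, not taken from the patent):

```python
import numpy as np

rgb = np.array([0.4, 0.5, 0.3])        # R, G, B of one pixel before processing

# automatic white balance: diagonal gains a, b, c determined by the model
awb = np.diag([1.8, 1.0, 1.5])         # example gains only
rgb_awb = awb @ rgb                    # R' = a*R, G' = b*G, B' = c*B

# color correction: a full 3 x 3 matrix that mixes the channels
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])   # example matrix; rows typically sum to ~1
rgb_cc = ccm @ rgb_awb
```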
The embodiments of the present application may also perform color correction using quadratic terms, cubic terms, square-root terms, and the like. For example, the vector of linear and quadratic terms is

ρ_{2,3} = (R, G, B, R², G², B², RG, GB, RB)^T

and analogous vectors can be constructed with cubic terms or square-root terms. Here R, G, B are respectively the values of color channels R, G, and B of the image before color correction, and T denotes the transpose. Corresponding to these forms, the matrix used for color correction has a different size. Taking the color correction matrix with quadratic terms as an example, color correction of the image to be processed can be performed according to

(R′, G′, B′)^T = M × ρ_{2,3}

where the matrix M used for color correction may be a 3 × 10 matrix, R′, G′, and B′ are respectively the values of color channels R, G, and B of the image after color correction, and R, G, B are respectively the values of color channels R, G, and B of the image before color correction.
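A sketch of the quadratic variant follows. The description lists nine quadratic terms but states a 3 × 10 matrix, so this sketch assumes an appended constant term to reconcile the shapes; the matrix values are placeholders:

```python
import numpy as np

def rho_23(rgb):
    """Build the quadratic-term vector; the trailing 1.0 is an assumption made
    here so that the stated 3 x 10 matrix shape works out."""
    r, g, b = rgb
    return np.array([r, g, b, r*r, g*g, b*b, r*g, g*b, r*b, 1.0])

M = np.eye(3, 10)                        # placeholder 3 x 10 correction matrix
rgb_corrected = M @ rho_23(np.array([0.4, 0.5, 0.3]))
```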
It is to be understood that the at least one global color process described above may also correspond to only one matrix, and the embodiment of the present application is not particularly limited.
The second type of parameter is used for performing local color processing on the image, wherein the local color processing comprises at least one type of local color processing. Local color processing is the processing of a part or portion of an image. For example, the second type of parameter is used to color enhance or color render a portion of the image. Similarly, the relationship between the second type of parameter and the at least one local color processing may be one-to-one, one-to-many, or many-to-one, and the embodiment of the present application is not particularly limited. When the at least one local color processing corresponds to a plurality of parameters, the execution order of the at least one local color processing is not limited in the embodiment of the present application. Alternatively, the second type of parameters may be filter function parameters, color adjustment coefficients, and the like.
The sequence of executing the global color processing and the local color processing is not particularly limited in the embodiment of the present application. Taking global color processing firstly, taking a first type of parameter as a matrix form as an example, carrying out matrix multiplication on the image to be processed and the first type of parameter to obtain a first image; and according to the second type of parameters, carrying out local color processing on the first image to obtain a second image.
There are many ways to perform local color processing on the image to be processed according to the second type of parameters, and the embodiments of the present application are not particularly limited. As an example: calculate the difference between the value of a color channel of the first image and the value of the luminance channel of the first image, adjust the difference according to the second type of parameters, and add the adjusted difference to the value of the luminance channel of the first image to obtain the value of the corresponding color channel of the second image.
There are many ways to adjust the difference value according to the second parameter, and the embodiment of the present application is not particularly limited. For example, the second type of parameter may be multiplied by the difference, the second type of parameter and the difference may be input to a preconfigured function, the second type of parameter may be added to the difference, etc.
The image format of the first image is not particularly limited in the embodiments of the present application, and may be, for example, a color RGB format or a YUV format. Taking the first image in color RGB format as an example, the second type of parameters may include a color processing coefficient for each of color channels R, G, and B, and the local color adjustment may be performed on the first image according to the following formulas:

R‴ = Y″ + beta1 × (R″ − Y″)
G‴ = Y″ + beta2 × (G″ − Y″)
B‴ = Y″ + beta3 × (B″ − Y″)

where Y″ is the value of the luminance channel of the first image; R‴, G‴, and B‴ are respectively the values of color channels R, G, and B of the second image; R″, G″, and B″ are respectively the values of color channels R, G, and B of the first image; and beta1, beta2, and beta3 are respectively the color processing coefficients of color channels R, G, and B.
Y″ is generally obtained as Y″ = a × R″ + b × G″ + c × B″, where R″, G″, and B″ are the values of color channels R, G, and B of the image before local color processing.
In some embodiments, before color processing the image to be processed according to the first type of parameter and the second type of parameter, some basic processing, such as denoising, demosaicing, etc., may be performed on the image to be processed.
Fig. 3 is a specific example of an image processing method according to an embodiment of the present application. It should be understood that fig. 3 is merely exemplary, intended to assist those skilled in the art in understanding the embodiments, and is not intended to limit the embodiments to the particular scenario illustrated. As shown in fig. 3, a raw image is demosaiced and denoised to obtain a linear RGB image. Meanwhile, the raw image enters a pre-trained neural network model and is processed to obtain a global color correction matrix M and local color processing coefficients beta, where beta may be divided into R, G, and B channels, each as large as the raw image. The linear RGB image is processed with the global color matrix M to obtain an R″G″B″ image, which is the result of global color correction (e.g., automatic white balance and/or color correction). The R″G″B″ image is then subjected to local color processing (e.g., color rendering and/or color enhancement) to obtain an R‴G‴B‴ image.
This image color processing method attends to global color processing and local color processing simultaneously and uses a single neural network model to complete all color processing from the sensor's raw image to the final image. Because the adjustment parameters used for each color processing step are obtained from the raw image, the problem of error accumulation can be avoided; moreover, because the raw image retains the information of the image to a large extent, the obtained adjustment parameters are more accurate.
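Tying the fig. 3 dataflow together with the illustrative TwoBranchColorNet sketched earlier (the input shapes, the stand-in for the demosaiced/denoised image, and the luminance weights are all example assumptions):

```python
import torch

net = TwoBranchColorNet()               # illustrative model from the earlier sketch
raw = torch.rand(1, 4, 64, 64)          # fake 4-plane packed raw input

M, beta = net(raw)                      # M: (1, 3, 3); beta: (1, 3, 64, 64)

linear_rgb = torch.rand(1, 3, 64, 64)   # stand-in for the demosaiced, denoised image
flat = linear_rgb.flatten(2)            # (1, 3, H*W)
first = (M @ flat).view_as(linear_rgb)  # global color correction -> R''G''B'' image

w = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
Y = (first * w).sum(dim=1, keepdim=True)
second = Y + beta * (first - Y)         # local color processing -> R'''G'''B''' image
```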
Fig. 4 to 6 show three structural forms of the neural network model according to the embodiment of the present application, and it should be understood that fig. 4 to 6 are only exemplary and are only for assisting those skilled in the art in understanding the embodiment of the present application, and the embodiment of the present application is not limited to the specific illustrated scenarios. The neural network model of the embodiment of the present application may also be in other structural forms as long as the method of the embodiment of the present application is implemented.
The neural network model 400 in fig. 4 includes maximum pooling (max_pooling) 401, convolution (convolution) 402, deconvolution (deconvolution) 403, connection (connection) 404, global pooling (global_pooling) 405, full connection (full_connection) 406, and reshaping (reshape) 407.
In image processing, the convolution 402 acts as a filter that extracts specific information from the input image matrix. A convolution layer includes a plurality of convolution operators, also referred to as kernels. A convolution operator is essentially a weight matrix, which is usually predefined; during a convolution operation on an image, the weight matrix is slid over the input image along the horizontal direction one pixel at a time (or two pixels, or three pixels, and so on, depending on the value of the step length stride) to extract a specific feature from the image. Deconvolution 403, also referred to as transposed convolution, is the inverse operation of convolution 402.
Convolution 402 is often followed periodically by pooling: one convolution layer may be followed by one pooling layer, or multiple convolution layers may be followed by one or more pooling layers. In image processing, the sole purpose of a pooling layer is to reduce the spatial size of the image. Pooling may include average pooling, using an average pooling operator, and/or maximum pooling, using a maximum pooling operator 401, to sample the input image down to a smaller size. The average pooling operator computes the average of the pixel values within a certain range as the result of average pooling; the maximum pooling operator takes the pixel with the largest value within a particular range as the result of maximum pooling. The embodiments of the present application employ maximum pooling 401.
The first branch of the neural network model 400 starts from 4 layers of image data of size M × N; through convolution 402 and maximum pooling 401 it obtains 512 layers of M/16 × N/16 image data (i.e., the intermediate feature layer data), and through further global pooling 405, full connection 406, and reshaping 407 it obtains the first type of parameters. The second branch also starts from the 4 layers of M × N image data, obtains the 512 layers of M/16 × N/16 image data (i.e., the intermediate feature layer data) through convolution 402 and maximum pooling 401, and finally obtains the second type of parameters through deconvolution 403, convolution 402, and connection 404. Thus, the inputs of the first branch and the second branch are both the 4 layers of image data of size M × N.
Alternatively, the first branch and the second branch of the neural network model 400 may share the part of the computation that starts from the 4 layers of M × N image data and produces, through convolution 402 and maximum pooling 401, the 512 layers of M/16 × N/16 image data (i.e., the intermediate feature layer data).
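A rough PyTorch rendering of the fig. 4 topology is given below for orientation; the kernel sizes and intermediate channel counts are assumptions, and the connection 404 skip links are omitted for brevity:

```python
import torch
import torch.nn as nn

class Model400(nn.Module):
    """Rough sketch of the fig. 4 topology (assumed hyperparameters)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 4 x M x N -> 512 x M/16 x N/16
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # first-branch tail: global pooling -> full connection -> reshape to 3 x 3
        self.fc = nn.Linear(512, 9)
        # second-branch tail: deconvolutions back to M x N, then a final convolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 128, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)                             # intermediate feature layer data
        M = self.fc(feats.mean(dim=(2, 3))).view(-1, 3, 3)  # first-type parameters
        beta = self.decoder(feats)                          # second-type parameters
        return M, beta
```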
The neural network model 500 in fig. 5 includes maximum pooling (max_pooling) 501, convolution (convolution) 502, tiling (tiling) 503, connection (connection) 504, global pooling (global_pooling) 505, full connection (full_connection) 506, and reshaping (reshape) 507. For the functions of these operations, refer to the related description of fig. 4; details are not repeated here.
The first branch of the neural network model 500 starts from 4 layers of image data of size M × N and is processed by convolution 502, maximum pooling 501, and global pooling 505 to obtain 512 layers of 1 × 1 image data (i.e., the intermediate feature layer data), which are further processed by full connection 506 and reshaping 507 to obtain the first type of parameters. The second branch starts from the same 4 layers of M × N image data and obtains the 512 layers of 1 × 1 image data (i.e., the intermediate feature layer data) through convolution 502, maximum pooling 501, and global pooling 505; these are expanded by tiling 503 into 512 layers of M × N image data. Meanwhile, the second branch also applies convolution 502 (keeping the image size unchanged) to the 4 layers of input data to obtain another 512 layers of M × N image data. The two sets of 512 layers of M × N image data are combined by connection 504 into 1024 layers of M × N image data, which are further processed by convolution 502 to obtain the second type of parameters. Thus, the inputs of the first branch and the second branch are both the 4 layers of image data of size M × N.
Alternatively, the first branch and the second branch of the neural network model 500 may share the part of the computation that starts from the 4 layers of M × N image data and produces, through convolution 502, maximum pooling 501, and global pooling 505, the 512 layers of 1 × 1 image data (i.e., the intermediate feature layer data).
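The characteristic step of model 500 is tiling the 1 × 1 global features back to full resolution before the connection; a minimal sketch of just that step, with assumed shapes:

```python
import torch

feats_global = torch.rand(1, 512, 1, 1)   # 512 layers of 1 x 1 data after global pooling 505
feats_local = torch.rand(1, 512, 64, 64)  # full-resolution features from convolution 502

tiled = feats_global.expand(-1, -1, 64, 64)     # tiling 503: broadcast to 512 x M x N
fused = torch.cat([tiled, feats_local], dim=1)  # connection 504: 1024 layers of M x N data
```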
The neural network model 600 in fig. 6 includes maximum pooling (max_pooling) 601, convolution (convolution) 602, global pooling (global_pooling) 605, full connection (full_connect) 606, reshaping (reshape) 607, and the like. For the functions of these operations, refer to the related description of fig. 4; details are not repeated here.
The first branch of the neural network model 600 starts from 4 layers of image data of size M × N; through convolution 602 it obtains 32 layers of M × N image data (i.e., the intermediate feature layer data), which are further processed by convolution 602, maximum pooling 601, global pooling 605, full connection 606, reshaping 607, and the like to finally obtain the first type of parameters. The second branch also starts from the 4 layers of M × N image data, obtains the 32 layers of M × N image data (i.e., the intermediate feature layer data) through convolution 602, and obtains the second type of parameters through further convolution 602. Thus, the inputs of the first branch and the second branch are both the 4 layers of image data of size M × N.
Alternatively, the first branch and the second branch of the neural network model 600 may share the part of the computation that starts from the 4 layers of M × N image data and produces, through convolution 602, the 32 layers of M × N image data (i.e., the intermediate feature layer data).
Embodiments of the apparatus or device of the present application are described below in conjunction with fig. 7-9.
Fig. 7 is a schematic configuration diagram of an image processing apparatus provided in an embodiment of the present application. As shown in fig. 7, the apparatus 700 includes an acquisition module 710 and a processing module 720.
The obtaining module 710 is configured to obtain an image to be processed.
The processing module 720 is configured to process the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, where the first type of parameter is used to perform global color processing on the image.
The processing module 720 is further configured to process the image to be processed through a second branch of the neural network model to obtain a second type of parameter, where the second type of parameter is used to perform local color processing on the image.
The processing module 720 is further configured to perform color processing on the image to be processed according to the first type of parameter and the second type of parameter, so as to obtain a color-processed image.
Optionally, the image to be processed is a raw image.
Optionally, the global color processing comprises automatic white balancing and/or color correction, and the local color processing comprises color rendering and/or color enhancement.
Optionally, the first branch and the second branch share a shared parameter layer of the neural network model.
It can be understood that, in the case that obtaining both the first type of parameter and the second type of parameter requires obtaining intermediate feature layer data, the layer of the first branch obtaining the intermediate feature layer data and the layer of the second branch obtaining the intermediate feature layer data may share structural parameters. For example, the image to be processed is processed through a shared parameter layer of the pre-trained neural network model to obtain intermediate feature layer data; processing the intermediate characteristic layer data through a first branch (not including a shared parameter layer part) of the pre-trained neural network model to obtain the first type of parameters; and processing the intermediate characteristic layer data through a second branch (not comprising a shared parameter layer part) of the neural network model to obtain the second type of parameters.
Optionally, the first type of parameter is in a matrix form; the processing module 720 is specifically configured to perform matrix multiplication on the image to be processed and the first type of parameter to obtain a first image; and according to the second type of parameters, carrying out local color processing on the first image to obtain a second image.
Optionally, the processing module 720 is specifically configured to calculate a difference between a numerical value of a color channel and a numerical value of a brightness channel of the first image; adjusting the difference value according to the second type of parameter; and adding the adjusted difference value to the value of the brightness channel of the first image.
Optionally, the image format of the first image is a color RGB format, and the second type of parameters includes a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; the processing module 720 is specifically configured to perform local color adjustment on the first image according to the formulas

R‴ = Y″ + beta1 × (R″ − Y″)
G‴ = Y″ + beta2 × (G″ − Y″)
B‴ = Y″ + beta3 × (B″ − Y″)

where Y″ is the value of the luminance channel of the first image, R‴, G‴, and B‴ are respectively the values of color channels R, G, and B of the second image, and R″, G″, and B″ are respectively the values of color channels R, G, and B of the first image.
The acquisition module 710 may be implemented by a transceiver or a processor. The processing module 720 may be implemented by a processor. The specific functions and advantages of the obtaining module 710 and the processing module 720 can refer to the method shown in fig. 2, and are not described herein again.
Fig. 8 is a schematic configuration diagram of an image processing apparatus according to another embodiment of the present application. As shown in fig. 8, the apparatus 800 may include a processor 820, a memory 830.
Only one memory and processor are shown in fig. 8. In an actual image processing apparatus product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium or a storage device, etc. The memory may be provided independently of the processor, or may be integrated with the processor, which is not limited in this embodiment.
The processor 820 and the memory 830 communicate with each other via internal communication paths to transfer control and/or data signals.
Specifically, the processor 820 is configured to: obtain an image to be processed; process the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, where the first type of parameter is used for performing global color processing on the image; process the image to be processed through a second branch of the neural network model to obtain a second type of parameter, where the second type of parameter is used for performing local color processing on the image; and perform color processing on the image to be processed according to the first type of parameter and the second type of parameter to obtain a color-processed image.
The memory 830 according to various embodiments of the present application is used for storing computer instructions and parameters required for the operation of the processor.
The detailed operation and beneficial effects of the apparatus 800 can be seen from the description of the embodiment shown in fig. 2, and are not described herein again.
The embodiment of the application also provides the electronic equipment, and the electronic equipment can be terminal equipment. The device may be adapted to perform the functions/steps of the above-described method embodiments.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 9, the electronic device 900 includes a processor 910 and a transceiver 920. Optionally, the electronic device 900 may also include a memory 930. The processor 910, the transceiver 920 and the memory 930 may communicate with each other via internal connection paths to transmit control and/or data signals, the memory 930 may be used for storing a computer program, and the processor 910 may be used for calling and running the computer program from the memory 930.
Optionally, the electronic device 900 may further include an antenna 940 for transmitting the wireless signal output by the transceiver 920.
The processor 910 and the memory 930 may be combined into a single processing device or, more generally, be independent components; the processor 910 is configured to execute the program code stored in the memory 930 to implement the functions described above. In a specific implementation, the memory 930 may be integrated into the processor 910 or be separate from the processor 910. The processor 910 may correspond to the processor 820 in the apparatus 800 in fig. 8.
In addition, to further improve the functionality of the electronic device 900, the electronic device 900 may further include one or more of an input unit 960, a display unit 970, an audio circuit 980, a camera 990, a sensor 901, and the like; the audio circuit may further include a speaker 982, a microphone 984, and the like. The display unit 970 may include a display screen.
Optionally, the electronic device 900 may further include a power supply 950 for supplying power to various devices or circuits in the terminal device.
It should be understood that the electronic device 900 shown in fig. 9 is capable of implementing the processes of the method embodiments shown in fig. 2-6. The operations and/or functions of the respective modules in the electronic device 900 are respectively for implementing the corresponding flows in the above-described method embodiments. Reference may be made specifically to the description of the above method embodiments, and a detailed description is appropriately omitted herein to avoid redundancy.
The processor described in the various embodiments of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above methods in combination with its hardware.
In the embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

  1. An image processing method, comprising:
    acquiring an image to be processed;
    processing the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, wherein the first type of parameter is used for carrying out global color processing on the image;
    processing the image to be processed through a second branch of the neural network model to obtain a second type of parameters, wherein the second type of parameters are used for carrying out local color processing on the image;
    and carrying out color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain the image after color processing.
  2. The method according to claim 1, wherein the image to be processed is a raw image.
  3. The method according to claim 1 or 2, wherein the global color processing comprises automatic white balancing and/or color correction and the local color processing comprises color rendering and/or color enhancement.
  4. The method of any one of claims 1 to 3, wherein the first branch and the second branch share a shared parameter layer of the neural network model.
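For illustration only, and not part of the claims: a minimal PyTorch sketch of the structure recited in claims 1 to 4, assuming a small convolutional shared parameter layer, a first branch that regresses a 3×3 global color matrix, and a second branch that regresses three local color coefficients. The layer sizes, the pooling choice and the head dimensions are assumptions, not taken from the specification.

import torch
import torch.nn as nn

class TwoBranchColorNet(nn.Module):
    """Shared backbone with two heads: a 3x3 global color matrix (first
    type of parameter) and three local coefficients (second type of
    parameter). All sizes here are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        # Shared parameter layer used by both branches (claim 4).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # First branch: nine values reshaped to a 3x3 matrix (claim 5).
        self.global_head = nn.Linear(32, 9)
        # Second branch: one coefficient per color channel (claim 7).
        self.local_head = nn.Linear(32, 3)

    def forward(self, x):
        features = self.backbone(x)
        color_matrix = self.global_head(features).view(-1, 3, 3)
        local_coeffs = self.local_head(features)
        return color_matrix, local_coeffs

model = TwoBranchColorNet()
image = torch.rand(1, 3, 64, 64)   # stands in for the image to be processed
color_matrix, local_coeffs = model(image)

Because both heads read the same pooled features, the shared parameter layer receives gradients from both branches during training, which is one plausible reading of the sharing described in claim 4.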
  5. The method according to any one of claims 1 to 4, wherein the first type of parameters are in the form of a matrix;
    the color processing of the image to be processed according to the first kind of parameters and the second kind of parameters comprises:
    performing matrix multiplication on the image to be processed and the first type of parameters to obtain a first image;
    and according to the second type of parameters, carrying out local color processing on the first image to obtain a second image.
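Purely as a hedged illustration of the matrix multiplication in claim 5 (the variable names and the H×W×3 layout are assumptions), in NumPy:

import numpy as np

def apply_global_color_matrix(image, color_matrix):
    """Multiply every pixel of an HxWx3 image by a 3x3 color matrix
    (the first type of parameter) to obtain the first image."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)            # one row per pixel
    first_image = pixels @ color_matrix.T    # global color transform
    return first_image.reshape(h, w, 3)

raw = np.random.rand(4, 4, 3)                # stands in for a raw image
first = apply_global_color_matrix(raw, np.eye(3))  # identity matrix changes nothing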
  6. The method according to claim 5, wherein the performing local color processing on the first image according to the second type of parameters comprises:
    calculating a difference value between a numerical value of a color channel and a numerical value of a brightness channel of the first image;
    adjusting the difference value according to the second type of parameter;
    and adding the adjusted difference value to the value of the brightness channel of the first image.
  7. The method of claim 6, wherein the image format of the first image is a color RGB format, and the second type of parameters comprises color processing coefficients of a color channel R, color processing coefficients of a color channel G, and color processing coefficients of a color channel B;
    the local color processing on the first image according to the second type of parameters comprises:
    according to the formula
    R‴ = Y″ + β1 × (R″ − Y″)
    G‴ = Y″ + β2 × (G″ − Y″)
    B‴ = Y″ + β3 × (B″ − Y″)
    performing local color adjustment on the first image,
    wherein Y″ is the value of the luminance channel of the first image; R‴, G‴ and B‴ are respectively the values of color channel R, color channel G and color channel B of the second image; R″, G″ and B″ are respectively the values of color channel R, color channel G and color channel B of the first image; and β1, β2 and β3 are respectively the color processing coefficients of color channel R, color channel G and color channel B.
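A NumPy sketch of the local color adjustment of claims 6 and 7 follows (the equation images in the published claims are reconstructed above from the wording of claim 6). The claims do not say how the luminance channel Y″ is obtained, so the BT.601 luma weights below are an assumption, as are the example coefficient values.

import numpy as np

def local_color_adjust(first_image, betas):
    """Claim 6 steps: the difference between each color channel and the
    luminance channel is scaled by the per-channel coefficient (second
    type of parameter) and added back to the luminance value."""
    # Assumed luminance definition (BT.601); the claims leave this open.
    y = (0.299 * first_image[..., 0]
         + 0.587 * first_image[..., 1]
         + 0.114 * first_image[..., 2])
    second_image = np.empty_like(first_image)
    for c in range(3):                       # channels R, G, B
        second_image[..., c] = y + betas[c] * (first_image[..., c] - y)
    return second_image

first = np.random.rand(4, 4, 3)              # stands in for the first image
second = local_color_adjust(first, np.array([1.2, 1.0, 0.9]))

A coefficient above 1 pushes a channel away from the luminance (stronger color) and a coefficient below 1 pulls it toward the luminance (weaker color), which matches the color rendering and color enhancement roles the claims assign to the second type of parameters.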
  8. An image processing apparatus characterized by comprising:
    the acquisition module is used for acquiring an image to be processed;
    the processing module is used for processing the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used for carrying out global color processing on the image;
    the processing module is further configured to process the image to be processed through a second branch of the neural network model to obtain a second type of parameter, where the second type of parameter is used to perform local color processing on the image;
    the processing module is further configured to perform color processing on the image to be processed according to the first type of parameter and the second type of parameter.
  9. The apparatus according to claim 8, wherein the image to be processed is a raw image.
  10. The apparatus according to claim 8 or 9, wherein the global color processing comprises automatic white balancing and/or color correction, and the local color processing comprises color rendering and/or color enhancement.
  11. The apparatus of any one of claims 8 to 10, wherein the first branch and the second branch share a shared parameter layer of the neural network model.
  12. The apparatus according to any one of claims 8 to 11, wherein the first type of parameter is in a matrix form;
    the processing module is specifically configured to perform matrix multiplication on the image to be processed and the first type of parameter to obtain a first image; and according to the second type of parameters, carrying out local color processing on the first image to obtain a second image.
  13. The apparatus of claim 12,
    the processing module is specifically configured to calculate a difference between a numerical value of a color channel and a numerical value of a luminance channel of the first image; adjusting the difference value according to the second type of parameter; and adding the adjusted difference value to the value of the brightness channel of the first image.
  14. The apparatus of claim 13, wherein the image format of the first image is a color RGB format, and the second type of parameters comprises a color processing coefficient β1 for color channel R, a color processing coefficient β2 for color channel G, and a color processing coefficient β3 for color channel B;
    the processing module is specifically configured to perform, according to the formula
    R‴ = Y″ + β1 × (R″ − Y″)
    G‴ = Y″ + β2 × (G″ − Y″)
    B‴ = Y″ + β3 × (B″ − Y″)
    local color adjustment on the first image, wherein Y″ is the value of the luminance channel of the first image; R‴, G‴ and B‴ are respectively the values of color channel R, color channel G and color channel B of the second image; and R″, G″ and B″ are respectively the values of color channel R, color channel G and color channel B of the first image.
  15. A chip, characterized in that the chip is connected to a memory and is configured to read and execute a software program stored in the memory, so as to implement the method according to any one of claims 1 to 7.
  16. An electronic device comprising a processor and a memory for performing the method of any one of claims 1 to 7.
  17. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-7.
CN201980079484.4A 2019-04-22 2019-04-22 Image processing method and device and electronic equipment Pending CN113168673A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083693 WO2020215180A1 (en) 2019-04-22 2019-04-22 Image processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
CN113168673A (en) 2021-07-23

Family

Family ID: 72940548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980079484.4A Pending CN113168673A (en) 2019-04-22 2019-04-22 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113168673A (en)
WO (1) WO2020215180A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022194345A1 (en) * 2021-03-16 2022-09-22 Huawei Technologies Co., Ltd. Modular and learnable image signal processor

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI545523B (en) * 2013-08-30 2016-08-11 國立中央大學 Image distortion correction method and image distortion correction device using the same
US10460231B2 (en) * 2015-12-29 2019-10-29 Samsung Electronics Co., Ltd. Method and apparatus of neural network based image signal processor
CN106412547B (en) * 2016-08-29 2019-01-22 厦门美图之家科技有限公司 A kind of image white balance method based on convolutional neural networks, device and calculate equipment
CN107145902B (en) * 2017-04-27 2019-10-11 厦门美图之家科技有限公司 A kind of image processing method based on convolutional neural networks, device and mobile terminal
CN107578390B (en) * 2017-09-14 2020-08-07 长沙全度影像科技有限公司 Method and device for correcting image white balance by using neural network
CN108364267B (en) * 2018-02-13 2019-07-05 北京旷视科技有限公司 Image processing method, device and equipment
US10791310B2 (en) * 2018-10-02 2020-09-29 Intel Corporation Method and system of deep learning-based automatic white balancing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023005115A1 (en) * 2021-07-28 2023-02-02 爱芯元智半导体(上海)有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
WO2023028866A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 Image processing method and apparatus, and vehicle
CN115190226A (en) * 2022-05-31 2022-10-14 华为技术有限公司 Parameter adjusting method, method for training neural network model and related device
CN115190226B (en) * 2022-05-31 2024-04-16 华为技术有限公司 Parameter adjustment method, neural network model training method and related devices
CN116721038A (en) * 2023-08-07 2023-09-08 荣耀终端有限公司 Color correction method, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2020215180A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
WO2021051996A1 (en) Image processing method and apparatus
CN113168673A (en) Image processing method and device and electronic equipment
US10916036B2 (en) Method and system of generating multi-exposure camera statistics for image processing
EP3308534A1 (en) Color filter array scaler
CN108391060B (en) Image processing method, image processing device and terminal
WO2014125659A1 (en) Image processing device, image capture device, filter generating device, image restoration method, and program
JP6308748B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
US9672599B2 (en) Image processing devices for suppressing color fringe, and image sensor modules and electronic devices including the same
CN113850367A (en) Network model training method, image processing method and related equipment thereof
CN105453540B (en) Image processing apparatus, photographic device, image processing method
CN104469191A (en) Image denoising method and device
US10600170B2 (en) Method and device for producing a digital image
WO2019104047A1 (en) Global tone mapping
US9389678B2 (en) Virtual image signal processor
CN114331916B (en) Image processing method and electronic device
US20180204311A1 (en) Image processing device, image processing method, and program
WO2024027287A9 (en) Image processing system and method, and computer-readable medium and electronic device
US20240129446A1 (en) White Balance Processing Method and Electronic Device
WO2020215263A1 (en) Image processing method and device
CN109309784B (en) Mobile terminal
CN111383188A (en) Image processing method, system and terminal equipment
KR20110083888A (en) Image interpolating method by bayer-pattern-converting signal and program recording medium
JP7183015B2 (en) Image processing device, image processing method, and program
CN114298889A (en) Image processing circuit and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination