WO2020215180A1 - Image processing method and apparatus, and electronic device - Google Patents


Info

Publication number
WO2020215180A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
processing
type
parameters
Prior art date
Application number
PCT/CN2019/083693
Other languages
English (en)
Chinese (zh)
Inventor
李蒙
胡慧
陈海
郑成林
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN201980079484.4A (published as CN113168673A)
Priority to PCT/CN2019/083693
Publication of WO2020215180A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Definitions

  • This application relates to the field of image processing, and more specifically, to image processing methods, devices, and electronic equipment.
  • ISP: image signal processing
  • the ISP processing flow is shown in Figure 1.
  • light from the natural scene 101 passes through a lens 102 to produce a Bayer image; photoelectric conversion 104 then yields an analog electrical signal 105, and denoising and analog-to-digital processing 106 further produce a digital image signal (that is, the raw image) 107, which next enters the digital signal processing chip 100.
  • the steps in the digital signal processing chip 100 are the core steps of ISP processing.
  • the digital signal processing chip 100 generally includes black level compensation (BLC) 108, lens shading correction 109, bad pixel correction (BPC) 110, demosaicing 111, Bayer-domain noise reduction (denoise) 112, auto white balance (AWB) 113, Ygamma 114, auto exposure (AE) 115, auto focus (AF) (not shown in Figure 1), color correction (CC) 116, gamma correction 117, color gamut conversion 118, color denoising/detail enhancement 119, color enhancement (CE) 120, formatter 121, input/output (I/O) control 122, and other modules.
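  • As a conceptual sketch (not part of the patent disclosure), the serial ISP flow can be modeled as function composition, which makes the error-accumulation concern concrete: every module consumes the previous module's output, so an error introduced early propagates through every later stage. The stage functions below are hypothetical stand-ins, not the real ISP modules.

```python
# Illustrative sketch: the serial ISP flow as function composition.
# Each stage consumes the previous stage's output, so errors accumulate.

def black_level_compensation(pixel):
    # Hypothetical stage: subtract a fixed black level on a [0, 1023] range.
    return max(pixel - 64, 0)

def white_balance(pixel):
    # Hypothetical stage: apply a fixed gain.
    return pixel * 1.5

def gamma_correction(pixel):
    # Hypothetical stage: simple gamma curve on a [0, 1023] range.
    return 1023 * (pixel / 1023) ** (1 / 2.2)

def serial_isp(pixel, stages):
    # Output of each module is the input of the next (serial processing).
    for stage in stages:
        pixel = stage(pixel)
    return pixel

out = serial_isp(512, [black_level_compensation, white_balance, gamma_correction])
```

Because the chain is strictly sequential, a bias in `black_level_compensation` is amplified by `white_balance` and warped by `gamma_correction`; the patent's parallel-branch design is motivated by exactly this coupling.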
  • BLC: black level compensation
  • BPC: bad pixel correction
  • demosaic: demosaicing
  • denoise: Bayer-domain noise reduction
  • AWB: auto white balance
  • AE: auto exposure
  • AF: auto focus
  • CE: color enhancement
  • the color-related modules in ISP processing mainly include AWB113, CC116, CE120 and other modules.
  • AWB and CC modules are global color processing modules
  • CE is a local color processing module.
  • the color-related modules in the ISP processing can be realized by the neural network model, but because the ISP processing is serial processing, that is, the output of the previous module is used as the input of the next module, there is a problem of error accumulation.
  • the present application provides an image processing method and device, which can avoid the error accumulation problem in the serial ISP image color processing flow and improve the image color processing effect.
  • the present application provides an image processing method, which includes: acquiring an image to be processed; processing the image to be processed through the first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on the image; processing the image to be processed through the second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on the image; and performing color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
  • the input of the neural network model is the image to be processed, and different branches output different types of parameters, which can avoid the problem of error accumulation in the parameter calculation process and obtain more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the color correction effect of the image.
  • the image to be processed is a raw image.
  • the input of the neural network model is a raw image, which preserves the image information to the greatest extent, so that the first type of parameters and the second type of parameters obtained are more accurate, and the image color correction effect is also better.
  • the global color processing includes automatic white balance and/or color correction
  • the local color processing includes color rendering and/or color enhancement
  • the first type of parameters and the second type of parameters can correspond to traditional ISP modules, so that when the image color correction effect is not ideal, the first type of parameters and the second type of parameters can be adjusted based on traditional ISP debugging experience, which solves the inherent problem that the subjective effect of a neural network cannot be tuned.
  • the first branch and the second branch share a shared parameter layer of the neural network model.
  • in the first branch and the second branch, the layers that produce the intermediate feature layer data can share structure and parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain intermediate feature layer data; the part of the first branch other than the shared parameter layer processes the intermediate feature layer data to obtain the first type of parameters; and the part of the second branch other than the shared parameter layer processes the intermediate feature layer data to obtain the second type of parameters.
  • in the above technical solution, the second branch of the neural network model can directly reuse the intermediate feature layer data from the shared parameter layer; that is, the first branch and the second branch can reuse part of the calculation process, so computational complexity and storage usage can be reduced.
  • the first type of parameters is in matrix form; performing color processing on the to-be-processed image according to the first type and second type of parameters includes: performing matrix multiplication on the image and the first type of parameters to obtain a first image; and performing local color processing on the first image according to the second type of parameters to obtain a second image.
  • performing local color processing on the first image according to the second type of parameters includes: calculating the difference between the value of a color channel of the first image and the value of the brightness channel; adjusting the difference according to the second type of parameters; and adding the adjusted difference to the value of the brightness channel of the first image.
  • the image format of the first image is a color RGB format
  • the second type of parameters includes the color processing coefficient beta1 of color channel R, the color processing coefficient beta2 of color channel G, and the color processing coefficient beta3 of color channel B; performing local color processing on the first image according to the second type of parameters includes performing local color adjustment on the first image according to the formula:

    R"' = Y" + beta1 * (R" - Y")
    G"' = Y" + beta2 * (G" - Y")
    B"' = Y" + beta3 * (B" - Y")

    where Y" is the value of the brightness channel of the first image; R"', G"', and B"' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; and R", G", and B" are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image.
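  • As an illustrative sketch (not the patent's implementation), the local color adjustment above can be written as follows. The luma weighting used to derive the brightness channel is an assumption; the embodiment only requires "the value of the brightness channel" of the first image without fixing how it is computed.

```python
# Hedged sketch of the local color adjustment R"' = Y" + beta1 * (R" - Y").
# The BT.601-style luma weights below are an assumption, not from the patent.

def local_color_adjust(r, g, b, beta1, beta2, beta3):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # assumed brightness channel Y"
    return (y + beta1 * (r - y),           # R"'
            y + beta2 * (g - y),           # G"'
            y + beta3 * (b - y))           # B"'
```

With beta1 = beta2 = beta3 = 1 the image is unchanged; coefficients above 1 push each channel further from the brightness value, enhancing color, while 0 collapses the pixel to gray.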
  • before performing color processing on the image to be processed according to the first type of parameters and the second type of parameters, the method further includes: performing demosaicing and denoising processing on the image to be processed.
  • the present application provides an image processing device, which includes a module for executing the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides an image processing device, including a memory and a processor, configured to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a chip, which is connected to a memory and is used to read and execute a software program stored in the memory to implement the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides an electronic device including a processor and a memory, configured to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a computer-readable storage medium, including instructions, which when run on an electronic device, cause the electronic device to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a computer program product that, when running on an electronic device, causes the electronic device to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • FIG. 1 is a schematic flowchart of ISP processing.
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • Fig. 3 is a specific example of the image processing method of the embodiment of the present application.
  • Fig. 4 is a network framework of a neural network model of an embodiment of the present application.
  • Fig. 5 is a network framework of a neural network model according to another embodiment of the present application.
  • Fig. 6 is a network framework of a neural network model according to another embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the technical solution of the present application can be applied to any scene that requires color processing of images, such as safe city, remote driving, human-computer interaction, and other scenes that require photographing, video recording, or image display.
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • the method 200 can be executed by any device with image processing functions. As shown in FIG. 2, the method 200 may include at least part of the following content.
  • an image to be processed is acquired.
  • the image to be processed is processed by the first branch of the pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used to perform global color processing on the image.
  • the image to be processed is processed through the second branch of the neural network model to obtain a second type of parameter, and the second type of parameter is used to perform local color processing on the image.
  • color processing is performed on the image to be processed according to the first type parameters and the second type parameters to obtain a color processed image.
  • the inputs of the different branches of the neural network model are all the image to be processed; with the same input, the branches output different types of parameters. That is, the first branch and the second branch of the neural network model have the same input, and their processing of the image to be processed is parallel. Compared with serial processing, this avoids error accumulation in the parameter calculation process and yields more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the color correction effect of the image. The output of the neural network model is the color processing parameters.
  • the above parameters can correspond to the parameters of traditional ISP processing modules. When the image color processing effect is not ideal, the parameters can be fine-tuned based on traditional ISP parameter debugging experience to further improve the image color correction effect, which also addresses the much-criticized problem that neural network parameters cannot be debugged.
  • the image to be processed in the embodiment of the present application may be obtained image data, image signal, etc.
  • the image to be processed may be acquired through image acquisition equipment (for example, a lens and a sensor), or may be received from other equipment, which is not specifically limited in the embodiment of the present application.
  • the image to be processed may be a raw image. A raw image is the original data obtained when a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor converts the captured light signal into a digital signal; it is lossless and therefore contains the original information of the scene.
  • the input of the neural network model is a raw image, which preserves the image information to the greatest extent.
  • the first type of parameters and the second type of parameters obtained in this way better reflect the information of the real scene, and the image color correction effect is also better.
  • the image to be processed may also be an image after other image processing processes besides color processing. Other image processing procedures include any one or a combination of black level correction, lens shading correction, dead pixel correction, demosaicing, Bayer domain noise reduction, auto exposure, auto focus, etc.
  • the pre-trained neural network model of the embodiment of the application can be stored on an image processing device, for example, a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted terminal, etc.
  • the pre-trained neural network model of the embodiment of the present application can also be stored in a server or cloud.
  • the pre-trained neural network model can be a target model/rule generated from different training data for different goals (or different tasks), and the corresponding target model/rule can be used to achieve the above goals or complete the above tasks, providing the user with the desired result.
  • the first type of parameters and the second type of parameters for image color processing may be output.
  • the first branch of the neural network model corresponds to generating the first type of parameters (that is, the parameters used to perform global color processing on the image)
  • the second branch of the neural network model corresponds to generating the second type of parameters (that is, the parameters used to perform local color processing on the image).
  • the first branch and the second branch can be two independent neural network models.
  • the neural network model of the embodiment of the present application is then a set consisting of a first neural network model corresponding to the first branch and a second neural network model corresponding to the second branch, where the inputs of both neural network models are the image to be processed. Alternatively, the first branch and the second branch may be two parts of the same neural network model, which is not specifically limited in the embodiment of the present application.
  • the first branch and the second branch may multiplex or share part of the processing procedure.
  • the first branch and the second branch share a shared parameter layer of the neural network model.
  • in the first branch and the second branch, the layers that produce the intermediate feature layer data can share structure and parameters.
  • for example, the image to be processed is processed through the shared parameter layer to obtain the intermediate feature layer data; the part of the first branch other than the shared parameter layer processes the intermediate feature layer data to obtain the first type of parameters; and the part of the second branch other than the shared parameter layer processes the intermediate feature layer data to obtain the second type of parameters.
  • the second branch of the neural network model can directly reuse the shared parameter layer to obtain the intermediate feature layer data; that is, the first branch and the second branch can reuse part of the calculation process, so computational complexity and storage usage can be reduced.
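  • A toy sketch of this arrangement, using plain Python stand-ins for the real layers: the intermediate feature layer data is computed once by the shared parameter layer and consumed by both branch heads. All function names and computations here are hypothetical placeholders for convolutional layers.

```python
# Illustrative sketch of a shared parameter layer with two branch heads.
# Stand-in functions replace the real convolution layers of the patent.

def shared_layer(image):
    # Shared backbone: compute once, reuse for both branches
    # ("intermediate feature layer data").
    return [p / 255.0 for p in image]

def first_branch_head(features):
    # Stand-in head producing first-type (global) parameters,
    # e.g. one gain per color channel.
    mean = sum(features) / len(features)
    return [mean, mean, mean]

def second_branch_head(features):
    # Stand-in head producing second-type (local) parameters,
    # one coefficient per pixel.
    return [1.0 + f for f in features]

image = [10, 20, 30, 40]
features = shared_layer(image)          # computed exactly once
global_params = first_branch_head(features)
local_params = second_branch_head(features)
```

The saving is that `shared_layer` runs once for both heads, which is the complexity/storage reduction the passage describes.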
  • the first type of parameters is used to perform global color processing on the image, where the global color processing includes at least one kind of global color processing.
  • the first type of parameters are used to perform automatic white balance processing and/or color correction processing on the image.
  • Global color processing is to process the entire image.
  • the first type of parameters may include M parameters, the M parameters corresponding to N types of global color processing, and both M and N are integers greater than or equal to 1.
  • the relationship between the M parameters and the N kinds of global color processing may be one-to-one, one-to-many, or many-to-one, which is not specifically limited in the embodiment of the present application.
  • the first type of parameters may include a first parameter corresponding to the automatic white balance processing and/or a second parameter corresponding to the color correction processing; the first type of parameters may also include the first parameter corresponding to the automatic white balance processing and a third parameter for the color correction processing; the first type of parameters may also include a fourth parameter and a fifth parameter corresponding to the automatic white balance processing, and a sixth parameter corresponding to the color correction processing.
  • the first type of parameters may be in matrix form. Taking the first type of parameters for performing automatic white balance processing and color correction processing on an image as an example, the first type of parameters includes a matrix used for automatic white balance processing and a matrix used for color correction processing.
  • the first type of parameters are in the form of a matrix
  • color processing is performed on the image to be processed according to the first type of parameters, which may be a matrix multiplication of the first type of parameters and the image to be processed.
  • the automatic white balance processing of the image to be processed can be performed according to the following formula:

    R' = a * R, G' = b * G, B' = c * B

    where a, b, and c can be determined by the neural network model; R', G', and B' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image after automatic white balance processing; and R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before automatic white balance processing.
  • throughout this description, R, G, and B denote the value of color channel R, the value of color channel G, and the value of color channel B of the image before the color processing in question; this convention will not be repeated below.
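  • A minimal sketch of this diagonal white-balance form; the gain values in the example are illustrative, whereas in the embodiment a, b, and c come from the neural network model.

```python
# Auto white balance as per-channel gains: R' = a*R, G' = b*G, B' = c*B.
def auto_white_balance(r, g, b, a, b_gain, c):
    return a * r, b_gain * g, c * b

# Example: boost red, leave green, boost blue (illustrative gains only).
balanced = auto_white_balance(100, 100, 100, 2.0, 1.0, 1.5)
```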
  • the color correction processing of the image to be processed can be performed according to the following formula:

    (R', G', B')^T = M * (R, G, B)^T

    where M is the color correction matrix; R', G', and B' are the values of color channel R, color channel G, and color channel B of the image after color correction processing; and R, G, and B are the values before color correction processing.
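  • A sketch of this linear color correction step, assuming a 3*3 matrix M; the illustrative matrix below preserves gray pixels because each row sums to 1.

```python
# Linear color correction: (R', G', B')^T = M * (R, G, B)^T.
def color_correct(rgb, m):
    # Plain 3x3 matrix-vector product.
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Illustrative matrix: each row sums to 1, so gray stays gray.
M = [[1.2, -0.1, -0.1],
     [-0.1, 1.2, -0.1],
     [-0.1, -0.1, 1.2]]
corrected = color_correct((100.0, 100.0, 100.0), M)
```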
  • the embodiments of the present application may also use quadratic terms, cubic terms, and square root terms to perform color correction processing, for example using the feature vector:

    φ2,3 = (R, G, B, R², G², B², RG, GB, RB)^T

    where R, G, and B are the value of color channel R, the value of color channel G, and the value of color channel B of the image before the color correction processing, and T denotes transposition.
  • when higher-order terms are used, the matrix used for color correction also has a different format. Taking a color correction matrix with quadratic terms as an example, the color correction processing of the image to be processed can be performed according to the following formula, where the matrix M used for color correction can be a 3*10 matrix:

    (R', G', B')^T = M * φ

    where R', G', and B' are the value of color channel R, the value of color channel G, and the value of color channel B of the image after the color correction processing, and R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before the color correction processing.
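  • A sketch of polynomial color correction under stated assumptions: φ is built from the quadratic terms listed above, and a constant term 1 is assumed as the tenth feature so that M can be 3*10 (the text does not spell out the tenth term).

```python
# Polynomial color correction: lift the pixel to a feature vector, then
# multiply by a wider matrix M (3x10 here, per the passage above).

def poly_features(r, g, b):
    # (R, G, B, R^2, G^2, B^2, RG, GB, RB) plus an ASSUMED constant 1.
    return [r, g, b, r * r, g * g, b * b, r * g, g * b, r * b, 1.0]

def poly_color_correct(rgb, m):
    phi = poly_features(*rgb)
    return tuple(sum(m[i][j] * phi[j] for j in range(10)) for i in range(3))

# Identity-like matrix: copies R, G, B and ignores higher-order terms.
M = [[1.0] + [0.0] * 9,
     [0.0, 1.0] + [0.0] * 8,
     [0.0, 0.0, 1.0] + [0.0] * 7]
out = poly_color_correct((0.5, 0.4, 0.3), M)
```

A learned M would put nonzero weights on the quadratic terms to model channel cross-talk that a 3*3 matrix cannot express.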
  • the at least one global color process described above may also only correspond to one matrix, which is not specifically limited in the embodiment of the present application.
  • the second type of parameters is used to perform local color processing on the image, where the local color processing includes at least one kind of local color processing. Local color processing processes a part or parts of an image.
  • the second type of parameters are used to perform color enhancement or color rendering processing on parts of the image.
  • the relationship between the second type of parameters and the at least one partial color processing may be one-to-one correspondence, one-to-many or many-to-one, which is not specifically limited in the embodiment of the present application.
  • the second type of parameters may be filter function parameters, color adjustment coefficients, and so on.
  • the embodiment of the present application does not specifically limit the sequence of performing global color processing and local color processing.
  • taking the case where the first type of parameters is in matrix form as an example, the image to be processed and the first type of parameters are matrix-multiplied to obtain the first image; the first image is then subjected to local color processing according to the second type of parameters to obtain a second image.
  • for example, the second type of parameters can be multiplied by the difference, the second type of parameters and the difference can be input into a pre-configured function, the second type of parameters can be added to the difference, and so on.
  • the embodiment of the present application does not specifically limit the image format of the first image.
  • it may be a color RGB format, a YUV format, and the like.
  • the second type of parameters may include color processing coefficients of color channel R, color processing coefficients of color channel G, and color processing coefficients of color channel B.
  • the first image can be locally color-adjusted according to the following formula:

    R" = Y + beta1 * (R - Y)
    G" = Y + beta2 * (G - Y)
    B" = Y + beta3 * (B - Y)

    where Y is the value of the brightness channel of the first image; R", G", and B" are the value of color channel R, the value of color channel G, and the value of color channel B of the second image; R, G, and B are the value of color channel R, the value of color channel G, and the value of color channel B of the first image; and beta1, beta2, and beta3 are respectively the color processing coefficient of color channel R, the color processing coefficient of color channel G, and the color processing coefficient of color channel B.
  • some basic processing such as denoising and demosaicing may be performed on the image to be processed.
  • Fig. 3 is a specific example of the image processing method of the embodiment of the present application. It should be understood that FIG. 3 is only exemplary, which is only used to help those skilled in the art understand the embodiments of the present application, rather than limiting the embodiments of the present application to the specific scenarios illustrated.
  • the raw image is processed by demosaicing and denoising to obtain a linear RGB image.
  • the raw image enters the pre-trained neural network model, and the neural network model is processed to obtain a global color correction matrix M and a local color processing coefficient beta, where beta can be divided into 3 channels R, G, and B, and the size is the same as the original image.
  • the linear RGB image is processed by the global color matrix M to obtain the R"G"B" image.
  • the R"G"B" image is the result of global color correction (for example, automatic white balance and/or color correction); after local color processing (for example, color rendering and/or color enhancement), the R"'G"'B"' image is obtained.
  • the above image color processing method addresses global color processing and local color processing at the same time, and uses a single neural network model to complete all color processing from the sensor's raw image to the final image. Since the adjustment parameters used in each color processing step are obtained from the raw image, first, the problem of error accumulation is avoided; second, because the raw image retains more image information, the obtained adjustment parameters are more accurate.
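  • The Fig. 3 flow can be sketched end to end as follows, with a toy 3*3 global matrix M and per-channel local coefficients beta standing in for the outputs of the neural network model; the averaged brightness channel is an assumption made for illustration.

```python
# End-to-end sketch of the Fig. 3 flow: global correction (M), then local
# processing (beta). Toy values replace the neural network's outputs.

def apply_global(rgb, m):
    # Global correction (AWB/CC) as a 3x3 matrix product.
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

def apply_local(rgb, beta):
    # Local processing (CE): assumed brightness channel = channel average.
    y = sum(rgb) / 3.0
    return tuple(y + beta[i] * (rgb[i] - y) for i in range(3))

def color_pipeline(rgb, m, beta):
    # Global color correction first, then local color processing.
    return apply_local(apply_global(rgb, m), beta)

M = [[1.1, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.9]]
beta = (1.2, 1.2, 1.2)                 # > 1 boosts color contrast
result = color_pipeline((0.6, 0.5, 0.4), M, beta)
```

In the patent both M and beta are predicted in parallel from the same raw image, so neither parameter set inherits errors from the other stage.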
  • Figures 4 to 6 show the three structural forms of the neural network model of the embodiments of the present application. It should be understood that Figures 4 to 6 are only exemplary, and are only used to help those skilled in the art understand the embodiments of the present application. It is not intended to limit the embodiments of the present application to the specific scenarios illustrated.
  • the neural network model of the embodiment of the present application may also be in other structural forms, as long as it can be a method for implementing the embodiment of the present application.
  • the neural network model 400 in Figure 4 includes max pooling 401, convolution 402, deconvolution 403, connect 404, global pooling 405, and full connection (full connect) 406, reshape (reshape) 407 and other processing.
  • the role of convolution 402 in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • the convolution layer includes multiple convolution operators.
  • the convolution operator is also called the kernel.
  • the convolution operator is essentially a weight matrix, which is usually predefined. During the convolution operation on the image, the weight matrix slides over the input image one pixel at a time in the horizontal direction (or two pixels at a time, three pixels at a time, etc.; the number of pixels skipped depends on the stride value) to extract specific features from the image.
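  • A minimal sketch of the strided convolution described above; the kernel values are illustrative, not taken from the patent.

```python
# 2D convolution with a configurable stride: the kernel (weight matrix)
# slides over the image, stepping `stride` pixels at a time.

def conv2d(image, kernel, stride=1):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(0, len(image) - kh + 1, stride):
        row = []
        for j in range(0, len(image[0]) - kw + 1, stride):
            # Weighted sum of the patch under the kernel.
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
edge_kernel = [[1, -1]]                  # horizontal difference filter
features = conv2d(image, edge_kernel, stride=1)
```

A larger stride skips pixels and shrinks the output, which is how the models below reduce M*N data toward M/16*N/16.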
  • Deconvolution 403 is also called transposed convolution, which is the inverse process of convolution 402.
  • convolution is often combined with pooling processing: a layer of convolution processing may be followed by a layer of pooling processing, or multiple layers of convolution processing may be followed by one or more layers of pooling processing.
  • the pooling process may include average pooling using an average pooling operator and/or maximum pooling 401 using a maximum pooling operator to sample the input image to obtain a smaller size image.
  • the average pooling operator can average the pixel values within a specific range of the image to generate the result of average pooling.
  • the maximum pooling operator can take the pixel with the largest value within a specific range as the result of the maximum pooling.
  • the embodiment of the present application adopts the maximum pooling 401.
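  • A minimal sketch of 2*2 max pooling, which halves each spatial dimension by keeping the largest pixel value in each window.

```python
# 2x2 max pooling: downsample by taking the maximum of each 2x2 window.
def max_pool_2x2(image):
    out = []
    for i in range(0, len(image) - 1, 2):
        row = []
        for j in range(0, len(image[0]) - 1, 2):
            row.append(max(image[i][j], image[i][j + 1],
                           image[i + 1][j], image[i + 1][j + 1]))
        out.append(row)
    return out

pooled = max_pool_2x2([[1, 3, 2, 4],
                       [5, 6, 1, 0],
                       [7, 2, 9, 8],
                       [1, 1, 3, 2]])
```

Applying such pooling four times reduces M*N data to M/16*N/16, matching the feature sizes quoted for model 400 below.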
  • the first branch of the neural network model 400 starts from 4 layers of image data with a size of M*N; through convolution 402 and max pooling 401 processing, it obtains 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data), and after global pooling 405, full connection 406, and reshape 407 processing, the first type of parameters is finally obtained.
  • the second branch also starts from the 4 layers of M*N image data and obtains the 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data); after deconvolution 403 processing, convolution 402 processing, and connection 404 processing, the second type of parameters is finally obtained.
  • the inputs of the first branch and the second branch are both 4 layers of image data with a size of M*N.
  • the first branch and the second branch of the neural network model 400 can multiplex the part that starts from the 4 layers of M*N image data and, through convolution 402 processing and max pooling 401 processing, obtains the 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data).
  • the neural network model 500 in Figure 5 includes max pooling 501, convolution 502, tiling 503, connection (connect) 504, global pooling 505, full connection (full connect) 506, reshape 507, and other processing.
  • the first branch of the neural network model 500 starts from 4 layers of image data with a size of M*N; through convolution 502, max pooling 501, and global pooling 505 processing, it obtains 512 layers of 1*1 image data (that is, the intermediate feature layer data), and after full connection 506 and reshape 507 processing, the first type of parameters is finally obtained.
  • the second branch also starts from the 4 layers of M*N image data and obtains the 512 layers of 1*1 image data (that is, the intermediate feature layer data), which are then tiled 503 into 512 layers of M*N image data.
  • the second branch also performs convolution 502 on the 4 layers of M*N image data without changing the image size to obtain 512 layers of M*N image data.
  • the two sets of 512-layer M*N image data are connected 504 to obtain 1024 layers of M*N image data, which are further processed by convolution 502 to finally obtain the second type of parameters.
  • the inputs of the first branch and the second branch are both 4 layers of image data with a size of M*N.
  • the first branch and the second branch of the neural network model 500 can share (multiplex) the part that starts from the 4 layers of image data with a size of M*N and is processed through convolution 502, maximum pooling 501, and global pooling 505 to obtain the 512 layers of 1*1 image data (that is, the intermediate feature layer data).
  • the neural network model 600 in Figure 6 includes processing such as maximum pooling (max pooling) 601, convolution 602, global pooling 605, full connection (full connect) 606, and reshape 607.
  • the first branch of the neural network model 600 starts with 4 layers of image data with a size of M*N and is processed by convolution 602 to obtain 32 layers of M*N image data (i.e., the intermediate feature layer data), which then undergoes maximum pooling 601 processing, global pooling 605 processing, full connection 606 processing, reshape 607 processing, and so on, to finally obtain the first type of parameters.
  • the second branch starts with 4 layers of image data with a size of M*N and is processed by convolution 602 to obtain 32 layers of M*N image data (i.e., the intermediate feature layer data), which is further processed by convolution 602 to obtain the second type of parameters.
  • the inputs of the first branch and the second branch are both 4 layers of image data with a size of M*N.
  • the first branch and the second branch of the neural network model 600 can share (multiplex) the part that starts from the 4 layers of image data with a size of M*N and is processed by convolution 602 to obtain the 32 layers of M*N image data (that is, the intermediate feature layer data).
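The shared-trunk, two-branch pattern common to models 400, 500, and 600 can be sketched roughly as below. This is a toy NumPy illustration of the data flow only: a 1x1 channel-mixing stub stands in for real convolutions, and the output sizes are invented for the example; it is not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(x, out_ch):
    """Stand-in for a convolution layer: 1x1 channel mixing plus ReLU.

    A real model uses learned spatial kernels; this stub only shows
    how a shared trunk can feed two differently shaped heads.
    """
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)

x = rng.standard_normal((8, 8, 4))        # "4 layers of image data", M = N = 8

features = conv_stub(x, 32)               # shared intermediate feature layer data

# First branch: collapse spatial dims -> a small vector of global parameters.
first_type = conv_stub(features.mean(axis=(0, 1))[None, None, :], 9).ravel()

# Second branch: keep the M*N resolution -> per-pixel local parameters.
second_type = conv_stub(features, 3)

print(first_type.shape, second_type.shape)   # (9,) (8, 8, 3)
```

The key structural point, as in the patent, is that both heads consume the same intermediate feature layer data, so the trunk's computation and parameters are shared.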
  • Fig. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in FIG. 7, the apparatus 700 includes an acquisition module 710 and a processing module 720.
  • the acquiring module 710 is configured to acquire an image to be processed.
  • the processing module 720 is configured to process the image to be processed through the first branch of the pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used to perform global color processing on the image.
  • the processing module 720 is further configured to process the to-be-processed image through the second branch of the neural network model to obtain a second type of parameter, and the second type of parameter is used to perform local color processing on the image.
  • the processing module 720 is further configured to perform color processing on the image to be processed according to the first-type parameters and the second-type parameters to obtain a color-processed image.
  • the image to be processed is a raw image.
  • the global color processing includes automatic white balance and/or color correction.
  • the local color processing includes color rendering and/or color enhancement.
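As an illustration of what a global color processing step such as automatic white balance does, the classic gray-world algorithm is sketched below. Note this is not the patent's method: the patent predicts such global parameters with the first branch of the neural network rather than computing gray-world gains.

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: scale each channel so the
    channel means become equal (i.e., the average color becomes gray)."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = means.mean() / means              # one gain per channel
    return np.clip(rgb * gains, 0.0, 1.0)

img = np.full((2, 2, 3), [0.8, 0.4, 0.2])    # image with a strong warm cast
balanced = gray_world_awb(img)
print(balanced[0, 0])                        # all three channels now equal
```

Because a single gain is applied per channel across the whole image, this is a purely global operation, in contrast to the per-pixel local color processing driven by the second-type parameters.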
  • the first branch and the second branch share a shared parameter layer of the neural network model.
  • the layer at which the first branch obtains the intermediate feature layer data and the layer at which the second branch obtains the intermediate feature layer data can share structural parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is processed through the first branch of the pre-trained neural network model (excluding the shared parameter layer part) to obtain the first type of parameters; and the intermediate feature layer data is processed through the second branch of the neural network model (excluding the shared parameter layer part) to obtain the second type of parameters.
  • the first-type parameters are in matrix form. The processing module 720 is specifically configured to perform matrix multiplication on the image to be processed and the first-type parameters to obtain a first image, and to perform local color processing on the first image according to the second-type parameters to obtain a second image.
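The matrix multiplication step can be illustrated as follows; the 3x3 matrix here stands in for the first-type parameters, and its values are invented for the example.

```python
import numpy as np

def apply_global_color(img, m):
    """Global color processing: multiply every RGB pixel by a 3x3 matrix m."""
    h, w, _ = img.shape
    return np.clip(img.reshape(-1, 3) @ m.T, 0.0, 1.0).reshape(h, w, 3)

# Hypothetical first-type parameter matrix (values invented for the example).
ccm = np.array([[ 1.2 , -0.1 , -0.1 ],
                [-0.05,  1.1 , -0.05],
                [-0.1 , -0.1 ,  1.2 ]])
first_image = apply_global_color(np.full((2, 2, 3), 0.5), ccm)
print(first_image[0, 0])   # rows of ccm sum to 1, so gray stays gray: 0.5
```

A matrix of this shape applies the same linear color transform to every pixel, which is why the first-type parameters implement global (not local) color processing.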
  • the processing module 720 is specifically configured to calculate the difference between the value of the color channel of the first image and the value of the brightness channel, adjust the difference according to the second-type parameters, and add the adjusted difference to the value of the brightness channel of the first image.
  • the image format of the first image is a color RGB format.
  • the second-type parameters include a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B.
  • the processing module 720 is specifically configured to perform local color adjustment on the first image according to the formulas R"' = beta1*(R" - Y") + Y", G"' = beta2*(G" - Y") + Y", and B"' = beta3*(B" - Y") + Y", where Y" is the value of the brightness channel of the first image; R"', G"', and B"' are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the second image; and R", G", and B" are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the first image.
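The per-channel adjustment described above (difference from the brightness channel, scaled by beta, added back) can be sketched in NumPy as below. The Rec.601 luma weights are an assumption for the example, since the brightness formula is not fixed here.

```python
import numpy as np

def local_color_adjust(first_image, beta):
    """Apply R''' = beta1*(R'' - Y'') + Y'' (and likewise for G and B)."""
    # Rec.601 luma weights, assumed here as the brightness channel Y''.
    y = first_image @ np.array([0.299, 0.587, 0.114])
    return beta * (first_image - y[..., None]) + y[..., None]

img = np.full((1, 1, 3), [0.6, 0.4, 0.2])
out = local_color_adjust(img, np.full((1, 1, 3), 1.5))  # beta > 1 boosts color
print(out[0, 0])
```

Because beta can take a different value at every pixel (and per channel), this operation realizes local color processing: beta = 1 leaves a pixel unchanged, beta > 1 pushes its channels further from the brightness value, and beta < 1 desaturates it.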
  • the acquisition module 710 may be implemented by a transceiver or a processor.
  • the processing module 720 may be implemented by a processor. For the specific functions and beneficial effects of the acquisition module 710 and the processing module 720, refer to the method shown in FIG. 2; details are not repeated here.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by another embodiment of the present application. As shown in FIG. 8, the apparatus 800 may include a processor 820 and a memory 830.
  • Only one memory and one processor are shown in FIG. 8. In actual image processing device products, there may be one or more processors and one or more memories.
  • the memory may also be referred to as a storage medium or storage device.
  • the memory may be set independently of the processor, or may be integrated with the processor, which is not limited in the embodiment of the present application.
  • the processor 820 and the memory 830 communicate with each other through internal connection paths, and transfer control and/or data signals.
  • the processor 820 is configured to obtain the image to be processed; to process the image to be processed through the first branch of the pre-trained neural network model to obtain the first type of parameters, which are used to perform global color processing on the image; to process the image to be processed through the second branch of the neural network model to obtain the second type of parameters, which are used to perform local color processing on the image; and to perform color processing on the image to be processed according to the first-type parameters and the second-type parameters to obtain a color-processed image.
  • the memory 830 described in the embodiments of the present application is used to store computer instructions and parameters required by the processor to run.
  • the embodiment of the present application also provides an electronic device, which may be a terminal device.
  • the device can be used to execute the functions/steps in the above method embodiments.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 900 includes a processor 910 and a transceiver 920.
  • the electronic device 900 may further include a memory 930.
  • the processor 910, the transceiver 920, and the memory 930 can communicate with each other through an internal connection path to transfer control and/or data signals.
  • the memory 930 is used to store a computer program, and the processor 910 is used to call and run the computer program from the memory 930.
  • the electronic device 900 may further include an antenna 940 for transmitting the wireless signal output by the transceiver 920.
  • the above-mentioned processor 910 and memory 930 may be integrated into one processing device; more commonly, they are components independent of each other.
  • the processor 910 is configured to execute program codes stored in the memory 930 to implement the above-mentioned functions.
  • the memory 930 may also be integrated in the processor 910, or independent of the processor 910.
  • the processor 910 may correspond to the processor 820 in the apparatus 800 in FIG. 8.
  • the electronic device 900 may also include one or more of an input unit 960, a display unit 970, an audio circuit 980, a camera 990, and a sensor 901.
  • the audio circuit may also include a speaker 982, a microphone 984, and so on.
  • the display unit 970 may include a display screen.
  • the aforementioned electronic device 900 may further include a power supply 950 for providing power to various devices or circuits in the terminal device.
  • the electronic device 900 shown in FIG. 9 can implement each process of the method embodiments shown in FIGS. 2 to 6.
  • the operations and/or functions of each module in the electronic device 900 are respectively for implementing the corresponding processes in the foregoing method embodiments.
  • the processor described in each embodiment of the present application may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the processor described in each embodiment of the present application may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory, electrically erasable programmable memory, register, or another storage medium mature in the art.
  • the storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above method in combination with its hardware.
  • the size of the sequence number of each process does not mean the order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of this application essentially, or the part that contributes to the existing technology, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Color Image Communication Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image processing method and apparatus, and an electronic device. The technical solution of the present invention consists in obtaining an image to be processed; processing said image by means of a first branch of a pre-trained neural network model to obtain a first type of parameters, the first type of parameters being used to perform global color processing on the image; processing said image by means of a second branch of the neural network model to obtain a second type of parameters, the second type of parameters being used to perform local color processing on the image; and performing color processing on said image according to the first type of parameters and the second type of parameters to obtain a color-processed image. In this technical solution, the input of the neural network model is the image to be processed, and different branches output different types of parameters, so that the problem of error accumulation during the parameter calculation process can be avoided and more accurate parameters obtained. Moreover, the image to be processed is processed according to the obtained parameters, so that the color correction effect of the image can be improved.
PCT/CN2019/083693 2019-04-22 2019-04-22 Procédé et appareil de traitement d'image, et dispositif électronique WO2020215180A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980079484.4A CN113168673A (zh) 2019-04-22 2019-04-22 图像处理方法、装置和电子设备
PCT/CN2019/083693 WO2020215180A1 (fr) 2019-04-22 2019-04-22 Procédé et appareil de traitement d'image, et dispositif électronique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083693 WO2020215180A1 (fr) 2019-04-22 2019-04-22 Procédé et appareil de traitement d'image, et dispositif électronique

Publications (1)

Publication Number Publication Date
WO2020215180A1 true WO2020215180A1 (fr) 2020-10-29

Family

ID=72940548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/083693 WO2020215180A1 (fr) 2019-04-22 2019-04-22 Procédé et appareil de traitement d'image, et dispositif électronique

Country Status (2)

Country Link
CN (1) CN113168673A (fr)
WO (1) WO2020215180A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022194345A1 (fr) * 2021-03-16 2022-09-22 Huawei Technologies Co., Ltd. Processeur de signal d'image modulaire et pouvant être appris
WO2023005115A1 (fr) * 2021-07-28 2023-02-02 爱芯元智半导体(上海)有限公司 Procédé de traitement d'image, appareil de traitement d'image, dispositif électronique et support de stockage lisible

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023028866A1 (fr) * 2021-08-31 2023-03-09 华为技术有限公司 Procédé et appareil de traitement d'image, et véhicule
CN115190226B (zh) * 2022-05-31 2024-04-16 华为技术有限公司 参数调整的方法、训练神经网络模型的方法及相关装置
CN116721038A (zh) * 2023-08-07 2023-09-08 荣耀终端有限公司 颜色修正方法、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063685A1 (en) * 2013-08-30 2015-03-05 National Central University Image distortion correction method and image distortion correction device using the same
CN106412547A (zh) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 一种基于卷积神经网络的图像白平衡方法、装置和计算设备
CN106934426A (zh) * 2015-12-29 2017-07-07 三星电子株式会社 基于图像信号处理的神经网络的方法和设备
CN107145902A (zh) * 2017-04-27 2017-09-08 厦门美图之家科技有限公司 一种基于卷积神经网络的图像处理方法、装置及移动终端
CN107578390A (zh) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 一种使用神经网络进行图像白平衡校正的方法及装置
CN108364267A (zh) * 2018-02-13 2018-08-03 北京旷视科技有限公司 图像处理方法、装置及设备
US20190045163A1 (en) * 2018-10-02 2019-02-07 Intel Corporation Method and system of deep learning-based automatic white balancing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004048740A (ja) * 2002-06-25 2004-02-12 Texas Instruments Inc ニューラル・ネットワーク・マッピングによる輝度得点自動露出を通しての自動白色バランシング
US8660355B2 (en) * 2010-03-19 2014-02-25 Digimarc Corporation Methods and systems for determining image processing operations relevant to particular imagery
EP3216217B1 (fr) * 2015-03-18 2021-05-05 Huawei Technologies Co., Ltd. Dispositif et procédé de traitement d'image permettant un équilibrage de couleur

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063685A1 (en) * 2013-08-30 2015-03-05 National Central University Image distortion correction method and image distortion correction device using the same
CN106934426A (zh) * 2015-12-29 2017-07-07 三星电子株式会社 基于图像信号处理的神经网络的方法和设备
CN106412547A (zh) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 一种基于卷积神经网络的图像白平衡方法、装置和计算设备
CN107145902A (zh) * 2017-04-27 2017-09-08 厦门美图之家科技有限公司 一种基于卷积神经网络的图像处理方法、装置及移动终端
CN107578390A (zh) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 一种使用神经网络进行图像白平衡校正的方法及装置
CN108364267A (zh) * 2018-02-13 2018-08-03 北京旷视科技有限公司 图像处理方法、装置及设备
US20190045163A1 (en) * 2018-10-02 2019-02-07 Intel Corporation Method and system of deep learning-based automatic white balancing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022194345A1 (fr) * 2021-03-16 2022-09-22 Huawei Technologies Co., Ltd. Processeur de signal d'image modulaire et pouvant être appris
WO2023005115A1 (fr) * 2021-07-28 2023-02-02 爱芯元智半导体(上海)有限公司 Procédé de traitement d'image, appareil de traitement d'image, dispositif électronique et support de stockage lisible

Also Published As

Publication number Publication date
CN113168673A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
WO2021051996A1 (fr) Procédé et appareil de traitement d'image
WO2020215180A1 (fr) Procédé et appareil de traitement d'image, et dispositif électronique
US10916036B2 (en) Method and system of generating multi-exposure camera statistics for image processing
US8547450B2 (en) Methods and systems for automatic white balance
WO2021057474A1 (fr) Procédé et appareil de mise au point sur un sujet, dispositif électronique et support de stockage
EP3308534A1 (fr) Module de mise à l'échelle de réseau de filtres colorés
US20140078247A1 (en) Image adjuster and image adjusting method and program
WO2023010754A1 (fr) Procédé et appareil de traitement d'image, équipement terminal et support d'enregistrement
US10600170B2 (en) Method and device for producing a digital image
WO2020011112A1 (fr) Procédé et système de traitement d'image, support de stockage lisible et terminal
US20150365612A1 (en) Image capture apparatus and image compensating method thereof
WO2024027287A9 (fr) Système et procédé de traitement d'image, et support lisible par ordinateur et dispositif électronique associés
WO2023010750A1 (fr) Procédé et appareil de mappage de couleur d'image, dispositif électronique, et support d'enregistrement
WO2019104047A1 (fr) Mise en correspondance de tonalité globale
CN104469191A (zh) 图像降噪的方法及其装置
US20240129446A1 (en) White Balance Processing Method and Electronic Device
CN114331916B (zh) 图像处理方法及电子设备
CN110807735A (zh) 图像处理方法、装置、终端设备及计算机可读存储介质
US20140168452A1 (en) Photographing apparatus, method of controlling the same, and non-transitory computer-readable storage medium for executing the method
CN115802183B (zh) 图像处理方法及其相关设备
WO2023010751A1 (fr) Procédé et appareil de compensation d'informations pour une zone mise en évidence d'une image, dispositif et support d'enregistrement
US9654756B1 (en) Method and apparatus for interpolating pixel colors from color and panchromatic channels to color channels
CN107295261A (zh) 图像去雾处理方法、装置、存储介质和移动终端
CN109309784B (zh) 移动终端
WO2021179142A1 (fr) Procédé de traitement d'image et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19926604

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19926604

Country of ref document: EP

Kind code of ref document: A1