CN110647930B - Image processing method and device and electronic equipment


Info

Publication number
CN110647930B
CN110647930B (application CN201910893726.5A)
Authority
CN
China
Prior art keywords
color
channel
neural network
white balance
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910893726.5A
Other languages
Chinese (zh)
Other versions
CN110647930A (en)
Inventor
张水发
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910893726.5A
Publication of CN110647930A
Application granted
Publication of CN110647930B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The disclosure relates to an image processing method, an image processing apparatus and an electronic device, intended to solve the low accuracy of white balance processing in the prior art. In embodiments of the disclosure, a target image to be processed is input into a convolutional neural network and the multi-channel feature map output by the network is acquired; a white balance coefficient corresponding to the target image is determined from the multi-channel feature map; and the target image is processed according to the determined white balance coefficient. Because the convolutional neural network is trained to produce the multi-channel feature map corresponding to an image, the trained network can determine the white balance coefficient of the target image directly, with no need to classify the scene of the image and process it with a preset per-scene coefficient. The white balance coefficient of the target image is therefore obtained more accurately, which improves the accuracy of white balance processing on the target image.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
White balance is a measure of how accurately white is reproduced when the three primary colors red, green and blue (RGB) are mixed in a display. It is a central concept in television and photography: it is the key parameter that lets a camera image faithfully reflect the colors of the subject, and it underlies a range of color restoration and tone processing problems. White balance arose together with faithful color reproduction in electronic imaging; it was first applied in professional photography and is now widely used in consumer electronics such as video cameras, digital cameras and mobile phones with shooting functions.
Traditional white balance processing relies on manually designed scenes, each with a preset white balance coefficient. While shooting, the captured image data are analyzed to decide which scene the current image belongs to; the coefficient preset for that scene is taken as the white balance coefficient of the current image, and the image is processed with it. Manually designed scenes, however, are inherently limited: when the image data to be white-balanced are complex, the image may be assigned to the wrong scene, and a wrong white balance coefficient is chosen.
In summary, prior-art methods for white balance processing of images are not accurate enough.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus and an electronic device, to solve the low accuracy of white balance processing on images in the prior art. The technical solution of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
inputting a target image to be processed into a convolutional neural network and acquiring a multi-channel feature map output by the convolutional neural network, where the multiple channels comprise a plurality of color channels and a weight channel used to represent the weight coefficients of the pixels in the target image;
determining a white balance coefficient corresponding to the target image from the multi-channel feature map;
and processing the target image according to the determined white balance coefficient.
In a possible implementation, the multi-channel feature map includes a color feature map corresponding to each color channel and a weight feature map corresponding to the weight channel;
the determining of the white balance coefficient corresponding to the target image from the multi-channel feature map includes:
for any color channel, determining a color feature value corresponding to that channel from the color feature map corresponding to the channel and the weight feature map corresponding to the weight channel;
and determining the white balance coefficient corresponding to the target image from the determined color feature values of all the color channels.
In a possible implementation, determining the color feature value of each color channel from its color feature map and the weight feature map includes:
for any color channel, determining a weighted feature value for each pixel from the pixel's feature value in the channel's color feature map and the weight coefficient of the pixel at the same position in the weight feature map;
and determining the color feature value corresponding to the color channel from the weighted feature values of all the pixels in the color feature map.
In one possible implementation, the convolutional neural network is trained as follows:
training the convolutional neural network by taking a training image as its input and a multi-channel feature map as its output;
during training, determining a predicted white balance coefficient corresponding to the training image from the multi-channel feature map output by the network;
and adjusting the parameters of the network according to the loss value between the predicted white balance coefficient and the true white balance coefficient corresponding to the training image.
In one possible implementation, the true white balance coefficient is determined as follows:
determining the true white balance coefficient of the training image corresponding to an original image from the color blocks on a color correction plate included in the original image;
where the training image corresponding to the original image is obtained by replacing the color correction plate in the original image with a specific color.
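As a minimal sketch of how such a ground-truth coefficient could be read off the color correction plate: the disclosure does not spell out the procedure, so the assumption that an achromatic (gray) patch is used, the function name, and the sample pixel values are all illustrative.

```python
import numpy as np

def true_white_balance(gray_patch):
    """Ground-truth [R/G, B/G] from RAW pixels covering an achromatic patch
    of the color correction plate: under a neutral illuminant the patch has
    R = G = B, so its channel means expose the illuminant's color cast."""
    r, g, b = (float(np.mean(gray_patch[..., i])) for i in range(3))
    return np.array([r / g, b / g])

# Hypothetical 4x4 patch captured under a bluish illuminant:
patch = np.empty((4, 4, 3))
patch[..., 0], patch[..., 1], patch[..., 2] = 90.0, 100.0, 130.0
coeff = true_white_balance(patch)  # [0.9, 1.3]
```

The coefficient computed this way serves as the regression target against which the network's prediction is compared during training.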
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an acquiring unit configured to input a target image to be processed into a convolutional neural network and acquire a multi-channel feature map output by the network, where the multiple channels comprise a plurality of color channels and a weight channel used to represent the weight coefficients of the pixels in the target image;
a determining unit configured to determine a white balance coefficient corresponding to the target image from the multi-channel feature map;
a processing unit configured to process the target image according to the determined white balance coefficient.
In one possible implementation, the multi-channel feature map includes a color feature map corresponding to each color channel and a weight feature map corresponding to the weight channel;
the determining unit is configured to determine, for any color channel, a color feature value corresponding to that channel from the channel's color feature map and the weight feature map corresponding to the weight channel; and to determine the white balance coefficient corresponding to the target image from the determined color feature values of all the color channels.
In a possible implementation, the determining unit is configured to determine, for any color channel, a weighted feature value for each pixel from the pixel's feature value in the channel's color feature map and the weight coefficient of the pixel at the same position in the weight feature map; and to determine the color feature value corresponding to the color channel from the weighted feature values of all the pixels in the color feature map.
In one possible implementation, the acquiring unit is configured to train the convolutional neural network by taking a training image as its input and a multi-channel feature map as its output; during training, to determine a predicted white balance coefficient corresponding to the training image from the multi-channel feature map output by the network; and to adjust the parameters of the network according to the loss value between the predicted white balance coefficient and the true white balance coefficient corresponding to the training image.
In a possible implementation, the acquiring unit is configured to determine the true white balance coefficient of the training image corresponding to an original image from the color blocks on a color correction plate included in the original image; the training image corresponding to the original image is obtained by replacing the color correction plate in the original image with a specific color.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a memory for storing executable instructions;
a processor configured to read and execute the executable instructions stored in the memory to implement the image processing method of any implementation of the first aspect of the embodiments of the disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-volatile storage medium storing instructions that, when executed by a processor of an image processing apparatus, enable the apparatus to perform the image processing method described in the first aspect of the embodiments of the present disclosure.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects:
In the image processing method provided by the embodiments of the disclosure, the convolutional neural network is trained on a large number of training images so that it learns to produce the multi-channel feature map corresponding to an image. The trained network is then used to acquire the multi-channel feature map corresponding to the target image, from which the target image's white balance coefficient is determined. There is no need to classify the scene of the target image and process it with a preset per-scene white balance coefficient; the white balance coefficient of the target image is therefore obtained more accurately, which improves the accuracy of white balance processing on the target image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram of a digital camera imaging process, according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of image processing according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating an application scenario in accordance with an illustrative embodiment;
FIG. 4 is a schematic diagram illustrating a multi-channel feature map corresponding to a target image in accordance with an illustrative embodiment;
FIG. 5 is a schematic diagram illustrating a multi-channel feature map corresponding to another target image in accordance with an illustrative embodiment;
FIG. 6 is an overall flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 7 is a flow diagram illustrating a method of training a convolutional neural network in accordance with an exemplary embodiment;
FIG. 8 is an overall flow diagram illustrating a method of training a convolutional neural network in accordance with an exemplary embodiment;
FIG. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating another electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
Hereinafter, some terms in the embodiments of the present disclosure are explained to facilitate understanding by those skilled in the art.
(1) In the embodiments of the present disclosure, the term "plurality" means two or more; similar terms are construed likewise.
(2) In the embodiments of the present disclosure, the "white balance coefficient" may be a vector, specifically represented as [R/G, B/G], where R, G and B denote the brightness values corresponding to the R, G and B channels respectively.
(3) In the embodiments of the present disclosure, a "color channel" is a channel that stores the color information of an image; each color channel stores the information of one color component of the image. The colors of all the color channels superimpose to produce the colors of the pixels in the image;
the color channels include the R (red), G (green) and B (blue) channels.
(4) In the embodiments of the present disclosure, a "loss function" maps the value of a random event, or of a random variable associated with it, to a non-negative real number representing the "risk" or "loss" of that event. In practice the loss function serves as the learning criterion of an optimization problem: the model is solved and evaluated by minimizing it. L1 loss is one such function, defined as the absolute value of the difference between the estimated value and the true value.
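As a small illustration (the two-element [R/G, B/G] form follows definition (2) above; the sample numbers are made up), L1 loss between a predicted and a true white balance coefficient could be computed as:

```python
import numpy as np

def l1_loss(predicted, target):
    # Mean absolute difference between the estimated and the true values.
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(target))))

# Predicted vs. ground-truth white balance coefficients [R/G, B/G]:
loss = l1_loss([0.95, 1.10], [1.00, 1.05])  # 0.05
```

Minimizing this quantity over a training set is what drives the parameter adjustment described in the training procedure.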
(5) In the embodiments of the present disclosure, a "RAW image" is an original image file containing unprocessed data from the image sensor of a digital camera, scanner or motion-picture film scanner. A RAW file has not yet been processed, printed or edited. A RAW image generally carries a wide internal color gamut that can be adjusted precisely, and simple modifications such as white balance processing can be applied to it before conversion.
White balance is a camera setting that keeps the colors of the subject from being distorted by the environment. Applied during shooting, it restores the subject's original colors as far as possible and corrects the color cast the camera would otherwise pick up from the ambient light. For example, in a cloudy environment a subject photographs bluer than it really is; in white balance mode the camera adds yellow to cancel the cast and bring the image as close as possible to the subject's true colors.
Taking a digital camera as an example, the general imaging process, shown in FIG. 1, is as follows: light passes through the lens onto the camera's photosensitive element, which converts it into the electronic signals that constitute the image data; these signals form the RAW data. If the configured storage format is JPEG, the camera's image processor applies color processing and compression to the RAW data, and the resulting JPEG image is stored in the camera; if the configured format is RAW, the camera stores the RAW image without running it through the image processor. Because a RAW image is highly adjustable, white balance processing is usually performed on images in RAW format.
At present, the automatic white balance mode of most smart cameras presets white balance coefficients for a number of scenes. During shooting, the camera analyzes the RAW image data being captured, decides which scene the RAW image belongs to, and corrects the image with that scene's white balance coefficient. But manually designed scenes cannot cover all natural scenes: the image being shot may belong to none of them, or to some combination of them. The camera then applies the coefficient of the closest designed scene, the color restoration of the image deviates, and the accuracy of white balance processing suffers.
The embodiments of the present disclosure disclose a method for determining the white balance coefficient of an image with a self-learning convolutional neural network. No scene classification of the image is needed: once the convolutional neural network has been trained, it can determine the white balance coefficient of an image requiring white balance processing more accurately, improving the accuracy of the processing.
To make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, rather than all embodiments. All other embodiments, which can be derived by one of ordinary skill in the art from the embodiments disclosed herein without making any creative effort, shall fall within the scope of protection of the present disclosure.
Embodiments of the present disclosure are described in further detail below.
FIG. 2 is a flow chart illustrating an image processing method according to an exemplary embodiment, as shown in FIG. 2, including the steps of:
in step S21, a target image to be processed is input into a convolutional neural network, and a multi-channel feature map output by the convolutional neural network is obtained, where multiple channels in the convolutional neural network include multiple color channels and weight channels for representing weight coefficients corresponding to pixel points in the target image.
In step S22, the white balance coefficient corresponding to the target image is determined from the multi-channel feature map.
In step S23, the target image is processed in accordance with the determined white balance coefficient.
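The three steps can be sketched end to end. This is only an illustrative skeleton: the disclosure does not fix a network architecture, so `infer_feature_maps` stands in for the trained convolutional neural network of step S21 and fakes its four output maps with uniform weights; all names here are assumptions.

```python
import numpy as np

def infer_feature_maps(image):
    # Stand-in for the convolutional neural network of step S21. A trained
    # network would map the RAW image to four feature maps: R, G, B and the
    # weight channel W; here the color maps are just the raw channels and
    # the weights are uniform, purely for illustration.
    h, w, _ = image.shape
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    weight = np.full((h, w), 1.0 / (h * w))
    return r, g, b, weight

def white_balance_coefficient(r, g, b, weight):
    # Step S22: weight each pixel, aggregate per color channel, form [R/G, B/G].
    rv, gv, bv = (float(np.sum(c * weight)) for c in (r, g, b))
    return np.array([rv / gv, bv / gv])

def apply_white_balance(image, coeff):
    # Step S23: divide the R and B channels by their gains, balancing to G.
    out = image.astype(np.float64).copy()
    out[..., 0] /= coeff[0]
    out[..., 2] /= coeff[1]
    return out

# Synthetic RAW-like image with a strong red cast:
img = np.zeros((2, 2, 3))
img[..., 0], img[..., 1], img[..., 2] = 200.0, 100.0, 50.0
r, g, b, wmap = infer_feature_maps(img)
coeff = white_balance_coefficient(r, g, b, wmap)  # [2.0, 0.5]
balanced = apply_white_balance(img, coeff)        # every channel becomes 100
```

Balancing toward the G channel, as in [R/G, B/G], is the convention implied by the coefficient's definition in this disclosure.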
As can be seen from the above, in the embodiments of the present disclosure, a target image to be processed is input into a convolutional neural network, the multi-channel feature map output by the network is obtained, the white balance coefficient corresponding to the target image is determined from that feature map, and the target image is processed according to the determined coefficient. Because a convolutional neural network processes quickly, the multi-channel feature map corresponding to the target image can be acquired quickly. Moreover, the network is typically trained on a large number of samples, so it learns to produce the multi-channel feature map corresponding to an image, and the trained network yields color-channel and weight-channel feature maps that closely match the target image. A white balance coefficient determined from these feature maps is therefore more accurate, which improves the accuracy of white balance processing on the target image.
An optional application scenario is shown in FIG. 3. An image processing application is installed on the terminal device 31. When the user 30 obtains a target image to be processed through the application, the application sends the image to the server 32. The server 32 inputs the target image into a convolutional neural network and obtains the multi-channel feature map output by the network, whose channels comprise a plurality of color channels and a weight channel representing the weight coefficients of the pixels in the target image. The server 32 returns the multi-channel feature map to the application, which determines the white balance coefficient corresponding to the target image from it and processes the image according to the determined coefficient.
It should be noted that the target image that needs to be processed in the embodiment of the present disclosure may be a RAW image.
The multiple channels in the embodiments of the present disclosure comprise a plurality of color channels and a weight channel used to represent the weight coefficients of the pixels in the target image;
correspondingly, the multi-channel feature map comprises a color feature map for each of the color channels and a weight feature map for the weight channel.
The plurality of color channels may include the R channel, the G channel and the B channel.
For example, FIG. 4 shows the multi-channel feature map obtained after a target image is input into the convolutional neural network: a color feature map for the R channel, one for the G channel, one for the B channel, and a weight feature map for the W channel. Each cell represents a pixel, and cells at the same position in different feature maps represent different color features of the same pixel.
Suppose the multi-channel feature map output by the network for the target image is as shown in FIG. 4: the feature value of the first pixel is 218 in the R-channel color feature map, 112 in the G-channel map and 214 in the B-channel map, and the weight coefficient of the first pixel in the weight feature map is 0.3.
After a multi-channel feature map output by the convolutional neural network is obtained, the white balance coefficient corresponding to the target image can be determined according to the multi-channel feature map;
an optional implementation manner is that the white balance coefficient corresponding to the target object is determined according to the following manner:
determining a color characteristic value corresponding to the color channel according to a color characteristic diagram corresponding to the color channel and a weight characteristic diagram corresponding to the weight channel aiming at any color channel; and determining a white balance coefficient corresponding to the target image according to the determined color characteristic value corresponding to each color channel.
In the embodiment of the disclosure, the convolutional neural network can quickly and accurately acquire the multi-channel feature map corresponding to the target image, and the color feature value corresponding to each color channel is determined according to the color feature map and the weight feature map in the multi-channel feature map; because the influence of the light on different pixel points in the same image is possibly different, the weights occupied by the different pixel points are also different when the color characteristic value is determined, and the weight characteristic diagram is referred when the color characteristic value corresponding to the color channel is calculated, so that the determined color characteristic value is more accurate, and the accuracy of white balance processing on the target image is improved.
In implementation, the color characteristic value corresponding to each color channel needs to be determined, for example, if the color channel includes an R channel, a G channel, and a B channel, the color characteristic value corresponding to the R channel, the color characteristic value corresponding to the G channel, and the color characteristic value corresponding to the B channel need to be determined;
after the color characteristic value corresponding to each color channel is determined, determining a white balance coefficient according to the color characteristic value corresponding to each color channel; the white balance coefficient may be a vector, and may be specifically represented as [ R/G, B/G ], where R, B, G is a color feature value corresponding to an R channel, a color feature value corresponding to a B channel, and a color feature value corresponding to a G channel, respectively.
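Given the per-channel color feature values, forming this vector is straightforward; the following sketch (function name ours, not from the disclosure) makes the construction explicit:

```python
def white_balance_coefficient(r: float, g: float, b: float) -> list:
    """Build the white balance coefficient vector [R/G, B/G] from the
    color feature values of the R, G, and B channels."""
    return [r / g, b / g]

# Example with the color feature values worked out later in this
# disclosure (R = 106.7, G = 83.7, B = 104.4):
coeff = white_balance_coefficient(106.7, 83.7, 104.4)
```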
The method for determining the color characteristic value corresponding to each color channel according to the embodiments of the present disclosure is described in detail below.
In implementation, for any color channel, the color characteristic value corresponding to the color channel is determined in the following manner:
determining the weight characteristic value of the pixel point according to the characteristic value of the pixel point in the color characteristic diagram corresponding to the color channel and the weight coefficient of the pixel point at the same position in the weight characteristic diagram; and determining a color characteristic value corresponding to the color channel according to the weight characteristic value of the pixel point in the color characteristic diagram.
In the embodiment of the disclosure, the color feature map corresponding to each color channel and the weight feature map of the target image can be quickly and accurately obtained through the convolutional neural network, and the feature value and the weight coefficient of each pixel point are then used to calculate the color feature value corresponding to each color channel. The influence of light on different pixel points in the same image may differ, so different pixel points carry different weights when the white balance coefficient is calculated. The weight feature map corresponding to the weight channel contains the weight coefficients of the different pixel points, and the color feature value of each color channel is obtained by weighting the feature values of the pixel points in that color feature map with the weight coefficients of the pixel points at the same positions in the weight feature map. The resulting color feature values of all the color channels allow the white balance coefficient of the target image to be determined more accurately, thereby improving the accuracy of white balance processing on the target image.
In the process of determining the color characteristic value corresponding to the color channel, the color characteristic value corresponding to the color channel is determined according to the determined weight characteristic values of all the pixel points;
specifically, when determining the weight characteristic value of a pixel in the color characteristic graph, the product of the characteristic value of the pixel in the color characteristic graph corresponding to the color channel and the weight coefficient of the pixel at the same position in the weight characteristic graph is used as the weight characteristic value of the pixel in the color characteristic graph.
For example, as shown in fig. 5, reading the color feature map and the weight feature map corresponding to the R channel from top to bottom and from left to right: the feature value of the first pixel point is 24 and its weight coefficient is 0.1; the second, 26 and 0.2; the third, 28 and 0.3; the fourth, 28 and 0.2; the fifth, 40 and 0.3; the sixth, 40 and 0.4; the seventh, 26 and 0.5; the eighth, 27 and 0.7; and the ninth, 28 and 0.9.
Then, the weight feature value of the first pixel point is: 24 × 0.1 = 2.4;
the weight feature value of the second pixel point is: 26 × 0.2 = 5.2;
the weight feature value of the third pixel point is: 28 × 0.3 = 8.4;
the weight feature value of the fourth pixel point is: 28 × 0.2 = 5.6;
the weight feature value of the fifth pixel point is: 40 × 0.3 = 12;
the weight feature value of the sixth pixel point is: 40 × 0.4 = 16;
the weight feature value of the seventh pixel point is: 26 × 0.5 = 13;
the weight feature value of the eighth pixel point is: 27 × 0.7 = 18.9;
the weight feature value of the ninth pixel point is: 28 × 0.9 = 25.2.
And when the color characteristic value corresponding to the color channel is determined, taking the sum of the weight characteristic values of all the pixel points in the color characteristic graph as the color characteristic value corresponding to the color channel.
For example, as shown in fig. 5, after the weight feature value of each pixel point is determined, the sum of the weight feature values of all pixel points in the color feature map corresponding to the R channel is taken as the color feature value corresponding to the R channel: R = 2.4 + 5.2 + 8.4 + 5.6 + 12 + 16 + 13 + 18.9 + 25.2 = 106.7.
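As a sketch, the per-pixel weighting and the summation above can be written with NumPy; the 3 × 3 layout of the nine pixel points is our assumption, while the values are those given for the R channel in fig. 5:

```python
import numpy as np

# Color feature map and weight feature map for the R channel, with the
# nine values listed above laid out on an assumed 3 x 3 grid.
r_feature_map = np.array([[24.0, 26.0, 28.0],
                          [28.0, 40.0, 40.0],
                          [26.0, 27.0, 28.0]])
weight_map = np.array([[0.1, 0.2, 0.3],
                       [0.2, 0.3, 0.4],
                       [0.5, 0.7, 0.9]])

# Element-wise product gives each pixel's weight feature value; the sum
# of those products is the color feature value of the channel.
r_value = float((r_feature_map * weight_map).sum())   # approx. 106.7
```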
FIG. 6 is an overall flow diagram illustrating an image processing method according to an exemplary embodiment, as shown in FIG. 6, including the steps of:
in step S61, the target image is input to a convolutional neural network.
In step S62, acquiring a multi-channel feature map corresponding to a target image output by the convolutional neural network;
the multi-channel feature map comprises a color feature map corresponding to an R channel, a color feature map corresponding to a G channel, a color feature map corresponding to a B channel and a weight feature map.
The following steps S63 and S64 are performed for the color feature map corresponding to the R channel, the color feature map corresponding to the G channel, and the color feature map corresponding to the B channel:
in step S63, the weight feature value of the pixel is determined according to the feature value of the pixel in the color feature map corresponding to the color channel and the weight coefficient of the pixel at the same position in the weight feature map.
In step S64, a color feature value corresponding to the color channel is determined according to the weight feature values of all the pixel points in the color feature map.
In step S65, a white balance coefficient corresponding to the target image is determined according to the color feature values corresponding to the plurality of color channels.
In step S66, the target image is processed by the white balance coefficient corresponding to the target image.
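Steps S61–S65 can be sketched end to end. The trained network is replaced here by a stub that returns a 4-channel map (three color feature maps plus a uniform weight map); the stub and all names are illustrative assumptions, not the disclosure's implementation:

```python
import numpy as np

def cnn_stub(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained convolutional neural network (S61/S62):
    returns R, G, and B color feature maps plus a weight feature map.
    A real implementation would run the trained model instead."""
    color_maps = image.transpose(2, 0, 1).astype(float)   # (3, H, W)
    n_pixels = image.shape[0] * image.shape[1]
    weight_map = np.full((1,) + image.shape[:2], 1.0 / n_pixels)
    return np.concatenate([color_maps, weight_map])       # (4, H, W)

def estimate_white_balance(image: np.ndarray) -> list:
    feature_map = cnn_stub(image)
    color_maps, weight_map = feature_map[:3], feature_map[3]
    # S63/S64: weight each pixel's feature value, then sum per channel.
    r, g, b = (color_maps * weight_map).sum(axis=(1, 2))
    # S65: the white balance coefficient is the vector [R/G, B/G].
    return [r / g, b / g]
```

For a uniformly gray image the stub yields identical channel sums, so the coefficient comes out as [1.0, 1.0]; with the real network the weight map would vary per pixel.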
The following explains the image processing method provided by the present disclosure by taking a target image including 9 pixels as an example.
It should be noted that the number of the pixel points included in the target image is merely an example, and the feature values of the pixel points in the color feature map corresponding to the color channel and the numerical values of the weight coefficients of the pixel points in the weight feature map corresponding to the weight channel are merely exemplary, and are only used to describe the image processing method for determining the white balance coefficient provided by the present disclosure.
Inputting a target image containing 9 pixel points into a convolutional neural network, and assuming that a multi-channel feature map output by the convolutional neural network is as shown in fig. 5.
1. And determining the corresponding color characteristic value of each color channel.
For the R channel:
determining a weight characteristic value of each pixel point in the color characteristic graph corresponding to the R channel according to the color characteristic graph corresponding to the R channel and the weight characteristic graph;
and aiming at each pixel point, taking the product of the characteristic value of the pixel point in the color characteristic graph corresponding to the R channel and the weight coefficient corresponding to the pixel point in the weight characteristic graph as the weight characteristic value of the pixel point.
Specifically, as shown in the color feature map corresponding to the R channel in fig. 5, in the order from top to bottom and from left to right, the weight feature value of the first pixel point is 24 × 0.1 = 2.4;
the weight feature value of the second pixel point is 26 × 0.2 = 5.2;
the weight feature value of the third pixel point is: 28 × 0.3 = 8.4;
the weight feature value of the fourth pixel point is: 28 × 0.2 = 5.6;
the weight feature value of the fifth pixel point is: 40 × 0.3 = 12;
the weight feature value of the sixth pixel point is: 40 × 0.4 = 16;
the weight feature value of the seventh pixel point is: 26 × 0.5 = 13;
the weight feature value of the eighth pixel point is: 27 × 0.7 = 18.9;
the weight feature value of the ninth pixel point is: 28 × 0.9 = 25.2.
Taking the sum of the weight feature values of the nine pixel points in the color feature map corresponding to the R channel as the color feature value corresponding to the R channel, the color feature value corresponding to the R channel is:
2.4 + 5.2 + 8.4 + 5.6 + 12 + 16 + 13 + 18.9 + 25.2 = 106.7.
for the G channel:
determining a weight characteristic value of each pixel point in the color characteristic graph corresponding to the G channel according to the color characteristic graph corresponding to the G channel and the weight characteristic graph;
and aiming at each pixel point, taking the product of the characteristic value of the pixel point in the color characteristic diagram corresponding to the G channel and the weight coefficient corresponding to the pixel point in the weight characteristic diagram as the weight characteristic value of the pixel point.
Specifically, as shown in the color feature map corresponding to the G channel in fig. 5, in the order from top to bottom and from left to right, the weight feature value of the first pixel point is 26 × 0.1 = 2.6;
the weight feature value of the second pixel point is 30 × 0.2 = 6;
the weight feature value of the third pixel point is: 50 × 0.3 = 15;
the weight feature value of the fourth pixel point is: 21 × 0.2 = 4.2;
the weight feature value of the fifth pixel point is: 24 × 0.3 = 7.2;
the weight feature value of the sixth pixel point is: 40 × 0.4 = 16;
the weight feature value of the seventh pixel point is: 12 × 0.5 = 6;
the weight feature value of the eighth pixel point is: 15 × 0.7 = 10.5;
the weight feature value of the ninth pixel point is: 18 × 0.9 = 16.2.
Taking the sum of the weight characteristic values of nine pixel points in the color characteristic diagram corresponding to the G channel as the color characteristic value corresponding to the G channel, wherein the color characteristic value corresponding to the G channel is as follows:
2.6 + 6 + 15 + 4.2 + 7.2 + 16 + 6 + 10.5 + 16.2 = 83.7.
for the B channel:
determining a weight characteristic value of each pixel point in the color characteristic graph corresponding to the B channel according to the color characteristic graph and the weight characteristic graph corresponding to the B channel;
and aiming at each pixel point, taking the product of the characteristic value of the pixel point in the color characteristic diagram corresponding to the B channel and the weight coefficient corresponding to the pixel point in the weight characteristic diagram as the weight characteristic value of the pixel point.
Specifically, as shown in the color feature map corresponding to the B channel in fig. 5, in the order from top to bottom and from left to right, the weight feature value of the first pixel point is 18 × 0.1 = 1.8;
the weight feature value of the second pixel point is 30 × 0.2 = 6;
the weight feature value of the third pixel point is: 25 × 0.3 = 7.5;
the weight feature value of the fourth pixel point is: 21 × 0.2 = 4.2;
the weight feature value of the fifth pixel point is: 42 × 0.3 = 12.6;
the weight feature value of the sixth pixel point is: 12 × 0.4 = 4.8;
the weight feature value of the seventh pixel point is: 25 × 0.5 = 12.5;
the weight feature value of the eighth pixel point is: 40 × 0.7 = 28;
the weight feature value of the ninth pixel point is: 30 × 0.9 = 27.
Taking the sum of the weight feature values of the nine pixel points in the color feature map corresponding to the B channel as the color feature value corresponding to the B channel, the color feature value corresponding to the B channel is:
1.8 + 6 + 7.5 + 4.2 + 12.6 + 4.8 + 12.5 + 28 + 27 = 104.4.
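The three weighted sums above can be reproduced in a few lines (values from the description of fig. 5, flattened in the stated top-to-bottom, left-to-right order; all names are ours):

```python
import numpy as np

# One row per color channel, nine pixel feature values per row.
features = np.array([
    [24, 26, 28, 28, 40, 40, 26, 27, 28],   # R channel
    [26, 30, 50, 21, 24, 40, 12, 15, 18],   # G channel
    [18, 30, 25, 21, 42, 12, 25, 40, 30],   # B channel
], dtype=float)
weights = np.array([0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9])

# Matrix-vector product performs the weighting and summation for all
# three channels at once.
r, g, b = features @ weights   # approx. 106.7, 83.7, 104.4
```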
2. and determining a white balance coefficient corresponding to the target image according to the color characteristic values corresponding to the multiple color channels.
The white balance coefficient in the embodiment of the disclosure is [R/G, B/G];
where R/G = 106.7/83.7 ≈ 1.27;
B/G = 104.4/83.7 ≈ 1.24;
so the white balance coefficient is determined to be [1.27, 1.24].
3. And processing the target image according to the determined white balance coefficient.
It should be noted that, in the embodiment of the present disclosure, after the white balance coefficient is determined, the target image is processed according to the determined white balance coefficient, and a specific processing manner may adopt a manner in the prior art, which is not described in detail herein.
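The disclosure leaves the correction itself to existing techniques; one common convention is to divide the R and B planes by their respective gains, sketched below under that assumption (function name ours, not the disclosure's implementation):

```python
import numpy as np

def apply_white_balance(image: np.ndarray, coeff) -> np.ndarray:
    """Apply a white balance coefficient [R/G, B/G] to an RGB image by
    dividing the R and B planes by their gains, leaving G untouched.
    This is one common convention, not necessarily the exact processing
    manner used by the disclosure's embodiment."""
    rg_gain, bg_gain = coeff
    out = image.astype(float)
    out[..., 0] /= rg_gain
    out[..., 2] /= bg_gain
    return np.clip(out, 0, 255).astype(np.uint8)

balanced = apply_white_balance(
    np.full((2, 2, 3), 200, dtype=np.uint8), [1.27, 1.24])
```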
The embodiment of the present disclosure further provides a method for training a convolutional neural network, and after the training of the convolutional neural network is completed, the image processing method in fig. 2 and 6 may be performed by the trained convolutional neural network.
FIG. 7 is a flowchart illustrating a method of training a convolutional neural network, as shown in FIG. 7, including the steps of:
in step S71, the convolutional neural network is trained using the training image as an input of the convolutional neural network and the multi-channel feature map as an output of the convolutional neural network.
In step S72, in the process of training the convolutional neural network, a predicted white balance coefficient corresponding to the training image is determined according to the multi-channel feature map output by the convolutional neural network.
In step S73, parameters of the convolutional neural network are adjusted according to the loss value between the predicted white balance coefficient and the true white balance coefficient corresponding to the training image.
In the embodiment of the disclosure, the convolutional neural network is trained with a large number of training images. During training, the predicted white balance coefficient corresponding to a training image can be determined according to the multi-channel feature map output by the convolutional neural network, and the parameters of the convolutional neural network are adjusted through the loss value between the predicted white balance coefficient and the real white balance coefficient of that training image, so that the predicted white balance coefficient determined from the multi-channel feature map grows closer to the real white balance coefficient and the convolutional neural network learns the ability to acquire the multi-channel feature map corresponding to an image. After the convolutional neural network training is completed, a more accurate multi-channel feature map can be obtained from the trained convolutional neural network, so that an accurate white balance coefficient is determined from the multi-channel feature map, improving the accuracy of white balance processing on the target image.
When selecting the convolutional neural network, a network that is sensitive to illumination changes and learns image features well may be chosen.
For example, the convolutional neural network of the embodiments of the present disclosure may be MobileNetV2.
Note that the training image in the embodiment of the present disclosure is a RAW image.
Before inputting training images into the convolutional neural network, embodiments of the present disclosure need to determine the true white balance coefficient of each training image;
an optional embodiment is that, the true white balance coefficient corresponding to the training image is determined according to the following manner:
determining a real white balance coefficient of a training image corresponding to the original image according to a color block on a color correction plate included in the original image;
it should be noted that the training images in the embodiments of the present disclosure are obtained by replacing the color correction plates included in the original image with specific colors.
In the embodiment of the disclosure, a large number of original images including a color correction plate are collected, and the real white balance coefficient corresponding to each original image is determined through the color blocks on its color correction plate. The position of the color correction plate in the original image is then replaced with a specific color to obtain the training image corresponding to that original image; the real white balance coefficient corresponding to the original image is therefore also the real white balance coefficient corresponding to the training image. Because the parameters of the convolutional neural network need to be adjusted according to the loss value between the predicted white balance coefficient and the real white balance coefficient during training, the embodiment of the disclosure provides a way of accurately determining the real white balance coefficient, supplying an adjustment basis for the parameters of the convolutional neural network so that the predicted white balance coefficient obtained by the convolutional neural network grows closer to the real white balance coefficient.
In the implementation, when an original image is shot, a color correction plate is placed at a proper position of a shot picture;
when the original image is shot, the placement of the color correction plate meets the following conditions:
the color correction plate is placed at a position that best represents the ambient illumination of the original image;
the proportion of the original image occupied by the color correction plate should not be too large;
the four edges of the color correction plate should be roughly aligned with the edges of the original image, and no obvious distortion should be produced.
Therefore, the real white balance coefficient of the original image can be determined more accurately through the color blocks on the color correction plate.
After an original image containing a color correction plate is obtained, the RGB value of a white or gray color block on the color correction plate in the original image is read, and the white balance coefficient of the original image is then determined from that RGB value. The white balance coefficient in the embodiment of the present disclosure refers to the vector [R/G, B/G]. Assuming the RGB value obtained from the white color block on the color correction plate is (255, 251, 240), the white balance coefficient of the original image can be determined to be [255/251, 240/251], i.e., approximately [1.02, 0.96].
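A sketch of this derivation (function name ours):

```python
def true_white_balance_from_patch(rgb):
    """Derive the ground-truth white balance coefficient [R/G, B/G]
    from the RGB value read off a white or gray block of the color
    correction plate."""
    r, g, b = rgb
    return [r / g, b / g]

coeff = true_white_balance_from_patch((255, 251, 240))
# coeff is approximately [1.02, 0.96], as in the example above
```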
And then covering the position of the color correction plate in the original image with white or black to obtain a training image corresponding to the original image, and reducing the influence of the color correction plate on the training image as much as possible. Because the training image is obtained by only covering the position of the color correction plate with white or black by the original image, and other positions except the position of the color correction plate are not processed, the real white balance coefficient corresponding to the training image is the white balance coefficient corresponding to the original image.
After the training image and the real white balance coefficient corresponding to the training image are obtained, the training image is input into the convolutional neural network to obtain a multi-channel feature map output by the convolutional neural network.
The multiple channels include a plurality of color channels and a weight channel used for representing the weight coefficients corresponding to the pixel points in the training image;
the plurality of color channels in the embodiments of the present disclosure may include: r channel, G channel, B channel.
In an alternative embodiment, the predicted white balance coefficient corresponding to the training image is determined according to the following manner:
determining a color characteristic value corresponding to the color channel according to a color characteristic diagram corresponding to the color channel and a weight characteristic diagram corresponding to the weight channel aiming at any color channel;
and determining a predicted white balance coefficient corresponding to the training image according to the determined color characteristic value corresponding to each color channel.
In implementation, the color characteristic value corresponding to each color channel needs to be determined, for example, when the color channel includes an R channel, a G channel, and a B channel, the color characteristic value corresponding to the R channel, the color characteristic value corresponding to the G channel, and the color characteristic value corresponding to the B channel need to be determined;
after the color characteristic value corresponding to each color channel is determined, determining a predicted white balance coefficient corresponding to the training image according to the color characteristic value corresponding to each color channel; specifically, the predicted white balance coefficient is [ R/G, B/G ], where R, B, G is a color feature value corresponding to an R channel, a color feature value corresponding to a B channel, and a color feature value corresponding to a G channel, respectively.
The method for determining the color feature value corresponding to each color channel is the same as the method for determining the color feature value corresponding to each color channel in the image processing method provided by the embodiment of the present disclosure, and is not described herein again.
And after the predicted white balance coefficient corresponding to the training image is determined, adjusting the parameters of the convolutional neural network according to the loss value between the predicted white balance coefficient corresponding to the training image and the real white balance coefficient corresponding to the training image.
Specifically, the loss value (L1 loss) between the predicted white balance coefficient [1.27, 1.24] corresponding to the training image and the real white balance coefficient [1.02, 0.96] corresponding to the training image is calculated through a loss function, and when the determined loss value is greater than a preset threshold value, the parameters of the convolutional neural network are adjusted.
After the parameters of the convolutional neural network are adjusted, the training image is input into the convolutional neural network until the loss value between the predicted white balance coefficient and the real white balance coefficient calculated through the loss function is smaller than a preset threshold value, and the convolutional neural network is determined to be trained completely.
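The loss computation and the stopping test just described can be sketched as follows (the L1 loss is taken here as the mean absolute error; the threshold value is illustrative, not specified by the disclosure):

```python
import numpy as np

def l1_loss(predicted, target):
    """Mean absolute error between the predicted and the real white
    balance coefficients."""
    return float(np.mean(np.abs(np.asarray(predicted, dtype=float)
                                - np.asarray(target, dtype=float))))

# Worked values from this disclosure: predicted [1.27, 1.24] against
# the real coefficient [1.02, 0.96].
loss = l1_loss([1.27, 1.24], [1.02, 0.96])   # (0.25 + 0.28) / 2 = 0.265
threshold = 0.01                 # illustrative preset threshold
keep_training = loss > threshold # True: keep adjusting parameters
```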
The convolutional neural network is trained through a large number of training pictures, so that the convolutional neural network learns the capability of acquiring the multi-channel feature map corresponding to the image, and the predicted white balance coefficient determined according to the multi-channel feature map is closer to the real white balance coefficient.
FIG. 8 is an overall flow diagram illustrating a method of training a convolutional neural network, according to an exemplary embodiment, as shown in FIG. 8, including the steps of:
in step S81, a plurality of original images each including a color correction plate are acquired;
in step S82, determining a true white balance coefficient corresponding to each original image through the color correction plate;
in step S83, covering the position of the color correction plate in the original image with a specific color to obtain a training image corresponding to each original image;
in step S84, a plurality of training images are input to the convolutional neural network;
in step S85, acquiring a multi-channel feature map corresponding to a training image output by the convolutional neural network;
in step S86, determining a color feature value corresponding to each color channel according to the multi-channel feature map output by the convolutional neural network;
in step S87, determining a predicted white balance coefficient corresponding to the training image according to the color feature values corresponding to the plurality of color channels;
in step S88, calculating a loss value between the predicted white balance coefficient corresponding to the training image and the true white balance coefficient corresponding to the original image;
in step S89, determining whether the loss value is greater than a predetermined threshold, if so, performing step S810, and if not, performing step S811;
in step S810, adjusting parameters of the convolutional neural network, and returning to step S84;
in step S811, it is determined that the convolutional neural network training is completed.
The embodiment of the present disclosure further provides an image processing apparatus. Since the apparatus corresponds to the image processing method in the embodiment of the present disclosure and solves the problem on a similar principle, the implementation of the apparatus may refer to the implementation of the method, and repeated parts are not described again.
Fig. 9 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 9, the apparatus includes an acquisition unit 900, a determination unit 901, and a processing unit 902.
An obtaining unit 900, configured to input a target image to be processed into a convolutional neural network, and obtain a multi-channel feature map output by the convolutional neural network, where multiple channels in the convolutional neural network include multiple color channels and weight channels used for representing weight coefficients corresponding to pixel points in the target image;
a determining unit 901 configured to perform determining a white balance coefficient corresponding to the target image through the multi-channel feature map;
a processing unit 902 configured to process the target image according to the determined white balance coefficient.
In one possible implementation manner, the multi-channel feature map includes a color feature map corresponding to a color channel and a weight feature map corresponding to a weight channel;
the determining unit 901 is configured to determine, for any color channel, a color feature value corresponding to the color channel through a color feature map corresponding to the color channel and a weight feature map corresponding to a weight channel; and determining a white balance coefficient corresponding to the target image according to the determined color characteristic value corresponding to each color channel.
In a possible implementation manner, the determining unit 901 is configured to determine, for any color channel, a weight feature value of a pixel point according to a feature value of the pixel point in a color feature map corresponding to the color channel and a weight coefficient of the pixel point at the same position in the weight feature map; and determining the color characteristic value corresponding to the color channel according to the weight characteristic values of all the pixel points in the color characteristic diagram.
In one possible implementation, the obtaining unit 900 is configured to train the convolutional neural network by taking a training image as an input of the convolutional neural network and taking a multi-channel feature map as an output of the convolutional neural network; in the process of training the convolutional neural network, determining a predicted white balance coefficient corresponding to the training image according to a multi-channel feature map output by the convolutional neural network; and adjusting parameters of the convolutional neural network according to the loss value between the predicted white balance coefficient and the real white balance coefficient corresponding to the training image.
In a possible implementation manner, the obtaining unit 900 is configured to determine a true white balance coefficient of a training image corresponding to an original image according to color patches on a color correction plate included in the original image; the training image corresponding to the original image is obtained by replacing a color correction plate included in the original image with a specific color.
With regard to the apparatus in the above-described embodiment, the specific manner in which each unit performs operations has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an electronic device 1000 according to an example embodiment, the electronic device including:
a processor 1010;
a memory 1020 for storing instructions executable by the processor 1010;
wherein the processor 1010 is configured to execute the instructions to implement the image processing method in the embodiments of the present disclosure.
In an exemplary embodiment, a non-volatile storage medium comprising instructions, such as the memory 1020 comprising instructions, executable by the processor 1010 of the electronic device 1000 to perform the above-described method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
FIG. 11 is a block diagram illustrating another electronic device 1100, according to an example embodiment, that includes: radio Frequency (RF) circuit 1110, power supply 1120, processor 1130, memory 1140, input unit 1150, display unit 1160, camera 1170, communication interface 1180, and Wireless Fidelity (WiFi) module 1190. Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 11 does not constitute a limitation of the electronic device, and that embodiments of the present disclosure provide electronic devices that may include more or fewer components than those shown, or that certain components may be combined, or that a different arrangement of components may be provided.
The following describes each component of the electronic device 1100 in detail with reference to fig. 11:
The RF circuit 1110 may be used to receive and transmit data during a call or other communication. Specifically, after receiving downlink data from a base station, the RF circuit 1110 sends the downlink data to the processor 1130 for processing; it also transmits uplink data to be sent to the base station. In general, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
In addition, the RF circuit 1110 can also communicate with a network and other terminals through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
WiFi is a short-range wireless transmission technology; the electronic device 1100 connects to an Access Point (AP) through the WiFi module 1190, thereby gaining access to a data network. The WiFi module 1190 may be used to receive and transmit data during communication.
The electronic device 1100 may be physically connected to other terminals through the communication interface 1180. Optionally, the communication interface 1180 is connected to a communication interface of another terminal through a cable, so as to implement data transmission between the electronic device 1100 and the other terminal.
Since the electronic device 1100 implements a communication service to send information to other contacts in the embodiments of the present disclosure, it needs a data transmission function; that is, the electronic device 1100 needs to include a communication module. Although Fig. 11 illustrates communication modules such as the RF circuit 1110, the WiFi module 1190, and the communication interface 1180, it is to be appreciated that at least one of the foregoing components, or another communication module (e.g., a Bluetooth module) for enabling communication, may be present in the electronic device 1100 for data transmission.
For example, when the electronic device 1100 is a mobile phone, the electronic device 1100 may include the RF circuit 1110 and may also include the WiFi module 1190; when the electronic device 1100 is a computer, the electronic device 1100 may include the communication interface 1180 and may further include the WiFi module 1190; when the electronic device 1100 is a tablet computer, the electronic device 1100 may include a WiFi module 1190.
The memory 1140 may be used to store software programs and modules. The processor 1130 runs the software programs and modules stored in the memory 1140 to perform the various functional applications and data processing of the electronic device 1100; when the processor 1130 executes the program code in the memory 1140, some or all of the processes of Figs. 2, 6, 7, and 8 in the embodiments of the present disclosure can be implemented.
Optionally, the memory 1140 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, various application programs (such as communication applications), various modules for WLAN connection, and the like; the data storage area may store data created according to the use of the electronic device, and the like.
Further, the memory 1140 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1150 may be used to receive numeric or character information input by a user and generate key signal inputs related to user settings and function control of the electronic device 1100.
Optionally, the input unit 1150 may include a touch panel 1151 and other input terminals 1152.
The touch panel 1151, also called a touch screen, may collect touch operations of a user on or near it (for example, operations performed by the user on or near the touch panel 1151 with any suitable object or accessory, such as a finger or a stylus) and drive a corresponding connection device according to a preset program. Optionally, the touch panel 1151 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 1130, and it can also receive and execute commands sent by the processor 1130. In addition, the touch panel 1151 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types.
Optionally, other input terminals 1152 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1160 may be used to display information input by or provided to the user and various menus of the electronic device 1100. The display unit 1160 is a display system of the electronic device 1100, and is used for presenting an interface to implement human-computer interaction.
The display unit 1160 may include a display panel 1161. Optionally, the display panel 1161 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
Further, the touch panel 1151 may cover the display panel 1161. When the touch panel 1151 detects a touch operation on or near it, the operation is transmitted to the processor 1130 to determine the type of the touch event, and the processor 1130 then provides a corresponding visual output on the display panel 1161 according to the type of the touch event.
Although in FIG. 11, touch panel 1151 and display panel 1161 are shown as two separate components to implement the input and output functions of electronic device 1100, in some embodiments, touch panel 1151 and display panel 1161 may be integrated to implement the input and output functions of electronic device 1100.
The processor 1130 is a control center of the electronic device 1100, connects the respective components using various interfaces and lines, performs various functions of the electronic device 1100 and processes data by operating or executing software programs and/or modules stored in the memory 1140 and calling data stored in the memory 1140, thereby implementing various services based on the electronic device.
Optionally, processor 1130 may include one or more processing units. Optionally, processor 1130 may integrate an application processor, which handles primarily the operating system, user interfaces, application programs, etc., and a modem processor, which handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1130.
The camera 1170 is configured to implement a shooting function of the electronic device 1100, and shoot a picture or a video.
The electronic device 1100 also includes a power supply 1120 (such as a battery) for powering the various components. Optionally, the power supply 1120 may be logically connected to the processor 1130 through a power management system, so as to manage charging, discharging, power consumption, and the like through the power management system.
Although not shown, the electronic device 1100 may also include at least one sensor, audio circuitry, and the like, which are not described in detail herein.
The embodiments of the present disclosure further provide a computer program product which, when run on an electronic device, causes the electronic device to perform any of the above image processing methods, or any method that may be involved in implementing any of the above image processing methods.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
inputting a target image to be processed into a convolutional neural network, and acquiring a multi-channel feature map output by the convolutional neural network, wherein the multiple channels of the convolutional neural network comprise a plurality of color channels and a weight channel used for representing weight coefficients corresponding to pixel points in the target image, and the multi-channel feature map comprises color feature maps corresponding to the color channels and a weight feature map corresponding to the weight channel;
for any color channel, determining a color feature value corresponding to the color channel through the color feature map corresponding to the color channel and the weight feature map corresponding to the weight channel; determining a white balance coefficient corresponding to the target image according to the determined color feature values corresponding to the color channels, wherein the white balance coefficient is a two-dimensional vector;
and processing the target image according to the determined white balance coefficient.
2. The method of claim 1, wherein determining the color feature value corresponding to each color channel through the color feature map corresponding to each color channel and the weight feature map corresponding to the weight channel comprises:
for any color channel, determining a weighted feature value of a pixel point according to the feature value of the pixel point in the color feature map corresponding to the color channel and the weight coefficient of the pixel point at the same position in the weight feature map;
and determining the color feature value corresponding to the color channel according to the weighted feature values of the pixel points in the color feature map.
3. The method of claim 1, wherein the convolutional neural network is trained according to the following:
training the convolutional neural network by taking a training image as the input of the convolutional neural network and taking a multi-channel feature map as the output of the convolutional neural network;
in the process of training the convolutional neural network, determining a predicted white balance coefficient corresponding to the training image according to a multi-channel feature map output by the convolutional neural network;
and adjusting parameters of the convolutional neural network according to a loss value between the predicted white balance coefficient and a true white balance coefficient corresponding to the training image.
4. The method of claim 3, wherein the true white balance coefficients are determined by:
determining the true white balance coefficient of a training image corresponding to an original image according to a color patch on a color correction plate included in the original image;
the training image corresponding to the original image is obtained by replacing a color correction plate included in the original image with a specific color.
5. An image processing apparatus characterized by comprising:
the obtaining unit is configured to input a target image to be processed into a convolutional neural network, and acquire a multi-channel feature map output by the convolutional neural network, wherein the multiple channels of the convolutional neural network comprise a plurality of color channels and a weight channel used for representing weight coefficients corresponding to pixel points in the target image, and the multi-channel feature map comprises color feature maps corresponding to the color channels and a weight feature map corresponding to the weight channel;
the determining unit is configured to determine, for any color channel, a color feature value corresponding to the color channel through the color feature map corresponding to the color channel and the weight feature map corresponding to the weight channel, and to determine a white balance coefficient corresponding to the target image according to the determined color feature values corresponding to the color channels, wherein the white balance coefficient is a two-dimensional vector;
a processing unit configured to process the target image according to the determined white balance coefficient.
6. The apparatus according to claim 5, wherein the determining unit is configured to determine, for any color channel, a weighted feature value of a pixel point according to the feature value of the pixel point in the color feature map corresponding to the color channel and the weight coefficient of the pixel point at the same position in the weight feature map, and to determine the color feature value corresponding to the color channel according to the weighted feature values of the pixel points in the color feature map.
7. The apparatus of claim 5, wherein the obtaining unit is configured to train the convolutional neural network by taking a training image as the input of the convolutional neural network and taking a multi-channel feature map as the output of the convolutional neural network; determine, in the process of training the convolutional neural network, a predicted white balance coefficient corresponding to the training image according to the multi-channel feature map output by the convolutional neural network; and adjust parameters of the convolutional neural network according to a loss value between the predicted white balance coefficient and a true white balance coefficient corresponding to the training image.
8. The apparatus according to claim 7, wherein the obtaining unit is configured to determine a true white balance coefficient of a training image corresponding to the original image according to a color patch on a color correction plate included in the original image; the training image corresponding to the original image is obtained by replacing a color correction plate included in the original image with a specific color.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method according to any one of claims 1 to 4.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an image processing electronic device, enable the image processing electronic device to perform the image processing method of any one of claims 1 to 4.
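The weighted pooling described in claims 1 through 3 can be sketched as follows. This is a hedged illustration under stated assumptions, not the patented implementation: it assumes three color channels ordered R, G, B, a two-dimensional coefficient of green-referenced (R-gain, B-gain), and a squared-error loss — none of which are fixed by the claim language.

```python
import numpy as np

def white_balance_from_feature_maps(color_maps, weight_map):
    """Collapse the network's multi-channel output into a white balance
    coefficient (claims 1-2).

    color_maps: C x H x W array, one feature map per color channel
                (C == 3, ordered R, G, B, is assumed here).
    weight_map: H x W array of per-pixel weight coefficients from the
                weight channel.
    """
    w = weight_map / weight_map.sum()  # normalize the weight coefficients
    # Color feature value per channel: weight each pixel's feature value by
    # the weight coefficient at the same position, then accumulate.
    r, g, b = (float(np.sum(c * w)) for c in color_maps)
    # Two-dimensional coefficient; green as the reference is an assumption.
    return np.array([g / r, g / b])

def wb_loss(predicted, true):
    """Squared error between predicted and true coefficients; claim 3 leaves
    the loss unspecified, so L2 is an assumed choice."""
    return float(np.sum((predicted - true) ** 2))

# Toy 2x2 feature maps: constant color responses and uniform weights.
color_maps = np.stack([np.full((2, 2), v) for v in (0.6, 0.5, 0.4)])
weight_map = np.ones((2, 2))
predicted = white_balance_from_feature_maps(color_maps, weight_map)
loss = wb_loss(predicted, np.array([1.0, 1.0]))
```

During training (claim 3), a loss of this kind would be backpropagated through the convolutional neural network; at inference (claim 1's final step), the two gains would be multiplied onto the red and blue channels of the target image.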
CN201910893726.5A 2019-09-20 2019-09-20 Image processing method and device and electronic equipment Active CN110647930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910893726.5A CN110647930B (en) 2019-09-20 2019-09-20 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110647930A CN110647930A (en) 2020-01-03
CN110647930B true CN110647930B (en) 2022-08-05

Family

ID=69010900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910893726.5A Active CN110647930B (en) 2019-09-20 2019-09-20 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110647930B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340048B (en) * 2020-02-28 2022-02-22 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2021204202A1 (en) * 2020-04-10 2021-10-14 华为技术有限公司 Image auto white balance method and apparatus
CN111818318B (en) * 2020-06-12 2022-01-11 北京阅视智能技术有限责任公司 White balance tuning method, device, equipment and storage medium for image processor
CN114071106B (en) * 2020-08-10 2023-07-04 合肥君正科技有限公司 Cold start fast white balance method for low-power-consumption equipment
CN112333437B (en) * 2020-09-21 2022-05-31 宁波萨瑞通讯有限公司 AI camera debugging parameter generator
CN112949504B (en) * 2021-03-05 2024-03-19 深圳市爱培科技术股份有限公司 Stereo matching method, device, equipment and storage medium
CN114677291B (en) * 2022-02-25 2023-05-12 荣耀终端有限公司 Image processing method, device and related equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5398156B2 (en) * 2008-03-04 2014-01-29 キヤノン株式会社 WHITE BALANCE CONTROL DEVICE, ITS CONTROL METHOD, AND IMAGING DEVICE
CN107578390B (en) * 2017-09-14 2020-08-07 长沙全度影像科技有限公司 Method and device for correcting image white balance by using neural network
CN109040729B (en) * 2018-08-16 2020-04-07 Oppo广东移动通信有限公司 Image white balance correction method and device, storage medium and terminal
CN109348206A (en) * 2018-11-19 2019-02-15 Oppo广东移动通信有限公司 Image white balancing treatment method, device, storage medium and mobile terminal

Also Published As

Publication number Publication date
CN110647930A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN110647930B (en) Image processing method and device and electronic equipment
CN107438163B (en) Photographing method, terminal and computer readable storage medium
CN107038715B (en) Image processing method and device
CN107302663B (en) Image brightness adjusting method, terminal and computer readable storage medium
CN101489051B (en) Image processing apparatus and image processing method and image capturing apparatus
US11470294B2 (en) Method, device, and storage medium for converting image from raw format to RGB format
CN105100764B (en) Image pickup method and device
KR101586954B1 (en) Techniques to reduce color artifacts in a digital image
CN107690065A (en) A kind of white balance correcting, device and computer-readable recording medium
CN108200352B (en) Method, terminal and storage medium for adjusting picture brightness
CN106165409B (en) Image processing apparatus, photographic device, image processing method and program
CN108184105B (en) Method and device for adjusting brightness and computer readable storage medium
CN107705247B (en) Image saturation adjusting method, terminal and storage medium
CN112840636A (en) Image processing method and device
CN108200421B (en) White balance processing method, terminal and computer readable storage medium
CN105791790A (en) Image processing method and apparatus
CN109377531A (en) Image color cast method of adjustment, device, mobile terminal and readable storage medium storing program for executing
CN111935418B (en) Video processing method and device, electronic equipment and storage medium
CN109151428A (en) automatic white balance processing method, device and computer storage medium
CN106445970B (en) Loading processing method and device for placeholder map
CN108093233B (en) Image processing method, terminal and computer readable storage medium
CN116668656B (en) Image processing method and electronic equipment
CN108053453B (en) Color optimization method, terminal and computer-readable storage medium
CN114845044B (en) Image processing method, intelligent terminal and storage medium
WO2018082130A1 (en) Salient map generation method and user terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant