CN113191376A - Image processing method, image processing device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN113191376A
CN113191376A (application CN202110663491.8A)
Authority
CN
China
Prior art keywords
image
processed
determining
target
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110663491.8A
Other languages
Chinese (zh)
Inventor
刘永劼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aixin Technology Co ltd
Original Assignee
Beijing Aixin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aixin Technology Co ltd filed Critical Beijing Aixin Technology Co ltd
Priority to CN202110663491.8A
Publication of CN113191376A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Processing Of Color Television Signals (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, an electronic device, and a readable storage medium. A specific implementation of the method includes: acquiring an image to be processed; determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter characterizes the degree to which the color effect of the pixel approaches pure white; and processing the image to be processed using the weight parameters to obtain a target image. The method can thus process the image to be processed using the weight parameters so that the colors of the target image are closer to the real effect.

Description

Image processing method, image processing device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of information processing, and in particular, to an image processing method, an apparatus, an electronic device, and a readable storage medium.
Background
White balance is an index describing how accurately a display reproduces white when mixing the three primary colors red, green, and blue, and is an important concept in the fields of television and photography, through which a series of problems of color restoration and color tone processing can be solved. A white balance algorithm is a computational method for digital image color processing: by restoring the color of white objects (producing a pure-white color effect), it can accurately restore the colors of other objects.
In the related art, when white balance processing is performed on an image to be processed, the image is generally processed with an obtained white balance correction coefficient to produce a target image. However, the white balance correction coefficient is often output directly by a neural network model, so the effective intermediate parameters generated in the process of computing the white balance correction coefficient cannot be obtained, which makes the subsequent white balance processing more difficult.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, which are used for processing an image to be processed by using a weight parameter, so that colors of a target image are closer to a real effect.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring an image to be processed; determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white; and processing the image to be processed by using the weight parameters to obtain a target image. Therefore, the image to be processed can be processed by utilizing the weight parameters, so that the color of the target image is closer to the real effect.
Optionally, the determining the weight parameter corresponding to each pixel of the image to be processed includes: determining a weight parameter corresponding to each pixel of the image to be processed by utilizing a predetermined neural network model; the neural network model is obtained by training based on a plurality of sample training sets, and each sample training set is composed of initial weight parameters corresponding to sample images and sample target images. Therefore, the converged neural network model can be used to obtain a target image with a color effect close to the real color.
Optionally, the weight parameters are recorded in a weight grid map, and the grids of the weight grid map match the pixel arrangement of the image to be processed.
Optionally, the neural network model is obtained in advance based on the following steps: acquiring a training sample set; the training sample set comprises a plurality of sample images and sample target images corresponding to the sample images; the sample target image comprises a real image generated from a real illumination vector; taking the sample image as the input of an initial neural network model, and enabling the initial neural network model to output an initial weight grid graph; determining an initial target image based on the initial weight grid map and the sample image; determining a loss function value between the initial target image and the sample target image by using a preset loss function; and adjusting model parameters corresponding to the initial neural network model by using the loss function values so as to make the initial neural network model converge and obtain the neural network model.
Optionally, after determining the weight parameter corresponding to each pixel of the image to be processed, the method further includes: for each color channel, determining a weight parameter recorded in each grid corresponding to the color channel; accumulating the weight parameters recorded in each grid to obtain a weight accumulated sum corresponding to the color channel; determining a first confidence corresponding to the color characteristic value of the color channel based on the weight accumulated sum and the weight parameter recorded in each grid of the color channel; and determining a second confidence coefficient of the white balance correction coefficient obtained according to the weight parameter based on the first confidence coefficient. Therefore, the confidence coefficient of the white balance correction coefficient can be analyzed by using the weight parameter so as to conveniently determine whether the color effect of the image to be processed approaches to reality or not.
Optionally, after the to-be-processed image is processed by using the weight parameter to obtain a target image, the method further includes: determining a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed; and carrying out white balance processing on the image to be processed by utilizing the white balance correction coefficient to obtain a first target image. Therefore, the white balance correction coefficient can be obtained by using the weight parameter, and the white balance processing of the image to be processed is facilitated.
Optionally, the image to be processed includes the same scene image under the illumination of a plurality of light sources; and determining a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed, including: determining initial correction coefficients corresponding to the color channels of the image to be processed based on the first color characteristic value and the second color characteristic value; clustering the initial correction coefficients to obtain target correction coefficients corresponding to the color channels under different types of light sources; determining the average value of target correction coefficients corresponding to all color channels of the image to be processed under the same type of light source; and the white balance processing is carried out on the image to be processed by utilizing the white balance correction coefficient to obtain a first target image, and the method comprises the following steps: and carrying out white balance processing on the image to be processed by using the target correction coefficient average value to obtain the first target image. In this way, the color of the first target image can be made closer to the real effect.
Optionally, the clustering the initial correction coefficients to obtain the target correction coefficients corresponding to the color channels under different types of light sources includes: clustering the initial correction coefficients according to a preset category number to obtain the target correction coefficients; or clustering the initial correction coefficient based on density to obtain the target correction coefficient. So as to cluster the initial correction coefficients.
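The clustering by a preset number of categories mentioned above can be pictured with a few iterations of plain k-means over one channel's initial correction coefficients. This toy sketch and all of its numbers are our own illustration, not taken from the patent.

```python
import numpy as np

# Toy k-means (our illustration): cluster one channel's initial correction
# coefficients into 2 light-source types; the cluster centers stand in for
# the target correction coefficients of that channel.
coeffs = np.array([0.95, 1.00, 1.05, 1.80, 1.90, 2.00])
centers = np.array([coeffs.min(), coeffs.max()])   # preset category number: 2
for _ in range(10):
    # assign each coefficient to the nearest center, then recompute centers
    labels = np.argmin(np.abs(coeffs[:, None] - centers[None, :]), axis=1)
    centers = np.array([coeffs[labels == k].mean() for k in range(2)])
print(centers)  # [1.  1.9]
```

Density-based clustering (the second option above) would instead group coefficients by local density without fixing the number of categories in advance.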
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: the acquisition module is used for acquiring an image to be processed; the determining module is used for determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white; and the processing module is used for processing the image to be processed by utilizing the weight parameters to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps in the method as provided in the first aspect are executed.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of the training steps of a neural network model according to an embodiment of the present application;
fig. 3 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device for executing an image processing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In the related art, there is a problem that the white balance processing process is difficult; in order to solve the problem, the present application provides an image processing method, apparatus, electronic device and readable storage medium; further, the method comprises the steps of firstly obtaining an image to be processed; then, determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white; and finally, processing the image to be processed by using the weight parameters to obtain a target image. Therefore, the image to be processed can be processed by utilizing the weight parameters, so that the color of the target image is closer to the real effect, and the effect consistent with the white balance processing is achieved. In some application scenarios, the image processing method can be applied to electronic products such as video cameras, digital cameras, mobile phones with a photographing function, and the like, for example, to reproduce real colors of images.
The above analysis of the related art is the result of the inventor's practical and careful study. Therefore, both the discovery of the above problems and the solutions that the following embodiments propose for them should be regarded as the inventor's contribution to the present invention.
Referring to fig. 1, a flowchart of an image processing method according to an embodiment of the present application is shown. As shown in fig. 1, the image processing method includes the following steps 101 to 103.
Step 101, acquiring an image to be processed;
the image to be processed may include, for example, a picture taken by a photographing apparatus. Which may be acquired after the photographing apparatus completes the photographing operation.
Step 102, determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white;
the above-mentioned weighting parameters can be used to characterize the degree that the color effect of each pixel in the image to be processed approaches to pure white. That is, each weight parameter corresponds to a degree to which the color effect of one pixel approaches pure white. For example, when the unit "1" is regarded as the weight parameter when the color effect is pure white, if the weight parameter of a certain pixel is "0.9" or "0.95" which is closer to the unit "1", the color effect of the pixel can be regarded as closer to the pure white. If the weight parameters of a plurality of pixels constituting a certain object are all closer to the unit "1", the object can be regarded as a white subject.
After the image to be processed is acquired, a weight parameter corresponding to the image to be processed may be determined. In some application scenarios, the above weight parameters may be recorded by a table, text, or other carrier that can record the weight parameters of pixels substantially one by one.
And 103, processing the image to be processed by using the weight parameters to obtain a target image.
After the weight parameters corresponding to the image to be processed are determined, the image can be processed with them, so that the degree to which each pixel's color effect approaches pure white is used to restore the real effect of the other colors in the image. That is, since the weight parameter characterizes the degree to which a pixel's color effect approaches pure white, the weight parameters can be used to restore the other colors in the image to be processed. Therefore, step 103 can be regarded as performing white balance processing on the image to be processed.
In this embodiment, through the above steps 101 to 103, the image to be processed may be processed by using the weight parameter, so that the color of the target image is closer to the real effect, and an effect consistent with the white balance processing is achieved.
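As a concrete illustration, steps 101 to 103 can be sketched as an elementwise multiplication of the image by a weight map that mirrors its pixel layout. This is our own minimal sketch; the function and array names are assumptions, not taken from the patent.

```python
import numpy as np

# Minimal sketch (our own construction): apply a per-pixel weight map to an
# H x W x 3 image so that each pixel's channel values are scaled by the
# degree to which its color effect approaches pure white.
def apply_weight_map(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Elementwise product of the image and its matching weight map."""
    assert image.shape == weights.shape, "weight map must match pixel layout"
    return image * weights

# Toy example: a uniform 2 x 2 RGB image and weights of 0.9 (close to 1,
# i.e. close to pure white).
image = np.full((2, 2, 3), 200.0)
weights = np.full((2, 2, 3), 0.9)
target = apply_weight_map(image, weights)
print(target[0, 0])  # [180. 180. 180.]
```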
In some optional implementations, the step 102 may include: determining a weight parameter corresponding to each pixel of the image to be processed by utilizing a predetermined neural network model; the neural network model is obtained by training based on a plurality of sample training sets, and each sample training set is composed of initial weight parameters corresponding to sample images and sample target images.
In some application scenarios, the weight parameters may be determined by a predetermined neural network model. Further, the neural network model may include, for example, a neural network model with a U-shaped structure (a UNet neural network model).
In these application scenarios, the neural network model may be trained based on the initial weight parameters corresponding to the sample images and the sample target images. The initial weight parameter may be an output of the neural network model. In these application scenarios, the corresponding image may be determined, for example, by an initial weight parameter. Then, based on the image and the sample target image, model parameters of the neural network model can be continuously optimized to obtain a converged neural network model.
Different from the related art in which the neural network model is used to directly output the white balance correction coefficient, in this embodiment, the neural network model outputs the weight parameter, and then the white balance correction coefficient can be calculated by using the weight parameter. Therefore, when the white balance correction coefficient is obtained, other calculations can be performed by using the weight parameter, so that the calculation difficulty of the subsequent white balance algorithm is reduced. Further, in the related art, when training a neural network model, training data including a sample image as an input and a sample target image obtained from a white balance correction coefficient as an output is used, and the neural network model is continuously optimized by a loss therebetween. The training data adopted in the application comprises weight parameters output by the neural network model and sample target images, and the model parameters of the neural network model are continuously optimized through loss between the images obtained through the weight parameters and the sample target images. Therefore, the target image with the color effect closer to the real color can be obtained by using the weight parameters.
In some optional implementations, the weight parameters are recorded in a weight grid map whose grids match the pixel arrangement of the image to be processed, which facilitates the matrix product calculation between the weight grid map and the image. In some application scenarios, the grid size of the weight grid map may be determined by the resolution of the image to be processed. For example, if the output weight grid map is 64 × 64 × 3 and the resolution of the image to be processed is 6400 × 6400 pixels, each grid of the weight grid map may correspond to a block of 100 × 100 pixels.
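A coarse weight grid can be matched to the pixel arrangement by letting each grid cell cover a fixed block of pixels. The following sketch is our own illustration of that idea with tiny sizes; with a 64 × 64 grid and a 6400 × 6400 image the block size would be 100.

```python
import numpy as np

# Expand a coarse weight grid so each cell covers a block x block patch of
# pixels (our own illustration; sizes are kept tiny for readability).
grid = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
block = 3  # pixels covered by one grid cell along each axis
full = np.repeat(np.repeat(grid, block, axis=0), block, axis=1)
print(full.shape)  # (12, 12, 3)
```

After expansion, every pixel inside one block shares its cell's weight parameters, so the expanded map can be multiplied elementwise with the full-resolution image.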
Referring to fig. 2, fig. 2 is a flowchart illustrating a training procedure of a neural network model according to an embodiment of the present application; as shown in fig. 2, the neural network model can be obtained in advance based on the following steps:
step 201, acquiring a training sample set; the training sample set comprises a plurality of sample images and sample target images corresponding to the sample images; the sample target image comprises a real image generated from a real illumination vector;
the sample image may be captured by a camera, for example, and the sample target image may include a real image generated from a real illumination vector, for example. In some application scenarios, for example, the image with dimension H × W × 3 (i.e., the color image) may be downsampled to obtain a sample image of H × W × 3. Here, H can be regarded as the length of the sample image (e.g., 64 pixel points); w may be considered as the width of the sample image (e.g., 64 pixels), and 3 may be considered as the number of color channels (i.e., three color channels of red, green, and blue). In these application scenarios, after the sample image is acquired, the sample image may be subjected to white balance processing to obtain a sample target image. Here, the sample target image may be obtained by processing through a white balance algorithm in the related art, which is different from the present application, for example, and is not described herein again.
Step 202, taking the sample image as the input of an initial neural network model, and enabling the initial neural network model to output an initial weight grid graph;
after the sample image is acquired, the sample image can be used as an input of the neural network model, so that the neural network model can determine the initial weight grid map corresponding to the sample image. For example, after 64 × 3 sample images are input into the neural network model, an initial weight grid of 64 × 3 may be obtained. Here, since the initial weight parameters recorded in the initial weight grid map need to be calculated from the color feature values corresponding to the three color channels of the sample image, the specification of the obtained initial weight grid map may be 64 × 3 based on the 64 × 3 sample image.
Step 203, determining an initial target image based on the initial weight grid graph and the sample image;
after the neural network model outputs the initial weight grid map, the initial target image can be determined by using the initial weight parameters recorded in the initial weight grid map and the sample image. In some application scenarios, the initial target image may be obtained by multiplying the initial weight parameter by a pixel matrix corresponding to the sample image, for example. In some application scenarios, for example, after multiplying the initial weight grid map of 64 × 3 with the pixel matrix corresponding to the sample image, an initial target image of 64 × 3 may be obtained. In some application scenarios, the initial weight grid graph may be viewed as an adaptive kernel function that converts a sample image to an initial target image, for example. In these application scenarios, the color effect of the initial target image can be considered to approach the color effect after the white balance processing.
Step 204, determining a loss function value between the initial target image and the sample target image by using a preset loss function;
after the initial target image is obtained, a loss calculation may be performed on the initial target image and the sample target image. Further, a preset loss function such as a minimum absolute value deviation function (L1 norm loss function) or a minimum square error function (L2 norm loss function) may be used to determine a loss function value between the initial target image and the sample target image.
Step 205, adjusting the model parameters corresponding to the initial neural network model by using the loss function values, so that the initial neural network model converges to obtain the neural network model.
After obtaining the loss function value, the initial neural network model may be adjusted using the loss function value. That is, after a plurality of loss function values are obtained by processing a plurality of sample images in a training sample set, the model parameters are continuously adjusted based on the plurality of loss function values. In some application scenarios, when the change of the loss function value tends to be flat or reaches a minimum value, the current initial neural network model may be considered to be converged, and then the converged neural network model may be used in an actual application scenario.
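Steps 201 to 205 can be shown in miniature with a stand-in model. In this sketch, which is entirely our own construction, a single learnable scalar plays the role of the network that outputs weight parameters, and gradient descent on an L2 loss between the resulting initial target image and the sample target image plays the role of adjusting the model parameters until convergence.

```python
import numpy as np

# Miniature stand-in for the training loop (our own construction).
rng = np.random.default_rng(0)
sample = rng.random((8, 8, 3))       # sample image (step 201)
true_w = 0.8
sample_target = sample * true_w      # sample target image (step 201)

w = 0.1                              # initial "model parameter"
lr = 0.5
for _ in range(200):
    initial_target = sample * w                                # steps 202-203
    # gradient of mean((sample * w - sample_target)^2) with respect to w
    grad = float(np.mean(2 * (initial_target - sample_target) * sample))
    w -= lr * grad                                             # step 205
print(round(w, 3))  # 0.8
```

The loss shrinks geometrically here because the problem is quadratic in `w`; a real UNet would use the same loop structure with backpropagation over all of its parameters.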
In the related art, there is a method of outputting a white balance correction coefficient by using a neural network model and performing white balance processing by using the white balance correction coefficient. However, since the neural network model directly outputs the white balance correction coefficient, other effective intermediate parameters (such as the above-mentioned weight parameters) cannot be output, so that the subsequent white balance algorithm cannot obtain other parameters based on the effective intermediate parameters, which brings difficulty to the white balance processing process. The effective intermediate parameters herein may also include, for example, the confidence level of obtaining the white balance correction factor.
In some optional application scenarios, after determining the weight parameter corresponding to each pixel of the image to be processed, the image processing method further includes: for each color channel, determining a weight parameter recorded in each grid corresponding to the color channel; accumulating the weight parameters recorded in each grid to obtain a weight accumulated sum corresponding to the color channel; determining a first confidence corresponding to the color characteristic value of the color channel based on the weight accumulated sum and the weight parameter recorded in each grid of the color channel; and determining a second confidence coefficient of the white balance correction coefficient obtained according to the weight parameter based on the first confidence coefficient.
That is, after the weight parameter is obtained, the confidence of the white balance correction coefficient may be analyzed based on the weight parameter. In some application scenarios, the second confidence level of the white balance correction coefficient may be determined by determining the first confidence level corresponding to the color feature value of each color channel.
Specifically, for each color channel, the weight parameters in the weight grid map corresponding to that channel may be determined; these weight parameters may then be accumulated to obtain the weight accumulated sum for the channel, from which the first confidence is determined. For example, in a 3 × 3 weight grid map, the weight parameters A, B, and C corresponding to the R channel and located at (0, 0), (0, 1), and (0, 2) may be added, and the first confidences of the color channel may then be, for example, A/(A + B + C), B/(A + B + C), and C/(A + B + C). Here, A, B, and C may each be any value in the open interval (0, 1).
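Numerically, the example above can be walked through as follows; the concrete values of A, B, and C are our own.

```python
import numpy as np

# Walk through the first-confidence computation for the R channel using the
# three example weights A, B, C at (0,0), (0,1), (0,2); values are our own.
A, B, C = 0.9, 0.6, 0.3
weights = np.array([A, B, C])
weight_sum = float(weights.sum())          # weight accumulated sum: 1.8
first_confidence = weights / weight_sum    # A/(A+B+C), B/(A+B+C), C/(A+B+C)
print(first_confidence)  # approximately [0.5, 0.333, 0.167]
```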
After the first confidence level is obtained, a second confidence level of the corresponding white balance correction coefficient may be analyzed. Then, after the image to be processed is processed through the white balance correction coefficient corresponding to the second confidence coefficient with a higher value, the color effect of the processed image can be closer to the real effect. Therefore, whether the color effect of the image to be processed approaches to reality or not can be conveniently determined through the second confidence coefficient.
In some application scenarios, if the weight parameters are output by a neural network model with good convergence, the color effect of the image obtained from those weight parameters should approach the color effect of the real image corresponding to the image to be processed. In that case, the weight parameters presented in the weight grid will take larger values on the principal diagonal (R-R, G-G, B-B) and smaller values off the principal diagonal (R-G, R-B; G-R, G-B; B-R, B-G). The weight parameters on the principal diagonal can therefore be compared with those off the principal diagonal to obtain a third confidence corresponding to the output of the neural network model, and the second confidence of the resulting white balance correction coefficient can be analyzed through this third confidence. In these application scenarios, the third confidence may thus also be regarded as the second confidence.
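Under the assumption that the channel-mixing weights form a 3 × 3 matrix with rows and columns ordered R, G, B, the diagonal comparison might be sketched as below. The trace-over-total ratio used here as the third confidence is an illustrative choice, not a formula prescribed by the text:

```python
import numpy as np

def third_confidence(mix):
    """Compare main-diagonal weights (R-R, G-G, B-B) against the
    off-diagonal ones; a well-converged model should put most weight
    on the main diagonal.

    mix: 3x3 channel-mixing weight matrix (rows/cols ordered R, G, B).
    Returns the fraction of total weight lying on the main diagonal.
    """
    total = mix.sum()
    return float(np.trace(mix) / total) if total else 0.0

# A well-converged case: weight concentrated on the diagonal.
good = np.array([[0.90, 0.05, 0.05],
                 [0.05, 0.90, 0.05],
                 [0.05, 0.05, 0.90]])
score = third_confidence(good)
```

A score close to 1 indicates the model maps each channel mostly to itself, which this paragraph associates with good convergence.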
Here, the process of obtaining the white balance correction coefficient according to the weight parameter may be the same as or similar to steps 104 and 105 below, for example.
In some optional implementations, after the step 103, the image processing method may further include the steps of:
104, determining a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed;
After the target image is obtained, a white balance correction coefficient can be obtained based on the target image and the image to be processed. In some application scenarios, the first color feature value includes the color values of the R, G, and B channels of the target image, and the second color feature value includes the color values of the R, G, and B channels of the image to be processed. Each color value may be any number in the range 0 to 255.
In these application scenarios, the white balance correction coefficient may be obtained, for example, by dividing the first color feature value by the second color feature value channel by channel. For example, when the first color feature value is (107, 144, 90) and the second color feature value is (112, 168, 94), the white balance correction coefficient obtained by truncating each quotient to two decimal places is (0.95, 0.85, 0.95).
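A minimal sketch of this quotient processing, assuming truncation (rather than rounding) to two decimal places, which reproduces the figures in the example; the function name is hypothetical:

```python
def white_balance_coefficients(first_cfv, second_cfv, decimals=2):
    """Divide the target image's color feature value by that of the
    image to be processed, channel by channel, truncating each
    quotient to `decimals` decimal places."""
    factor = 10 ** decimals
    return tuple(int(a / b * factor) / factor
                 for a, b in zip(first_cfv, second_cfv))

coeffs = white_balance_coefficients((107, 144, 90), (112, 168, 94))
```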
And 105, performing white balance processing on the image to be processed by using the white balance correction coefficient to obtain a first target image.
After the white balance correction coefficient is obtained, the white balance processing operation may be performed using it. In some application scenarios, the white balance processing operation may include, for example, determining the color difference between the target image and the real image. In other application scenarios, it may be an operation that performs white balance processing on the image to be processed to directly obtain the target image (in this case, the first target image may be regarded as the target image).
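The direct-correction variant can be sketched as follows. This assumes a simple per-channel scaling with clipping back to the valid range, which is one common way such a correction is applied; the function name and sample values are hypothetical:

```python
import numpy as np

def apply_white_balance(image, coeffs):
    """Scale each color channel of an (H, W, 3) image by its white
    balance correction coefficient and clip the result back into the
    valid 0-255 range."""
    corrected = image.astype(float) * np.asarray(coeffs)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# A uniform image with the second color feature value from the text.
img = np.full((2, 2, 3), (112, 168, 94), dtype=np.uint8)
out = apply_white_balance(img, (0.95, 0.85, 0.95))
```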
In some optional implementations, the image to be processed includes the same scene image under illumination of multiple light sources.
In some application scenarios, the to-be-processed image may include the same scene image under illumination of multiple light sources. That is, the image to be processed may represent an image obtained when the same scene is illuminated by a plurality of light sources at the same time. The plurality of light sources may be light sources having different illumination intensities or light sources having different illumination colors, for example.
Thus, the above step 104 may comprise the following sub-steps:
a substep 1041 of determining an initial correction coefficient corresponding to each color channel of the image to be processed based on the first color feature value and the second color feature value;
After the target image is obtained, the initial correction coefficient may be determined based on the first color feature value of the target image and the second color feature value of the image to be processed. Here, the initial correction coefficient may be obtained, for example, by dividing the first color feature value by the second color feature value.
The substep 1042 is to perform clustering processing on the initial correction coefficients to obtain target correction coefficients corresponding to the color channels under different types of light sources;
after the initial correction coefficient is obtained, the initial correction coefficient may be subjected to cluster calculation to obtain the target correction coefficient.
In some alternative implementations, the sub-step 1042 may include: clustering the initial correction coefficients according to a preset category number to obtain the target correction coefficients; or clustering the initial correction coefficient based on density to obtain the target correction coefficient.
That is, the initial correction coefficients may be clustered either by a preset number of categories or based on density. Clustering by a preset number of categories may be performed, for example, with the K-means clustering algorithm; density-based clustering may be performed, for example, with density-based spatial clustering of applications with noise (DBSCAN). For example, after the weight parameters in a 64 × 3 weight grid map are clustered, N categories can be obtained, where N is determined by the actual clustering result.
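A minimal hand-rolled K-means over per-pixel initial correction coefficients might look as follows; the implementation details (random initialization, fixed iteration count) and the sample coefficients are illustrative assumptions, not taken from the text:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: cluster initial correction coefficients
    (one point per pixel, one value per color channel) into k
    categories. Returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    # Initialize centroids from k distinct input points.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Two light-source categories: coefficients grouped around two centers.
coeffs = np.array([[0.95, 0.85, 0.95], [0.96, 0.84, 0.94],
                   [0.70, 1.10, 0.80], [0.71, 1.12, 0.79]])
centroids, labels = kmeans(coeffs, k=2)
```

Each resulting cluster then corresponds to one category of light source, and its members are the target correction coefficients for that category.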
Substep 1043, determining an average value of target correction coefficients corresponding to each color channel of the image to be processed under the same class of light sources;
After the target correction coefficients corresponding to each color channel under different categories of light sources are obtained through clustering, the average of the target correction coefficients of each color channel under the same category of light source can be determined. For example, if, for the first light source, the target correction coefficients corresponding to the R channel are A, B, C, and D, then the average target correction coefficient of the R channel under that category of light source is (A + B + C + D)/4. Here, A, B, C, and D may be any values that can characterize a white balance correction coefficient. By analogy, the average target correction coefficient of each color channel under each category of light source can be obtained.
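The averaging step can be sketched as below; the coefficient values standing in for A, B, C, and D are hypothetical:

```python
def average_correction_coefficient(target_coeffs):
    """Average the target correction coefficients of one color channel
    under the same category of light source, i.e. (A + B + C + D)/4
    for four coefficients.

    target_coeffs: list of correction coefficients belonging to one
    light-source category for one channel.
    """
    return sum(target_coeffs) / len(target_coeffs)

# Hypothetical R-channel coefficients A, B, C, D for the first light source.
avg = average_correction_coefficient([0.95, 0.97, 0.93, 0.95])
```

Repeating this per channel and per light-source category yields the averages used in step 105.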
Thus, the step 105 may include: and processing the image to be processed by using the target correction coefficient average value to obtain the first target image.
After the average target correction coefficient is obtained, white balance processing may be performed on the image to be processed using it. In this way, after white balance processing, an image to be processed that was captured in a multi-light-source environment yields a first target image whose colors are closer to the real effect.
Referring to fig. 3, a block diagram of an image processing apparatus provided in an embodiment of the present application is shown; the image processing apparatus may be a module, a program segment, or code on an electronic device. It should be understood that the apparatus corresponds to the method embodiment of fig. 1 above and can perform the steps involved in that embodiment; for the specific functions of the apparatus, reference may be made to the description above, and detailed description is omitted here as appropriate to avoid redundancy.
Optionally, the image processing apparatus includes an obtaining module 301, a determining module 302, and a processing module 303. The acquiring module 301 is configured to acquire an image to be processed; a determining module 302, configured to determine a weight parameter corresponding to each pixel of the image to be processed, where the weight parameter is used to represent a degree that a color effect of the pixel approaches pure white; and the processing module 303 is configured to process the image to be processed by using the weight parameter to obtain a target image.
Optionally, the determining module 302 is further configured to: determining a weight parameter corresponding to each pixel of the image to be processed by utilizing a predetermined neural network model; the neural network model is obtained by training based on a plurality of sample training sets, and each sample training set is composed of initial weight parameters corresponding to sample images and sample target images.
Optionally, the weight parameter is recorded by a grid map, and a grid of the weight grid map matches with the pixel arrangement of the image to be processed.
Optionally, the neural network model is obtained in advance based on the following steps: acquiring a training sample set; the training sample set comprises a plurality of sample images and sample target images corresponding to the sample images; the sample target image comprises a real image generated from a real illumination vector; taking the sample image as the input of an initial neural network model, and enabling the initial neural network model to output an initial weight grid graph; determining an initial target image based on the initial weight grid map and the sample image; determining a loss function value between the initial target image and the sample target image by using a preset loss function; and adjusting model parameters corresponding to the initial neural network model by using the loss function values so as to make the initial neural network model converge and obtain the neural network model.
Optionally, the image processing apparatus further comprises a confidence level determining module, wherein the confidence level determining module is configured to: after determining the weight parameters corresponding to the pixels of the image to be processed, determining the weight parameters recorded in each grid corresponding to each color channel; accumulating the weight parameters recorded in each grid to obtain a weight accumulated sum corresponding to the color channel; determining a first confidence corresponding to the color characteristic value of the color channel based on the weight accumulated sum and the weight parameter recorded in each grid of the color channel; and determining a second confidence coefficient of the white balance correction coefficient obtained according to the weight parameter based on the first confidence coefficient.
Optionally, the image processing apparatus further includes a white balance processing module, where the white balance processing module is configured to: after the image to be processed is processed by using the weight parameter to obtain a target image, determine a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed; and perform white balance processing on the image to be processed by using the white balance correction coefficient to obtain a first target image.
Optionally, the image to be processed includes the same scene image under the illumination of a plurality of light sources; and the white balance processing module is further configured to: determining initial correction coefficients corresponding to the color channels of the image to be processed based on the first color characteristic value and the second color characteristic value; clustering the initial correction coefficients to obtain target correction coefficients corresponding to the color channels under different types of light sources; determining the average value of target correction coefficients corresponding to all color channels of the image to be processed under the same type of light source; and the white balance processing is carried out on the image to be processed by utilizing the white balance correction coefficient to obtain a first target image, and the method comprises the following steps: and carrying out white balance processing on the image to be processed by using the target correction coefficient average value to obtain the first target image.
Optionally, the white balance processing module is further configured to: clustering the initial correction coefficients according to a preset category number to obtain the target correction coefficients; or clustering the initial correction coefficient based on density to obtain the target correction coefficient.
It should be noted that, for the convenience and conciseness of description, the specific working processes of the system and the device described above may refer to the corresponding processes in the foregoing method embodiments, and the description is not repeated here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device for executing an image processing method according to an embodiment of the present application, where the electronic device may include: at least one processor 401, e.g., a CPU, at least one communication interface 402, at least one memory 403 and at least one communication bus 404. Wherein the communication bus 404 is used for realizing direct connection communication of these components. The communication interface 402 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 403 may be a high-speed RAM memory, or may be a non-volatile memory (e.g., at least one disk memory). The memory 403 may optionally be at least one memory device located remotely from the aforementioned processor. The memory 403 stores computer readable instructions, and when the computer readable instructions are executed by the processor 401, the electronic device can execute the method process shown in fig. 1.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 4 or may have a different configuration than shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
Embodiments of the present application provide a readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program may perform the method processes performed by an electronic device in the method embodiment shown in fig. 1.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above-mentioned method embodiments, for example, the method may comprise: acquiring an image to be processed; determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white; and processing the image to be processed by using the weight parameters to obtain a target image.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (11)

1. An image processing method, comprising:
acquiring an image to be processed;
determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white;
and processing the image to be processed by using the weight parameters to obtain a target image.
2. The method according to claim 1, wherein the determining the weight parameter corresponding to each pixel of the image to be processed comprises:
determining a weight parameter corresponding to each pixel of the image to be processed by utilizing a predetermined neural network model; the neural network model is obtained by training based on a plurality of sample training sets, and each sample training set is composed of initial weight parameters corresponding to sample images and sample target images.
3. The method according to claim 2, wherein the weight parameters are recorded by a grid map, the grid of which matches the pixel arrangement of the image to be processed.
4. The method of claim 3, wherein the neural network model is derived in advance based on the steps of:
acquiring a training sample set; the training sample set comprises a plurality of sample images and sample target images corresponding to the sample images; the sample target image comprises a real image generated from a real illumination vector;
taking the sample image as the input of an initial neural network model, and enabling the initial neural network model to output an initial weight grid graph;
determining an initial target image based on the initial weight grid map and the sample image;
determining a loss function value between the initial target image and the sample target image by using a preset loss function;
and adjusting model parameters corresponding to the initial neural network model by using the loss function values so as to make the initial neural network model converge and obtain the neural network model.
5. The method according to claim 3, wherein after the determining the weight parameter corresponding to each pixel of the image to be processed, the method further comprises:
for each color channel, determining a weight parameter recorded in each grid corresponding to the color channel;
accumulating the weight parameters recorded in each grid to obtain a weight accumulated sum corresponding to the color channel;
determining a first confidence corresponding to the color characteristic value of the color channel based on the weight accumulated sum and the weight parameter recorded in each grid of the color channel;
and determining a second confidence coefficient of the white balance correction coefficient obtained according to the weight parameter based on the first confidence coefficient.
6. The method according to claim 1, wherein after the processing the image to be processed by using the weight parameter to obtain a target image, the method further comprises:
determining a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed;
and carrying out white balance processing on the image to be processed by utilizing the white balance correction coefficient to obtain a first target image.
7. The method of claim 6, wherein the image to be processed comprises the same scene image under illumination of a plurality of light sources; and
determining a white balance correction coefficient based on a first color characteristic value corresponding to the target image and a second color characteristic value corresponding to the image to be processed, including:
determining initial correction coefficients corresponding to the color channels of the image to be processed based on the first color characteristic value and the second color characteristic value;
clustering the initial correction coefficients to obtain target correction coefficients corresponding to the color channels under different types of light sources;
determining the average value of target correction coefficients corresponding to all color channels of the image to be processed under the same type of light source; and
the white balance processing is performed on the image to be processed by using the white balance correction coefficient to obtain a first target image, and the white balance processing method includes:
and carrying out white balance processing on the image to be processed by using the target correction coefficient average value to obtain the first target image.
8. The method according to claim 7, wherein the clustering the initial correction coefficients to obtain the target correction coefficients corresponding to the color channels under different light sources comprises:
clustering the initial correction coefficients according to a preset category number to obtain the target correction coefficients; or
clustering the initial correction coefficients based on density to obtain the target correction coefficients.
9. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image to be processed;
the determining module is used for determining a weight parameter corresponding to each pixel of the image to be processed, wherein the weight parameter is used for representing the degree that the color effect of the pixel approaches to pure white;
and the processing module is used for processing the image to be processed by utilizing the weight parameters to obtain a target image.
10. An electronic device comprising a processor and a memory, the memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-8.
11. A readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202110663491.8A 2021-06-15 2021-06-15 Image processing method, image processing device, electronic equipment and readable storage medium Pending CN113191376A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663491.8A CN113191376A (en) 2021-06-15 2021-06-15 Image processing method, image processing device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113191376A true CN113191376A (en) 2021-07-30

Family

ID=76976535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110663491.8A Pending CN113191376A (en) 2021-06-15 2021-06-15 Image processing method, image processing device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113191376A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412712A (en) * 2022-11-03 2022-11-29 深圳比特微电子科技有限公司 White balance method and device in multi-light-source scene and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108551576A (en) * 2018-03-07 2018-09-18 浙江大华技术股份有限公司 A kind of white balance method and device
CN110545384A (en) * 2019-09-23 2019-12-06 Oppo广东移动通信有限公司 focusing method and device, electronic equipment and computer readable storage medium
CN110647930A (en) * 2019-09-20 2020-01-03 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
US20200193934A1 (en) * 2018-04-28 2020-06-18 Boe Technology Group Co., Ltd. Image data processing method and apparatus, image display method and apparatus, storage medium and display device



Similar Documents

Publication Publication Date Title
CN110163080B (en) Face key point detection method and device, storage medium and electronic equipment
CN113763296B (en) Image processing method, device and medium
CN113034358B (en) Super-resolution image processing method and related device
CN111383232B (en) Matting method, matting device, terminal equipment and computer readable storage medium
CN111681177B (en) Video processing method and device, computer readable storage medium and electronic equipment
CN111985281B (en) Image generation model generation method and device and image generation method and device
CN110991380A (en) Human body attribute identification method and device, electronic equipment and storage medium
US10846560B2 (en) GPU optimized and online single gaussian based skin likelihood estimation
CN109948699B (en) Method and device for generating feature map
CN112840636A (en) Image processing method and device
CN112116551A (en) Camera shielding detection method and device, electronic equipment and storage medium
CN110599554A (en) Method and device for identifying face skin color, storage medium and electronic device
CN113066020B (en) Image processing method and device, computer readable medium and electronic equipment
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN114663570A (en) Map generation method and device, electronic device and readable storage medium
JP2021189527A (en) Information processing device, information processing method, and program
CN113191376A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN105989571A (en) Control of computer vision pre-processing based on image matching using structural similarity
CN113706400A (en) Image correction method, image correction device, microscope image correction method, and electronic apparatus
CN112489144A (en) Image processing method, image processing apparatus, terminal device, and storage medium
CN111274145A (en) Relationship structure chart generation method and device, computer equipment and storage medium
CN112215237B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN115205168A (en) Image processing method, device, electronic equipment, storage medium and product
CN114463221A (en) Self-supervision color correction method for multi-device domain AWB enhancement
US11055881B2 (en) System and a method for providing color vision deficiency assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination