CN116668656B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN116668656B
CN116668656B (application CN202310906122.6A)
Authority
CN
China
Prior art keywords
color
image
training
color difference
correction
Prior art date
Legal status
Active
Application number
CN202310906122.6A
Other languages
Chinese (zh)
Other versions
CN116668656A (en)
Inventor
伍德亮
李思奇
唐巍
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202310906122.6A
Publication of CN116668656A
Application granted
Publication of CN116668656B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/77Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The application provides an image processing method and electronic equipment, wherein the method can comprise the following steps: determining an objective function based on the M training correction parameters and the K training color difference values in response to the first color difference value between the first color card image and the reference color card image being greater than a preset threshold; the value range of the independent variable of the objective function comprises M training correction parameters, and the value range of the function value of the objective function comprises K training color difference values; determining a target correction parameter according to the target function; the target correction parameter is a training correction parameter corresponding to the minimum function value of the target function, and the minimum function value is smaller than a preset threshold; performing color correction on the image to be corrected according to the target correction parameters; the reference color card image is a standard color card reference image, and the first color card image is an image obtained by shooting the standard color card; m and K are integers greater than 0. The application can efficiently and automatically determine the target correction parameters, thereby improving the accuracy of color correction.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
Currently, in the field of color processing for video and images, color correction matrices (color correction matrix, CCM) are commonly used to color-correct images. The color correction matrix can be used to correct the difference between the response of a sensor in electronic equipment such as a mobile phone or camera to the spectrum and the response of the human eye to the spectrum, so that the colors of the images output by the electronic equipment are closer to the true colors seen by the human eye. To improve the color correction effect of electronic devices and the quality of the image, a technician usually needs to set and adjust the color correction matrix empirically. In general, a technician must adjust the color correction matrix multiple times before the colors of the images output by the electronic device approach the true colors. The process is time-consuming and labor-intensive, places high demands on technicians, and is inefficient because the color correction matrix must be set individually for each electronic device.
Disclosure of Invention
The embodiment of the application provides an image processing method and electronic equipment, which can efficiently and automatically determine target correction parameters and improve the accuracy of color correction.
In a first aspect, embodiments of the present application provide a method of image processing, which may be performed by an electronic device, or by an apparatus matched to an electronic device, for example, by a processor, a chip, or a chip system, etc. The method may include: determining an objective function based on the M training correction parameters and the K training color difference values in response to the first color difference value between the first color card image and the reference color card image being greater than a preset threshold; the value range of the independent variable of the objective function comprises M training correction parameters, and the value range of the function value of the objective function comprises K training color difference values; determining a target correction parameter according to the target function; the target correction parameter is a training correction parameter corresponding to the minimum function value of the target function, and the minimum function value is smaller than a preset threshold; performing color correction on the image to be corrected according to the target correction parameters; wherein the reference color card image is a reference image of a standard color card, and the first color card image is an image obtained by shooting the standard color card; m and K are integers greater than 0.
Therefore, the electronic equipment in the embodiment of the application can automatically determine the target correction parameters based on the training correction parameters and the training color difference values, and can improve the efficiency of determining the target correction parameters. And carrying out color correction on the image to be corrected based on the determined target correction parameters, so that the accuracy of the color correction can be improved, and the picture quality of the electronic equipment can be improved, namely the color display effect of the image after the color correction is improved.
In one possible implementation manner, the training correction parameter x corresponds to a training color difference value y, the training color difference value y is a color difference value between a training color card image z and a reference color card image, and the training color card image z is an image obtained by performing color correction on the first color card image based on the training correction parameter x; the training correction parameter x is one training correction parameter of M training correction parameters, and the training color difference value y is one training color difference value of K training color difference values.
In one possible implementation, the method for determining the objective function based on the M training correction parameters and the K training color difference values may further include: estimating a functional relationship between M training correction parameters and K training color difference values based on a preset agent model and a preset acquisition function; based on the functional relationship, an objective function is determined.
Therefore, based on the preset agent model and the preset acquisition function, the functional relation between the training correction parameters and the training color difference value can be accurately determined, so that the accurate target correction parameters can be determined, and the color correction error can be reduced.
In one possible implementation, the method for determining the target correction parameter according to the objective function may further include: performing an evolution operation on the M training correction parameters to obtain M1 training correction parameters, the evolution operation including one or more of a selection operation, a crossover operation, and a mutation operation; determining K1 training color difference values based on the objective function, where a training correction parameter m among the M1 training correction parameters corresponds to a training color difference value k among the K1 training color difference values; determining the minimum training color difference value among the K1 training color difference values as the minimum function value of the objective function, and determining the training correction parameter corresponding to that minimum training color difference value as the target correction parameter; wherein M1 and K1 are integers greater than 0, m and k are integers greater than 0, m is less than or equal to M1, and k is less than or equal to K1.
Therefore, other different training correction parameters can be generated by performing evolutionary operation on the training correction parameters and used for determining the target correction parameters, so that more accurate target correction parameters can be determined, and errors of color correction can be reduced.
In one possible implementation, the method may further include: the first color difference value between a first color card image and the reference color card image is determined based on a standard color card.
In one possible implementation, a standard color chip may include N color chip regions; based on this, a method of determining a first color difference value between a first color card image and the reference color card image based on a standard color card may include: converting the color space of the first color card image from the first color space to a second color space to obtain a first intermediate image; converting the color space of the reference color card image from a first color space to a second color space to obtain a reference intermediate image; calculating the distance between an ith color card area in the first intermediate image and an ith color card area in the reference intermediate image to obtain a color difference intermediate value corresponding to the ith color card area; determining a first color difference value based on a color difference intermediate value corresponding to the ith color card area; wherein N is an integer greater than 0, and i is an integer greater than 0 and less than or equal to N.
It can be seen that the respective color spaces of the first color card image and the reference image can be converted into a second color space, in which the first color difference value between the first color card image and the reference color card image can be calculated, reducing errors in the color difference calculation. The first color difference value may be determined by weighting the color difference value of each color card area.
In one possible implementation, a standard color chip may include N color chip regions; based on this, a method of determining a first color difference value between a first color card image and the reference color card image based on a standard color card may include: converting the color space of the first color card image from the first color space to a second color space to obtain a first intermediate image; converting the color space of the reference color card image from a first color space to a second color space to obtain a reference intermediate image; calculating the distance between an ith color card area in the first intermediate image and an ith color card area in the reference intermediate image to obtain a color difference value corresponding to the ith color card area; and obtaining N color difference values based on the color difference value corresponding to the ith color card area, wherein the first color difference value comprises N color difference values.
It can be seen that the respective color spaces of the first color card image and the reference image can be converted into a second color space in which a first color difference value between the first color card image and the reference color card image can be calculated, reducing errors in color difference calculation. Furthermore, the first color difference value may be used to represent N color difference values.
In one possible implementation, the method may further include: in response to at least one of the N color difference values being greater than a preset threshold, determining that the first color difference value is greater than the preset threshold.
It can be seen that when a color difference value greater than the preset threshold exists in the first color difference value, it can be indicated that the first color difference value is greater than the preset threshold.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors and memory; the memory is coupled to the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform the method as described in the first aspect or any implementation of the first aspect.
In a third aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect or any implementation of the first aspect.
Drawings
FIG. 1 is a schematic diagram of a 24-color standard color chart according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 4 is a flowchart of a correction parameter search algorithm according to an embodiment of the present application;
FIG. 5 is a flowchart of another correction parameter search algorithm according to an embodiment of the present application;
FIG. 6 is a schematic diagram of color difference values under different weights according to an embodiment of the present application;
FIG. 7 is a diagram of search results of a correction parameter search method according to an embodiment of the present application;
FIG. 8 is a flow chart of interaction of modules in an electronic device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 10 is a schematic diagram of a software framework of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between 2 or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
First, terms related to the embodiments of the present application are described by way of example, not by way of limitation, so as to be understood by those skilled in the art.
1. Color space (color space)
Color space, also referred to as color model, etc. The color space is used to describe colors under a unified standard.
An image may be represented in a different color space with the visual effects (e.g., content, color, brightness, etc. of the image) being the same. The expression of the color space is various, different color spaces have different characteristics, but because the different color spaces are isomorphic, they can be mutually converted. The color space to which the present application relates includes an RGB color space, a Lab color space, and an XYZ color space.
(1) RGB color space
RGB represents the three primary colors of red, green and blue, and the RGB color space is a color space in which colors are described by the three primary colors. R, G, B respectively represent three color channels (channels): R refers to red, which may be referred to as the red channel; G refers to green, i.e., the green channel; and B refers to blue, i.e., the blue channel. At a geometric level, the RGB color space can be represented by a spatial coordinate system consisting of three mutually perpendicular axes R, G, B. Typically, a color image acquired by an electronic device such as a camera may be stored in the three components R, G, B. For example, a color image may be represented by chromaticity coordinates (R, G, B), where each of R, G, B typically takes any value in the range [0, 255].
There is a certain correlation between the components of the RGB color space, which in most cases are proportional: in a natural scene, if the value of one channel is large, the values of the other channels of the image tend to be large as well. This means that if the colors of an image are to be processed, it is usually necessary to modify the three components of each pixel simultaneously so as not to affect the realism of the image, which greatly increases the complexity of the color adjustment process. Color migration in the RGB color space is therefore complex, and the resulting visual effect is unnatural. Furthermore, the RGB color space is a device-dependent color space. For example, different scanners scanning the same image produce image data of different colors, and different types of displays displaying the same image show different colors.
(2) Lab color space
Lab is a color space consisting of one luminance channel and two color channels. The three components of the Lab color space can be represented by L, a, b, respectively, where L represents luminance, a represents the component from green to red, and b represents the component from blue to yellow. Typically, the range of L is [0, 100], and the ranges of a and b can be [-127, 128]; a spans the range from red to green, b the range from yellow to blue, with positive values of a and b corresponding to warm colors and negative values to cool colors.
(3) XYZ color space
The XYZ color space, also known as the CIE 1931 XYZ system, is obtained, on the basis of the RGB color space, by mathematically selecting three ideal primary colors in place of the actual three primaries. The three ideal primaries may be referred to as the spectral tristimulus values.
(4) Color space conversion
There is a certain correlation between the components of the RGB color space, which in most cases are proportional: in a natural scene, if the value of one channel is large, the values of the other channels tend to be large as well. This means that processing the colors of an image usually requires modifying the three components of each pixel simultaneously so as not to affect its realism, which increases the complexity of the color correction process. Furthermore, the RGB color space is device-dependent: different scanners scanning the same image produce image data of different colors, and different types of displays showing the same image display different colors. Therefore, the RGB color space is not well suited for image processing.
In the Lab color space, the components used to represent the brightness and color of the image are independent, i.e., the L-channel represents brightness alone, independent of color, and the a-channel and b-channel represent color alone, independent of brightness. If only the brightness is required to be adjusted (such as sharpening, blurring and the like on the image), only the L channel can be adjusted; if color adjustment is required (e.g., to adjust the saturation of the image), the a-channel and the b-channel can be adjusted separately. In addition, the Lab color space is a device-independent color space, and values of L, a, and b of the same image in different electronic devices may be the same.
Therefore, according to different image processing requirements, the images can be converted into different color spaces for processing, so that the operation amount and the operation difficulty are reduced. Typically, the image may be converted from an RGB color space to a Lab color space to facilitate image processing operations such as color correction of the image by the electronic device.
The RGB color space cannot be directly converted into the Lab color space; the conversion goes by way of the XYZ color space, i.e., the RGB color space is first converted into the XYZ color space, and the XYZ color space is then converted into the Lab color space. By way of example, the conversion of the RGB color space to the XYZ color space can be expressed, taking the standard sRGB/D65 coefficients as an example, as:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Based on this, the conversion of the XYZ color space to the Lab color space can be expressed by the following formulas:

$$L = 116\,f\!\left(\frac{Y}{Y_n}\right) - 16,\qquad a = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right],\qquad b = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right]$$

where $(X_n, Y_n, Z_n)$ are the tristimulus values of the reference white point, and the function $f(t)$ can be expressed as:

$$f(t) = \begin{cases} t^{1/3}, & t > \left(\dfrac{6}{29}\right)^{3} \\[1.5ex] \dfrac{1}{3}\left(\dfrac{29}{6}\right)^{2} t + \dfrac{4}{29}, & \text{otherwise} \end{cases}$$
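For illustration, this conversion chain can be sketched in Python. This is a minimal sketch assuming linear 8-bit RGB input and the D65 white point; the function names are illustrative and the sRGB gamma linearization step is omitted for brevity, so it is not the patent's own implementation:

```python
import numpy as np

# Linear sRGB -> XYZ matrix and white point for D65 (standard values)
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])  # (Xn, Yn, Zn)

def f(t):
    # Piecewise function used by the XYZ -> Lab transform
    delta = 6.0 / 29.0
    return np.where(t > delta ** 3,
                    np.cbrt(t),
                    t / (3 * delta ** 2) + 4.0 / 29.0)

def rgb_to_lab(rgb):
    """Convert an (..., 3) array of RGB values in [0, 255] to Lab."""
    rgb = np.asarray(rgb, dtype=np.float64) / 255.0
    xyz = rgb @ M_RGB2XYZ.T
    fx, fy, fz = (f(xyz[..., i] / WHITE_D65[i]) for i in range(3))
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```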
For example, taking the conversion of the color space of the 24-color standard color chart from the RGB color space to the Lab color space as an example, Table 1 shows the values of the R, G, B channels of the 24-color standard color chart in the RGB color space, and the values of the L, a, b channels after conversion to the Lab color space.
TABLE 1 conversion of RGB color space to Lab color space
As shown in fig. 1, the 24-color chart includes 24 color chart areas. Numbered from left to right and from top to bottom, the first 18 color chart areas (i.e., color chart 1 to color chart 18) include the additive three primaries (red, green, blue), the subtractive three primaries (yellow, magenta, cyan), and other colors for simulating the true colors of natural objects. Color charts 19 through 24 are six gray-scale color charts.
2. Image signal processing (image signal process, ISP)
The image signal processing is a process in which an image signal processor (image signal processor) processes a digital signal of an image output from a front-end image sensor. ISPs may include, but are not limited to, noise removal, automatic white balance adjustment (automatic white balance, AWB), dead spot removal, automatic exposure control, black level correction (black level correction, BLC), color interpolation, non-linear gamma (gamma) correction, color correction (color correction), and the like. In general, the processing flow related to the color of the image in the image signal processing may sequentially include a flow of BLC, AWB, CCM, nonlinear gamma correction, and the like.
(1) Black level correction
The black level (black level) may also be referred to as dark current, and refers to the lowest level value of black data in an image, and generally refers to the corresponding sensor signal level value when the photosensitive image data is 0. The black level correction means correcting a black level in image data output from a sensor so that the image data subjected to BLC processing can restore real image level data.
(2) Automatic white balance adjustment
White balance (white balance) refers to a technique for rendering a white object as white under any light source. Images captured by electronic devices such as image sensors under different light are influenced by the color of the light source, producing a color cast. For example, an image taken under a clear sky may be bluish, while an object photographed under candlelight may appear reddish.
Through AWB, the color cast phenomenon of the image can be corrected, the color of the shooting subject is restored, and the image shot under different light sources is similar to the color of the picture watched by human eyes.
(3) Color correction
Color correction refers to a technique for correcting the difference between the sensor's response to light and the human eye's response to light, so that the colors of the corrected image are closer to the true colors.
A) Color correction matrix (color correction matrix, CCM)
Typically, CCM may be used for color calibration. The CCM is a 3×3 matrix containing 9 CCM parameters in total; one CCM may be referred to as one correction parameter. Illustratively, the CCM may be represented by the following matrix:

$$\mathrm{CCM} = \begin{bmatrix} C_{00} & C_{01} & C_{02} \\ C_{10} & C_{11} & C_{12} \\ C_{20} & C_{21} & C_{22} \end{bmatrix}$$

where $C_{ij}$ represents any one of the 9 CCM parameters in the CCM, $0 \le i \le 2$ and $0 \le j \le 2$. For one image, the CCM may be multiplied with the R, G, B channels in the RGB color space of each pixel of the image, so that the newly generated R, G, B values are closer to the colors seen by the naked eye, thereby achieving color correction. Illustratively, with $(R_{in}, G_{in}, B_{in})$ representing the initial chromaticity coordinates of the image and $(R_{out}, G_{out}, B_{out})$ representing the chromaticity coordinates of the image after color correction, the color correction process can be represented by the following formula:

$$\begin{bmatrix} R_{out} \\ G_{out} \\ B_{out} \end{bmatrix} = \mathrm{CCM} \begin{bmatrix} R_{in} \\ G_{in} \\ B_{in} \end{bmatrix}$$
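As an illustration, applying a CCM to every pixel of an RGB image can be sketched as follows. This is a minimal numpy sketch with illustrative names, not code from the patent:

```python
import numpy as np

def apply_ccm(image, ccm):
    """Apply a 3x3 color correction matrix to an (H, W, 3) RGB image.

    Each pixel (R, G, B) is left-multiplied by the CCM, then clipped
    back to the valid 8-bit range.
    """
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(np.float64)
    corrected = flat @ ccm.T  # (R_out, G_out, B_out) = CCM @ (R_in, G_in, B_in)
    corrected = np.clip(corrected, 0, 255)
    return corrected.reshape(h, w, 3).astype(np.uint8)
```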
typically, color calibration using CCM is performed after BLC, AWB, and before nonlinear gamma correction of the image. To ensure that the white color of the image remains white after the color correction, the 9 CCM parameters in one CCM need to satisfy the white balance constraint, i.e., the CCM parameters for each row in the CCM add to 1, which can be expressed by the following formula:
based on the white balance constraint condition, the parameters of the main diagonal in the CCM can be constrained, and the parameters of the off-diagonal are determined as free variables, which are respectively x 1 To x 6 The method comprises the steps of carrying out a first treatment on the surface of the The parameters on the diagonal can be converted into dependent variables, so CCM can be represented by the following matrix:
the embodiment of the application uses a color correction matrix to include 6 CCM parameters (x 1 To x 6 ) That is, one correction parameter including the above 6 CCM parameters is exemplified.
In general, the values of the diagonal CCM parameters may range over [1, 3], and the absolute value of an off-diagonal CCM parameter is less than or equal to the value of the diagonal CCM parameter. The value range [1, 3] is continuous, which means the number of possible values of a CCM parameter is infinite. Determining a correction parameter from infinitely many values, that is, determining 9 CCM parameters to form a color correction matrix, is therefore time-consuming and difficult.
B) Chromatic aberration
The color difference $\Delta E$ refers to the distance between two points (each point representing one color) in a color space, and may represent the visual color difference between two colors, or the color difference between two images. Illustratively, assume that the chromaticity coordinates of two colors in the Lab color space are $(L_1, a_1, b_1)$ and $(L_2, a_2, b_2)$. The color difference $\Delta E$ between the two colors can be calculated by the following formula:

$$\Delta E = \sqrt{(\Delta L)^2 + (\Delta a)^2 + (\Delta b)^2}$$

where $\Delta L$ represents the difference in brightness between the two colors, $\Delta a$ represents the difference in the green-red hue between the two colors, and $\Delta b$ represents the difference in the blue-yellow hue between the two colors. The calculation of $\Delta L$, $\Delta a$, $\Delta b$ can be expressed by the following formulas, respectively:

$$\Delta L = L_1 - L_2, \quad \Delta a = a_1 - a_2, \quad \Delta b = b_1 - b_2$$
the smaller the color difference Δe, the smaller the difference between the two colors, and conversely the larger the difference between the two colors.
C) Determination of color correction matrix
In general, the color correction process is to compare an image to be corrected captured by an image sensor in an electronic device with a contrast image, and calculate a color difference between the two images, so as to calculate a color correction matrix. The calculated color correction matrix is the color correction matrix corresponding to the electronic equipment. After the electronic device determines the corresponding color correction matrix, the electronic device can default to use the color correction matrix to perform color correction on the image shot by the electronic device, so as to output an image visible to a user.
Typically, color correction may be performed based on the color chart, i.e., the color correction matrix of the electronic device is determined based on the color chart. For example, the color correction matrix may be adjusted based on the color differences between corresponding color chart areas of the color chart image to be corrected and the reference color chart image, thereby improving the effect of color correction. The color chart image to be corrected is an image obtained by photographing a standard color chart (for example, the 24-color standard color chart shown in fig. 1) with the electronic device, and the reference color chart image is an image obtained by photographing the same standard color chart with another device, or an image obtained by photographing the same standard color chart and then performing image processing. The color effect of the reference color chip image is the color effect the electronic device is expected to achieve, e.g., the colors of the reference color chip image are similar or identical to the colors of the actual standard color chip.
The process may be optimized by an objective function. The objective function $f_{obj}$ can be expressed by the following formula:

$$f_{obj}(x) = \sum_{i=1}^{N} \Delta E_i$$

where $\Delta E_i$ represents the color difference between the i-th color chart region in the color chart image to be corrected and the i-th color chart region in the reference color chart image, such as the color difference between color chart 1 in the color chart image to be corrected and color chart 1 in the reference color chart; the objective function $f_{obj}$ represents the total color difference between the color chart image to be corrected and the reference color chart image. The optimization objective may be expressed as finding the $x$ that achieves $\min_x f_{obj}(x)$. That is, a color correction matrix (also referred to as a correction parameter) is determined such that the color difference between the image color-corrected by that matrix and the reference color chart image is minimized.

In some cases, a weight may be set for each color chart region in the color chart image to be corrected; the weight $w_i$ of each color chart region may be the same or different. The weight of a color chart region may be used to represent the degree of correction applied to that region during color correction. For example, when the weight of color chart 1 is higher than that of the other color chart regions, the degree of color correction of color chart 1 is higher than that of the other regions in the color correction process. In this case, the objective function $f_{obj}$ can be expressed by the following formula:

$$f_{obj}(x) = \sum_{i=1}^{N} w_i \Delta E_i$$

Based on the objective function, the minimum function value within the range of argument values, and the argument corresponding to that minimum function value, can be obtained. In other words, the minimum color difference value and the corresponding correction parameter can be determined based on the objective function, i.e., the values of $x_1$ to $x_6$ in the color correction matrix are determined. Color correction can thus be converted into an optimization problem over the objective function (also referred to as the CCM optimization problem), i.e., finding the $x$ for which $\min_x f_{obj}(x)$ holds.
(4) Nonlinear gamma correction
The nonlinear gamma correction is used for adjusting the overall brightness of the image, and the parts with higher brightness and lower brightness in the image can be adjusted to coordinate the brightness of each part in the image. In general, after an electronic device such as an image sensor captures an image, the image may be sequentially subjected to processing such as BLC, AWB, CCM and nonlinear gamma correction, so as to obtain a color-reduced image. If the nonlinear gamma correction is directly performed without performing the Color Correction (CCM), the color saturation of the image may be reduced, and the quality of the image may be degraded. Therefore, color correction is required before nonlinear gamma correction is performed.
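For reference, a typical nonlinear gamma curve can be applied as sketched below. The gamma value of 2.2 is a common illustrative choice, not a value specified by the patent:

```python
import numpy as np

def gamma_correct(image, gamma=2.2):
    """Apply nonlinear gamma correction to an 8-bit image."""
    normalized = image.astype(np.float64) / 255.0
    return (np.power(normalized, 1.0 / gamma) * 255.0).astype(np.uint8)
```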
3. Bayesian optimization algorithm (bayesian optimization, BO)
The Bayesian optimization algorithm is an approximation algorithm: a surrogate (proxy) function is used to fit the relationship between hyperparameters and model evaluation, promising hyperparameter combinations are then selected for iteration, and finally the hyperparameter combination with the best effect is obtained. In short, the Bayesian optimization algorithm can estimate the functional relationship between an input parameter and an output parameter through the proxy function when the mapping between them is unknown, and find a globally optimal solution based on the estimated functional relationship. In the present application, a Bayesian optimization algorithm may be used to determine the CCM, i.e., to determine the correction parameters, so that the color effect of an image color-corrected by this CCM is optimized, e.g., so that the colors of the corrected image are close to the real colors.
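One possible realization of such a search uses a Gaussian-process surrogate with an acquisition function, e.g. via the scikit-optimize library. The library choice, bounds and call budget are assumptions for illustration; the patent does not name a specific library, and the data arrays are assumed to be prepared in advance:

```python
from skopt import gp_minimize  # scikit-optimize; an illustrative library choice

# first_patches_rgb, reference_patches_lab and weights are assumed to be
# prepared in advance (per-region mean colors of the two color card images).
bounds = [(-1.0, 1.0)] * 6  # illustrative search range for x1..x6

result = gp_minimize(
    func=lambda x: objective(x, first_patches_rgb, reference_patches_lab, weights),
    dimensions=bounds,
    n_calls=100,      # number of objective evaluations
    random_state=0,
)
target_x, min_delta_e = result.x, result.fun  # candidate target correction parameter
```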
4. Evolutionary algorithm (evolutionary algorithm, EA)
The evolutionary algorithm originates from computer simulation of biological systems and is a stochastic global search optimization method. By simulating the replication, crossover and mutation phenomena occurring in natural selection and heredity, it starts from an arbitrary population and, through selection, crossover and mutation operations, produces a group of individuals better suited to the environment, so that the population evolves toward better and better regions of the search space; generation after generation it propagates and evolves, finally converging to a group of individuals best suited to the environment, thereby obtaining a high-quality solution to the optimization problem. The fitness of each individual is typically computed with a fitness function (fitness function) to measure how well the individual "suits the environment"; individuals survive or are eliminated according to the size of their fitness. Fitness is non-negative, and a larger fitness value indicates a better individual. In the present application, an evolutionary algorithm may be used to determine the target correction parameters. Fitness can be used to represent the magnitude of the color difference value corresponding to a correction parameter: the smaller the color difference value corresponding to a correction parameter, the higher the fitness of that correction parameter.
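A minimal evolutionary loop over CCM parameters with selection, crossover and mutation might look as follows. This is a generic sketch under assumed population sizes and bounds, not the patent's specific operators:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, pop_size=50, n_params=6, generations=100,
           mutation_scale=0.05):
    """Minimal evolutionary search over CCM parameters.

    fitness(x) should return the color difference for parameter vector x
    (e.g. the objective sketch above); lower is better, i.e. fitter.
    """
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]  # selection
        # crossover: mix two random parents element-wise
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, n_params)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0, mutation_scale, children.shape)  # mutation
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[np.argmin(scores)]
    return best, scores.min()
```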
Referring to fig. 2, fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application. As shown in fig. 2, after an electronic device such as a mobile phone or a camera photographs, an image generated first is in an original (raw) format, that is, a raw image. In general, image signal processing is required for a raw image, and for example, a flow such as Black Level Correction (BLC), automatic white balance correction (AWB), color Correction (CCM), and nonlinear gamma correction is performed in order for the raw image, so that the color aspect of the raw image is optimized to obtain a final image. In general, a CCM used for color correction of an image requires a technician to empirically set and adjust correction parameters, that is, adjust each CCM parameter in a 3×3 matrix of CCMs, apply the adjusted CCM parameter to the image, and then judge whether the color correction effect of the CCM needs to be further adjusted according to the color effect seen by naked eyes. Because the value space of the CCM parameters is continuous, the adjustable space of each parameter in the CCM is infinite, and the constraint conditions such as white balance constraint conditions are also considered, so that the adjustment process is time-consuming and labor-consuming and has low efficiency.
The embodiment of the application provides an image processing method which can be used for determining a color correction matrix, namely determining correction parameters so as to improve the efficiency of optimizing the color correction matrix; the electronic equipment can carry out color correction on the image through the determined correction parameters, and the picture quality of the electronic equipment can be improved.
Next, a detailed description will be given of a specific flow of the image processing method according to the embodiment of the present application with reference to fig. 3. The image processing method may be executed by the electronic device, or by an image signal processor in the electronic device, or may be executed by a chip or a chip system or the like having the function of the image signal processor. As shown in fig. 3, taking an example of an image processing method performed by an electronic device, the method may include, but is not limited to, the steps of:
s301, a first color card image and a reference color card image are acquired.
The first color card image may be an image obtained by photographing a standard color card with an electronic device having a photographing function, such as a mobile phone or a camera. The standard color chart may be the 24-color standard color chart shown in fig. 1. The first color card image must capture at least the first 18 of the 24 color charts. The reference color chart image refers to a reference image of the 24-color standard color chart. For example, the reference color chart image may be an image obtained by photographing the same 24-color chart under the same photographing conditions as the first color chart image, having the color effect the electronic device is expected to achieve (or a color effect meeting a specified requirement, e.g., high color saturation). For another example, the reference color chart image may be an image, obtained through image processing, whose colors match those of the true 24-color reference color chart. The reference color card image may be pre-stored in the electronic device, or the electronic device may receive a reference color card image entered by a technician.
The first color card image and the reference color card image can be used for determining a color correction matrix of the electronic device, so that when the electronic device uses the determined color correction matrix to perform color correction on an image obtained by any shooting, the color effect of the image is improved.
The first color card image may be an image that has already been color-corrected by the initial correction parameters. For example, assume the electronic device photographs a 24-color standard color chart to obtain an initial color chart image, and performs initial color correction on the initial color chart image based on the initial correction parameters, yielding the first color chart image. The initial correction parameters are the initialized correction parameters: 9 CCM parameters forming a color correction matrix, of which, based on the white balance constraint, only 6 actually need to be adjusted. The initial correction parameters may be a set of parameters selected at random from the value ranges of the CCM parameters, or a set of parameters selected by a technician. By way of example, the initial correction parameters can be represented as a 3×3 matrix $\mathrm{CCM}_0$ of the form given above.

Illustratively, assume that the chromaticity coordinates of the initial color card image in the RGB color space are expressed as $(R_1, G_1, B_1)$ and the chromaticity coordinates of the first color card image as $(R_2, G_2, B_2)$. The process of color-correcting the initial color chart image based on the initial correction parameters can be represented by the following formula:

$$\begin{bmatrix} R_2 \\ G_2 \\ B_2 \end{bmatrix} = \mathrm{CCM}_0 \begin{bmatrix} R_1 \\ G_1 \\ B_1 \end{bmatrix}$$
s302, converting color spaces of the first color card image and the reference color card image respectively to obtain a first intermediate image and a reference intermediate image.
Specifically, converting a color space of a first color card image from a first color space to a second color space to obtain a first intermediate image; the color space of the reference color card image is converted from the first color space to the second color space, and a reference intermediate image is obtained.
Typically, the colors of an image are defined based on the RGB color space. For example, the colors of the first color card image and the reference color card image are defined based on the RGB color space, i.e., the color space of the first color card image is the RGB color space, and the color space of the reference color card image is also the RGB color space. Therefore, the color spaces of the first color card image and the reference color card image need to be converted from the RGB color space to the Lab color space before the color difference calculation is performed. The first color space may be the RGB color space, and the second color space may be the Lab color space. For an image, converting the color space changes the representation of the chromaticity coordinates, e.g., from (R, G, B) to (L, a, b), without changing the visual effect (i.e., without changing the content, color, brightness, etc. of the image), so the visual effect of the first color card image and the first intermediate image is the same, and the visual effect of the reference color card image and the reference intermediate image is the same. In other words, the first intermediate image and the first color card image include the same N color card areas, and the reference intermediate image and the reference color card image include the same N color card areas.
S303, calculating a color difference value between an ith color card area in the first intermediate image and an ith color card area in the reference intermediate image, and obtaining a color difference value corresponding to the ith color card area. Wherein i is an integer greater than 0 and less than or equal to N.
Assuming that the standard color chart is a 24-color standard color chart image as shown in fig. 1, the standard color chart includes 24 color chart areas, and the same 24 color chart areas are included in the reference color chart image and the first color chart image. Illustratively, the color chip area of the reference color chip image constitutes a 24-color standard color chip as shown in fig. 1. The color card area configuration of the first color card image can also be referred to in fig. 1. As shown in fig. 1, 24 color chip areas, such as color chip 1 to color chip 24, are obtained by numbering 24 color chip areas from left to right and from top to bottom. The 24 color card areas in the standard color card may be numbered separately using the numbering principle shown in fig. 1. In the 24-color standard color chart, the color chart 19 to the color chart 24 are gray color charts, and the 6 color chart areas can be omitted in the color correction process. Accordingly, the first color chart image and the first intermediate image may include only 18 color chart areas of the color chart 1 to the color chart 18, and the reference color chart image and the reference intermediate image may include only the 18 color chart areas.
The color difference value corresponding to the i-th color card area can represent the color difference value corresponding to any one of the N color card areas; it represents the color difference between the two color chart regions at the same position. Based on this, by performing S303, the color difference values corresponding to the N color card areas can be determined, i.e., N color difference values in total. For example, the color difference value between color card 1 in the first color card image and color card 1 in the reference color card image may be regarded as the first of the N color difference values and expressed as $\Delta E_1$; the color difference value between color chart 2 in the first color chart image and color chart 2 in the reference color chart image may be regarded as the second color difference value, expressed as $\Delta E_2$. Based on this, the N color difference values obtained may include 18 color difference values, which may be expressed as $(\Delta E_1, \Delta E_2, \Delta E_3, \ldots, \Delta E_{18})$.
The embodiment of the present application is described by taking N = 18 as an example, that is, the standard color chart includes the 18 color chart areas color chart 1 to color chart 18.
In one possible implementation, the first color difference value may also be determined by calculating the vector included angle between the k-th pixel point in the first color card image and the k-th pixel point in the reference color card image; alternatively, the RGB color spaces of the first color card region and the reference color card region may be converted into the HSI color space, in which the first color difference value is determined from the chromaticity coordinates.

For example, assume the N color chip regions of the standard color chip correspond to Q pixels, Q being an integer, and k being an integer greater than 0 and less than or equal to Q. The chromaticity coordinates of the k-th pixel point in the first color card image are $(R_1, G_1, B_1)$, and the chromaticity coordinates of the k-th pixel point in the reference color chart image are $(R_2, G_2, B_2)$. The two pixel points correspond to the three-dimensional vectors $l_1 = (R_1, G_1, B_1)$ and $l_2 = (R_2, G_2, B_2)$, and based on $l_1$ and $l_2$ the included angle $\alpha$ between the two pixel points can be determined:

$$\cos\alpha = \frac{l_1 \cdot l_2}{\lVert l_1 \rVert \, \lVert l_2 \rVert}$$

Based on the above, the included angle between any pixel point of the first color card image and the same pixel point in the reference color card image can be determined. The cosine of the included angle is taken as the color difference value between the two pixel points, and the total color difference value, i.e., the first color difference value, is determined from the per-pixel color difference values; for example, the first color difference value may be the average of the color difference values over all pixel points.
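A sketch of this vector-angle variant in Python (an illustrative realization; names and the zero-vector guard are assumptions, not part of the patent):

```python
import numpy as np

def cosine_color_difference(img1, img2):
    """Per-pixel cosine of the angle between RGB vectors, averaged.

    img1, img2: (H, W, 3) arrays holding the first color card image and
    the reference color card image, respectively.
    """
    v1 = img1.reshape(-1, 3).astype(np.float64)
    v2 = img2.reshape(-1, 3).astype(np.float64)
    dot = np.sum(v1 * v2, axis=1)
    norm = np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1)
    cos_alpha = dot / np.maximum(norm, 1e-12)  # guard against zero vectors
    return float(cos_alpha.mean())
```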
S304, determining that the optimization mode of the first color card image is single-target optimization or multi-target optimization, if the optimization mode is single-target optimization, executing S305, otherwise executing S306.
When the color correction is performed on the first color card image, the optimization mode of the first color card image can be determined to be single-target optimization or multi-target optimization through configuration information in the electronic equipment or the image signal processor. For example, in the same electronic device, configuration information may be stored in the image signal processor for specifying that single-objective optimization or multi-objective optimization is to be employed in performing color correction.
S305, determining a color difference intermediate value corresponding to the ith color card area, and determining a first color difference value based on the color difference intermediate value.
The color difference value corresponding to the i-th color card area may represent the color difference value corresponding to any one of the N color card areas; based on this, by performing S305, one color difference intermediate value corresponding to each of the N color card areas can be determined, giving N color difference intermediate values. Further, based on the color difference value corresponding to the i-th color card area and the weight corresponding to the i-th color card area, the color difference intermediate value corresponding to the i-th color card area can be obtained. Illustratively, the color difference intermediate value corresponding to the i-th color chip area may be expressed as $w_i \Delta E_i$, where $w_i$ represents the weight of the i-th color card area and $\Delta E_i$ represents the color difference value corresponding to the i-th color card area.

In response to the optimization mode of the first color card image being single-objective optimization, the first color difference value $f_{sig}$ can be obtained from the color difference intermediate values corresponding to the color card areas in the standard color card by the following formula:

$$f_{sig} = \sum_{i=1}^{N} w_i \Delta E_i$$

where $\Delta E_i$ represents the color difference value between the i-th color card region in the first intermediate image (i.e., the first color card image) and the i-th color card region in the reference intermediate image (i.e., the reference color card image), and $w_i$ represents the weight of the i-th color chart region. Based on this, the first color difference value may be understood as a weighted sum of the N color difference values.
The electronic device may obtain the weight of each color card area in the first color card image from the configuration information in the electronic device or the image processor, or the electronic device may accept each weight input by the user, which is not limited in the present application.
S306, determining a first color difference value based on the color difference value corresponding to the ith color card area.
The color difference value corresponding to the i-th color card area may represent the color difference value corresponding to any one of the N color card areas; based on this, by performing S306, the color difference values corresponding to the N color card areas can be determined, that is, N color difference values can be determined.
In response to the optimization mode of the first color card image being multi-objective optimization, it may be determined that the first color difference value includes N color difference values. For example, the i-th of the N color difference values may be expressed as $f_i^{mul}$ and given by the following formula:

$$f_i^{mul} = \Delta E_i$$

where $\Delta E_i$ represents the color difference value between the i-th color card region in the first color card image and the i-th color card region in the reference color card image, and $0 < i \le N$, $i$ being an integer. Based on this, the first color difference value may be understood as a set of N color difference values, e.g., the first color difference value may be expressed as $(\Delta E_1, \Delta E_2, \Delta E_3, \ldots, \Delta E_{18})$.
S307, judging whether the first color difference value is larger than a preset threshold value, if so, executing S308, otherwise, executing S311.
The preset threshold may be set by a technician. Assuming the preset threshold is a (a is a real number greater than 0), then in the case of single-objective optimization, when the first color difference value f_sig satisfies f_sig > a, S308 may be performed; otherwise S311 is performed. Similarly, in the case of multi-objective optimization, when at least one of the N color difference values in the first color difference value, e.g., the i-th color difference value f_i^mul, satisfies f_i^mul > a, S308 may be performed; otherwise S311 is performed.
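As a sketch of the decision in S307, assuming the preset threshold a and the two optimization modes described above (the function name is hypothetical):

```python
def needs_parameter_search(first_diff, mode, a):
    """Decide S307: True means go to S308 (search for new correction
    parameters); False means go to S311 (keep the initial parameters)."""
    if mode == "single":
        # single-objective: first_diff is the scalar f_sig
        return first_diff > a
    # multi-objective: first_diff is the set (dE_1, ..., dE_N); search as
    # soon as any single color card area exceeds the threshold
    return any(d > a for d in first_diff)
```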
S308, determining a correction parameter searching algorithm, and executing S309 if the correction parameter searching algorithm is a Bayesian optimization algorithm; if the correction parameter search algorithm is an evolutionary algorithm, then S310 is performed.
The correction parameter search algorithm is used for determining the target correction parameters. In some implementations, the electronic device may also employ other correction parameter search algorithms. The embodiments of the present application take correction parameter search algorithms including the Bayesian optimization algorithm and the evolutionary algorithm as examples.
Before S309 or S310 is performed, M training correction parameters and K training color difference values may be acquired first, M and K being integers greater than 0. In general, the value of M may be 500, 1000, or the like. One training correction parameter m of the M training correction parameters corresponds to one training color difference value n of the K training color difference values, where the training color difference value n may be the color difference value between training color card image n and the reference color card image, and training color card image n is the image obtained by performing color correction on the first color card image based on the training correction parameter m. The M training correction parameters and the K training color difference values are experimental data obtained through experiments; they may be stored in advance in the electronic device or in an image processor in the electronic device, or may be input into the electronic device by a technician before the embodiment of the present application is implemented. One training correction parameter may represent one color correction matrix.
S309, determining target correction parameters based on the Bayesian optimization algorithm in response to the correction parameter search algorithm being the Bayesian optimization algorithm.
In the process of the Bayesian optimization algorithm, firstly, a functional relation between M training correction parameters and K training color difference values can be estimated based on a preset agent model and a preset acquisition function, and an objective function is determined based on the functional relation. The independent variable of the objective function is used to represent the training correction parameter, and the dependent variable (function value) is used to represent the training color difference value. In other words, the independent variable value range of the objective function includes M training correction parameters, and the value range of the function value of the objective function includes K training color difference values.
Further, within the value range of the independent variable of the objective function, the minimum function value of the objective function (i.e., the training color difference value with the smallest value) and the value of the independent variable corresponding to that minimum function value (i.e., the training correction parameter corresponding to the smallest training color difference value) can be determined. This minimum function value may be determined as the second color difference value, and this training correction parameter as the target correction parameter. In the Bayesian optimization process, the training correction parameters can be optimized so that the color difference value corresponding to the determined target correction parameter is smaller than any of the K training color difference values; in other words, the minimum function value, i.e., the second color difference value, is smaller than the preset threshold. The second color difference value represents the color difference value between the second color card image and the reference color card image, where the second color card image is the image obtained by performing color correction on the first color card image based on the target correction parameter. M and K are integers greater than 0, and M and K may be equal.
Specifically, the implementation process of determining the target correction parameters based on the bayesian optimization algorithm may refer to the flowchart shown in fig. 4. By way of example, as shown in fig. 4, the implementation may include the steps of:
S401, a j-th data set is acquired, j=1.
When j=1, the j-th data set is the 1st data set, which may be referred to as the initial data set. The j-th data set may include M groups of data, each group including one input parameter and one output parameter, where the input parameter refers to a training correction parameter and the output parameter refers to a training color difference value. In other words, the j-th data set includes M training correction parameters and K training color difference values. The M training correction parameters may be represented as X = (x_1, x_2, x_3, …, x_M). One training correction parameter corresponds to one training color difference value. For example, the first color card image is color corrected based on training correction parameter x_1 to obtain training color card image P_1; if the color difference value between training color card image P_1 and the reference color card image is the training color difference value y_1, the training correction parameter x_1 is said to correspond to the training color difference value y_1. Thus, the K training color difference values may be represented as Y = (y_1, y_2, y_3, …, y_K), thereby generating the j-th data set, which may be represented as D_j; the initial data set is D_{j=1} = {(x_1, y_1), (x_2, y_2), …, (x_M, y_K)}. Here, x_M represents a hyperparameter, i.e., a training correction parameter (6 CCM parameters in total), y_K represents the training color difference value corresponding to that training correction parameter, and (x_M, y_K) represents the M-th group of data.
S402, determining an i-th function based on the preset proxy model and the j-th data set.
The preset proxy model is a model obtained by fitting the j-th data set D_j. For example, the preset proxy model may be a random forest model, a tree-structured Parzen estimator (tree-structured parzen estimators, TPE) model, a Gaussian model, etc., which is not limited in the present application. The i-th function f(x) is used to describe the functional relationship between the training correction parameters and the training color difference values in the j-th data set. In the i-th function, the independent variable represents a training correction parameter, and the function value (dependent variable) represents a training color difference value.
Wherein i is an integer greater than 0. i may be the same as j. For example, when j=1, i.e., based on the preset proxy model and the 1 st data set, the 1 st function, i.e., i=1, may be determined. When j=2, i.e. based on the preset proxy model and the 2 nd dataset, the 2 nd function, i.e. i=2, can be determined.
S403, judging whether the ith function meets the cut-off condition, if so, executing S407; otherwise, S404 is performed.
The cutoff condition may refer to the current iteration number having exceeded a preset iteration number threshold. For example, assuming the preset iteration number threshold is 100 and the currently estimated i-th function is the 101st function, the iteration number is 101 and exceeds the preset iteration number threshold, so it may be determined that the 101st function satisfies the cutoff condition. A technician can set different cutoff conditions according to different application scenarios, which is not limited in the present application.
S404, determining an ith sampling point through the acquisition function, and determining an ith output value corresponding to the ith sampling point based on the ith function to obtain nth data.
Assuming the i-th function is expressed as f(x), where the value range of x is the real numbers, one x represents a training correction parameter, and f(x) represents the corresponding training color difference value, e.g., the training color difference value of x_1 is f(x_1). The i-th sampling point represents a new training correction parameter, determined within the value range of the M training correction parameters (i.e., within the value range of x), which may be different from any one of the M training correction parameters. The preset acquisition function (acquisition function) is used to find the i-th sampling point x_s within the value range of x (which may be expressed as x ∈ X), where the subscript s denotes sampling. The i-th sampling point x_s may be determined through the following formula:

x_s = argmax_{x∈X} α(x)

where α(·) denotes the preset acquisition function.
any preset collection function can be adopted to determine the sample sampling point. For example, the preset acquisition function may be a probability increment (probability of improvement, PI) function, a desired increment (expected improvement, EA) function, a confidence upper limit (upper confidence bound) function, or the like.
Based on the i-th function, the output parameter y_s corresponding to the i-th sampling point x_s, i.e., the training color difference value corresponding to the i-th sampling point, can be calculated, thereby obtaining the n-th data, which may be expressed as (x_s, y_s).
S405, let j=j+1, and based on the nth data, a jth data set is obtained.
The n-th data (x_s, y_s) is added to the j-th data set D_j to generate a new data set. For example, after the n-th data is added to the initial data set D_{j=1}, the 2nd data set can be obtained. The 2nd data set may be represented as D_{j=2} = {(x_1, y_1), (x_2, y_2), …, (x_M, y_K), (x_s, y_s)}.
S406, let i=i+1, n=n+1, and S402 is executed.
When the ith function does not meet the cutoff condition, i=i+1, n=n+1, namely, iteration is continued, and a new ith function is determined according to the generated new nth data and the new jth data set until the ith function meeting the cutoff condition is determined.
S407, determining the ith function as an objective function.
If the current i-th function satisfies the cutoff condition, e.g., the current i-th function is the 101st function and the preset iteration number threshold has been exceeded, the current i-th function may be determined as the objective function.
S408, determining a target correction parameter based on the target function.
Based on the objective function, the minimum function value (namely, the training color difference value with the minimum value) of the objective function and the independent variable corresponding to the minimum value of the function, namely, the training correction parameter corresponding to the training color difference value with the minimum value can be obtained in the value range of the independent variable. In other words, based on the objective function, the second color difference value and the training correction parameter corresponding to the second color difference value can be found. The training correction parameter corresponding to the second color difference value may be determined as the target correction parameter.
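The following sketch illustrates one possible reading of the S401 to S408 loop. It assumes a Gaussian-process surrogate and an expected-improvement acquisition function, which are only one of the choices the embodiment allows (a random forest or TPE proxy model would serve equally); the search bounds and candidate-sampling strategy are likewise assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    """EI acquisition: expected amount by which a candidate improves on
    y_best (a smaller training color difference value is better)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayesian_search(evaluate, X_init, y_init, n_iter=100, n_cand=2048, seed=0):
    """evaluate maps one 6-dim CCM parameter vector to its training color
    difference value (color-correct the first color card image with it,
    then measure the difference against the reference color card image)."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X_init, float), np.asarray(y_init, float)
    lo, hi = X.min(axis=0), X.max(axis=0)   # search within the data range
    for _ in range(n_iter):                 # cutoff condition of S403
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # S402
        X_cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
        ei = expected_improvement(gp, X_cand, y.min())
        x_s = X_cand[np.argmax(ei)]         # S404: the i-th sampling point
        y_s = evaluate(x_s)                 # its training color difference
        X = np.vstack([X, x_s])             # S405: grow the data set
        y = np.append(y, y_s)
    best = np.argmin(y)                     # S407/S408: objective minimum
    return X[best], y[best]                 # target parameter, 2nd color diff
```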
S310, determining a target correction parameter based on the evolutionary algorithm in response to the correction parameter search algorithm being the evolutionary algorithm.
In the process of the evolutionary algorithm, the objective function may first be determined based on the M training correction parameters and the corresponding K training color difference values. The objective function is used to describe the functional relationship between the training correction parameters and the training color difference values, where the independent variable represents a training correction parameter and the dependent variable (function value) represents a training color difference value. In other words, the value range of the independent variable of the objective function includes the M training correction parameters, and the value range of the function value of the objective function includes the K training color difference values.
An evolution operation is performed on part or all of the M training correction parameters to obtain M1 training correction parameters, where the M1 obtained training correction parameters include at least one correction parameter different from any one of the M training correction parameters. The evolution operation includes one or more of a selection operation, a crossover operation, and a mutation operation. Based on the objective function, K1 training color difference values can be obtained, where one of the M1 training correction parameters corresponds to one of the K1 training color difference values. Further, the minimum training color difference value may be determined from the K1 training color difference values; this minimum training color difference value may be determined as the second color difference value, and the training correction parameter corresponding to it as the target correction parameter. Wherein M1 and K1 are integers greater than 0, M1 is less than M, and K1 is less than K.
In particular, the implementation of determining the target correction parameters based on the evolutionary algorithm may refer to a flowchart as shown in fig. 5. By way of example, as shown in fig. 5, the implementation may include the steps of:
S501, an i-th population is obtained, i=1.
The i-th population refers to a population comprising a plurality of training correction parameters. When i=1, the i-th population may be a randomly generated initial population; the i-th population includes M individuals, each individual representing one training correction parameter. For example, a population may include 1000 training correction parameters, each including 6 CCM parameters.
The i-th population is generated by encoding the plurality of training correction parameters in a certain encoding scheme. For example, each training correction parameter may be binary coded, which facilitates crossover and mutation. The specific encoding scheme adopted is not limited in the present application.
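As an illustration of one such binary encoding, the sketch below quantizes each CCM parameter to a fixed-point bit string; the value range [-2, 2] and 16 bits per parameter are assumptions, not values taken from the embodiment:

```python
import numpy as np

def encode(params, lo=-2.0, hi=2.0, bits=16):
    """Quantize each of the 6 CCM parameters to an integer in
    [0, 2^bits - 1] and concatenate the bit strings into one chromosome."""
    q = np.round((np.asarray(params, float) - lo) / (hi - lo) * (2**bits - 1))
    return "".join(format(int(v), f"0{bits}b") for v in q)

def decode(chrom, lo=-2.0, hi=2.0, bits=16):
    """Inverse of encode: split the chromosome and de-quantize."""
    ints = [int(chrom[k:k + bits], 2) for k in range(0, len(chrom), bits)]
    return [lo + v / (2**bits - 1) * (hi - lo) for v in ints]
```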
S502, determining the fitness of each training correction parameter in the ith population based on the objective function.
Assuming i=1, the objective function may be determined based on the M training correction parameters and the corresponding K training color difference values. The objective function is used to describe the functional relationship between the training correction parameters and the training color difference values, where the independent variable represents a training correction parameter and the dependent variable (function value) represents a training color difference value. In other words, the value range of the independent variable of the objective function includes the M training correction parameters, and the value range of the function value of the objective function includes the K training color difference values.
Fitness can be used to evaluate an individual's survival advantage in a population. In the present application, the fitness of a training correction parameter is used to evaluate, according to the training color difference value corresponding to that parameter, how well the parameter performs color correction on an image: the higher the fitness, the smaller the corresponding training color difference value, and the better the color correction effect of the training correction parameter. The fitness of a training correction parameter refers to a mapping value of its corresponding training color difference value. For example, the first color card image is color corrected based on training correction parameter A to obtain training color card image A, and the color difference value between training color card image A and the reference color card image is training color difference value A. Based on a preset mapping function, training color difference value A can be mapped into a non-negative value range to obtain a non-negative value, i.e., the mapping value of training color difference value A, which may be called the fitness A of training correction parameter A. The preset mapping function may be a sigmoid function, through which training color difference value A may be mapped into the value range (0, 1). It should be noted that the specific form of the preset mapping function is not limited in the present application.
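A minimal sketch of such a mapping, assuming the sigmoid of the negated color difference value (the embodiment leaves the exact mapping function open):

```python
import math

def fitness(delta_e):
    """Map a training color difference value into (0, 1), monotonically
    decreasing: sigmoid(-dE) = 1 / (1 + exp(dE)), so a smaller color
    difference yields a higher fitness."""
    return 1.0 / (1.0 + math.exp(delta_e))
```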
S503, performing evolution operation on M training correction parameters in the ith population to obtain M1 training correction parameters. Wherein M1 is an integer greater than 0 and less than M.
Selection operations, i.e., after calculating the fitness of each individual in the population, use a selection process to determine which individuals in the population will remain in the population for reproduction and generation of the next generation, with individuals with higher fitness being more likely to be selected and passing their genetic material to the next generation.
Crossover operation and mutation operation are used to generate new training correction parameters from the plurality of training correction parameters retained after the selection operation. Crossover, which may also be referred to as recombination, refers to selecting two individuals and exchanging part of their genes (which may be understood as exchanging part of the coding) in a specific crossover manner, thereby forming two new individuals. Mutation refers to the possibility that a new individual formed after the crossover operation undergoes a gene mutation, e.g., a change in one or more codes in the individual, thereby producing a new individual.
For example, suppose the i-th population includes 1000 training correction parameters. In the selection operation, the 1000 training correction parameters can be ranked by fitness from largest to smallest, and the 500 training correction parameters ranked in the top 500 can be selected. Based on the 500 selected training correction parameters, crossover operations and mutation operations may be performed to obtain 500 new training correction parameters.
S504, i=i+1, based on the objective function and M1 training correction parameters, an i-th population is obtained.
Based on the objective function, the fitness of each of the M1 training correction parameters may be determined. According to the fitness of the M1 training correction parameters, P correction parameters whose fitness meets a preset condition can be selected from the M1 correction parameters, and these P correction parameters serve as a new population. Assuming i=1 before S504 is performed, then after i=i+1, i=2, and the 2nd population is obtained. Wherein P is an integer greater than 0 and less than or equal to M1.
S505, judging whether the ith population meets the cut-off condition, if the ith population meets the cut-off condition, executing S506, otherwise executing S502.
The cutoff condition refers to a condition for determining whether to stop the evolutionary algorithm. For example, the cutoff condition may be that the generation of the i-th population has reached the maximum generation number; e.g., if the maximum generation number is set to 100, the evolutionary algorithm stops when the 100th population (i=100) is generated. The cutoff condition may also be that the individuals in the newly generated population show no significant improvement, in which case the evolutionary algorithm may likewise be stopped. Whether individuals have improved significantly can be measured by calculating the difference between the optimal fitness of successive generations: the best fitness obtained for each generation is stored, the difference between the best fitness of the current generation and that of the previous generation is calculated, and if the difference is less than a preset threshold, the individuals have not improved significantly.
If the ith population does not meet the cutoff condition, S502 may be executed, i.e., the evolution operation is performed again on each training correction parameter based on the new population, and the new population is iteratively generated.
S506, determining target correction parameters based on the ith population.
If the ith population meets the cutoff condition, determining a target correction parameter from the P training correction parameters included in the ith population. For example, one of the training correction parameters in which the fitness is greatest may be determined as the target correction parameter based on the magnitude of the fitness corresponding to each of the training correction parameters in the ith population. At this time, the fitness of the target correction parameter is the maximum, and the training color difference value (which may be referred to as a second color difference value) corresponding to the target correction parameter is the minimum function value in the objective function. In other words, the first color card image is color corrected based on the target correction parameter to obtain a second color card image, and the second color difference value between the second color card image and the reference color card image is the minimum function value in the target function.
Based on the survival-of-the-fittest principle of the evolutionary algorithm, the fitness of the individuals (i.e., training correction parameters) retained in each newly generated population is higher than the fitness of the individuals in the initial population. The determined target correction parameter is the individual with the highest fitness in the i-th population, so the color difference value corresponding to the target correction parameter is smaller than the color difference value corresponding to any training correction parameter in the initial population. On this basis, it can be determined that the color difference value corresponding to the target correction parameter, i.e., the minimum function value (the second color difference value), is smaller than the preset threshold.
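The sketch below illustrates the S501 to S506 loop on a real-valued population, i.e., without the binary encoding described above; truncation selection, single-point crossover, and Gaussian mutation are assumed operators chosen for brevity, not the embodiment's required ones:

```python
import numpy as np

def evolve(evaluate, pop_init, n_gen=100, keep=0.5, mut_sigma=0.01, seed=0):
    """evaluate maps one 6-dim CCM parameter vector to its training color
    difference value (smaller is better, i.e., higher fitness)."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(pop_init, float)
    for _ in range(n_gen):                        # cutoff condition of S505
        de = np.array([evaluate(ind) for ind in pop])
        order = np.argsort(de)                    # ascending dE
        parents = pop[order[: int(len(pop) * keep)]]   # selection (S503)
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, a.size))    # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child = child + rng.normal(0.0, mut_sigma, child.shape)  # mutation
            children.append(child)
        pop = np.vstack([parents] + children)     # next population (S504)
    de = np.array([evaluate(ind) for ind in pop])
    return pop[np.argmin(de)], float(de.min())    # target parameter (S506)
```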
The target correction parameters may include 6 CCM parameters, i.e., off-diagonal parameters in the color correction matrix, from which the other 3 CCM parameters in the color correction matrix may be determined according to the white balance constraint.
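For illustration, assuming the common row-sum-to-one form of the white balance constraint (the embodiment only states that the remaining 3 parameters are derived from the constraint), the full 3x3 matrix could be assembled as follows; the parameter ordering is a hypothetical choice:

```python
import numpy as np

def ccm_from_offdiag(p):
    """Build a 3x3 color correction matrix from the 6 searched off-diagonal
    parameters p = (m01, m02, m10, m12, m20, m21); each diagonal entry is
    set so that its row sums to 1, keeping neutral gray unchanged."""
    m01, m02, m10, m12, m20, m21 = p
    return np.array([
        [1.0 - m01 - m02, m01,             m02],
        [m10,             1.0 - m10 - m12, m12],
        [m20,             m21,             1.0 - m20 - m21],
    ])
```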
In one possible implementation, if the optimization mode of the first color card image is single-objective optimization, the weights of one or more color card regions in the first color card image can be adjusted individually in the process of determining the target correction parameters based on the evolutionary algorithm, so as to tune those regions with emphasis. The evolutionary algorithm employed in this case may be referred to as a single-objective optimized evolutionary algorithm. For example, when the value of ΔE_i is large, the i-th color card region can be focused on by increasing its weight. In the subsequent process, the color card region whose weight was increased is then adjusted with emphasis. On this basis, the color correction effect of the whole image can be improved after color correction.
For example, assuming the weights of color card 1 to color card 18 in the first color card image are all 1, the second color card image can be obtained after optimization by the single-objective optimized evolutionary algorithm. The color difference values between color card 1 to color card 18 of the second color card image and color card 1 to color card 18 of the reference color card image may be as shown in (a) of fig. 6, where the vertical axis represents the color difference value; f1 to f18 represent color card 1 to color card 18, respectively; the thick line represents the baseline, i.e., the color difference curve of the correction parameters manually determined by a technician; and the dashed line represents the color difference curve after optimization by the single-objective optimized evolutionary algorithm. As can be seen from fig. 6 (a), the color difference values of f2 and f3 (i.e., color card 2 and color card 3) optimized by the single-objective optimized evolutionary algorithm are higher than the baseline. On this basis, the weights of f2 and f3 can be increased individually, e.g., increased to 6, while the weights of the 16 color card areas other than f2 and f3 remain 1. As shown in fig. 6 (b), after the weights of f2 and f3 are increased, the color difference values of f1 to f18 of the resulting third color card image are all lower than the baseline, which means the color of the third color card image is closer to the color of the reference color card image. Therefore, individual color cards can be tuned by adjusting their weights individually, reducing the error of color correction.
S311, determining a target correction parameter based on the first color card image.
The first color card image is an image obtained by photographing the 24-color standard color card shown in fig. 1 with the electronic device. As shown in fig. 2, in the process from shooting the standard color card to generating the first color card image, an image in original (raw) format, i.e., a raw image, is generated first; the raw image then needs to undergo image signal processing, e.g., BLC and AWB, color correction based on the initial correction parameters, nonlinear gamma correction, and so on, in sequence, to obtain the first color card image, which may be in an image format such as jpg. When the first color difference value between the first color card image and the reference color card image is smaller than or equal to the preset threshold, it indicates that the color correction effect of the initial correction parameters is good, and the initial correction parameters can then be determined as the target correction parameters.
S312, performing color correction on the image to be corrected based on the target correction parameters to obtain a corrected image.
The electronic equipment can shoot any scene to obtain an image to be corrected, and then, based on the target correction parameters, the image to be corrected is subjected to color correction to obtain a corrected image of the image to be corrected. The image to be corrected may be in raw format. Based on the determined target correction parameters, the electronic device can perform color correction on any other image by adopting the target correction parameters. Based on the method, the color correction effect of the electronic equipment on the image can be improved, so that the picture quality of the electronic equipment is improved.
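A minimal sketch of this correction step, assuming a demosaiced, white-balanced RGB image normalized to [0, 1] and a 3x3 color correction matrix such as the one assembled above:

```python
import numpy as np

def apply_ccm(img_rgb, ccm):
    """Color-correct an RGB image of shape (H, W, 3) by multiplying each
    pixel's RGB vector by the color correction matrix."""
    h, w, _ = img_rgb.shape
    out = img_rgb.reshape(-1, 3) @ ccm.T   # per-pixel matrix product
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)
```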
For example, referring to fig. 7, fig. 7 is a schematic diagram of the search process of the correction parameter search algorithm, with a search time of one minute. Fig. 7 (a) is a schematic diagram of the single-objective optimized Bayesian optimization algorithm, and fig. 7 (b) is a schematic diagram of the single-objective optimized evolutionary algorithm. In fig. 7 (a) and (b), the abscissa is the round of the search iteration; the ordinate is the training color difference value, representing the color difference from the reference color card image after the training correction parameters determined in each round of search are used to color correct the first color card image; a dot represents one training correction parameter (including 6 parameters) determined during a round of search; the broken line represents the baseline, i.e., the training correction parameter considered optimal that a technician determined through experience (after color correction with this training correction parameter, the training color difference value is 10.233); and x represents the optimal one of the training correction parameters determined from the start of the search to the current search iteration, i.e., the one with the smallest corresponding training color difference value.
As can be seen from fig. 7, the single-objective optimized Bayesian optimization algorithm can determine a training correction parameter better than the one determined by the technician after about 150 search iterations, with the training color difference value converging to 5.7 (i.e., the determined second color difference value is 5.7); relative to the baseline of 10.233, this is a gain of 44% (1 - 5.7/10.233 ≈ 44%). The single-objective optimized evolutionary algorithm can also determine a better training correction parameter after fewer search iterations, with the training color difference value converging to 5.2, which gives a calculated gain of 49%. On this basis, determining the optimal training correction parameters through the correction parameter search algorithm, compared with having a technician determine them manually, can make the color difference value between the first color card image and the reference color card image smaller and the color correction more efficient.
It will be appreciated that in fig. 7 (a) and (b), a plurality of training correction parameters, i.e., a plurality of dots, may be determined in each search iteration, and the minimum function value in each search iteration can be determined, i.e., one x for each abscissa. For simplicity of presentation, not all of the determined training correction parameters are shown in fig. 7 (a) and (b); the numbers of dots and of x marks shown in the figure are used as examples only and are not limiting.
Therefore, according to the embodiment of the application, the electronic equipment or the image signal processor and the like can automatically and efficiently determine the target correction parameters through the correction parameter searching method. The color correction is carried out on the image based on the target correction parameters, so that the color of the corrected image obtained after correction can be close to the true color or close to the color of the reference color card image, and the image quality is improved.
The above embodiment of fig. 3 illustrates the flow of image processing performed by an electronic device. The following describes, with reference to fig. 8, the interaction between modules of the electronic device during the image processing of the fig. 3 embodiment. For example, referring to fig. 8, fig. 8 is a flowchart of interaction between modules of an electronic device according to an embodiment of the present application; in the process of image processing, the interaction between modules in the electronic device is as follows:
S801, a camera application is started.
The camera application is an application program having a photographing function in the electronic device. For example, after the electronic device detects that color correction is required, the camera application is started, and after the camera application is started, the electronic device may display a shooting interface for previewing a picture to be shot.
S802, the camera shoots to obtain a first color card image, and the first color card image is cached in the cache area.
Specifically, after the camera application is started, the camera is triggered to start, and the camera can store the current shooting scene in a buffer area (buffer) in the form of an image. As shown in the embodiment of fig. 3, the first color card image may be an image obtained by photographing a 24-color standard color card.
Because the storage space of the buffer area is limited, when the number of stored images exceeds the upper limit threshold of the buffer area, part of the stored images is cleared.
S803, the image processing module acquires a first color card image and a reference color card image.
The image processing module can acquire the first color card image from the buffer area. The reference color card image is a reference image of the 24-color standard color card; e.g., it may be an image obtained by photographing the 24-color standard color card with another electronic device, or an image whose color is close to the true color of the 24-color standard color card after image processing. The effect of the reference color card image is the effect that the electronic device expects of the images it captures and outputs.
The image processing module can acquire the reference color card image from the buffer area. Alternatively, other memory areas of the electronic device may be used to store the reference color chart image, for example, a memory area in the image processing module may store the reference color chart image, and the reference color chart image may be acquired from the memory area when S803 is performed.
S804, the image processing module determines a first color difference value between the first color card image and the reference color card image.
Specifically, please refer to S302 to S305 shown in fig. 3, or refer to S302 to S304 and S306, which will not be repeated here.
S805, the image processing module determines a correction parameter search algorithm in response to the first color difference value being greater than a preset threshold.
The correction parameter search algorithm is used for determining the target correction parameters, and may include a Bayesian optimization algorithm and an evolutionary algorithm. If the image processing module determines that the correction parameter search algorithm is the Bayesian optimization algorithm, S806 is executed; if the image processing module determines that the correction parameter search algorithm is the evolutionary algorithm, S807 is executed.
S806, the image processing module responds to the correction parameter searching algorithm as a Bayesian optimization algorithm, and determines a target correction parameter based on the Bayesian optimization algorithm.
Specifically, the process of determining the target correction parameter by the image processing module based on the bayesian optimization algorithm is shown in S309 and fig. 4 in fig. 3, and will not be described herein.
S807, the image processing module determines the target correction parameter based on the evolutionary algorithm in response to the correction parameter search algorithm being the evolutionary algorithm.
Specifically, the detailed process of determining the target correction parameter by the image processing module according to the evolutionary algorithm is shown in S310 and fig. 5 in fig. 3, and will not be described herein.
S808, the camera shoots, an image to be corrected is obtained, and the image to be corrected is transmitted to the image processing module.
Optionally, the camera application may receive a shooting instruction of the user, so that the camera application may trigger the camera to shoot, and obtain an image to be corrected. The image to be corrected refers to an image that is not subjected to image signal processing (which may include BLC, AWB, CCM and the like) after being photographed by a camera.
S809, the image processing module acquires the image to be corrected, and performs color correction on the image to be corrected based on the target correction parameters to obtain a corrected image.
The image processing module can process image signals of the image to be corrected, and comprises the processes of BLC, AWB, color correction based on target correction parameters, nonlinear gamma correction and the like in sequence, so that the corrected image is obtained.
S810, the image processing module transmits the corrected image to the display module. Correspondingly, the display module acquires the corrected image and displays it on the display screen.
The display module can display the corrected image on a display screen, and a user can view the corrected image through the display screen. By the color correction, the color effect of the corrected image can be approximated to that of the reference color chart image.
The structure of the electronic device 100 is described below. Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device 100 according to an embodiment of the application. It should be understood that electronic device 100 may have more or fewer components than shown in fig. 9, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 9 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown in FIG. 9, or may combine certain components, or split certain components, or a different arrangement of components. The components shown in fig. 9 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. The display screen 194 may be used to display an image captured by the camera 193 or an image color corrected by the ISP in the processor 110.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISPs can also be used to color correct images. For example, an ISP may be used to determine target correction parameters and color correct the image based on the target correction parameters.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. For example, camera 193 may take a picture of a 24-color standard color card, resulting in a first color card image.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, storing files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, an application required for at least one function (such as a face recognition function, a fingerprint recognition function, a mobile payment function, etc.), and the like. The storage data area may store data created during use of the electronic device 100 (e.g., face information template data, fingerprint information templates, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The internal memory 121 may include a buffer. The buffer area can be used for buffering images obtained by shooting by the camera, such as a first color card image. Optionally, the buffer may also be used to buffer the reference color card image.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal.
Microphone 170C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
In embodiments of the present application, the software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
As shown in fig. 10, the electronic device may include: an application layer, an application framework, a hardware abstraction layer (hardware abstraction layer, HAL) layer, and a kernel layer (kernel). Wherein:
the application layer may include a series of application packages. As shown in fig. 10, the application package may include applications such as a camera application, a calendar, a map, a gallery, a music application, a short message, a call, and the like. Optionally, the application package may further include applications for navigation, WLAN, bluetooth, video, etc. The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 10, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture. For another example, the display interface may be used to display images of camera applications at the application layer.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The hardware abstraction layer may include a plurality of functional modules. Such as an image processing module.
The image processing module can be used for carrying out color correction on the first color card image; the method can also be used for determining target correction parameters based on a correction parameter searching algorithm, and carrying out color correction on the image shot by the electronic equipment based on the target correction parameters to obtain a corrected image.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The camera driver is used for triggering the camera to be started when a trigger command sent by a camera application located in the application program layer is received. The camera driver is also used for triggering the camera to shoot and outputting images when receiving a triggering command of the image processing module positioned on the HAL layer.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.
In summary, the foregoing description is only exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made according to the disclosure of the present application should be included in the protection scope of the present application.

Claims (8)

1. An image processing method, the method comprising:
determining an ith function based on a preset proxy model and a jth dataset in response to a first color difference value between the first color card image and the reference color card image being greater than a preset threshold; the jth dataset comprises M training correction parameters and K training color difference values;
determining an ith sampling point based on a preset acquisition function in response to the iteration number corresponding to the ith function not exceeding a preset iteration number threshold, and determining an ith output value corresponding to the ith sampling point based on the ith function to obtain nth data; obtaining a (j+1)th dataset based on the nth data; the (j+1)th dataset includes the M training correction parameters, the K training color difference values, and the nth data;
determining an (i+1)th function based on the preset proxy model and the (j+1)th dataset;
determining the (i+1)th function as an objective function in response to the iteration number corresponding to the (i+1)th function exceeding the preset iteration number threshold; the value range of the independent variable of the objective function comprises the M training correction parameters, and the value range of the function value of the objective function comprises the K training color difference values; wherein M, K, i, j and n are integers greater than 0;
determining a target correction parameter according to the objective function; the target correction parameter is a training correction parameter corresponding to the minimum function value of the objective function, and the minimum function value is smaller than the preset threshold;
performing color correction on the image to be corrected according to the target correction parameter;
the reference color card image is a reference image of a standard color card, and the first color card image is an image obtained by shooting the standard color card.
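For illustration only: the iterative search of claim 1 follows the shape of Bayesian optimization. The following minimal Python sketch assumes a Gaussian process as the preset proxy model and expected improvement as the preset acquisition function; the evaluate_color_difference callback and all names are hypothetical and are not fixed by the claim.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(candidates, gp, y_best):
        # Preset acquisition function (assumed): expected improvement over
        # the best training color difference value observed so far.
        mu, sigma = gp.predict(candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def search_target_correction(evaluate_color_difference, x_init, y_init,
                                 bounds, max_iters=50, threshold=2.0, seed=0):
        rng = np.random.default_rng(seed)
        X, y = list(x_init), list(y_init)  # jth dataset: M params, K diffs
        for _ in range(max_iters):         # preset iteration number threshold
            # ith function: proxy model fitted to the current dataset.
            gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X),
                                                                np.array(y))
            cand = rng.uniform(bounds[:, 0], bounds[:, 1],
                               size=(256, bounds.shape[0]))
            # ith sampling point: maximizer of the acquisition function.
            x_next = cand[np.argmax(expected_improvement(cand, gp, min(y)))]
            y_next = evaluate_color_difference(x_next)  # ith output value
            X.append(x_next)               # nth data joins the (j+1)th dataset
            y.append(y_next)
        best = int(np.argmin(y))
        # Target correction parameter: argument of the minimum function value,
        # accepted only if that minimum is below the preset threshold.
        return X[best] if y[best] < threshold else None

Under these assumptions the loop stops on the iteration count rather than on convergence, mirroring the preset iteration number threshold of the claim.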
2. The method of claim 1, wherein a training correction parameter x corresponds to a training color difference value y, the training correction parameter x being one of the M training correction parameters, the training color difference value y being one of the K training color difference values; the training color difference value y is a color difference value between a training color card image z and the reference color card image, and the training color card image z is an image obtained by performing color correction on the first color card image based on the training correction parameter x.
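For illustration only, a minimal sketch of the training pair of claim 2, assuming the training correction parameter x is a 3x3 color correction matrix applied per pixel to a float RGB image in [0, 1]; the color_difference callback is the hypothetical patch-weighted metric sketched under claim 4 below.

    import numpy as np

    def training_pair(x, first_card_img, ref_card_img, color_difference):
        ccm = np.asarray(x, dtype=float).reshape(3, 3)
        # Training color card image z: the first color card image after
        # color correction with the training correction parameter x.
        z = np.clip(first_card_img.reshape(-1, 3) @ ccm.T, 0.0, 1.0)
        z = z.reshape(first_card_img.shape)
        # Training color difference value y between z and the reference image.
        return z, color_difference(z, ref_card_img)

A closure over training_pair could serve as the evaluate_color_difference callback in the sketch above.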
3. The method according to claim 1, wherein the method further comprises:
the first color difference value between the first color card image and the reference color card image is determined based on the standard color card.
4. A method according to claim 3, wherein the standard color card comprises N color card areas; n is an integer greater than 0;
the determining the first color difference value between the first color card image and the reference color card image based on the standard color card comprises:
converting the color space of the first color card image from a first color space to a second color space to obtain a first intermediate image;
converting the color space of the reference color card image from the first color space to the second color space to obtain a reference intermediate image;
calculating the distance between an ith color card area in the first intermediate image and an ith color card area in the reference intermediate image to obtain a color difference value corresponding to the ith color card area;
obtaining a color difference intermediate value corresponding to the ith color card area based on the color difference value corresponding to the ith color card area and the weight corresponding to the ith color card area;
and determining the first color difference value based on the color difference intermediate value corresponding to the ith color card area.
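For illustration only, a minimal sketch of claim 4, assuming the first color space is RGB and the second is CIELAB, and that each color card area is summarized by its mean Lab value over a rectangular patch; the patch boxes and per-area weights are hypothetical inputs not fixed by the claim.

    import numpy as np
    from skimage.color import rgb2lab

    def weighted_color_difference(card_img, ref_img, patch_boxes, weights):
        card_lab = rgb2lab(card_img)   # first intermediate image
        ref_lab = rgb2lab(ref_img)     # reference intermediate image
        total = 0.0
        for (r0, r1, c0, c1), w in zip(patch_boxes, weights):
            mean_card = card_lab[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
            mean_ref = ref_lab[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
            # Distance between the ith color card areas of the two images.
            delta_e = np.linalg.norm(mean_card - mean_ref)
            total += w * delta_e       # color difference intermediate value
        return total                   # first color difference value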
5. A method according to claim 3, wherein the standard color card comprises N color card areas; n is an integer greater than 0;
the determining the first color difference value between the first color card image and the reference color card image based on the standard color card comprises:
converting the color space of the first color card image from a first color space to a second color space to obtain a first intermediate image;
converting the color space of the reference color card image from the first color space to the second color space to obtain a reference intermediate image;
calculating the distance between an ith color card area in the first intermediate image and an ith color card area in the reference intermediate image to obtain a color difference value corresponding to the ith color card area;
and obtaining N color difference values based on the color difference value corresponding to the ith color card area, wherein the first color difference value comprises the N color difference values.
6. The method of claim 5, wherein the method further comprises:
and determining that the first color difference value is greater than the preset threshold value in response to at least one color difference value of the N color difference values being greater than the preset threshold value.
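For illustration only, a minimal sketch of claims 5 and 6, reusing the per-area Lab means assumed above: the first color difference value is the vector of N per-area distances, and it is treated as exceeding the preset threshold when any single area does.

    import numpy as np

    def per_area_differences(card_lab_means, ref_lab_means):
        # One Euclidean Lab distance per color card area (N values in total).
        return np.linalg.norm(card_lab_means - ref_lab_means, axis=1)

    def exceeds_threshold(differences, threshold=2.0):
        # Claim 6: the first color difference value is greater than the preset
        # threshold if at least one of the N values exceeds it.
        return bool(np.any(differences > threshold))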
7. An electronic device, comprising: a memory, a processor; wherein:
the memory is used for storing a computer program, and the computer program comprises program instructions;
the processor is configured to invoke the program instructions to cause the electronic device to perform the method of any of claims 1-6.
8. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202310906122.6A 2023-07-24 2023-07-24 Image processing method and electronic equipment Active CN116668656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310906122.6A CN116668656B (en) 2023-07-24 2023-07-24 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310906122.6A CN116668656B (en) 2023-07-24 2023-07-24 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116668656A (en) 2023-08-29
CN116668656B (en) 2023-11-21

Family

ID=87728219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310906122.6A Active CN116668656B (en) 2023-07-24 2023-07-24 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116668656B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117014733B (en) * 2023-10-08 2024-04-12 荣耀终端有限公司 Shooting correction method, device and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103079076A (en) * 2013-01-22 2013-05-01 无锡鸿图微电子技术有限公司 Method and device for generating color calibration matrix of self-adaption gamma calibration curve
CN109523485A (en) * 2018-11-19 2019-03-26 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal
CN109903256A (en) * 2019-03-07 2019-06-18 京东方科技集团股份有限公司 Model training method, chromatic aberration calibrating method, device, medium and electronic equipment
CN111292246A (en) * 2018-12-07 2020-06-16 上海安翰医疗技术有限公司 Image color correction method, storage medium, and endoscope
CN113947533A (en) * 2020-07-16 2022-01-18 浙江宇视科技有限公司 Image color correction method, device, equipment and storage medium
CN114972065A (en) * 2022-04-12 2022-08-30 博奥生物集团有限公司 Training method and system of color difference correction model, electronic equipment and mobile equipment
CN115426487A (en) * 2022-08-22 2022-12-02 北京奕斯伟计算技术股份有限公司 Color correction matrix adjusting method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN116668656A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
US20210058595A1 (en) Method, Device, and Storage Medium for Converting Image
US11477383B2 (en) Method for providing preview and electronic device for displaying preview
CN110069974B (en) Highlight image processing method and device and electronic equipment
CN112887582A (en) Image color processing method and device and related equipment
CN113810602A (en) Shooting method and electronic equipment
CN116668656B (en) Image processing method and electronic equipment
CN108200347A (en) A kind of image processing method, terminal and computer readable storage medium
CN108200352A (en) A kind of method, terminal and storage medium for reconciling picture luminance
CN115761271A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN111724447B (en) Image processing method, system, electronic equipment and storage medium
CN110225331B (en) Selectively applying color to an image
CN114463191B (en) Image processing method and electronic equipment
CN113066020A (en) Image processing method and device, computer readable medium and electronic device
CN113727085A (en) White balance processing method and electronic equipment
CN112489144A (en) Image processing method, image processing apparatus, terminal device, and storage medium
CN116668862B (en) Image processing method and electronic equipment
CN116055699B (en) Image processing method and related electronic equipment
WO2023015993A1 (en) Chromaticity information determination method and related electronic device
WO2022105850A1 (en) Light source spectrum acquisition method and device
CN114222072B (en) Image processing method, device, electronic equipment and storage medium
CN115514948B (en) Image adjusting method and electronic device
CN114038370B (en) Display parameter adjustment method and device, storage medium and display equipment
EP4042405A1 (en) Perceptually improved color display in image sequences on physical displays
CN115908596B (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant