CN117692788A - Image processing method and electronic equipment

Image processing method and electronic equipment

Info

Publication number
CN117692788A
Authority
CN
China
Prior art keywords
image
pixel
color
cast
color cast
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion)
Pending
Application number
CN202311119808.7A
Other languages
Chinese (zh)
Inventor
乔晓磊
肖斌
李怀乾
Current Assignee (the listed assignees may be inaccurate)
Shanghai Glory Smart Technology Development Co., Ltd.
Original Assignee
Shanghai Glory Smart Technology Development Co., Ltd.
Priority date (the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai Glory Smart Technology Development Co., Ltd.
Priority to CN202311119808.7A
Publication of CN117692788A

Landscapes

  • Color Image Communication Systems (AREA)

Abstract

The application provides an image processing method and an electronic device. The method may include: in response to a click operation on a shooting control, acquiring an initial image and performing automatic white balance processing on the initial image to obtain a first image; determining a second image based on the initial image and the first image, where the second image indicates the color-cast region of the first image; inputting the first image and the second image into a preset network model to obtain bilateral grid information output by the preset network model, where the bilateral grid information includes an adjustment matrix corresponding to the color-cast region; performing color cast correction on the first image based on the adjustment matrix corresponding to the color-cast region to obtain a corrected image; and displaying the corrected image on a preview interface. The method and the device can correct image color cast and mitigate the local color cast caused by automatic white balance processing.

Description

Image processing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
Background
The color of an object is determined by its reflectance characteristics, and the perceived color may differ under different lighting conditions and other factors. During image acquisition, changes in external color temperature, switching of shooting scenes, or deviations in white balance adjustment, together with the influence of the ambient light environment and the characteristics of the imaging sensor, can cause a certain degree of error between the imaged color and the true color of the object, i.e., color cast. Color cast can make an image look unrealistic and thereby degrade image quality. Typically, an electronic device employs automatic white balance (auto white balance, AWB) to adjust the color cast of an image. However, AWB is a global adjustment, and in a mixed light source scene it easily causes local color cast in the image. How to solve the problem of local color cast caused by AWB processing in a mixed light source scene is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides an image processing method and electronic equipment, which can reduce the problem of local color cast of an image caused by automatic white balance processing.
In a first aspect, an embodiment of the present application provides an image processing method. The method may be performed by an electronic device, or by an apparatus compatible with an electronic device, e.g., a processor, a chip, or a system-on-chip. The method may include: in response to a click operation on a shooting control, acquiring an initial image, and performing automatic white balance processing on the initial image to obtain a first image; determining a second image based on the initial image and the first image, where the second image indicates the color-cast region of the first image; inputting the first image and the second image into a preset network model to obtain bilateral grid information output by the preset network model, where the bilateral grid information includes an adjustment matrix corresponding to the color-cast region; performing color cast correction on the first image based on the adjustment matrix corresponding to the color-cast region to obtain a corrected image; and displaying the corrected image on a preview interface.
Therefore, the method and the device can determine the color cast area based on the initial image and the first image, and obtain bilateral grid information based on a preset network model; based on the bilateral grid information, the color cast correction can be carried out on the color cast areas to obtain corrected images, so that the problem of local color cast of the images caused by automatic white balance processing can be eliminated, and the authenticity and the image quality of the images are improved.
In one possible implementation manner, the determining a second image based on the initial image and the first image may include: determining that a pixel i is a color cast pixel or a non-color cast pixel based on an RGB value of the pixel i in the first image and an RGB value of a pixel j in the initial image; the pixel i corresponds to the pixel j; determining a color cast region and a non-color cast region of the first image based on the pixel i as a color cast pixel or a non-color cast pixel; and marking the color cast area and the non-color cast area in the first image respectively to obtain a second image.
Therefore, whether the pixel i is color cast or not can be determined based on the RGB value of the pixel i in the first image and the RGB value of the pixel j corresponding to the pixel i in the initial image, so that the color cast area and the non-color cast area of the first image can be determined, and the accuracy of determining the local color cast of the image can be improved.
In one possible implementation, determining that pixel i is a color cast pixel or a non-color cast pixel based on the RGB value of pixel i in the first image and the RGB value of pixel j in the initial image may include: determining that pixel i is a color cast pixel in response to the channel size relationship of the RGB value of pixel i in the first image being different from the channel size relationship of the RGB value of pixel j in the initial image; and determining that pixel i is a non-color cast pixel in response to the channel size relationship of the RGB value of pixel i in the first image being the same as the channel size relationship of the RGB value of pixel j in the initial image; where the channel size relationship indicates the size relationship between the individual channels of an RGB value.
It can be seen that whether a pixel is a color cast pixel can be determined based on the channel size relationship, which is beneficial to improving the accuracy of determining the color cast region.
In one possible implementation, the bilateral grid information further includes a plurality of luminance intervals, a plurality of grid regions, and a plurality of adjustment matrices. Based on this, the method may further include: determining a luminance interval corresponding to pixel i from the plurality of luminance intervals based on the luminance of pixel i, where the luminance of pixel i is determined based on the RGB value of pixel i; determining a grid region corresponding to pixel i from the plurality of grid regions based on the position of pixel i in the first image; determining an adjustment matrix corresponding to pixel i from the plurality of adjustment matrices based on the grid region and the luminance interval corresponding to pixel i; and determining the adjustment matrix corresponding to the color-cast region and the adjustment matrix corresponding to the non-color-cast region of the first image based on the adjustment matrices corresponding to the pixels.
It can be seen that the corresponding adjustment matrix can be found from the bilateral mesh information by the RGB value of each pixel in the first image.
In one possible implementation, performing color cast correction processing on the first image based on the adjustment matrix corresponding to the color-cast region to obtain a corrected image may include: performing color cast correction on the first image based on the adjustment matrix corresponding to the color-cast region and the adjustment matrix corresponding to the non-color-cast region to obtain the corrected image; where the adjustment matrix corresponding to the color-cast region is a non-identity matrix, and the adjustment matrix corresponding to the non-color-cast region is an identity matrix.
Therefore, the adjustment matrix corresponding to the color-cast area and the adjustment matrix corresponding to the non-color-cast area can be found from the bilateral grid information, the color-cast correction can be carried out on the color-cast area, and meanwhile, the non-color-cast area is ensured to be unchanged, so that the local color-cast correction can be carried out on the first image, and the accuracy and the effectiveness of the color-cast correction are improved.
In one possible implementation, performing color cast correction on the first image based on the adjustment matrix corresponding to the color-cast region and the adjustment matrix corresponding to the non-color-cast region to obtain a corrected image may include: obtaining a target RGB matrix corresponding to pixel i based on the RGB matrix corresponding to the RGB value of pixel i and the adjustment matrix corresponding to pixel i; determining a target RGB value of pixel i based on the target RGB matrix corresponding to pixel i; determining target RGB values corresponding to the color-cast region and target RGB values corresponding to the non-color-cast region based on the target RGB values of the pixels; and determining the corrected image based on the target RGB values corresponding to the color-cast region and the target RGB values corresponding to the non-color-cast region.
It can be seen that the RGB values of each pixel can be adjusted based on the adjustment matrix, thereby realizing color cast correction for the color cast region.
In one possible implementation manner, the method may further include: downsampling the first image to obtain a first image with a preset size; based on this, the inputting the first image and the second image into the preset network model to obtain bilateral grid information output by the preset network model may include: inputting a first image and a second image with preset sizes into a preset network model to obtain bilateral grid information output by the preset network model; the image size of the second image is a preset size.
Therefore, the first image is downsampled based on the preset size, so that the thumbnail of the first image (namely the first image with the preset size) can be obtained without changing the color cast area and the non-color cast area in the first image; and inputting the first image and the second image with preset sizes into a preset network model, so that the preset network model is facilitated to generate bilateral grid information.
In a second aspect, embodiments of the present application provide an electronic device, including: one or more processors and memory; the memory is coupled to the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform the method as described in the first aspect or any implementation of the first aspect.
In a third aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect or any implementation of the first aspect.
Drawings
FIG. 1A-FIG. 1B are schematic diagrams illustrating the effect of image processing according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a user interface provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 4A is a schematic diagram of grid region division provided by an embodiment of the present application;
FIG. 4B is a schematic diagram of a coordinate system provided by an embodiment of the present application;
FIG. 4C is a schematic diagram of a first image provided by an embodiment of the present application;
FIG. 4D is a schematic diagram of a second image provided by an embodiment of the present application;
FIG. 5 is an interaction diagram of the modules of an electronic device provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a software framework of an electronic device provided by an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and can represent three relationships; for example, "A and/or B" may represent: only A is present, only B is present, or both A and B are present, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or plural.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between 2 or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Color temperature is used to define the color of a light source. Color temperature refers to the color that an absolute black body assumes after being warmed from absolute zero (-273 °C). The color of the light emitted by an absolute black body heated to a given temperature is called the color temperature at that temperature, and its unit of measurement is the kelvin (K). For example, as an absolute black body is heated, its color changes gradually from black to red, then from yellow to white, and finally to blue. If the light emitted by a light source has the same spectral composition as the light emitted by the black body at a certain temperature, the light is said to have that color temperature in K. If the color of the light emitted by a 100 W bulb is the same as that of the absolute black body at 2527 °C, the color temperature of the light emitted by the bulb is (2527 + 273) K = 2800 K. When the color of the light is reddish, orange, or yellow, it can be called a low color temperature. When the color of the light is cyan, blue, or bluish violet, it may be called a high color temperature. When the color of the light is white, it may be called a normal color temperature. The color of an object shows its true color only under the irradiation of white light.
For example, refer to the color temperature lookup table of the common light source types shown in table 1.
Table 1: Color temperature lookup table for common light source types
Type of light source Color temperature (K) RGB
Match flame 1700 (255,124,0)
Candle flame 1800 (255,126,0)
Incandescent bulb 2700-3300 (255,174,84)
Photographic floodlight 3400 (255,198,130)
Moonlight 4100 (255,215,166)
Horizon sunlight 5000 (255,231,204)
Electronic flash lamp 5500-6000 (255,238,222)
Fluorescent lamp 6500 (255,249,251)
As shown in Table 1, the color temperatures of different types of light sources differ, and the different colors can be represented by different RGB values. The color temperature of an incandescent bulb is 2700K to 3300K, and (255, 174, 84) in Table 1 is the RGB value corresponding to 2700K; the color temperature of an electronic flash is 5500K to 6000K, and (255, 238, 222) in Table 1 is the RGB value corresponding to 5500K.
Due to the different spectral characteristics of different illuminants, the hue presented by an object differs under illumination of different color temperatures: the higher the color temperature of the light source, the bluer the object appears; the lower the color temperature, the yellower the object appears. During image acquisition, an electronic device such as a smartphone may, under the influence of factors such as changes in the color temperature of the shooting environment, switching of shooting scenes, or deviations in camera white balance adjustment, produce images with color cast, reducing how faithfully the imaging reproduces the real scene. Therefore, it is necessary to correct the color cast of images shot by electronic devices under light sources of different color temperatures.
Since a white object reflects the red, green, and blue channels equally, the three channel values of a white region are equal; color cast caused by light sources of different color temperatures can therefore be corrected using white as a reference. For example, before outputting a shot photo, an electronic device performs automatic white balance (auto white balance, AWB) processing on the image to implement global color cast correction. Typically, AWB first determines a white point in the image, determines the color temperature of the light source according to the pixel value of the white point, then determines the white balance adjustment parameters of the image according to the color temperature, and performs global white balance processing on the image using those parameters. However, this correction method is only suitable for images under a single light source. In a scene with two or more mixed light sources, such white balance processing performs global color cast correction according to one of the light sources, or based on a weighted average over multiple light sources, so local color cast may appear in the image and the color cast problem cannot be fully solved.
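To make the "global adjustment" point concrete, the following is a minimal sketch of a gray-world style AWB; it is an illustration only, not the AWB pipeline of this application, and the function and variable names are assumptions:

```python
import numpy as np

def gray_world_awb(image: np.ndarray) -> np.ndarray:
    """Minimal gray-world AWB sketch: scale each channel so its mean matches
    the overall mean. A single set of gains is applied to every pixel, which
    is why a mixed-light scene can come out locally color cast."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean R, G, B over the image
    gains = channel_means.mean() / channel_means      # one global gain per channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```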
By way of example, an ordinary fluorescent lamp typically has a color temperature of 4500K to 6000K, while a daylight-type fluorescent lamp typically has a color temperature of 6500K, where K denotes the kelvin unit of color temperature. In a mixed light source scene with the two color temperatures 4500K and 6500K, the white balance processing method described above, after determining the white point corresponding to 4500K and the white point corresponding to 6500K, performs white balance processing on the image at 5500K, the intermediate color temperature between 4500K and 6500K; local color cast in the image is therefore inevitable. In addition, under a mixed light source, when the image region corresponding to one of the color temperatures occupies only a small part of the image, the white balance processing may fail to account for that region, reducing image quality.
In the mixed light source scene shown in fig. 1A, the light comes from an LED light box, a tungsten filament lamp, a floor lamp, and the like. The tungsten filament lamp emits yellow light; the floor lamp emits white light; the LED strip inside the LED light box emits white light, but the surface of the light box is covered with advertising content in blue, orange, yellow, and other colors. The color temperatures of the various light sources differ from one another, and under this mixed illumination, a photo of the scene shown in fig. 1A taken by the electronic device exhibits local color cast after AWB processing. For example, white text such as "hi" on the surface of the LED light box appears with a cast: the RGB value of the "hi" text may be (183, 255, 255).
In order to reduce the problem of local color cast of an image caused by AWB processing in a mixed light source scene, the embodiment of the application provides an image processing method which can be applied to local color cast correction in the mixed light source scene, can identify the color cast area of the image, and can carry out targeted correction on the color cast area, thereby not only solving the problem of local color cast of the image, but also not affecting the color effect of other areas of the image.
By way of example, by applying the image processing method provided by the embodiments of the present application to the image shown in fig. 1A, the color-cast region in the image can be accurately identified and local color cast correction performed, yielding the image shown in fig. 1B. Comparing fig. 1A and fig. 1B, the local color cast in fig. 1B has been significantly reduced: the white text on the LED light box shown in fig. 1B appears white, e.g., the RGB value of the "hi" text after correction may be (251, 253, 253).
First, a user interface according to an embodiment of the present application will be described.
As shown in fig. 2A, the electronic device 100 may display a home screen interface 210 in which a page with application icons is displayed; the page includes a plurality of application icons, e.g., a camera application icon 211. The electronic device 100 may detect an operation acting on the camera application icon 211, for example, the user clicking or touching the camera application icon 211. In response to this operation, the electronic device 100 may display the shooting interface 220 shown in fig. 2B.
As shown in fig. 2B, the shooting interface 220 may include a camera switch control 221, a shooting control 222, an image playback control 223, and a scene display area 224. The image playback control 223 can be used to display the shot photo; the shooting control 222 is used to trigger the camera to shoot and save an image; the camera switch control 221 may be used to switch the camera used for shooting, for example, from the rear camera to the front camera; the scene display area 224 may display a real-time preview image of the current scene, such as the scene shown in fig. 2B. When the electronic device 100 detects an operation (e.g., a click or touch) on the shooting control 222, the electronic device 100 may invoke the camera to shoot and display the shooting interface 230 shown in fig. 2C.
As shown in fig. 2C, the image playback control 223 in the shooting interface 230 may display a preview of the photo obtained in response to the operation on the shooting control 222; this preview photo has already undergone local color cast correction. When the electronic device 100 detects an operation, such as a click, on the image playback control 223, a photo preview interface 240 as shown in fig. 2D may be displayed. As shown in fig. 2D, the photo preview interface 240 displays the photo generated by the camera application; the photo displayed in the photo preview interface 240 is an image after local color cast correction.
An exemplary image processing method provided in the embodiments of the present application is described below with reference to the accompanying drawings. The image processing method may be executed by the electronic device, or by an image processor in the electronic device, or may be executed by a chip or a chip system or the like having an image processor function. Referring to fig. 3, as shown in fig. 3, the image processing method includes, but is not limited to, the following steps:
s301, responding to clicking operation for a shooting control, and acquiring an initial image.
The initial image is an image in the original (RAW) format. A RAW image contains the unprocessed data produced when the image sensor in the electronic device converts the captured light signal into a digital signal; it is lossless and preserves the original color information of the scene.
For example, the electronic device can invoke a camera to take a photograph after detecting a click operation on the photographing control 222 as shown in fig. 2B to obtain an initial image.
Typically, the color space of the first image is the RGB color space; in other words, the color mode of the first image is RGB mode. RGB denotes the three primary colors red, green, and blue, and the RGB color space describes colors in terms of these three primary colors. R, G, and B denote three channels: R means red (the red channel), G means green (the green channel), and B means blue (the blue channel). In general, color images collected by electronic devices such as cameras and mobile phones are stored as the three components R, G, B. For example, a color can be represented by an RGB value, i.e., (R, G, B), where R, G, and B each take an integer value in the range [0, 255].
S302, performing automatic white balance processing on the initial image to obtain a first image.
The image format of the first image is different from the image format of the initial image, for example, the first image may be an image in JPEG format. The color space of the first image is the same as that of the initial image, for example, if the color space of the initial image is an RGB color space, the color space of the first image is also an RGB color space. Wherein an automatic white balance process (AWB) is used to perform global color cast correction on the initial image.
Optionally, the image signal processing may be performed on the initial image to obtain the first image, where the image signal processing includes any one or more of automatic white balance processing, noise removal, dead pixel removal, automatic exposure control, black level correction (black level correction, BLC), color interpolation, nonlinear gamma (gamma) correction, color correction (color correction), and the like.
S303, determining a second image based on the initial image and the first image. The second image comprises a color cast area and a non-color cast area of the first image.
The second image is a prior map in which the color-cast region and the non-color-cast region carry different marks; for example, the color-cast region is marked 1 and the non-color-cast region is marked 0. The size of the second image is a preset size, which can be configured to different values according to the precision requirements of different application scenarios. For example, the preset size may be 512 pixels × 512 pixels, which is not limited in this application.
The detailed steps for determining the second image are described below.
Step one, a color cast area and a non-color cast area of a first image are determined based on the first image and an initial image.
For example, assuming the first image includes N pixels in total, and taking any one pixel of the first image, such as pixel i, as an example, the process of determining the color-cast region and the non-color-cast region of the first image may include the following steps:
(1) the RGB values of pixel i are obtained and the RGB values of pixel j in the initial image are obtained.
Wherein pixel j is any one of the pixels in the initial image, and pixel i corresponds to pixel j. Since the first image is an image obtained by performing image signal processing on the initial image, the position of the pixel i in the first image is the same as the position of the pixel j in the initial image. The difference between pixel i and pixel j is that the RGB values of pixel i and pixel j may be different due to the influence of the AWB process.
For example, the RGB value of pixel i may be expressed as (R1, G1, B1), and the RGB value of pixel j may be expressed as (R2, G2, B2), where R1, G1, B1, R2, G2, and B2 each take an integer value in the range [0, 255].
(2) A channel size relationship of RGB values for pixel i is determined, and a channel size relationship of RGB values for pixel j is determined.
The channel size relationship refers to the size relationship among the R, G, and B channels of an RGB value. For example, if the RGB value of pixel i is (R1, G1, B1) = (121, 188, 255), the channel size relationship of the RGB value of pixel i may be represented as R1 < G1 < B1. Similarly, the channel size relationship of the RGB value of pixel j may be determined based on the numerical sizes of the three channels R, G, B in the RGB value of pixel j.
(3) And determining the pixel i as a color cast pixel in response to the channel size relationship of the RGB value of the pixel i being different from the channel size relationship of the RGB value of the pixel j.
For example, assume the RGB value of pixel i is (R1, G1, B1) with R1 < G1 < B1, and the RGB value of pixel j is (R2, G2, B2) with R2 < G2, R2 > B2, and G2 > B2. The channel size relationship of the RGB value of pixel i is therefore different from that of pixel j, so pixel i is a color cast pixel.
Or, (4) determining that pixel i is a non-color cast pixel in response to the channel size relationship of the RGB values of pixel i being the same as the channel size relationship of the RGB values of pixel j.
For example, assume the channel size relationship of pixel i is R1 < G1 < B1 and the channel size relationship of pixel j is R2 < G2 < B2. The two orderings are the same, so pixel i may be determined to be a non-color cast pixel.
(5) And determining a color cast region and a non-color cast region of the first image based on the pixel i as a color cast pixel or a non-color cast pixel.
Wherein, each pixel in the first image can be determined to be a color cast pixel or a non-color cast pixel by adopting the modes shown in the steps (1) - (3) or the modes shown in the steps (1), (2), (4). The color cast region of the first image may include a plurality of color cast pixels, and the non-color cast region may also include a plurality of non-color cast pixels. For example, if pixel i, pixel i+1, and pixels i+8 to i+25 are all color cast pixels, the region formed by pixel i and pixel i+1 may be referred to as a color cast region; the region constituted by the pixels i+8 to i+25 may also be referred to as one color shift region.
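Steps (1)-(5) above can be sketched per pixel as follows; this is a minimal illustration, and the function and variable names are assumptions rather than terms from the patent:

```python
import numpy as np

def channel_order(rgb) -> tuple:
    """Return the ordering of the R, G, B channels; e.g. (121, 188, 255)
    gives (0, 1, 2), meaning R <= G <= B."""
    return tuple(np.argsort(np.asarray(rgb)))

def is_color_cast_pixel(pixel_i_first, pixel_j_initial) -> bool:
    """Pixel i of the first (AWB-processed) image is a color cast pixel if
    its channel size relationship differs from that of the corresponding
    pixel j in the initial image."""
    return channel_order(pixel_i_first) != channel_order(pixel_j_initial)

# Example from the text: pixel i has R1 < G1 < B1, while pixel j has
# R2 < G2, R2 > B2, G2 > B2 -> the orderings differ, so i is color cast.
assert is_color_cast_pixel((121, 188, 255), (150, 200, 100))
```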
And step two, determining a second image based on the color cast area and the non-color cast area of the first image.
Specifically, the color-cast region and the non-color-cast region determined in step one may be marked in the first image, for example, the color-cast region marked 1 and the non-color-cast region marked 0; all the color-cast and non-color-cast regions are then combined according to the original arrangement of the pixels in the first image to obtain the second image.
For example, assume the first image is as shown in fig. 4C and includes 64 pixels in total, pixel 1 to pixel 64. If the 14 pixels 13, 14, 21, 22, 29, 30, 37, 38, 45, 46, 53, 54, 61, and 62 are color cast pixels and the other pixels are non-color cast pixels, the color-cast region formed by these 14 pixels can be marked with 1 and the remaining non-color-cast region marked with 0, giving the second image shown in fig. 4D.
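Steps one and two can also be done in one vectorized pass over the whole image. A minimal sketch, under the assumption that both images are H×W×3 arrays (names illustrative):

```python
import numpy as np

def build_second_image(first_image: np.ndarray, initial_image: np.ndarray) -> np.ndarray:
    """Mark each pixel 1 where the channel ordering of the first image
    differs from that of the initial image (color cast), and 0 elsewhere,
    yielding the H x W prior map described as the second image."""
    order_first = np.argsort(first_image, axis=2)
    order_initial = np.argsort(initial_image, axis=2)
    return np.any(order_first != order_initial, axis=2).astype(np.uint8)
```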
S304, downsampling the first image to obtain a first image with a preset size.
When the second image to be generated has a preset size, the first image can be downsampled (subsampled) based on the preset size to obtain a first image of the preset size, so that the downsampled first image has the same size as the second image. The first image of the preset size may be referred to as a thumbnail of the first image. Optionally, the downsampling of the first image may be implemented by performing max pooling on the first image.
The thumbnail of the first image (namely the first image with the preset size) can be obtained by downsampling the first image, so that the processing amount of the preset network model can be properly reduced by the obtained thumbnail, and the processing efficiency is improved; in addition, the downsampling may preserve the color cast features in the first image, i.e., the color cast regions and non-color cast regions in the first image are not changed.
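A minimal sketch of the max-pooling downsampling, assuming the image height and width are integer multiples of the preset size (the 512×512 default and the names are illustrative):

```python
import numpy as np

def max_pool_downsample(image: np.ndarray, out_h: int = 512, out_w: int = 512) -> np.ndarray:
    """Downsample an H x W x 3 image to out_h x out_w x 3 by max pooling.
    Taking the block-wise maximum keeps strong per-channel responses, so the
    color cast characteristics of each region are largely preserved."""
    h, w, c = image.shape
    bh, bw = h // out_h, w // out_w                       # pooling block size
    blocks = image[:out_h * bh, :out_w * bw].reshape(out_h, bh, out_w, bw, c)
    return blocks.max(axis=(1, 3))
```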
S305, inputting the first image and the second image with preset sizes into a preset network model to obtain bilateral grid information output by the preset network model.
The electronic device is configured with a preset network model, and the preset network model is a network model constructed based on a bilateral grid (grid) technology and a neural network technology. The preset network model can obtain the color cast characteristics of the first image based on the first image and the second image with preset sizes, and finally obtain bilateral grid information of the first image based on the color cast characteristics of the first image.
The bilateral grid information includes a plurality of luminance intervals, a plurality of grid regions, and a plurality of adjustment matrices. Optionally, the shape of the bilateral grid information may be expressed as C×K×L×9, where C×K represents the plurality of grid regions, L represents the plurality of luminance intervals, and 9 represents the coefficients of the adjustment matrices, each adjustment matrix being a 3×3 matrix.
The three parameters C×K, L, and 9 in the bilateral grid information are described below.
(1) C and K denote dividing the first image of the preset size into C×K grid regions; for example, the first image of the preset size may be divided equally into 32×32, i.e., 1024 grid regions in total. If a pixel belongs to the grid region in row 1, column 1 of the C×K grid regions, its coordinates may be expressed as (1, 1). For simplicity of description, assume the first image of the preset size is divided into 4×4, i.e., 16 grid regions in total, as shown in fig. 4A; the coordinates corresponding to each grid region are shown in fig. 4A.
(2) L represents the number of luminance intervals. For example, the preset network model may divide the luminance range [0, 255] equally into 8 intervals; when the luminance of a pixel is 16, the pixel belongs to the first luminance interval, i.e., l = 1. Optionally, the luminance range may instead be divided equally into 16, 64, or another number of intervals according to different precision requirements, which is not limited in this application.
If the RGB value of pixel i in the first image is (r_i, g_i, b_i), the luminance of pixel i can be determined in any of the following ways:
Mode one: select any one channel of the RGB value of pixel i as the luminance channel of pixel i; the luminance of pixel i is then the value of the selected channel. For example, if the B channel is selected as the luminance channel, the luminance l of pixel i is b_i.
It should be noted that if the electronic device selects the B channel as the luminance channel, the same channel is used as the luminance channel when traversing all pixels in the first image.
Mode two: the average value of the RGB values of the pixel i is taken as the luminance of the pixel i. For example, the luminance l of the pixel i may be calculated by the formula l=1/3× (r i +b i +g i ) And (5) determining.
Mode three: the color space of the first image is converted from RGB color space to YUV color space and the value of the Y channel is taken as the luminance of pixel i. Wherein, Y channel, namely brightness channel, is used for representing brightness, U channel and V channel are used for describing color saturation, and the value of Y channel is 0, 255]The U channel has a value range of [ -112, 112]The V channel has a value in the range of [ -157, 157]. The YUV values for pixel i in the first image may be represented as (y i ,u i ,v i ) This can be calculated by the following formula:
y i =0.299r i +0.587g i +0.114b i
u i =-0.147r i -0.289g i +0.436b i =0.492(b i -y i )
v i =0.615r i -0.289g i +0.436b i =0.877(r i -y i )
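The three luminance modes, together with the mapping of a luminance value to its interval, can be sketched as follows; the 1-based interval numbering is an assumption kept consistent with the examples in the text:

```python
def luminance_single_channel(rgb, channel: int = 2) -> float:
    """Mode one: use one fixed channel (here B) as the luminance."""
    return float(rgb[channel])

def luminance_mean(rgb) -> float:
    """Mode two: the mean of the three channels."""
    return (rgb[0] + rgb[1] + rgb[2]) / 3.0

def luminance_y(rgb) -> float:
    """Mode three: the Y channel of the YUV color space."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def luminance_interval(lum: float, num_intervals: int = 8) -> int:
    """Map a luminance in [0, 255] to a 1-based interval index: with 8
    intervals, luminance 16 falls in interval 1 and luminance 33 in interval 2."""
    return min(int(lum) * num_intervals // 256, num_intervals - 1) + 1
```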
Combining C×K with L, a three-dimensional coordinate system similar to XYZ coordinates can be constructed. For example, as shown in fig. 4B, C and K may represent the X axis and Y axis of the XYZ coordinate system, and L may represent the Z axis. Assuming the position of pixel i in the first image corresponds to the region in row 2, column 3 of C×K and the luminance interval of pixel i is l = 2, the three-dimensional coordinates of pixel i in the bilateral grid information may be expressed as (2, 3, 2); in other words, pixel i corresponds to the point with coordinates (2, 3, 2) in the three-dimensional coordinate system.
(3) The parameter 9 in the bilateral grid information represents the nine coefficients of a 3×3 adjustment matrix. The RGB value of any pixel in the first image may be represented as a 1×3 matrix; for example, the RGB matrix of pixel i may be represented as RGB_i = [r_i g_i b_i]. Multiplying the RGB matrix of a pixel by a coefficient matrix changes the RGB value of that pixel. Illustratively, a coefficient matrix may be expressed as:

[c11 c12 c13]
[c21 c22 c23]
[c31 c32 c33]
the coefficient matrix in the bilateral grid information corresponds to the points in the three-dimensional space formed by C, K, L one by one. For example, if the three-dimensional coordinate of the pixel i in the bilateral mesh information is represented as (2, 3, 2), the coefficient matrix corresponding to (2, 3, 2) may be found in the bilateral mesh information based on the three-dimensional coordinate.
S306, based on the bilateral grid information, an adjustment matrix corresponding to the color-cast area and an adjustment matrix corresponding to the non-color-cast area are determined.
The color-cast region may include a plurality of color cast pixels, and the non-color-cast region may include a plurality of non-color cast pixels; the adjustment matrix corresponding to the color-cast region comprises an adjustment matrix for each color cast pixel in that region, and the adjustment matrix corresponding to the non-color-cast region comprises an adjustment matrix for each non-color cast pixel in that region. Specifically, for any pixel in the first image, such as pixel i: first, the grid region corresponding to pixel i is determined from the bilateral grid information based on the position of pixel i in the first image, i.e., which of the C×K regions pixel i belongs to; the luminance of pixel i is determined based on its RGB value, and the luminance interval corresponding to pixel i is determined from the bilateral grid information based on that luminance; finally, based on the grid region and the luminance interval corresponding to pixel i, the adjustment matrix corresponding to pixel i can be determined from the bilateral grid information.
For example, assume the shape of the bilateral grid information of the first image is 32×32×8×9. If pixel i belongs to the region in row 3, column 6 of C×K, this may be expressed as (3, 6). The luminance of pixel i is determined based on its RGB value; if the luminance of pixel i is l = 33, pixel i belongs to the second luminance interval (with 8 intervals over [0, 255], luminance 33 falls in [32, 63]), i.e., l = 2. The coordinates of pixel i in the C×K×L three-dimensional coordinate system can therefore be expressed as (3, 6, 2). The adjustment matrix C_i corresponding to the coordinates (3, 6, 2) is then searched among the plurality of adjustment matrices included in the bilateral grid information; C_i is the adjustment matrix corresponding to pixel i.
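A sketch of this lookup, assuming the bilateral grid information is stored as a C×K×L×3×3 array (the storage layout and names are assumptions):

```python
import numpy as np

def lookup_adjustment_matrix(grid: np.ndarray, row: int, col: int, lum: float) -> np.ndarray:
    """grid has shape (C, K, L, 3, 3); row and col are the 1-based grid-region
    coordinates of the pixel, and lum is its luminance in [0, 255]."""
    num_intervals = grid.shape[2]
    l = min(int(lum) * num_intervals // 256, num_intervals - 1)  # 0-based interval
    return grid[row - 1, col - 1, l]

# Example matching the text: a 32 x 32 x 8 grid; a pixel in grid region (3, 6)
# with luminance 33 maps to coordinates (3, 6, 2).
grid = np.tile(np.eye(3), (32, 32, 8, 1, 1))      # dummy grid of identity matrices
C_i = lookup_adjustment_matrix(grid, row=3, col=6, lum=33)
```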
If pixel i is a color cast pixel, the adjustment matrix corresponding to pixel i is a non-identity matrix. If pixel j in the first image was determined in S303 to be a non-color cast pixel, the adjustment matrix corresponding to pixel j is the identity matrix C_E, whose diagonal elements are 1 and off-diagonal elements are 0; C_E can be represented as:

[1 0 0]
[0 1 0]
[0 0 1]
in one possible implementation manner, after executing S303, the electronic device may directly input the first image and the second image into a preset network model, where the preset network model may perform preprocessing on the input first image and the second image, for example, perform size modification on the first image to obtain a first image with a preset size, so that the size of the first image is the same as that of the second image.
S307, performing color cast correction on the first image based on the adjustment matrix to obtain a corrected image.
The adjustment matrices include the adjustment matrix corresponding to the color-cast region and the adjustment matrix corresponding to the non-color-cast region. Specifically, for each pixel in the first image, such as pixel i, the RGB matrix corresponding to the RGB value of pixel i is multiplied by the adjustment matrix C_i corresponding to pixel i to obtain an adjusted RGB matrix; based on the adjusted RGB matrix, the target RGB value of pixel i, that is, the adjusted RGB value, can be obtained. Illustratively, let the RGB value of pixel i be (r_i, g_i, b_i) and the corresponding RGB matrix be RGB_i = [r_i g_i b_i]; the adjusted RGB matrix, i.e., the target RGB matrix RGB_i* corresponding to pixel i, can be calculated by the following formula:

RGB_i* = [r_i* g_i* b_i*] = [r_i g_i b_i] × C_i
based on the above formula, a target RGB matrix RGB of the pixel i is obtained i * =[r i * g i * b i * ]The target RGB value for pixel i may be determined based on the target RGB matrix. Illustratively, r in the target RGB matrix may be i * ,g i * ,b i * Respectively converting into values of [0, 255 ] meeting R channel, G channel and B channel]To obtain the target RGB value for pixel i. If the pixel j in the first image is a non-color cast pixel, the adjustment matrix corresponding to the pixel j is an identity matrix C E The target RGB value for pixel j is the same as the target RGB value for pixel j. Based on this, only the color shift correction can be performed on the color shift region, and the non-color shift region can be kept unchanged, so that the influence on the non-color shift region during the correction process can be avoided, for example, the generation of color shift in the non-color shift region during the adjustment process can be avoided.
On this basis, the target RGB value of every pixel in the first image can be determined, i.e., the target RGB value of each color cast pixel in the color-cast region and of each non-color cast pixel in the non-color-cast region. In the first image, the RGB values of the color cast pixels are updated to their target RGB values while the RGB values of the non-color cast pixels are kept unchanged, and the pixels are recombined into a new image, i.e., the corrected image.
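Putting S306 and S307 together, a minimal sketch of applying the per-pixel adjustment matrices; this is a slow reference loop, and the grid layout and names are assumptions carried over from the sketches above:

```python
import numpy as np

def correct_image(first_image: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Multiply each pixel's [r g b] row vector by its 3x3 adjustment matrix
    looked up from the bilateral grid (shape C x K x L x 3 x 3). Pixels whose
    matrix is the identity C_E are left unchanged, so only the color-cast
    region is actually corrected."""
    h, w, _ = first_image.shape
    C, K, L = grid.shape[:3]
    corrected = np.zeros((h, w, 3), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            rgb = first_image[y, x].astype(np.float64)
            lum = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
            c = min(y * C // h, C - 1)               # grid row
            k = min(x * K // w, K - 1)               # grid column
            l = min(int(lum) * L // 256, L - 1)      # luminance interval
            corrected[y, x] = np.clip(rgb @ grid[c, k, l], 0, 255)
    return corrected.astype(np.uint8)
```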
S308, displaying the corrected image on the preview interface.
Illustratively, the preview interface may be the photo preview interface 240 shown in fig. 2D. Optionally, the preview interface may also be the image playback control 223 shown in fig. 2C, in which case the corrected image displayed in the image playback control 223 is a thumbnail; if the electronic device detects a click operation on the image playback control 223, the photo preview interface 240 shown in fig. 2D may be displayed and the corrected image shown on it.
Therefore, by the image processing method provided by the embodiment of the application, the local color-cast areas caused by AWB processing in the image can be accurately identified under the mixed light source scene, the local color-cast areas can be efficiently corrected, the true color of the image is restored, and the quality of the image is improved.
The flow of image processing performed by the electronic device is described in the embodiment of fig. 3, and a flowchart of interaction between each module of the electronic device in the embodiment of fig. 3 is described below with reference to fig. 5. Referring to fig. 5, fig. 5 is a flowchart of interaction between each module of an electronic device provided in an embodiment of the present application, and in a process of image processing of the electronic device, the interaction between each module in the electronic device is as follows:
s501, the camera application receives an instruction associated with a click operation for a shooting control, and sends a call instruction to the camera. Correspondingly, the camera receives the call instruction.
The user may click the shooting control, such as the shooting control 222, in the shooting interface; accordingly, the electronic device may detect the click operation on the shooting control. When the electronic device detects the click operation on the shooting control, an instruction associated with the click operation may be sent to the camera application to instruct the camera application to start. The call instruction is used to call the camera to shoot.
S502, the camera shoots, an initial image is obtained, and the initial image is transmitted to the image processing module. Accordingly, the image processing module receives the initial image.
Wherein the initial image is an image in RAW format.
S503, the image processing module performs automatic white balance processing on the initial image to obtain a first image.
Specifically, the implementation process of obtaining the first image based on the initial image may refer to S302 as shown in fig. 3.
S504, the image processing module determines a color-cast region and a non-color-cast region of the first image based on the first image and the initial image.
Specifically, the implementation manner of determining the color cast area and the non-color cast area of the first image may refer to step one in S303.
S505, the image processing module determines a second image based on the color-cast region and the non-color-cast region of the first image.
For example, the second image of the first image may be as shown in fig. 4D. Specifically, the specific implementation process of determining the second image may refer to step two in S303, which is not described herein.
S506, the image processing module determines bilateral grid information based on the first image and the second image.
Specifically, the specific implementation process of determining the bilateral mesh information may refer to S305.
S507, the image processing module determines an adjustment matrix corresponding to the color-cast area and an adjustment matrix corresponding to the non-color-cast area based on the bilateral grid information.
Specifically, the specific implementation process of determining the adjustment matrix may refer to S306.
S508, the image processing module performs color cast correction on the first image based on the adjustment matrix to obtain a corrected image; the corrected image is transmitted to the display screen. Accordingly, the display screen receives the corrected image.
The first image includes N pixels, and an adjustment matrix corresponding to each pixel of the N pixels may be determined, that is, N adjustment matrices may be obtained altogether, and an ith pixel of the N pixels corresponds to an ith adjustment matrix of the N adjustment matrices. N is an integer greater than 0. Specifically, the specific implementation process of obtaining the corrected image may refer to S307.
Alternatively, the image processing module may send an indication including the corrected image to the display screen, thereby transmitting the corrected image to the display screen.
S509, the display screen displays the corrected image.
The display screen is a component of the electronic device and may display, for example, the photo preview interface 240 shown in fig. 2D. The user may view the corrected image in the photo preview interface 240.
The structure of the electronic device 100 is described below. Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. It should be understood that electronic device 100 may have more or fewer components than shown in fig. 6, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 6 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown in FIG. 6, or may combine certain components, or split certain components, or a different arrangement of components. The components shown in fig. 6 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example: the processors may include an application processor (application processor, AP), a modem (also referred to as a baseband processor), a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), and/or a Neural-network processor (Neural-network Processing Unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The processor is a neural center and a command center of the electronic device 100, and the controller can generate operation control signals according to instruction operation codes and time sequence signals to complete instruction fetching and instruction execution control.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. In the embodiment of the present application, the display screen 194 may be used to display the home screen interface shown in FIG. 2A, a shooting interface including a shooting control, and the corrected image.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. In the embodiment of the present application, the ISP may perform image signal processing on the initial image, thereby obtaining the first image.
The camera 193 is used to capture still images or video. Light from an object passes through the lens to generate an optical image, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal oxide semiconductor, CMOS) phototransistor. In the embodiment of the present application, the camera 193 may generate the initial image by capturing a still image.
The digital signal processor is used for processing digital signals, such as digital image signals. In embodiments of the present application, a digital signal processor may be used to perform image signal processing on an initial image.
The NPU is a neural-network (NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example the transmission mode between human brain neurons, and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc. In the embodiment of the application, the NPU may be used to run the preset network model.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, an application required for at least one function (such as a face recognition function, a fingerprint recognition function, a mobile payment function, etc.), and the like. The storage data area may store data created during use of the electronic device 100 (e.g., face information template data, fingerprint information templates, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. In the present embodiment, the internal memory 121 may be used to store an initial image, a first image, a second image, and a corrected image.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from the display screen 194. In the embodiment of the present application, the touch screen formed by the touch sensor 180K and the display screen 194 is used to detect a touch operation on the shooting control and pass the detected touch operation to the application processor, so that the camera application determines that the touch operation instructs it to take a picture.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
In addition, an operating system runs on the above components, such as the iOS operating system, the Android open source operating system, or the Windows operating system.
The operating system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software and hardware structure of the electronic device 100 is illustrated. Although the Android system is taken as an example for explanation, the basic principle of the embodiment of the present application is equally applicable to electronic devices based on iOS, windows, and other operating systems.
Fig. 7 is a software configuration block diagram of the electronic device 100. The software structure adopts a layered architecture, which divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. Taking the Android system as an example, the Android system runs on the AP. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, an application framework layer (framework), a system runtime library layer, a hardware abstraction layer (hardware abstraction layer, HAL), and a system kernel layer (kernel).
The application layer may include a series of application packages. The application packages may include applications (APPs) such as camera, gallery, calendar, phone, map, WLAN, Bluetooth, music, video, and short message. In the embodiment of the present application, the gallery in the application packages may be used to store the first image and the corrected image of the first image.
The application layer may also include a system user interface (system user interface, system UI) for displaying interfaces of the electronic device 100, such as displaying signal icons corresponding to SIM cards, displaying a call interface, etc. In the embodiment of the present application, the system UI may be used to display the interfaces shown in FIGS. 2A-2D, including a home screen interface 210, a shooting interface 220, a shooting interface 230, and a photo preview interface 240.
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions. For example, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like. The telephony manager is used to provide the telephony functions of the electronic device 100, such as management of call states (including connected, disconnected, etc.).
The system runtime library layer is divided into two parts: a C/C++ program library and an Android runtime library. The C/C++ library mainly includes a browser engine (Webkit), a multimedia library (media framework), and the like. The Android runtime library mainly includes the Android runtime (Android runtime, ART) environment and the like.
The hardware abstraction layer is used to isolate the application framework layer from the kernel layer and prevent the Android system from depending excessively on the kernel layer, so that the application framework layer can be developed without having to consider specific drivers. The hardware abstraction layer may include a plurality of functional modules, for example, a display HAL, a camera HAL, an audio HAL, and a sensor HAL.
In an embodiment of the present application, the camera HAL may include an image processing module configured to determine a first image based on the initial image, determine a local color cast region of the first image based on the initial image and the first image, and perform color cast correction on the local color cast region to obtain a corrected image.
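For illustration only, the data flow of such an image processing module can be sketched in Python/NumPy as below. The function names (`auto_white_balance`, `build_color_cast_mask`, `run_preset_model`, `apply_bilateral_grid`) are hypothetical placeholders for the stages described above, not interfaces of any real camera HAL.

```python
import numpy as np

def process_capture(initial_image: np.ndarray,
                    auto_white_balance,
                    build_color_cast_mask,
                    run_preset_model,
                    apply_bilateral_grid) -> np.ndarray:
    """Hypothetical flow of the image processing module in the camera HAL.

    Each callable stands in for one stage described in the text; none of
    them is a real HAL interface.
    """
    # Global AWB on the initial image yields the first image.
    first_image = auto_white_balance(initial_image)
    # The second image marks color cast and non-color cast regions.
    second_image = build_color_cast_mask(initial_image, first_image)
    # The preset network model outputs bilateral grid information,
    # i.e. per-region adjustment matrices.
    bilateral_grid = run_preset_model(first_image, second_image)
    # Color cast correction of the first image gives the corrected image.
    return apply_bilateral_grid(first_image, bilateral_grid)
```

The sketch only fixes the order of the stages; concrete versions of the individual steps are illustrated after the corresponding claims below.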
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a shared memory driver.
In the embodiment of the application, the camera driver is used to start the camera when it receives a trigger command sent by the camera application located in the application layer. The camera driver is also used to call the camera to shoot and generate the initial image.
Furthermore, some embodiments of the present application provide an electronic device, including: one or more processors and memory; the memory is used to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the image processing method described above.
Some embodiments of the present application provide a chip system applied to an electronic device, the chip system including at least one processor and an interface for receiving instructions and transmitting them to the at least one processor; the at least one processor executes the instructions, causing the electronic device to perform the image processing method described above. The chip system may be a modem processor, or a system on chip (SoC) including a modem processor, and the image processing method may be implemented by the modem processor.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative: the above division into units is merely a division of logical functions, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, apparatuses, or units, and may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in the computer device) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium may include: a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), or other media that can store program code.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. An image processing method, the method comprising:
in response to a clicking operation on a shooting control, acquiring an initial image, and performing automatic white balance processing on the initial image to obtain a first image;
determining a second image based on the initial image and the first image; the second image comprises a color cast region of the first image;
inputting the first image and the second image into a preset network model to obtain bilateral grid information output by the preset network model; the bilateral grid information comprises an adjustment matrix corresponding to the color-cast area;
performing color cast correction processing on the first image based on the adjustment matrix corresponding to the color cast region to obtain a corrected image;
and displaying the corrected image on a preview interface.
2. The method of claim 1, wherein the determining a second image based on the initial image and the first image comprises:
determining that a pixel i in the first image is a color cast pixel or a non-color cast pixel based on an RGB value of the pixel i and an RGB value of a pixel j in the initial image; the pixel i corresponds to the pixel j;
determining a color cast region and a non-color cast region of the first image based on whether the pixel i is a color cast pixel or a non-color cast pixel;
and marking the color cast areas and the non-color cast areas in the first image respectively to obtain a second image.
3. The method of claim 2, wherein the determining that pixel i is a color cast pixel or a non-color cast pixel based on the RGB values of pixel i in the first image and the RGB values of pixel j in the initial image comprises:
determining that a pixel i in the first image is a color cast pixel in response to a channel size relationship of RGB values of the pixel i being different from a channel size relationship of RGB values of a pixel j in the initial image;
determining that a pixel i in the first image is a non-color cast pixel in response to a channel size relationship of RGB values of the pixel i being the same as a channel size relationship of RGB values of a pixel j in the initial image;
the channel size relationship is used for indicating the size relationship among all channels in the RGB value.
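As an illustration only, the per-pixel test recited in claims 2 and 3 could be sketched as below. Reading the "channel size relationship" as the rank ordering of the R, G, and B values is an assumption, as is the NumPy implementation; neither is part of the claimed method.

```python
import numpy as np

def channel_order(rgb: np.ndarray) -> tuple:
    # Rank ordering of R, G, B: the "channel size relationship".
    return tuple(np.argsort(rgb))

def build_second_image(initial_image: np.ndarray,
                       first_image: np.ndarray) -> np.ndarray:
    """Mark each pixel of the first image as color cast (1) or not (0).

    Pixel i of the first image is treated as a color cast pixel when the
    size relationship among its R, G, B values differs from that of the
    corresponding pixel j in the initial image.
    """
    h, w, _ = first_image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if channel_order(first_image[y, x]) != channel_order(initial_image[y, x]):
                mask[y, x] = 1  # color cast pixel
    return mask
```

For instance, a pixel whose AWB output is (120, 80, 60) while the corresponding initial pixel was (60, 80, 120) would be flagged, since R > G > B holds after AWB but B > G > R held before.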
4. The method according to claim 2 or 3, wherein the bilateral grid information further comprises a plurality of brightness intervals, a plurality of grid regions, and a plurality of adjustment matrices, the method further comprising:
determining a brightness interval corresponding to the pixel i from the plurality of brightness intervals based on the brightness of the pixel i; wherein the brightness of the pixel i is determined based on the RGB values of the pixel i;
determining a grid region corresponding to the pixel i from the plurality of grid regions based on the position of the pixel i in the first image;
determining an adjustment matrix corresponding to the pixel i from the plurality of adjustment matrices based on the grid region corresponding to the pixel i and the brightness interval corresponding to the pixel i;
and determining an adjustment matrix corresponding to the color-cast area and an adjustment matrix corresponding to the non-color-cast area of the first image based on the adjustment matrix corresponding to the pixel i.
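A hedged sketch of the lookup in claim 4 follows. The grid layout, the number of brightness intervals, and the Rec. 601 luma formula are illustrative assumptions, since the claim fixes none of them.

```python
import numpy as np

def lookup_adjustment_matrix(bilateral_grid: np.ndarray,
                             pixel_rgb: np.ndarray,
                             x: int, y: int,
                             width: int, height: int) -> np.ndarray:
    """Pick the 3x3 adjustment matrix for pixel i.

    bilateral_grid is assumed to have shape
    (grid_rows, grid_cols, luma_bins, 3, 3): a spatial grid region plus a
    brightness interval index select one adjustment matrix per pixel.
    """
    rows, cols, luma_bins = bilateral_grid.shape[:3]
    # Grid region from the pixel's position in the first image.
    gx = min(x * cols // width, cols - 1)
    gy = min(y * rows // height, rows - 1)
    # Brightness interval from the pixel's RGB value (Rec. 601 luma assumed).
    luma = 0.299 * pixel_rgb[0] + 0.587 * pixel_rgb[1] + 0.114 * pixel_rgb[2]
    bin_idx = min(int(luma * luma_bins / 256.0), luma_bins - 1)
    return bilateral_grid[gy, gx, bin_idx]
```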
5. The method according to claim 4, wherein the performing a color cast correction process on the first image based on the adjustment matrix corresponding to the color cast region to obtain a corrected image includes:
performing color cast correction on the first image based on the adjustment matrix corresponding to the color cast region and the adjustment matrix corresponding to the non-color cast region to obtain a corrected image;
the adjustment matrix corresponding to the color-cast area is a non-identity matrix, and the adjustment matrix corresponding to the non-color-cast area is an identity matrix.
6. The method of claim 5, wherein the performing color cast correction on the first image based on the adjustment matrix corresponding to the color cast region and the adjustment matrix corresponding to the non-color cast region to obtain a corrected image comprises:
determining a target RGB matrix corresponding to the pixel i based on an RGB matrix corresponding to the RGB value of the pixel i and the adjustment matrix corresponding to the pixel i;
determining a target RGB value of the pixel i based on a target RGB matrix corresponding to the pixel i;
determining a target RGB value corresponding to the color-cast region and a target RGB value corresponding to the non-color-cast region based on the target RGB value of the pixel i;
and determining a correction image based on the target RGB value corresponding to the color-cast region and the target RGB value corresponding to the non-color-cast region.
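Applying the matrices of claims 5 and 6 reduces to a per-pixel matrix-vector product, which might be sketched as below. Treating each adjustment matrix as 3x3 acting on an RGB column vector is an assumption, as the claims leave the matrix shapes open.

```python
import numpy as np

def apply_adjustment_matrices(first_image: np.ndarray,
                              matrices: np.ndarray) -> np.ndarray:
    """Color cast correction: target_rgb = M @ rgb for every pixel.

    first_image: (H, W, 3) RGB image after AWB.
    matrices:    (H, W, 3, 3) per-pixel adjustment matrices.
    """
    rgb = first_image.astype(np.float64)
    # Per-pixel matrix-vector product over the whole image.
    corrected = np.einsum('hwij,hwj->hwi', matrices, rgb)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Since the adjustment matrix in the non-color cast region is the identity matrix, only the pixels in the color cast region actually change, which matches claim 5.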
7. The method according to claim 1, wherein the method further comprises:
downsampling the first image to obtain a first image with a preset size;
inputting the first image and the second image into a preset network model to obtain bilateral grid information output by the preset network model, wherein the method comprises the following steps:
inputting the first image with the preset size and the second image into the preset network model to obtain the bilateral grid information output by the preset network model; wherein the image size of the second image is the preset size.
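The downsampling of claim 7 might be sketched as follows; the preset size of 256x256 and the choice of OpenCV's area interpolation are assumptions, as the claim fixes neither.

```python
import cv2
import numpy as np

PRESET_SIZE = (256, 256)  # assumed (width, height); the claim does not fix it

def downsample_first_image(first_image: np.ndarray) -> np.ndarray:
    """Resize the first image to the preset size before it is fed, together
    with the second image (already at the preset size), into the model."""
    return cv2.resize(first_image, PRESET_SIZE, interpolation=cv2.INTER_AREA)
```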
8. An electronic device, comprising: a memory, a processor; wherein:
the memory is used for storing a computer program, and the computer program comprises program instructions;
the processor is configured to invoke the program instructions to cause the electronic device to perform the method of any of claims 1-7.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-7.
10. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-7.
CN202311119808.7A 2023-08-31 2023-08-31 Image processing method and electronic equipment Pending CN117692788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311119808.7A CN117692788A (en) 2023-08-31 2023-08-31 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117692788A (en) 2024-03-12

Family

ID=90127268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311119808.7A Pending CN117692788A (en) 2023-08-31 2023-08-31 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117692788A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050270383A1 (en) * 2004-06-02 2005-12-08 Aiptek International Inc. Method for detecting and processing dominant color with automatic white balance
CN113170028A (en) * 2019-01-30 2021-07-23 华为技术有限公司 Method for generating image data of imaging algorithm based on machine learning
CN113379611A (en) * 2020-03-10 2021-09-10 Tcl科技集团股份有限公司 Image processing model generation method, image processing method, storage medium and terminal
CN113766204A (en) * 2021-07-28 2021-12-07 荣耀终端有限公司 Method for adjusting light source color of image, electronic device and storage medium
CN114331927A (en) * 2020-09-28 2022-04-12 Tcl科技集团股份有限公司 Image processing method, storage medium and terminal equipment
CN115835034A (en) * 2021-09-15 2023-03-21 荣耀终端有限公司 White balance processing method and electronic equipment

Similar Documents

Publication Publication Date Title
US9451173B2 (en) Electronic device and control method of the same
JP6513648B2 (en) Display device configured as an illumination source
US9117410B2 (en) Image display device and method
EP4280152A1 (en) Image processing method and apparatus, and electronic device
CN116744120B (en) Image processing method and electronic device
CN111866482A (en) Display method, terminal and storage medium
WO2022127611A1 (en) Photographing method and related device
CN105025283A (en) Novel color saturation adjusting method and system and mobile terminal
TW202139685A (en) Under-display camera systems and methods
CN112289278A (en) Screen brightness adjusting method, screen brightness adjusting device and electronic equipment
CN114463191A (en) Image processing method and electronic equipment
CN116668862B (en) Image processing method and electronic equipment
CN112513939A (en) Color conversion for environmentally adaptive digital content
WO2021249504A1 (en) Distributed display method and related device
CN117692788A (en) Image processing method and electronic equipment
CN116055699B (en) Image processing method and related electronic equipment
CN117119316B (en) Image processing method, electronic device, and readable storage medium
CN115514947B (en) Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
WO2024113162A1 (en) Luminance compensation method and apparatus, device, and storage medium
WO2023160221A1 (en) Image processing method and electronic device
WO2023236148A1 (en) Display control method and apparatus, and display device and storage medium
US20230342977A1 (en) Method for Determining Chromaticity Information and Related Electronic Device
CN116051434B (en) Image processing method and related electronic equipment
CN116723417B (en) Image processing method and electronic equipment
WO2022257574A1 (en) Fusion algorithm of ai automatic white balance and automatic white balance, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination