CN117750219A - Image white balance processing method, device, computer equipment and storage medium - Google Patents

Image white balance processing method, device, computer equipment and storage medium

Info

Publication number
CN117750219A
Authority
CN
China
Prior art keywords
value
color channel
compensation gain
gain value
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211115945.9A
Other languages
Chinese (zh)
Inventor
谭坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202211115945.9A priority Critical patent/CN117750219A/en
Priority to PCT/CN2023/118731 priority patent/WO2024056014A1/en
Publication of CN117750219A publication Critical patent/CN117750219A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The application relates to an image white balance processing method, an image white balance processing apparatus, a computer device, a storage medium and a computer program product. The method comprises the following steps: determining a dimension feature according to color channel values of an image; when the dimension feature is judged to correspond to a preset scene, generating a target compensation gain value based on the dimension feature and a conventional compensation gain value of the color channel values; and performing gain on a target color channel value of the image according to the target compensation gain value, the target color channel value being the color channel value corresponding to the target compensation gain value. With this method, white balance processing can be realized when the preset scene is an underwater environment or another scene whose color temperature lies in a specific range.

Description

Image white balance processing method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image white balance processing method, an image white balance processing apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of imaging technology, the number of scenes a camera can capture keeps growing. To make captured images look realistic, the color cast of an image is usually handled with a conventional white balance compensation gain value.
In some scenarios the conventional compensation gain value can indeed correct the color cast of an image. In other preset scenes with a high color temperature (for example, underwater shooting scenes), however, the conventional compensation gain value can hardly cope with the color cast problem.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, apparatus, computer device, computer readable storage medium, and computer program product for efficiently performing image white balancing in a preset scene.
In a first aspect, the present application provides an image white balance processing method. The method comprises the following steps:
determining dimension characteristics according to color channel values of the image;
generating a target compensation gain value based on the dimension feature and a conventional compensation gain value of the color channel value when the dimension feature is judged to correspond to a preset scene;
according to the target compensation gain value, gain is carried out on a target color channel value of the image; the target color channel value is a color channel value corresponding to the target compensation gain value.
In one embodiment, the generating a target compensation gain value based on the dimensional feature and a conventional compensation gain value for the color channel value comprises:
screening each region of the image according to the relation between the dimensional characteristics of each region and the preset scene;
generating a scene compensation gain value according to the color channel values of the screened areas and the number of the areas;
and generating a target compensation gain value according to the scene compensation gain value and the conventional compensation gain value.
In one embodiment, the generating a target compensation gain value from the scene compensation gain value and the conventional compensation gain value includes:
when the confidence coefficient of the conventional compensation gain value is smaller than a preset confidence coefficient threshold value, generating a first weight coefficient according to the confidence coefficient, wherein the first weight coefficient is the weight coefficient of the conventional compensation gain value;
generating a second weight coefficient according to the difference value of the confidence coefficient and the preset confidence coefficient threshold value; the second weight coefficient is a weight coefficient of the scene compensation gain value;
and weighting the conventional compensation gain value and the scene compensation gain value according to the first weight coefficient and the second weight coefficient to obtain a target compensation gain value.
In one embodiment, the determining the dimension feature according to the color channel value of the image includes:
determining a reference color channel value and a target color channel value in the color channel values according to the color sensitivity;
generating a channel difference value of a target color channel according to the reference color channel value and the target color channel value;
and taking the reference color channel value and the channel difference value as dimension characteristics.
In one embodiment, the determining that the dimension feature corresponds to a preset scene includes:
judging that channel difference values of a plurality of target color channels are positioned in a characteristic difference range, and judging that the reference color channel values are positioned in a reference color channel value range; the characteristic difference range is generated according to a preset characteristic relation among channel difference values of the target color channels;
and when the channel difference values of the target color channels are in the characteristic difference range and the reference color channel values are in the reference color channel value range, judging that the dimension characteristic corresponds to a preset scene.
In one embodiment, the method further comprises: when the dimension characteristic is not matched with a preset scene, acquiring a conventional compensation gain value of a reference color channel value, and performing gain on the reference color channel value according to the conventional compensation gain value of the reference color channel value; and according to the conventional compensation gain value of the color channel value, performing gain on the color channel value of the image.
In one embodiment, the method further comprises:
and when the confidence coefficient of the conventional compensation gain value is larger than a preset confidence coefficient threshold value, the color channel value of the image is gained according to the conventional compensation gain value.
In a second aspect, the present application further provides an image white balance processing apparatus. The device comprises:
an apparatus for image white balance processing, the apparatus comprising:
the dimension feature determining module is used for determining dimension features according to the color channel values of the images;
the compensation gain value generation module is used for generating a target compensation gain value based on the dimension characteristic and the conventional compensation gain value of the color channel value when the dimension characteristic is judged to correspond to a preset scene;
the gain module is used for carrying out gain on the color channel value of the image according to the target compensation gain value; the target color channel value is a color channel value corresponding to the target compensation gain value.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the image white balance processing in any of the embodiments described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the image white balance processing in any of the embodiments described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the image white balance processing of any of the embodiments described above.
According to the image white balance processing method, apparatus, computer device, storage medium and computer program product described above, a dimension feature is determined from the color channel values of the image. Because the dimension feature is derived from the color channel values, the interference of brightness information is eliminated to a certain extent; when the dimension feature is judged to correspond to a preset scene, it can be determined more accurately that the image was shot in the preset scene, and a target compensation gain value is then generated based on the dimension feature and the conventional compensation gain value of the color channel values. The target color channel value of the image, i.e. the color channel value corresponding to the target compensation gain value, is gained according to the target compensation gain value, so that white balance processing can be realized when the preset scene is an underwater environment or another scene whose color temperature lies in a specific range.
Drawings
FIG. 1 is an application environment diagram of an image white balance processing method in one embodiment;
FIG. 2 is a flow chart of a method for image white balance processing in one embodiment;
FIG. 3 is a schematic diagram of a range of feature differences in one embodiment;
FIG. 4 is a block diagram showing the configuration of an image white balance processing apparatus in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The image white balance processing method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, an image white balance processing method is provided, and the method is applied to the terminal 102 in fig. 1 for illustration, and includes the following steps:
step 202, determining dimension characteristics according to color channel values of the image.
The color channel value is the information of the image in the different color channels. The color channels of the image are set according to the color mode of the image. A color channel may be a single color channel or a composite channel: a single color channel stores one kind of color information of a pixel, whereas a composite channel stores composite information obtained by superimposing several kinds of color information. The single color channels of the image are determined by the image mode. When the image adopts the RGB color mode, i.e. the three-primary-color space, the color channels include a red channel, a green channel and a blue channel; when the image adopts a subtractive color mode such as the CMYK color mode, the color channels include a cyan channel, a magenta channel, a yellow channel and a black channel.
In one embodiment, the color channel value of the image is the average value of each color channel in the current image captured by the camera: the color channel values of every pixel are calculated, accumulated over the image, and the accumulated result is averaged to obtain the color channel value. Illustratively, the color channel values of each pixel include a red channel value, a green channel value and a blue channel value.
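As an illustration only (not part of the patent disclosure), the per-channel averaging described above could be sketched as follows in Python, assuming an 8-bit RGB image stored as a NumPy array of shape (height, width, 3); the function name and the array layout are assumptions:
import numpy as np
def channel_means(image):
    # Average each color channel over every pixel of the frame:
    # accumulate the per-pixel values and divide by the pixel count.
    img = image.astype(np.float64)
    r_mean = img[..., 0].mean()  # mean red channel value
    g_mean = img[..., 1].mean()  # mean green channel value
    b_mean = img[..., 2].mean()  # mean blue channel value
    return r_mean, g_mean, b_mean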
The color channel values are generated from the color and the brightness information of the photographed object, and the information of a color channel changes with the brightness information of the object. To avoid misjudging the shooting scene of the image because of brightness information, the terminal therefore determines the dimension feature from the color channel values of the image, reduces the influence of brightness information on the color channel values through the dimension feature, and accurately evaluates the scene information corresponding to the image from the dimension feature.
In one embodiment, determining the dimensional feature from the color channel values of the image includes: determining a reference color channel value and a target color channel value in the color channel values according to the color sensitivity; generating a channel difference value of the target color channel according to the reference color channel value and the target color channel value; and taking the reference color channel value and the channel difference value as dimension characteristics.
Color sensitivity is the degree of sensitivity of the human eye to different colors, and may be derived from various visual color-discrimination experiments. When the color sensitivity of one color channel is higher than that of the other color channels, the value of the channel with the higher color sensitivity is the reference color channel value, the values of the channels with the lower color sensitivity are the target color channel values, and the channel difference value can be generated based on the target color channel value and the reference color channel value.
The channel difference value is a relative value of the reference color channel value and a target color channel value; the relative value is the ratio of the reference color channel value to the target color channel value, or another calculation result that removes the influence of brightness information on the target color channel value, and it belongs to the dimension feature. The reference color channel value is another dimension feature, selected according to the color sensitivity of each color channel value. The two dimension features have different determination processes and judgment criteria, and the scene information corresponding to the image can be accurately estimated through them.
For example, when the target color channel values are the red channel value and the blue channel value, the channel difference value of the red channel is the relative value of the green channel value to the red channel value of the image, the channel difference value of the blue channel is the relative value of the green channel value to the blue channel value of the image, and the channel difference values of the red channel and the blue channel are calculated as follows:
Rg=G÷R;
Bg=G÷B;
where Rg is the channel difference value of the red channel, G is the green channel value, R is the red channel value, Bg is the channel difference value of the blue channel, and B is the blue channel value.
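Continuing the illustrative sketch above (again an assumption, not the patent's implementation), the dimension features Rg, Bg and the reference color channel value could be derived from the channel means as follows; the small epsilon guards against division by zero and is not mentioned in the patent:
def dimension_features(r_mean, g_mean, b_mean, eps=1e-6):
    # Channel difference values remove most of the brightness dependence,
    # since scaling all channels by the same factor leaves the ratios unchanged.
    rg = g_mean / (r_mean + eps)  # channel difference value of the red channel
    bg = g_mean / (b_mean + eps)  # channel difference value of the blue channel
    return rg, bg, g_mean         # (Rg, Bg, reference color channel value)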
In one embodiment, determining that the dimensional feature corresponds to the preset scene includes: judging that channel difference values of a plurality of target color channels are positioned in a characteristic difference range, and judging that a reference color channel value is positioned in a reference color channel value range; the characteristic difference range is generated according to a preset characteristic relation among channel difference values of all target color channels; when the channel difference values of the target color channels are in the characteristic difference range and the reference color channel values are in the reference color channel value range, determining that the dimension characteristic corresponds to the preset scene.
The characteristic difference range is a range defined for the channel difference values of the plurality of target color channels, the range being determined from the respective channel difference values of the respective target color channels and also from the channel difference values between the respective target color channels. And the reference color channel value range is a range set for the reference color channel value, which is a range value of the reference color channel value in a preset scene.
As shown in fig. 3, the area enclosed by 8 straight lines is a relatively precise characteristic difference range, and the horizontal axis and the vertical axis represent the information of different target color channels: one target color channel is the red channel and the other is the blue channel, while the reference color channel is the green channel and the channel difference value is the relative value. In this case, the characteristic difference range is obtained by first determining an initial critical value from the relative-value range of the red channel and the relative-value range of the blue channel, and then adjusting the initial critical value according to a preset characteristic relation between the red channel and the blue channel, so as to obtain the characteristic difference range of the multiple target color channels. When the reference color channel is the green channel, the reference color channel value ranges from 5 to 200. In this way it can be judged more accurately whether the scene in which the image was shot is the preset scene.
The preset scene is an underwater scene, and the preset characteristic relation is determined from data of underwater scenes in different water areas and at different depths. An image shot in an underwater scene has a high color temperature and cannot be gained with the conventional compensation gain value; the underwater environment can be delimited more accurately using data of underwater scenes from different water areas and depths. The data of the underwater scene are critical values calculated from the channel difference values of the target color channels of pictures or videos shot in an underwater environment. These critical values are the 8 straight lines in fig. 3 above, the area enclosed by the 8 lines is the range of the underwater scene, and the parameters of the 8 lines are shown in Table 1:
TABLE 1
Line number    Slope    Offset
1              1000     3.61
2              1000     6.59
3              0        0.81
4              0        2.64
5              -1       5.74
6              -1       7.57
7              0.73     -0.31
8              0.73     -3.3
Specifically, when the channel difference value of the target color channel is located in the characteristic difference range surrounded by the 8 straight lines in table 1, and the corresponding reference color channel value is located in the reference color channel value range, it is determined that the dimensional characteristic corresponds to the preset scene.
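The patent does not spell out the equation of the straight lines, so the following sketch assumes that each line has the form Bg = slope × Rg + offset and that the characteristic difference range is the band between each pair of parallel lines in Table 1; the band grouping, the reference range of 5 to 200 and the function names are assumptions for illustration only:
# Pairs of parallel lines from Table 1, grouped by equal slope (an assumed reading).
BANDS = [
    (1000.0, 3.61, 6.59),   # lines 1 and 2
    (0.0,    0.81, 2.64),   # lines 3 and 4
    (-1.0,   5.74, 7.57),   # lines 5 and 6
    (0.73,  -3.3, -0.31),   # lines 7 and 8
]
def matches_preset_scene(rg, bg, g_mean, bands=BANDS, ref_range=(5.0, 200.0)):
    # The point (Rg, Bg) must lie between every pair of parallel lines,
    # and the reference (green) channel value must lie in its own range.
    for slope, lo, hi in bands:
        if not (lo <= bg - slope * rg <= hi):
            return False
    return ref_range[0] <= g_mean <= ref_range[1]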
Step 204, when the dimension feature is determined to correspond to the preset scene, generating a target compensation gain value based on the dimension feature and the conventional compensation gain value of the color channel value.
The preset scene is an environment of image capturing, the color temperature of which is in a certain color temperature range. The preset scene is characterized by a characteristic difference range of the preset scene and a reference color channel value, wherein the characteristic difference range is generated by the same environment in different geographic positions, and the reference color channel value is a critical value of a certain color component in the color channel value in the environment. Specifically, the color temperature of the preset scene exceeds the color temperature range, and a corresponding characteristic difference range can be generated according to different geographic positions in the preset scene. When the preset scene is an underwater environment, the geographic position of the preset scene is determined according to the water area and the underwater depth.
The conventional compensation gain value is data used to perform white balance compensation gain, i.e. to correct color cast, for a scene whose color temperature is outside the above-mentioned color temperature range. The process of generating the conventional compensation gain value comprises: calculating the brightness information of the image, and looking up target pixel points in the image according to the calculated brightness value, a target pixel point being a point whose difference from the brightness information of the image is smallest; and generating the conventional compensation gain value of each color channel from the average value of the target pixel points in each color channel and the average value of the brightness information. The conventional compensation gain value of each color channel is used to compensate, i.e. gain, the corresponding color channel value in the image.
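As one possible reading of the conventional-gain procedure above (an assumption, not the patent's stated implementation): compute a brightness value per pixel, take the pixels whose brightness is closest to the mean brightness of the image as target pixels, and derive a per-channel gain from their channel means. The Rec.601 luminance weights, the number of target pixels and the gain formula brightness ÷ channel mean are all assumptions:
import numpy as np
def conventional_gains(image, num_target=1024):
    img = image.astype(np.float64)
    # Assumed brightness measure (Rec.601 luma).
    luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    mean_luma = luma.mean()
    # Target pixels: those whose brightness differs least from the image brightness.
    order = np.argsort(np.abs(luma - mean_luma), axis=None)[:num_target]
    target = img.reshape(-1, 3)[order]
    ch_means = target.mean(axis=0)                 # per-channel mean over the target pixels
    return mean_luma / np.maximum(ch_means, 1e-6)  # assumed gain: brightness / channel mean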
When the dimension feature of the image lies within a certain characteristic difference range, the dimension feature is judged to correspond to the preset scene, the environment in which the image was shot has a higher color temperature, and a target compensation gain value is generated based on the dimension feature and the conventional compensation gain value of the color channel values. When the dimension feature of the image lies outside that characteristic difference range, the dimension feature does not match the preset scene, the environment in which the image was shot has a lower color temperature, and no target compensation gain value needs to be generated.
In one embodiment, generating the target compensation gain value based on the dimensional feature and the conventional compensation gain value for the color channel value includes: based on the dimension characteristics or the dimension characteristic mapping data, adjusting the conventional compensation gain value to obtain an adjusted compensation gain value; and taking the adjusted compensation gain value as a target compensation gain value.
In one embodiment, generating the target compensation gain value based on the dimensional feature and the conventional compensation gain value for the color channel value includes: screening each region according to the relation between the dimensional characteristics of each region of the image and a preset scene; generating a scene compensation gain value according to the color channel values of the screened areas and the number of the areas; and generating a target compensation gain value according to the scene compensation gain value and the conventional compensation gain value.
The image comprises a plurality of regions; a region is one or more pixel points in the image, and the regions may have the same or different areas. When all regions have the same area, the image can be rasterized and each grid cell taken as a region, which makes it convenient to calculate the dimension feature of each region and speeds up region screening.
The dimensional characteristics of the region are determined according to the color channel values of the pixels of the region. The calculation process of the dimension characteristics of the region comprises the following steps: in the region, determining a reference color channel value and a target color channel value in the color channel values according to the color sensitivity; generating a channel difference value of the target color channel according to the reference color channel value and the target color channel value; the reference color channel value and the channel difference value are used as the dimension characteristics of the region.
In one embodiment, the dimension features include region dimension features and a global dimension feature. A region dimension feature is the dimension feature of a certain region in the image; the region dimension features can generate the global dimension feature, and at the same time each region is screened according to its region dimension feature, the color channel values of the screened regions being used to generate the scene compensation gain value. The global dimension feature is the dimension feature generated by accumulating the region dimension features of the image; when the global dimension feature indicates the preset scene, the target compensation gain value is generated based on the dimension feature and the conventional compensation gain value of the color channel values.
The regions are screened according to the relation between the dimension feature of each region of the image and the preset scene; the screening removes misleading information that does not conform to the preset scene, so as to increase the reliability of the scene compensation gain value. The reliability can be enhanced further by calculating weight information for each region based on the error between the region's dimension feature and the preset scene in the corresponding region, and generating the scene compensation gain value from the dimension feature of each region together with its weight information.
In one embodiment, generating the scene compensation gain value from the color channel values and the number of the screened regions includes: accumulating the color channel values of the screened regions to obtain the accumulated color channel values of the regions; and averaging the accumulated color channel values over the number of screened regions to obtain the mean color channel values of the regions, the mean color channel values of the regions being taken as the scene compensation gain value.
Illustratively, the accumulated values are averaged over the number of screened regions, and the formulas for the mean color channel values of the regions are as follows:
Rg_avg = Rg_sum ÷ k
Bg_avg = Bg_sum ÷ k
where Rg_avg is the red-channel mean of the regions, Rg_sum is the accumulated red-channel value of the regions, k is the number of screened regions, Bg_avg is the blue-channel mean of the regions, and Bg_sum is the accumulated blue-channel value of the regions.
In the process of accumulating the color channel values of the screened regions, the terminal may generate weighted accumulated color channel values of the regions from the color channel values of each region and the corresponding weight information, then average the weighted accumulated values to obtain weighted mean color channel values of the regions, and take the weighted mean values as the scene compensation gain value. The reliability of the scene compensation gain value is thereby increased.
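A rough sketch of the region screening and averaging described above, under assumed details: the image is rasterized into a grid, each cell's dimension features are tested with a caller-supplied scene check (for example the matches_preset_scene sketch above), and Rg and Bg are averaged over the kept cells; the grid size and the function names are illustrative:
import numpy as np
def scene_compensation_gains(image, region_check, grid=(16, 16)):
    img = image.astype(np.float64)
    h, w = img.shape[:2]
    rows, cols = grid
    rg_sum = bg_sum = 0.0
    k = 0                                            # number of screened (kept) regions
    for i in range(rows):
        for j in range(cols):
            cell = img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            r, g, b = (cell[..., c].mean() for c in range(3))
            rg, bg = g / max(r, 1e-6), g / max(b, 1e-6)
            if region_check(rg, bg, g):              # keep only regions matching the preset scene
                rg_sum, bg_sum, k = rg_sum + rg, bg_sum + bg, k + 1
    if k == 0:
        return None                                  # no region matched the preset scene
    return rg_sum / k, bg_sum / k                    # (Rg_avg, Bg_avg)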
In one embodiment, generating a channel difference value for a target color channel from a reference color channel value and a target color channel value comprises: determining the confidence level of the conventional compensation gain value; weighting the reference color channel value and the target color channel value according to the confidence coefficient; obtaining the channel difference value of the target color channel.
In one embodiment, determining the confidence of the conventional compensation gain value includes: generating the confidence of the conventional compensation gain value from the brightness information in the image or the brightness information at the time of shooting. The specific process comprises: searching the image for gray pixel points that satisfy a gray condition; and calculating the relative value of the number of gray pixel points found to the total number of pixel points of the image to obtain a relative value of the brightness information, this relative value being taken as the confidence of the conventional compensation gain value. The gray condition is the result of comparing the brightness information of a pixel point with a certain threshold.
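The patent leaves the gray condition open, so the following confidence sketch uses a channel-similarity test as a stand-in: pixels whose R, G and B values are close to one another are counted as gray, and the confidence is the fraction of such pixels. The threshold of 10 and the test itself are assumptions, not the patent's definition:
import numpy as np
def gain_confidence(image, gray_threshold=10.0):
    img = image.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Assumed gray condition: red and blue both within `gray_threshold` of the green channel.
    gray = (np.abs(r - g) < gray_threshold) & (np.abs(b - g) < gray_threshold)
    return gray.sum() / gray.size   # relative number of gray pixels, used as confidence C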
Specifically, generating a target compensation gain value according to the scene compensation gain value and the conventional compensation gain value includes: when the confidence coefficient of the conventional compensation gain value is smaller than a preset confidence coefficient threshold value, generating a first weight coefficient according to the confidence coefficient; generating a second weight coefficient according to the difference value of the confidence coefficient and a preset confidence coefficient threshold value; and weighting the conventional compensation gain value and the scene compensation gain value according to the first weight coefficient and the second weight coefficient to obtain a target compensation gain value.
The preset confidence threshold is a threshold generated according to historical data, empirical data or expert database data, and is used for evaluating whether the conventional compensation gain value is reliable or not; in addition, when the confidence coefficient of the conventional compensation gain value is smaller than a preset confidence coefficient threshold value, a second weight coefficient is generated according to the difference value of the confidence coefficient and the preset confidence coefficient threshold value.
The first weight coefficient and the second weight coefficient differ in one or more of the following ways: the first weight coefficient is the weight coefficient of the conventional compensation gain value, whereas the second weight coefficient is the weight coefficient of the scene compensation gain value; the first weight coefficient is positively correlated with the confidence, whereas the second weight coefficient is positively correlated with the difference between the confidence and the preset confidence threshold.
In one embodiment, weighting the conventional compensation gain value and the scene compensation gain value according to the first weight coefficient and the second weight coefficient includes: fusing the first weight coefficient with a conventional compensation gain value to obtain a first fusion parameter; fusing the second weight coefficient with the scene compensation gain value to obtain a second fusion parameter; combining the first fusion parameter and the second fusion parameter to obtain a combined parameter; and generating a target compensation gain value according to the combination parameters. It will be appreciated that the target compensation gain value may take a variety of forms when the confidence level is determined differently, such as: the relative value of the combined parameter and the preset confidence threshold is the target compensation gain value.
The target compensation gain value corresponds to a color channel value; in an image shot in the preset scene, the target compensation gain value is used to gain the corresponding color channel value, thereby generating a white-balanced image. After the color channel values of the image have been gained with the target compensation gain values, no color cast appears even though the color temperature of the image is higher than a certain color temperature range.
Illustratively, the formula for generating the target compensation gain value from the scene compensation gain value and the conventional compensation gain value is as follows:
Rg_wg = (Rg_std × C + Rg_avg × (C_thr - C)) ÷ C_thr
Bg_wg = (Bg_std × C + Bg_avg × (C_thr - C)) ÷ C_thr
where Rg_wg is the target compensation gain value of the red channel, Rg_std is the conventional compensation gain value of the red channel, C is the confidence of the conventional compensation gain value, Rg_avg is the scene compensation gain value of the red channel, C_thr is the preset confidence threshold, Bg_wg is the target compensation gain value of the blue channel, Bg_std is the conventional compensation gain value of the blue channel, and Bg_avg is the scene compensation gain value of the blue channel.
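The weighted blend above can be transcribed directly into code; only the function name is an addition:
def target_gains(rg_std, bg_std, rg_avg, bg_avg, c, c_thr):
    # Conventional gains are weighted by the confidence C, scene gains by (C_thr - C).
    rg_wg = (rg_std * c + rg_avg * (c_thr - c)) / c_thr
    bg_wg = (bg_std * c + bg_avg * (c_thr - c)) / c_thr
    return rg_wg, bg_wg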
Step 206, gain is carried out on the target color channel value of the image according to the target compensation gain value; the target color channel value is a color channel value corresponding to the target compensation gain value.
In one embodiment, performing gain on the color channel values of the image according to the target compensation gain value includes: combining each color channel value of the image with its corresponding target compensation gain value to obtain the gained color channel values. The image with the gained color channel values is a white-balanced image. Illustratively, the formula for combining each color channel value of the image with its corresponding target compensation gain value may be as follows:
R* = Kr × R;
wherein R* is the gained red channel value, Kr is the target compensation gain value corresponding to the red channel, and R is the red channel value;
B* = Kb × B;
wherein B* is the gained blue channel value, Kb is the target compensation gain value corresponding to the blue channel, and B is the blue channel value.
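A minimal sketch of applying the gains, assuming an 8-bit RGB image; the clipping back to the 0-255 range is an assumption not stated in the patent, and the correspondence Kr = Rg_wg and Kb = Bg_wg is an assumption consistent with the definitions Rg = G ÷ R and Bg = G ÷ B:
import numpy as np
def apply_white_balance(image, kr, kb):
    img = image.astype(np.float64)
    img[..., 0] *= kr                 # R* = Kr x R
    img[..., 2] *= kb                 # B* = Kb x B (the green channel is left unchanged)
    return np.clip(img, 0, 255).astype(np.uint8)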
Optionally, when the dimension feature is not matched with a preset scene, acquiring a conventional compensation gain value of a reference color channel value, and performing gain on the reference color channel value according to the conventional compensation gain value of the reference color channel value; and according to the conventional compensation gain value of the color channel value, performing gain on the color channel value of the image. Thus, outside the preset scene, the white balance processing is performed on both the target color channel value and the reference color channel value.
Illustratively, when the confidence of the conventional compensation gain value is greater than the preset confidence threshold, or the dimension feature does not match the preset scene, the conventional compensation gain value of the blue channel value is obtained and the blue channel value is gained accordingly; the conventional compensation gain values of the remaining color channel values are those of the red channel value and the green channel value: the red channel value is gained according to the red channel value in the image and its corresponding conventional compensation gain value, and the green channel value is gained according to the green channel value in the image and its corresponding conventional compensation gain value.
The method further comprises a step of gaining the color channel values of the image according to the confidence of the conventional compensation gain value: when the confidence of the conventional compensation gain value is greater than the preset confidence threshold, the color channel values of the image are gained according to the conventional compensation gain value. In this way, white balance processing can also be performed when the image is not captured in the preset scene.
In one embodiment, the gain is applied to the color channel values of the image according to a conventional compensation gain value, comprising: and according to the color channel values in the image and the corresponding conventional compensation gain values, the gain is carried out on the color channel values. Each color channel value includes a reference color channel value and a target color channel value.
According to the image white balance processing method above, a dimension feature is determined from the color channel values of the image, and because the dimension feature is derived from the color channel values, the interference of brightness information is eliminated to a certain extent. When the dimension feature is judged to correspond to the preset scene, it can be determined more accurately that the image was shot in the preset scene, and a target compensation gain value is then generated based on the dimension feature and the conventional compensation gain value of the color channel values. The target color channel value of the image, i.e. the color channel value corresponding to the target compensation gain value, is gained according to the target compensation gain value, so that white balance processing can be realized when the preset scene is an underwater environment or another scene whose color temperature lies in a specific range. In addition, both underwater scenes and above-water scenes are handled without requiring the user to input an instruction on the interface.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include several sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image white balance processing device for realizing the above related image white balance processing method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the image white balance processing device or devices provided below may refer to the limitation of the image white balance processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 4, there is provided an image white balance processing apparatus including: a dimensional feature determination module 402, a compensation gain value generation module 404, and a gain module 406, wherein:
a dimension feature determining module 402, configured to determine a dimension feature according to a color channel value of the image;
the compensation gain value generating module 404 is configured to generate a target compensation gain value based on the dimension feature and a conventional compensation gain value of the color channel value when the dimension feature is determined to correspond to a preset scene;
a gain module 406, configured to gain the color channel value of the image according to the target compensation gain value; the target color channel value is a color channel value corresponding to the target compensation gain value.
In one embodiment, the compensation gain value generation module 404 is configured to:
screening each region of the image according to the relation between the dimensional characteristics of each region and the preset scene;
generating a scene compensation gain value according to the color channel values of the screened areas and the number of the areas;
and generating a target compensation gain value according to the scene compensation gain value and the conventional compensation gain value.
In one embodiment, the compensation gain value generation module 404 is further configured to:
when the confidence coefficient of the conventional compensation gain value is smaller than a preset confidence coefficient threshold value, generating a first weight coefficient according to the confidence coefficient, wherein the first weight coefficient is the weight coefficient of the conventional compensation gain value;
generating a second weight coefficient according to the difference value of the confidence coefficient and the preset confidence coefficient threshold value; the second weight coefficient is a weight coefficient of the scene compensation gain value;
and weighting the conventional compensation gain value and the scene compensation gain value according to the first weight coefficient and the second weight coefficient to obtain a target compensation gain value.
In one embodiment, the dimension characteristics determination module 402 is configured to:
determining a reference color channel value and a target color channel value in the color channel values according to the color sensitivity;
generating a channel difference value of a target color channel according to the reference color channel value and the target color channel value;
and taking the reference color channel value and the channel difference value as dimension characteristics.
In one embodiment, the dimension characteristics determination module 402 is configured to:
judging that channel difference values of a plurality of target color channels are positioned in a characteristic difference range, and judging that the reference color channel values are positioned in a reference color channel value range; the characteristic difference range is generated according to a preset characteristic relation among channel difference values of the target color channels;
and when the channel difference values of the target color channels are in the characteristic difference range and the reference color channel values are in the reference color channel value range, judging that the dimension characteristic corresponds to a preset scene.
In one embodiment, the compensation gain value generation module 404 is further configured to: when the dimension characteristic is not matched with a preset scene, acquiring a conventional compensation gain value of a reference color channel value, and performing gain on the reference color channel value according to the conventional compensation gain value of the reference color channel value; and according to the conventional compensation gain value of the color channel value, performing gain on the color channel value of the image.
In one embodiment, the compensation gain value generation module 404 is further configured to:
and when the confidence coefficient of the conventional compensation gain value is larger than a preset confidence coefficient threshold value, the color channel value of the image is gained according to the conventional compensation gain value.
With the above image white balance processing apparatus, even an image shot in an underwater environment with a higher color temperature can be subjected to white balance processing, so the problem of serious color cast in underwater scenes is avoided; moreover, underwater white balance and conventional white balance are selected automatically without manual operation by the user, and underwater and above-water scenes are both handled, giving a good user experience.
The respective modules in the above-described image white balance processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements an image white balance processing method. The display unit of the computer device is used for forming a visual picture and may be a display screen, a projection device or a virtual reality imaging device, where the display screen may be a liquid crystal display or an electronic ink display; the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in the present application are all information and data authorized by the user or fully authorized by all parties, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of image white balance processing, the method comprising:
determining dimension characteristics according to color channel values of the image;
generating a target compensation gain value based on the dimension feature and a conventional compensation gain value of the color channel value when the dimension feature is judged to correspond to a preset scene;
according to the target compensation gain value, gain is carried out on a target color channel value of the image; the target color channel value is a color channel value corresponding to the target compensation gain value.
2. The method of claim 1, wherein the generating a target compensation gain value based on the dimensional feature and a conventional compensation gain value for the color channel value comprises:
screening each region of the image according to the relation between the dimensional characteristics of each region and the preset scene;
generating a scene compensation gain value according to the color channel values of the screened areas and the number of the areas;
and generating a target compensation gain value according to the scene compensation gain value and the conventional compensation gain value.
3. The method of claim 2, wherein the generating a target compensation gain value from the scene compensation gain value and the conventional compensation gain value comprises:
when the confidence coefficient of the conventional compensation gain value is smaller than a preset confidence coefficient threshold value, generating a first weight coefficient according to the confidence coefficient, wherein the first weight coefficient is the weight coefficient of the conventional compensation gain value;
generating a second weight coefficient according to the difference value of the confidence coefficient and the preset confidence coefficient threshold value; the second weight coefficient is a weight coefficient of the scene compensation gain value;
and weighting the conventional compensation gain value and the scene compensation gain value according to the first weight coefficient and the second weight coefficient to obtain a target compensation gain value.
4. The method of claim 1, wherein determining the dimensional feature from the color channel values of the image comprises:
determining a reference color channel value and a target color channel value in the color channel values according to the color sensitivity;
generating a channel difference value of a target color channel according to the reference color channel value and the target color channel value;
and taking the reference color channel value and the channel difference value as dimension characteristics.
5. The method of claim 4, wherein the determining that the dimensional feature corresponds to a preset scene comprises:
judging that channel difference values of a plurality of target color channels are positioned in a characteristic difference range, and judging that the reference color channel values are positioned in a reference color channel value range; the characteristic difference range is generated according to a preset characteristic relation among channel difference values of the target color channels;
and when the channel difference values of the target color channels are in the characteristic difference range and the reference color channel values are in the reference color channel value range, judging that the dimension characteristic corresponds to a preset scene.
6. The method of claim 1, wherein when the dimensional feature does not match a preset scene, obtaining a conventional compensation gain value for a reference color channel value, and performing gain on the reference color channel value according to the conventional compensation gain value for the reference color channel value; and according to the conventional compensation gain value of the color channel value, performing gain on the color channel value of the image.
7. The method of claim 1, wherein the color channel values of the image are gained according to the conventional compensation gain values when the confidence level of the conventional compensation gain values is greater than a preset confidence threshold.
8. An apparatus for image white balance processing, the apparatus comprising:
the dimension feature determining module is used for determining dimension features according to the color channel values of the images;
the compensation gain value generation module is used for generating a target compensation gain value based on the dimension characteristic and the conventional compensation gain value of the color channel value when the dimension characteristic is judged to correspond to a preset scene;
the gain module is used for carrying out gain on the color channel value of the image according to the target compensation gain value; the target color channel value is a color channel value corresponding to the target compensation gain value.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202211115945.9A 2022-09-14 2022-09-14 Image white balance processing method, device, computer equipment and storage medium Pending CN117750219A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211115945.9A CN117750219A (en) 2022-09-14 2022-09-14 Image white balance processing method, device, computer equipment and storage medium
PCT/CN2023/118731 WO2024056014A1 (en) 2022-09-14 2023-09-14 Image white balance processing method, apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211115945.9A CN117750219A (en) 2022-09-14 2022-09-14 Image white balance processing method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117750219A true CN117750219A (en) 2024-03-22

Family

ID=90249453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211115945.9A Pending CN117750219A (en) 2022-09-14 2022-09-14 Image white balance processing method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN117750219A (en)
WO (1) WO2024056014A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2797326A1 (en) * 2013-04-22 2014-10-29 Nederlandse Organisatie voor toegepast -natuurwetenschappelijk onderzoek TNO Image color correction
TWI562636B (en) * 2014-06-16 2016-12-11 Altek Semiconductor Corp Image capture apparatus and image compensating method thereof
CN107635102B (en) * 2017-10-30 2020-02-14 Oppo广东移动通信有限公司 Method and device for acquiring exposure compensation value of high-dynamic-range image
CN111127359B (en) * 2019-12-19 2023-05-23 大连海事大学 Underwater image enhancement method based on selective compensation of colors and three-interval equalization
CN111696052B (en) * 2020-05-20 2022-08-12 河海大学 Underwater image enhancement method and system based on red channel weakness
WO2022067762A1 (en) * 2020-09-30 2022-04-07 深圳市大疆创新科技有限公司 Image processing method and apparatus, photographic device, movable platform, and computer-readable storage medium
CN112446841B (en) * 2020-12-14 2022-05-31 中国科学院长春光学精密机械与物理研究所 Self-adaptive image recovery method

Also Published As

Publication number Publication date
WO2024056014A1 (en) 2024-03-21

Similar Documents

Publication Publication Date Title
US10970600B2 (en) Method and apparatus for training neural network model used for image processing, and storage medium
US9558543B2 (en) Image fusion method and image processing apparatus
CN101527860A (en) White balance control apparatus, control method therefor, and image sensing apparatus
TWI777536B (en) Enhanced training method and device for image recognition model
CN107948627B (en) Video broadcasting method, calculates equipment and storage medium at device
CN109933639B (en) Layer-superposition-oriented multispectral image and full-color image self-adaptive fusion method
CN112384946A (en) Image dead pixel detection method and device
US11974050B2 (en) Data simulation method and device for event camera
CN114004754A (en) Scene depth completion system and method based on deep learning
CN113132695A (en) Lens shadow correction method and device and electronic equipment
CN114862735A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111445487A (en) Image segmentation method and device, computer equipment and storage medium
CN113177886B (en) Image processing method, device, computer equipment and readable storage medium
CN111476739B (en) Underwater image enhancement method, system and storage medium
CN112243119B (en) White balance processing method and device, electronic equipment and storage medium
CN116843566A (en) Tone mapping method, tone mapping device, display device and storage medium
CN117750219A (en) Image white balance processing method, device, computer equipment and storage medium
CN111988592B (en) Image color reduction and enhancement circuit
CN117522749B (en) Image correction method, apparatus, computer device, and storage medium
CN113034552B (en) Optical flow correction method and computer equipment
Zhao et al. Objective assessment of perceived sharpness of projection displays with a calibrated camera
CN118015102A (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN118071794A (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN113962914A (en) Image fusion method and device, computer equipment and storage medium
CN116935767A (en) Accuracy determination method, apparatus and computer readable storage medium for vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination