CN113784104A - White balance processing method and related device - Google Patents


Info

Publication number
CN113784104A
CN113784104A (application CN202110950363.1A)
Authority
CN
China
Prior art keywords
image
processed
preset threshold
segmentation
region
Prior art date
Legal status
Pending
Application number
CN202110950363.1A
Other languages
Chinese (zh)
Inventor
刘志恒
Current Assignee
Hangzhou Tuya Information Technology Co Ltd
Original Assignee
Hangzhou Tuya Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Tuya Information Technology Co Ltd filed Critical Hangzhou Tuya Information Technology Co Ltd
Priority to CN202110950363.1A priority Critical patent/CN113784104A/en
Publication of CN113784104A publication Critical patent/CN113784104A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Abstract

The application discloses a white balance processing method and a related device. The method includes: performing target segmentation on an image to be processed to obtain target segmentation regions; determining attribute information of the target segmentation regions; determining the image type of the image to be processed according to the attribute information of the target segmentation regions, where the image types include monochrome images, normal scene images, and color-rich images; and performing white balance processing on the image to be processed using a white balance correction mode matched to the image type of the image to be processed. With this technical scheme, targeted white balance correction can be performed on the image to be processed according to its image type.

Description

White balance processing method and related device
Technical Field
The present application relates to the field of image processing, and in particular, to a white balance processing method and related apparatus.
Background
The human visual system can restore the true color of an object: the human eye recognizes a white object as white whether under outdoor natural light, indoor fluorescent light, or a mixed-color-temperature light source. A camera's response to white, however, differs from that of the human eye, so the color a white object presents varies under different ambient light sources. It is therefore necessary to perform white balance correction on the image captured by the camera, so that white objects in the captured image are rendered as white. Existing white balance processing schemes cannot make a targeted correction according to the image scene, so a technical scheme that solves this problem is needed.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a white balance processing method and a related device that can perform targeted white balance correction on an image to be processed according to the image type of the image.
In order to solve the technical problem, the application adopts a technical scheme that: provided is a white balance processing method, including:
performing target segmentation on an image to be processed to obtain a target segmentation area;
determining attribute information of the target segmentation region;
determining the image type of the image to be processed according to the attribute information of the target segmentation region, wherein the image type comprises: monochrome images, normal scene images, and color rich images;
and carrying out white balance processing on the image to be processed by utilizing a white balance correction mode matched with the image type of the image to be processed.
Further, the performing target segmentation on the image to be processed to obtain a target segmentation region includes:
performing target segmentation on the image to be processed by using an image segmentation model to obtain a plurality of initial segmentation areas;
counting the dispersion degree of the gray value in each initial segmentation region;
respectively judging whether the discrete degree corresponding to each initial segmentation region is greater than or equal to a first preset threshold value;
and performing secondary segmentation on the initial segmentation region according to the judgment result to obtain the target segmentation region.
Further, the performing secondary segmentation on the initial segmented region according to the determination result to obtain the target segmented region further includes:
if the discrete degree of the initial segmentation region is greater than or equal to the first preset threshold value, performing secondary segmentation on the initial segmentation region according to the color information of the initial segmentation region to obtain the target segmentation region; or
And if the discrete degree of the initial segmentation region is smaller than the first preset threshold value, directly taking the initial segmentation region as the target segmentation region.
Further, the attribute information of the target segmentation region at least includes: the number of target segmentation regions included in the image to be processed, and the ratio of the maximum-size target segmentation region in the image to be processed.
Further, determining the attribute information of the target segmentation region further includes:
counting the number of regions of the target segmentation regions included in the image to be processed, the number of pixels included in each target segmentation region and the number of pixels included in the image to be processed, and determining the target segmentation region with the largest number of pixels as the target segmentation region with the largest size;
and obtaining the ratio of the maximum-size target segmentation region in the image to be processed by using the number of pixel points of the maximum-size target segmentation region and the number of pixel points of the image to be processed.
Further, the determining the image type of the image to be processed according to the attribute information of the target segmentation region further includes:
if the number of the areas is smaller than or equal to a second preset threshold value, determining that the image type of the image to be processed is the monochrome image;
if the number of the regions is greater than the second preset threshold and less than a third preset threshold, or the number of the regions is greater than or equal to the third preset threshold and the ratio is greater than a fourth preset threshold, determining that the image type of the image to be processed is the common scene image;
and if the number of the areas is greater than or equal to the third preset threshold and the ratio is less than or equal to the fourth preset threshold, determining that the image type of the image to be processed is the colorful image.
Still further, the determining the image type of the image to be processed according to the attribute information of the target segmentation region further includes:
judging whether the number of the areas is less than or equal to a second preset threshold value or not;
if so, determining the image type of the image to be processed as the monochrome image;
if not, judging whether the number of the areas is smaller than a third preset threshold value or not;
if the number of the regions is smaller than the third preset threshold, determining that the image type of the image to be processed is the common scene image;
if the number of the areas is greater than or equal to the third preset threshold, judging whether the ratio is less than or equal to a fourth preset threshold;
if the ratio is greater than the fourth preset threshold, determining that the image type of the image to be processed is the common scene image;
and if the ratio is less than or equal to the fourth preset threshold, determining that the image type of the image to be processed is the colorful image.
Further, the white balance correction method includes: a static white balance correction method, a gray world method, and a color temperature estimation method.
In order to solve the above technical problem, another technical solution adopted by the present application is: an electronic device is provided that includes a processor and a memory coupled to the processor; wherein:
the memory is used for storing a computer program;
the processor is configured to run the computer program to perform the method as described in any of the above.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided a computer readable storage medium storing a computer program executable by a processor for implementing a method as claimed in any one of the above.
The beneficial effect of this application is: different from the prior art, in the technical scheme provided by the application, target segmentation is performed on the image to be processed to obtain target segmentation regions; the attribute information of each target segmentation region obtained by the segmentation is then determined; the image type of the image to be processed is further determined according to the attribute information of the target segmentation regions included in the image; and the current image to be processed is then white-balance corrected using a white balance correction mode matched with its image type, so that targeted white balance correction is achieved for each image type.
Drawings
Fig. 1 is a schematic flowchart illustrating a white balance processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a reference curve drawn by a color temperature estimation method;
FIG. 3 is a schematic flow chart illustrating another embodiment of a white balance processing method according to the present application;
FIG. 4 is a schematic diagram illustrating segmentation of an image to be processed according to an embodiment;
FIG. 5 is a schematic flow chart illustrating a white balance processing method according to another embodiment of the present application;
FIG. 6 is a schematic flowchart of a white balance processing method according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application;
fig. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a white balance processing method according to the present application. In the current embodiment, the method provided by the present application includes steps S110 to S140.
S110: and performing target segmentation on the image to be processed to obtain a target segmentation area.
In the technical scheme provided by the application, before white balance correction is performed on an image to be processed, target segmentation is performed on the image to be processed to obtain a target segmentation area. The image to be processed is an image which needs white balance processing, and the target segmentation area is an area obtained by segmenting the image to be processed. After the target segmentation is performed on each image to be processed, at least one target segmentation area can be obtained. Further, step S110 may also be understood as performing object recognition on the image to be processed, dividing the area where each object is located into separate areas, and outputting the area where each object is located as an object divided area.
In the technical solution provided by the present application, the image to be processed may be segmented by using a preset image segmentation model. Specifically, the image to be processed may be input into the image segmentation model, so as to obtain the target segmentation region. Wherein each object segmentation region comprises an object. The type of the target may be preset in the image segmentation model, and the target may include: a human, any of various types of cars, various types of animals, various types of plants, various types of signs, various types of buildings, roads, and the like. It is to be understood that, in other embodiments, the image segmentation model is not limited to include only the above-mentioned various types of objects, which are not specifically listed here.
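As a rough illustration of step S110, the sketch below groups pixels by the per-pixel class labels that a segmentation model might output, treating each label's pixel set as one target segmentation region. The model output format, the label values, and the `extract_target_regions` helper are illustrative assumptions, not part of the patent.

```python
import numpy as np

def extract_target_regions(class_map: np.ndarray) -> dict:
    """Group pixels by predicted class label.

    `class_map` stands in for the per-pixel output of a segmentation
    model (one integer label per pixel); each label's pixel mask is
    treated as one target segmentation region.
    """
    return {int(label): class_map == label for label in np.unique(class_map)}

# Toy 4x4 class map: 0 = road/background, 1 = car, 2 = person.
class_map = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])
regions = extract_target_regions(class_map)  # three regions
```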
S120: attribute information of the target segmented region is determined.
After the target segmentation is carried out on the image to be processed to obtain the target segmentation area, the attribute information of the target segmentation area is further determined. Specifically, in step S120, attribute information of the target segmented region obtained by segmentation in the image to be processed is determined.
The attribute information of the target segmentation region includes: the number of target segmentation regions included in the image to be processed, and the ratio of the maximum-size target segmentation region in the image to be processed. Specifically, the number of regions is the total number of target segmentation regions included in the image to be processed, and the ratio of the maximum-size target segmentation region is the ratio of the number of pixels in the largest target segmentation region to the total number of pixels in the image to be processed.
Further, in an embodiment, the number of the target segmented regions included in the image to be processed may be the number of the target segmented regions obtained by performing statistical segmentation when step S110 is performed, and the number of the target segmented regions included in the image to be processed may be obtained after the target segmentation of the image to be processed is completed.
Further, in another embodiment, when step S110 is executed, the number of pixels included in each target divided region is counted at the same time, and then the number of pixels included in each target divided region is obtained. And then, after the image to be processed is subjected to target segmentation, namely all target segmentation areas in the image to be processed are obtained through segmentation, the number of pixels included in each segmentation area is further compared, the target segmentation area with the largest size included in the current image to be processed is further determined through comparison, and the ratio of the number of pixels of the target segmentation area with the largest size to the total number of pixels of the image to be processed is further calculated.
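The two attributes described above can be computed directly from the segmentation output; the `region_attributes` helper and the boolean-mask representation of regions below are illustrative assumptions.

```python
import numpy as np

def region_attributes(regions: dict, image_shape: tuple):
    """Compute the attribute information of step S120: the region
    count and the pixel-count ratio of the largest region to the
    whole image. `regions` maps a label to a boolean pixel mask (an
    illustrative representation, not mandated by the patent)."""
    total_pixels = image_shape[0] * image_shape[1]
    pixel_counts = [int(mask.sum()) for mask in regions.values()]
    return len(pixel_counts), max(pixel_counts) / total_pixels

# Toy example: two regions on a 4x4 image, occupying 12 and 4 pixels.
big = np.zeros((4, 4), dtype=bool)
big[:3, :] = True
small = np.zeros((4, 4), dtype=bool)
small[3, :] = True
num_regions, max_ratio = region_attributes({0: big, 1: small}, (4, 4))
```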
S130: and determining the image type of the image to be processed according to the attribute information of the target segmentation region.
After the attribute information of the target segmentation area is acquired, the image type of the image to be processed is further determined according to the attribute information of the target segmentation area. Wherein the image types include: monochrome images, normal scene images, and color rich images.
In the technical scheme provided by the application, the monochrome image refers to an image acquired from a single color scene, the common scene image refers to an image acquired from a common scene, and the colorful image refers to an image acquired from a scene with rich colors.
Specifically, in step S130, the image type of the current image to be processed is determined according to the number of the target segmented regions included in the current image to be processed and the ratio of the largest size target segmented region in the current image to be processed in the image to be processed. Please refer to the embodiment corresponding to fig. 3 below.
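The decision rules of step S130 can be sketched as follows; the concrete values of `t2`, `t3`, and `t4` are placeholders, since the patent leaves the second to fourth preset thresholds as tunable empirical values.

```python
def classify_image(num_regions: int, max_ratio: float,
                   t2: int = 2, t3: int = 10, t4: float = 0.5) -> str:
    """Map the two region attributes to an image type.

    t2/t3/t4 play the roles of the second/third/fourth preset
    thresholds; the default values here are purely illustrative.
    """
    if num_regions <= t2:
        return "monochrome"       # few regions: single-color scene
    if num_regions < t3:
        return "normal_scene"
    # Many regions: fall back on the largest region's area ratio.
    if max_ratio > t4:
        return "normal_scene"     # one region still dominates
    return "color_rich"
```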
S140: and carrying out white balance processing on the image to be processed by utilizing a white balance correction mode matched with the image type of the image to be processed.
In the technical scheme provided by the application, different white balance correction modes can be adopted for the images of different image types in advance, and the targeted white balance correction can be performed according to different image types, so that the more accurate white balance correction can be performed on the images of different types. Therefore, after the image type of the image to be processed is determined, the current image to be processed is further subjected to white balance processing by using a white balance correction mode matched with the image type of the image to be processed.
The white balance correction method comprises a static white balance correction method, a gray world method and a color temperature estimation method. It should be noted that the static white balance correction method, the gray world method and the color temperature estimation method are well-established white balance correction methods, and specific reference may be made to the already disclosed technical data, which are not described in detail herein.
Further, the monochrome image is associated in advance with the static white balance correction method, the common scene image with the color temperature estimation method, and the color-rich image with the gray world method. After the image type of the image to be processed is determined, white balance processing is performed on the image to be processed according to the preset matching white balance correction mode.
In an embodiment, if it is determined in step S130 that the image to be processed is a monochrome image, white balance gains RGain and BGain are calculated and obtained by using a static white balance correction method associated and matched with the monochrome image, and the white balance processing is performed on the image to be processed by using the calculated white balance gains.
In another embodiment, if it is determined in step S130 that the to-be-processed image is an ordinary scene image, white balance gains RGain and BGain are calculated and obtained by further using a white balance correction method of a color temperature estimation method associated and matched with the ordinary scene image, and then the to-be-processed image is subjected to white balance processing using the calculated white balance gains.
In another embodiment, if it is determined in step S130 that the image to be processed is a rich-color image, white balance gains RGain and BGain are calculated and obtained by using a white balance correction method of a gray world method associated and matched with the rich-color image, and then the white balance processing is performed on the image to be processed by using the calculated white balance gains.
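The gray world method matched to color-rich images can be sketched as follows. This is the generic textbook formulation (assume the scene averages to neutral gray, so scale the R and B channels toward the green mean), not the patent's specific implementation.

```python
import numpy as np

def gray_world_gains(rgb: np.ndarray) -> tuple:
    """Compute RGain and BGain under the gray-world assumption."""
    r_mean = rgb[..., 0].mean()
    g_mean = rgb[..., 1].mean()
    b_mean = rgb[..., 2].mean()
    return g_mean / r_mean, g_mean / b_mean  # (RGain, BGain)

def apply_gains(rgb: np.ndarray, rgain: float, bgain: float) -> np.ndarray:
    """Apply the white balance gains to the R and B channels."""
    out = rgb.astype(np.float64).copy()
    out[..., 0] *= rgain
    out[..., 2] *= bgain
    return out

# A flat test image with a strong blue cast: channel means (2, 4, 8).
img = np.zeros((2, 2, 3))
img[..., 0], img[..., 1], img[..., 2] = 2.0, 4.0, 8.0
rgain, bgain = gray_world_gains(img)
balanced = apply_gains(img, rgain, bgain)  # all channel means become 4
```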
The color temperature estimation method comprises the following specific implementation steps:
firstly, collecting standard color card images under different color temperatures, confirming a reference white point, and drawing a reference white area in a coordinate system according to the reference white point. Referring to fig. 2, fig. 2 is a schematic diagram of a reference curve drawn by a color temperature estimation method. As illustrated in fig. 2, the reference coordinate system is plotted with R/G, B/G as coordinate axes, respectively, and the reference curve is plotted by fitting according to the position of the reference white point in the reference coordinate system. For example, alreadyDetermining reference white point H1、H2、H3、H4And H5And drawing a reference curve by adopting an interpolation method. And setting a distance threshold value from any point in the reference coordinate system to the reference curve, forming a distance range by all coordinate points which are not more than the distance threshold value from the reference curve, and forming a reference white area by the reference curve and the distance range.
Then, each target segmentation area in the image to be processed is used as a statistic point, the mean value of RGB three-channel components of each statistic point is calculated, the red gain RGain and the blue gain BGain of the statistic point are calculated according to the mean value of the three-channel components, and the red gain RGain and the blue gain BGain are drawn in a coordinate system where the reference white area is located.
And finally, recording the statistical point in the reference white area as a white point, calculating white balance gains RGain and BGain of the white point, and performing white balance processing on the image to be processed by using the white balance gains RGain and BGain to optimize the image to be processed.
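The three steps above can be sketched as follows, with one stated simplification: instead of fitting an interpolated reference curve through the calibrated white points and measuring distance to the curve, the sketch measures distance to the nearest calibrated point. The reference points, distance threshold, and helper name are illustrative assumptions.

```python
import numpy as np

def estimate_white_balance(region_means, ref_points, dist_thresh):
    """Color-temperature-estimation sketch.

    region_means: per-region (R, G, B) channel means (the statistic
    points); ref_points: calibrated white points as (R/G, B/G) pairs
    standing in for H1..H5. A statistic point within dist_thresh of a
    reference point is counted as a white point, and the gains are
    derived from the mean white-point chromaticity.
    """
    white_rg, white_bg = [], []
    for r, g, b in region_means:
        rg, bg = r / g, b / g
        d = min(np.hypot(rg - pr, bg - pb) for pr, pb in ref_points)
        if d <= dist_thresh:
            white_rg.append(rg)
            white_bg.append(bg)
    if not white_rg:
        return 1.0, 1.0  # no white point found: leave gains neutral
    # Scale R and B so the average white point maps back to R/G = B/G = 1.
    return 1.0 / np.mean(white_rg), 1.0 / np.mean(white_bg)
```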
In the embodiment corresponding to fig. 1 of the present application, target segmentation regions are obtained by performing target segmentation on the image to be processed; the attribute information of each target segmentation region obtained by the segmentation is then determined; the image type of the image to be processed is further determined according to that attribute information; and white balance correction is then performed on the current image to be processed using a white balance correction mode matched with its image type.
Referring to fig. 3, fig. 3 is a schematic flow chart of another embodiment of a white balance processing method according to the present application. In the current embodiment, the method provided by the present application includes steps S301 to S307.
In the present embodiment, step S110 (performing target segmentation on the image to be processed to obtain the target segmentation regions) is implemented by steps S301 to S304.
S301: and performing target segmentation on the image to be processed by using the image segmentation model to obtain a plurality of initial segmentation areas.
First, an image segmentation model is used to perform target segmentation on the image to be processed to obtain a plurality of initial segmentation regions. An initial segmentation region is a region obtained by performing only one round of target segmentation on the image to be processed with the image segmentation model. The image segmentation model includes a deep learning segmentation model, which is obtained by inputting a set of images with per-pixel labels as training samples into a deep learning segmentation network for training. In the present embodiment, after the image segmentation model produces the plurality of initial segmentation regions, steps S302 to S304 are further performed to determine, according to the dispersion degree of the gray values in each initial segmentation region, whether to perform secondary (or further) segmentation on that region, so as to obtain the target segmentation regions.
S302: and counting the dispersion degree of the gray values in each initial segmentation area.
After the image segmentation model has produced the plurality of initial segmentation regions, the dispersion degree of the gray values in each initial segmentation region is counted. The dispersion degree of a region's gray values can be understood as the variance of the gray channel in that region: after the initial segmentation regions are obtained, the variance of the gray channel of each initial segmentation region is calculated, and step S303 is then performed.
S303: and respectively judging whether the discrete degree corresponding to each initial segmentation region is greater than or equal to a first preset threshold value.
After the dispersion degree of the gray value in the initial segmentation region is counted, whether the dispersion degree of the gray value corresponding to the initial segmentation region is larger than or equal to a first preset threshold value is further judged. The first preset threshold is a preset empirical value used for determining whether the initial segmentation region needs to be subjected to secondary segmentation, and specifically, the first preset threshold may be set and adjusted according to actual requirements, which is not limited herein.
That is, target segmentation is performed on the image to be processed using the image segmentation model to obtain a plurality of initial segmentation regions; the dispersion degree of the gray values of each initial segmentation region is counted; and it is then judged, for each initial segmentation region, whether its dispersion degree is greater than or equal to the first preset threshold, so that secondary segmentation can be performed on the initial segmentation regions according to each judgment result to obtain the target segmentation regions. For example, after n initial segmentation regions are obtained in step S301, it is judged for each of the n regions whether the dispersion degree of its gray values is greater than or equal to the first preset threshold, and whether the corresponding region needs secondary segmentation is then decided according to the comparison result.
S304: and performing secondary segmentation on the initial segmentation region according to the judgment result to obtain a target segmentation region.
If it is determined in step S303 that the dispersion degree of the gray values of an initial segmentation region is greater than or equal to the first preset threshold, the information contained in that region is relatively rich, and the region is further segmented a second time. Specifically, when the dispersion degree of an initial segmentation region's gray values is judged to be greater than or equal to the first preset threshold, the initial segmentation region is secondarily segmented according to its color information. The color information includes any one of pixel value and chromaticity.
If it is determined in step S303 that the degree of dispersion of the gray-level values corresponding to the initial divided regions is smaller than the first preset threshold, it indicates that the information included in the current initial divided region is relatively single, and the current initial divided region is not divided twice.
For example, take any one initial segmentation region R. The variance of the gray-level channel of region R is calculated as:

s² = (1/n) · Σᵢ₌₁ⁿ (xᵢ − x̄)²

where n is the number of pixels in the initial segmentation region R, xᵢ is the i-th pixel in R, x̄ is the mean pixel value of R, and s² is the variance of R.
If the variance of the gray channel of the initial segmentation region R is greater than or equal to a first preset threshold, which indicates that the information contained in the initial segmentation region R is rich, the region is further secondarily segmented based on the color information of the initial segmentation region R. On the contrary, if the variance of the gray scale channel of the initial segmentation region R is smaller than the first preset threshold, which indicates that the information contained in the initial segmentation region R is relatively single, the current initial segmentation region R is not subjected to the secondary segmentation processing. Further, in the present embodiment, the method of performing secondary segmentation on the initial segmented region R based on the color information of the initial segmented region R includes, but is not limited to, a region growing method, a region splitting method, a histogram thresholding method, a color clustering method, and the like.
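The variance test described here can be sketched as follows (using the population-variance 1/n form of the formula); the helper name and the threshold value in the example are illustrative.

```python
import numpy as np

def needs_secondary_segmentation(gray, mask, threshold):
    """Return whether a region's gray-value variance reaches the first
    preset threshold (information-rich, so segment again), plus the
    variance itself."""
    vals = gray[mask].astype(np.float64)
    variance = ((vals - vals.mean()) ** 2).mean()  # s^2 = (1/n) * sum
    return bool(variance >= threshold), float(variance)

gray = np.array([[5, 5],
                 [0, 10]])
flat = np.array([[True, True], [False, False]])   # uniform region
busy = np.array([[False, False], [True, True]])   # high-contrast region
```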
Further, the step S304 performs secondary segmentation on the initial segmented region according to the determination result to obtain the target segmented region, and further includes: and if the discrete degree of the gray value of the initial segmentation region is greater than or equal to a first preset threshold value, performing secondary segmentation on the initial segmentation region according to the color information of the initial segmentation region to obtain a target segmentation region. If the discrete degree of the gray value of the obtained initial segmentation region is judged to be greater than or equal to the first preset threshold, the information included in the current initial segmentation region is relatively rich, the current initial segmentation region needs to be subjected to secondary segmentation, and the region obtained by secondary segmentation is output as the target segmentation region. The target segmentation region comprises an initial segmentation region which is not subjected to secondary segmentation and a region obtained by performing secondary segmentation on the initial segmentation region of which the gray value dispersion degree is greater than or equal to a first preset threshold value.
In another embodiment, the step S304 of performing secondary segmentation on the initial segmented region according to the determination result to obtain the target segmented region further includes: and if the discrete degree of the gray value of the initial segmentation region is smaller than a first preset threshold value, directly taking the initial segmentation region as a target segmentation region. If the discrete degree of the gray value of the obtained initial segmentation region is smaller than the first preset threshold value, the information included in the current initial segmentation region is relatively single, so that the current initial segmentation region can be directly output as the target segmentation region.
For example, in an embodiment, in step S301, the image to be processed is subjected to target segmentation by using the image segmentation model to obtain n initial segmentation regions, and the determinations in steps S302 to S303 are performed. If the degree of dispersion of the gray values of 3 of the n initial segmentation regions is greater than or equal to the first preset threshold, those 3 initial segmentation regions are further subjected to secondary segmentation and the new regions obtained by the segmentation are output as target segmentation regions, while all the initial segmentation regions except those 3 are directly output as target segmentation regions.
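The assembly of the target segmentation regions in step S304 can be sketched as follows; the helper names are hypothetical, and the dispersion measure and secondary-segmentation routine are passed in as placeholders for any of the methods the text lists:

```python
def collect_target_regions(initial_regions, dispersion_of, secondary_segment, first_threshold):
    """Step S304 sketch: re-split rich regions, pass homogeneous ones through."""
    targets = []
    for region in initial_regions:
        if dispersion_of(region) >= first_threshold:
            # Rich-information region: output the sub-regions of the secondary split.
            targets.extend(secondary_segment(region))
        else:
            # Homogeneous region: output the initial region unchanged.
            targets.append(region)
    return targets
```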
For the execution effect of the above steps S301 to S304, refer to fig. 4 in particular; fig. 4 is a schematic diagram of the segmentation of an image to be processed in an embodiment. In the current embodiment, taking an image of sheep grazing on a grassland as the image to be processed, after the image to be processed is segmented by using the image segmentation model, the initial segmentation regions illustrated in fig. 4 can be obtained, each of which corresponds to one of a person, lawn, sheep, hunting dog, and the like.
S305: attribute information of the target segmented region is determined.
S306: and determining the image type of the image to be processed according to the attribute information of the target segmentation region.
S307: and carrying out white balance processing on the image to be processed by utilizing a white balance correction mode matched with the image type of the image to be processed.
In the current embodiment, steps S305 to S307 are the same as steps S120 to S140 above, and may specifically refer to the descriptions of the corresponding parts above, and are not repeated here.
In the current embodiment, when the degree of dispersion of the gray values of a certain initial segmentation region is judged to be greater than or equal to the first preset threshold, the initial segmentation region, which contains richer information, is subjected to secondary segmentation. This makes the number of target segmentation regions included in the image to be processed and the ratio of the maximum-size target segmentation region to the image to be processed more accurate, which improves the accuracy of judging the image type of the current image to be processed, and thereby improves the accuracy of the white balance correction mode selected in step S307 and its degree of conformity with the current image to be processed.
Referring to fig. 5, fig. 5 is a schematic flow chart of a white balance processing method according to another embodiment of the present application. In the current embodiment, the method provided by the present application includes steps S501 to S505.
S501: and performing target segmentation on the image to be processed to obtain a target segmentation area.
In the current embodiment, step S501 is the same as step S110 described above, and may refer to the description of the corresponding parts above, which is not repeated here. Meanwhile, in the current embodiment, the step S120 determines the attribute information of the target divided region, and further includes steps S502 to S503.
S502: counting the number of regions of the target segmentation region included in the image to be processed, the number of pixel points included in each target segmentation region and the number of pixel points included in the image to be processed, and determining the target segmentation region with the largest number of pixel points as the target segmentation region with the largest size.
After the target segmentation is performed on the image to be processed to obtain the target segmentation regions, the number of target segmentation regions included in the current image to be processed is further counted. If the image to be processed is subjected to target segmentation by using the image segmentation model to obtain a plurality of initial segmentation regions, part of the initial segmentation regions are subjected to secondary segmentation, and the regions obtained by the secondary segmentation together with the initial segmentation regions not subjected to secondary segmentation are output as the target segmentation regions, then the number of all target segmentation regions obtained by segmenting the current image to be processed is counted. In other words, what is counted in step S502 is the total number of the initial segmentation regions not subjected to secondary segmentation plus the regions obtained by performing secondary segmentation on the initial segmentation regions whose degree of dispersion of the gray values is greater than or equal to the first preset threshold.
When the target segmentation areas are obtained, the number of pixel points included in each target segmentation area is further counted, and the number of pixel points included in the current image to be processed is counted. And then further determining the maximum size target segmentation area included in the current image to be processed according to the counted number of pixel points included in each target segmentation area. The target segmentation region with the largest size is the target segmentation region with the largest number of pixel points.
S503: and obtaining the ratio of the maximum-size target segmentation region in the image to be processed by utilizing the number of the pixel points of the maximum-size target segmentation region and the number of the pixel points of the image to be processed.
After the number of pixel points included in each target segmentation region included in the image to be processed and the number of pixel points included in the image to be processed are obtained, and the target segmentation region with the maximum size is determined, the number of pixel points of the target segmentation region with the maximum size and the number of pixel points of the image to be processed are further utilized to calculate the occupation ratio of the target segmentation region with the maximum size in the image to be processed. Specifically, the ratio of the maximum size target segmentation region in the image to be processed is equal to the ratio of the number of pixels in the maximum size target segmentation region to the total number of pixels included in the image to be processed.
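Assuming the target segmentation regions are encoded as a label map in which each pixel stores the id of its region, steps S502 to S503 reduce to a pixel count per label; the names here are illustrative:

```python
import numpy as np

def region_stats(label_map):
    """Return (number of regions, ratio of the largest region to the image)."""
    labels = np.asarray(label_map)
    ids, counts = np.unique(labels, return_counts=True)
    num_regions = len(ids)
    largest_pixels = int(counts.max())    # pixel count of the max-size region
    ratio = largest_pixels / labels.size  # its share of all image pixels
    return num_regions, ratio
```

On a 2x3 label map with labels {0, 1, 2} where label 0 covers four pixels, this yields 3 regions and a largest-region ratio of 4/6.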
In the present embodiment, the step S130 determines the image type of the image to be processed according to the attribute information of the target segmented region, and further includes a step S504.
S504: and if the number of the areas is less than or equal to a second preset threshold value, determining that the image type of the image to be processed is a monochrome image.
After the number of the regions of the target segmentation region included in the image to be processed is obtained and the occupation ratio of the maximum-size target segmentation region in the image to be processed is determined, the image type of the current image to be processed is further determined based on the number of the regions of the target segmentation region included in the image to be processed and/or the occupation ratio of the maximum-size target segmentation region in the image to be processed.
In an embodiment, if the number of regions of the target segmented region included in the image to be processed is less than or equal to a second preset threshold, the image type of the current image to be processed is determined to be a monochrome image. The second preset threshold is a preset empirical value for judging whether the image to be processed is a monochrome image. In an embodiment, the second preset threshold may be set to 1, and then step S504 may be understood as: if the number of target areas included in the image to be processed is 1, determining that the image type of the image to be processed is a monochrome image. It is understood that, in other embodiments, the second preset threshold may also be set to other values, which is not limited herein.
In an embodiment different from fig. 5, if the number of regions is greater than the second preset threshold and less than the third preset threshold, or the number of regions is greater than or equal to the third preset threshold and the ratio is greater than the fourth preset threshold, it is determined that the image type of the image to be processed is the normal scene image.
And if the number of the target segmentation areas included in the image to be processed is judged to be larger than the second threshold and smaller than a third preset threshold, determining that the image type of the current image to be processed is a common scene image. In another embodiment, if it is determined that the number of the target segmentation regions included in the to-be-processed image is greater than or equal to a third preset threshold, and it is determined that the proportion of the largest target segmentation region in the current to-be-processed image in the to-be-processed image is greater than a fourth preset threshold, it may be determined that the current to-be-processed image is an ordinary scene image.
It should be noted that the third preset threshold is greater than the second preset threshold. The third preset threshold is an empirical value used for judging whether the number of the target segmentation regions included in the image to be processed is excessive, specifically, the third preset threshold is an empirical value used for judging whether the image to be processed is an image with rich colors from the aspect of the number of the regions, and the fourth preset threshold is an empirical value used for judging whether the image to be processed is an image with rich colors from the aspect of image proportion.
In another embodiment different from fig. 5, if the number of regions is greater than or equal to a third preset threshold and the occupancy ratio is less than or equal to a fourth preset threshold, it is determined that the image type of the image to be processed is a color-rich image.
In the present embodiment, it is necessary to jointly determine whether the image type of the current image to be processed is a color-rich image from the two angles of the number of target segmented regions included in the image to be processed and the ratio of the maximum-sized target segmented region. In the current embodiment, if the number of the target segmentation areas obtained by segmenting the current image to be processed is greater than or equal to a third preset threshold value, and the ratio of the maximum size target segmentation areas obtained by segmenting the current image to be processed is less than or equal to a fourth preset threshold value, the image type of the current image to be processed is determined to be a color-rich image.
S505: and carrying out white balance processing on the image to be processed by utilizing a white balance correction mode matched with the image type of the image to be processed.
Step S505 is the same as step S140 described above, and may specifically refer to the description of the corresponding parts above, and is not repeated here.
Referring to fig. 6, fig. 6 is a schematic flow chart of a white balance processing method according to another embodiment of the present application. In the present embodiment, mainly explaining a flow involved in the step S130 of determining the image type of the image to be processed according to the attribute information of the target segmented region. In the current embodiment, step S130 further includes step S601 to step S607.
S601: and judging whether the number of the areas is less than or equal to a second preset threshold value.
The number of regions refers to the number of target segmented regions included in the image to be processed. Specifically, in the current embodiment, after determining the attribute information of the target segmented region in the image to be processed, it is further determined whether the number of the target segmented regions included in the image to be processed is less than or equal to a second preset threshold. The second threshold is an empirical value used for determining whether the current image to be processed is a monochrome image, and can be specifically adjusted according to actual requirements. For example, in an embodiment, the second threshold may be set to 1, and the corresponding step S601 is to determine whether the number of target segmented regions included in the current image to be processed is equal to 1. It is understood that in other embodiments, the second threshold may be set to other values, which are not listed here.
S602: and if so, determining that the image type of the image to be processed is a monochrome image.
If the number of the obtained regions (the number of the target divided regions included in the image to be processed) is judged to be less than or equal to the second preset threshold value, the image type of the image to be processed is determined to be a monochrome image. On the contrary, if the number of the regions is determined to be greater than the second preset threshold, that is, the number of the target segmentation regions included in the image to be processed is determined to be greater than the second preset threshold, then step S603 is further executed.
S603: and judging whether the number of the areas is smaller than a third preset threshold value.
And the third preset threshold is greater than the second preset threshold. If the number of the obtained areas is larger than the second preset threshold value, whether the number of the areas is smaller than a third preset threshold value is further judged. That is, step S603 may be understood as that, when the number of the target segmented regions included in the image to be processed is greater than the second preset threshold, it is further determined whether the number of the target segmented regions included in the image to be processed is less than the third preset threshold. If the number of regions is less than the third preset threshold, step S604 is further performed, otherwise, if the number of regions is greater than or equal to the third preset threshold, step S605 is further performed.
S604: and determining the image type of the image to be processed as a common scene image.
If it is determined in step S603 that the number of target segmented regions included in the to-be-processed image is smaller than the third preset threshold, it is further determined that the image type of the current to-be-processed image is an ordinary scene image.
S605: and judging whether the occupation ratio is less than or equal to a fourth preset threshold value.
If it is determined in step S603 that the number of the target segmentation regions included in the image to be processed is greater than or equal to the third preset threshold, the image type of the image to be processed needs to be further determined according to the proportion of the maximum size target segmentation region included in the current image to be processed in the image to be processed. Specifically, when the number of the target segmentation areas included in the image to be processed is judged to be greater than or equal to the third preset threshold, whether the ratio of the maximum-sized target segmentation area in the image to be processed is less than or equal to the fourth preset threshold is further judged. The fourth preset threshold is an image experience value used for judging whether the image to be processed is rich in color or not from the aspect of image proportion, and can be specifically set and adjusted according to requirements.
S606: and determining the image type of the image to be processed as a common scene image.
And if the ratio of the maximum-size target segmentation area in the image to be processed is larger than a fourth preset threshold value, further determining that the image type of the image to be processed is a common scene image. That is, in the current embodiment, if it is determined that the number of the obtained regions is greater than or equal to the third preset threshold and the occupation ratio is greater than the fourth preset threshold, it is determined that the image type of the image to be processed is the common scene image.
S607: and determining the image type of the image to be processed as a color-rich image.
If the ratio is not greater than the fourth preset threshold, that is, the ratio of the maximum-size target segmentation region in the image to be processed is judged to be less than or equal to the fourth preset threshold, it is further determined that the image type of the image to be processed is a color-rich image. That is, in the current embodiment, if the number of the obtained regions is greater than or equal to the third preset threshold and the ratio is less than or equal to the fourth preset threshold, it is determined that the image type of the image to be processed is a color-rich image.
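The decision flow of steps S601 to S607 can be summarized as the sketch below; the threshold values are illustrative placeholders, since the patent leaves the second to fourth preset thresholds as adjustable empirical values:

```python
def classify_image(num_regions, largest_ratio, t2=1, t3=10, t4=0.5):
    """Map region count and max-region ratio to an image type (S601-S607)."""
    if num_regions <= t2:                 # S601-S602: too few regions
        return "monochrome"
    if num_regions < t3:                  # S603-S604: moderate region count
        return "normal scene"
    # num_regions >= t3: decide by the largest region's share (S605-S607)
    return "normal scene" if largest_ratio > t4 else "color-rich"
```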
It is understood that in other embodiments, the image type of the image to be processed may be determined through other flow sequences. For example, in an embodiment, the ratio of the maximum size target segmentation region in the image to be processed may be determined, and then the image type of the image to be processed is determined according to the determination result of the ratio and further according to the number of the target segmentation regions included in the image to be processed, and the determination sequence is specifically adjusted according to the requirement.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application. In the current embodiment, the electronic device 700 provided herein includes a processor 701 and a memory 702 coupled to the processor 701. The electronic device 700 may perform the method described in any of the embodiments of fig. 1-6 and their counterparts.
The memory 702 includes a local storage (not shown) and is used for storing a computer program, and the computer program can implement the method described in any of the embodiments of fig. 1 to 6 and corresponding embodiments thereof when executed.
A processor 701 is coupled to the memory 702, and the processor 701 is configured to execute a computer program to perform the method as described in any of the embodiments of fig. 1 to 6 and their corresponding embodiments.
Further, in some embodiments, the electronic device may include any one of an image capturing apparatus, a mobile terminal, a vehicle-mounted terminal, a camera, a computer terminal, a computer, an image capturing device with computing storage capability, a server, and the like, and may also include any other device with computing processing function.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 800 stores a computer program 801 that can be executed by a processor, the computer program 801 being configured to implement the method as described in any one of the embodiments of fig. 1 to 6 and their counterparts. Specifically, the computer-readable storage medium 800 may be one of a memory, a personal computer, a server, a network device, or a USB flash disk, and is not limited in any way herein.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A white balance processing method, characterized by comprising:
performing target segmentation on an image to be processed to obtain a target segmentation area;
determining attribute information of the target segmentation region;
determining the image type of the image to be processed according to the attribute information of the target segmentation region, wherein the image type comprises: monochrome images, normal scene images, and color rich images;
and carrying out white balance processing on the image to be processed by utilizing a white balance correction mode matched with the image type of the image to be processed.
2. The method according to claim 1, wherein the performing the target segmentation on the image to be processed to obtain a target segmentation region comprises:
performing target segmentation on the image to be processed by using an image segmentation model to obtain a plurality of initial segmentation areas;
counting the dispersion degree of the gray value in each initial segmentation region;
respectively judging whether the discrete degree corresponding to each initial segmentation region is greater than or equal to a first preset threshold value;
and performing secondary segmentation on the initial segmentation region according to the judgment result to obtain the target segmentation region.
3. The method according to claim 2, wherein the performing secondary segmentation on the initial segmented region according to the determination result to obtain the target segmented region further comprises:
if the discrete degree of the gray value of the initial segmentation region is greater than or equal to the first preset threshold, performing secondary segmentation on the initial segmentation region according to the color information of the initial segmentation region to obtain the target segmentation region; or
And if the discrete degree of the gray value of the initial segmentation region is smaller than the first preset threshold value, directly taking the initial segmentation region as the target segmentation region.
4. The method according to claim 1, wherein the attribute information of the target segmentation region at least comprises: the number of the target segmentation areas included in the image to be processed, and the ratio of the maximum size of the target segmentation areas in the image to be processed.
5. The method of claim 4, wherein determining attribute information of the target segmentation region further comprises:
counting the number of regions of the target segmentation regions included in the image to be processed, the number of pixels included in each target segmentation region and the number of pixels included in the image to be processed, and determining the target segmentation region with the largest number of pixels as the target segmentation region with the largest size;
and obtaining the occupation ratio of the maximum-size target segmentation region in the image to be processed by utilizing the number of the pixel points of the maximum-size target segmentation region and the number of the pixel points of the image to be processed.
6. The method according to claim 4, wherein the determining the image type of the image to be processed according to the attribute information of the target segmentation region further comprises:
if the number of the areas is smaller than or equal to a second preset threshold value, determining that the image type of the image to be processed is the monochrome image;
if the number of the regions is greater than the second preset threshold and less than a third preset threshold, or the number of the regions is greater than or equal to the third preset threshold and the percentage is greater than a fourth preset threshold, determining that the image type of the image to be processed is the common scene image;
and if the number of the areas is greater than or equal to the third preset threshold and the ratio is less than or equal to the fourth preset threshold, determining that the image type of the image to be processed is the colorful image.
7. The method according to claim 6, wherein the determining the image type of the image to be processed according to the attribute information of the target segmentation region further comprises:
judging whether the number of the areas is less than or equal to a second preset threshold value or not;
if so, determining the image type of the image to be processed as the monochrome image;
if not, judging whether the number of the areas is smaller than a third preset threshold value or not;
if the number of the regions is smaller than the third preset threshold, determining that the image type of the image to be processed is the common scene image;
if the number of the areas is larger than or equal to the third preset threshold, judging whether the occupation ratio is smaller than or equal to a fourth preset threshold;
if the proportion is larger than a fourth preset threshold value, determining that the image type of the image to be processed is the common scene image;
and if the ratio is less than or equal to the fourth preset threshold, determining that the image type of the image to be processed is the colorful image.
8. The method according to claim 1, wherein the white balance correction manner comprises: a static white balance correction method, a gray world method, and a color temperature estimation method.
9. An electronic device, comprising a processor and a memory coupled to the processor; wherein,
the memory is used for storing a computer program;
the processor is configured to run the computer program to perform the method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that it stores a computer program executable by a processor for implementing the method of any one of claims 1 to 8.
CN202110950363.1A 2021-08-18 2021-08-18 White balance processing method and related device Pending CN113784104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110950363.1A CN113784104A (en) 2021-08-18 2021-08-18 White balance processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110950363.1A CN113784104A (en) 2021-08-18 2021-08-18 White balance processing method and related device

Publications (1)

Publication Number Publication Date
CN113784104A true CN113784104A (en) 2021-12-10

Family

ID=78838255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110950363.1A Pending CN113784104A (en) 2021-08-18 2021-08-18 White balance processing method and related device

Country Status (1)

Country Link
CN (1) CN113784104A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390266A (en) * 2021-12-28 2022-04-22 杭州涂鸦信息技术有限公司 Image white balance processing method and device and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156206A1 (en) * 2002-02-20 2003-08-21 Eiichiro Ikeda White balance correction
JP2005080190A (en) * 2003-09-03 2005-03-24 Fuji Photo Film Co Ltd White balance adjustment method and electronic camera
US20130242130A1 (en) * 2012-03-19 2013-09-19 Altek Corporation White Balance Method and Apparatus Thereof
CN103402103A (en) * 2013-07-25 2013-11-20 上海富瀚微电子有限公司 Self-adaptive white balance starting speed control method and device
CN105898264A (en) * 2016-05-26 2016-08-24 努比亚技术有限公司 Device and method for obtaining image processing manner
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
CN108737797A (en) * 2018-08-17 2018-11-02 Oppo广东移动通信有限公司 White balancing treatment method, device and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156206A1 (en) * 2002-02-20 2003-08-21 Eiichiro Ikeda White balance correction
JP2005080190A (en) * 2003-09-03 2005-03-24 Fuji Photo Film Co Ltd White balance adjustment method and electronic camera
US20130242130A1 (en) * 2012-03-19 2013-09-19 Altek Corporation White Balance Method and Apparatus Thereof
CN103402103A (en) * 2013-07-25 2013-11-20 上海富瀚微电子有限公司 Self-adaptive white balance starting speed control method and device
CN105898264A (en) * 2016-05-26 2016-08-24 努比亚技术有限公司 Device and method for obtaining image processing manner
CN108376404A (en) * 2018-02-11 2018-08-07 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, storage medium
CN108737797A (en) * 2018-08-17 2018-11-02 Oppo广东移动通信有限公司 White balancing treatment method, device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHONGHAI DENG ET AL.: "Source camera identification using Auto-White Balance approximation", 2011 International Conference on Computer Vision, pages 57-64 *
HAN Sheng: "Design and Implementation of a Side Lane Information Acquisition System Based on Machine Vision", China Master's Theses Full-text Database, Information Science and Technology, no. 2019 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390266A (en) * 2021-12-28 2022-04-22 杭州涂鸦信息技术有限公司 Image white balance processing method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN102955943B (en) Image processing apparatus and image processing method
WO2020125631A1 (en) Video compression method and apparatus, and computer-readable storage medium
US9280804B2 (en) Rotation of an image based on image content to correct image orientation
CN107292307B (en) Automatic identification method and system for inverted Chinese character verification code
CN111950723A (en) Neural network model training method, image processing method, device and terminal equipment
CN110971929A (en) Cloud game video processing method, electronic equipment and storage medium
CN102831176A (en) Method and server for recommending friends
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
CN109686342B (en) Image processing method and device
CN110536172B (en) Video image display adjusting method, terminal and readable storage medium
CN105118027A (en) Image defogging method
CN106954051A (en) Image processing method and mobile terminal
CN113378911B (en) Image classification model training method, image classification method and related device
CN113784104A (en) White balance processing method and related device
CN111047618B (en) Multi-scale-based non-reference screen content image quality evaluation method
CN110599532A (en) Depth estimation model optimization and depth estimation processing method and device for image
CN110245669B (en) Palm key point identification method, device, terminal and readable storage medium
CN112133260B (en) Image adjusting method and device
CN112565674A (en) Exhibition hall central control system capable of realizing remote video monitoring and control
CN110751703A (en) Winding picture generation method, device, equipment and storage medium
CN109544441B (en) Image processing method and device, and skin color processing method and device in live broadcast
CN111275128A (en) Image recognition model training method and system and image recognition method
CN113286082B (en) Target object tracking method, target object tracking device, electronic equipment and storage medium
CN115840550A (en) Angle-adaptive display screen display method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination