CN114760422A - Backlight detection method and system, electronic equipment and storage medium - Google Patents

Backlight detection method and system, electronic equipment and storage medium

Info

Publication number
CN114760422A
Authority
CN
China
Prior art keywords
target
backlight
imaging
shooting scene
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210279531.3A
Other languages
Chinese (zh)
Inventor
许志强
胡继瑶
赵如雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Semiconductor Nanjing Co Ltd
Original Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Semiconductor Nanjing Co Ltd filed Critical Spreadtrum Semiconductor Nanjing Co Ltd
Priority to CN202210279531.3A priority Critical patent/CN114760422A/en
Publication of CN114760422A publication Critical patent/CN114760422A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/72Combination of two or more compensation controls

Abstract

The invention discloses a backlight detection method and system, an electronic device, and a storage medium. The backlight detection method comprises the following steps: identifying a shooting scene of a target imaging; dividing the target imaging into a plurality of target areas according to the recognition result of the shooting scene; and obtaining a backlight confidence of the target imaging based on the pixel values in the target areas, wherein the backlight confidence is used for representing the degree of backlight in the shooting scene. The backlight detection method and system, electronic device, and storage medium realize backlight detection by combining two different dimensions, namely the shooting scene of the target imaging and image processing of the target imaging, improving the accuracy and reference value of backlight detection; on this basis they facilitate backlight correction of the target imaging, obtain a better imaging effect, and have industrial popularization value.

Description

Backlight detection method and system, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of optical imaging, and particularly relates to a backlight detection method and system, electronic equipment and a storage medium.
Background
Devices equipped with digital cameras or camera modules, such as mobile phones, industrial cameras, and Internet-of-Things terminals, are easily affected by ambient light during imaging and form backlit scenes: the photographed subject, that is, the object of primary interest, appears too dark in the imaging result and cannot be displayed clearly, while an overly bright background likewise degrades visual perception and harms user experience; in severe cases product functionality is affected and the device cannot work normally.
Backlight detection technology can detect a backlit scene so that it receives special processing, which greatly reduces the generation of backlit images and optimizes imaging quality. One existing backlight detection technique first divides an image into a number of image blocks, classifies each block as bright or dark against a set threshold, and then judges whether the scene is backlit by checking whether the number of bright blocks exceeds another set threshold. Another technique divides the image into blocks, judges whether each block is bright according to the relationship between the pixel values of neighbouring blocks, connects the bright blocks and dark blocks into bright regions and dark regions, and judges whether the image is a backlit scene from the relationship between those regions.
The prior art above has the disadvantage that a threshold must be set, or the backlight degree must be determined from the judgment of neighbouring bright blocks; when the threshold is set unreasonably or a bright block is misjudged, the detection result is strongly affected, leading to insufficient detection precision. In addition, only a binary result of whether the scene is backlit can be given; the degree of backlight cannot be described in detail.
Disclosure of Invention
The invention provides a backlight detection method and system, an electronic device and a storage medium, and aims to overcome the defect of backlight detection of imaging in the prior art.
The invention solves the technical problems through the following technical scheme:
the invention provides a backlight detection method, which comprises the following steps:
identifying a shooting scene of target imaging;
dividing the target image into a plurality of target areas according to the recognition result of the shooting scene;
obtaining a backlight confidence of the target imaging based on the pixel values in the target areas; wherein the backlight confidence is used for representing the backlight degree of the shooting scene.
Preferably, the step of obtaining the backlight confidence of the target imaging based on the pixel values in the target areas comprises:
Respectively calculating the average pixel value of the pixel points in each target region;
calculating an average pixel value of the target imaging according to the backlight reference weight corresponding to each target area and the average pixel value;
determining the backlight confidence corresponding to each target area according to the average pixel value of each target area and the average pixel value of the target imaging;
and determining the backlight confidence of the target imaging according to the backlight confidence of each target area.
Preferably, the step of calculating the average pixel value of the pixel points in each of the target regions respectively includes:
regularly and equally dividing the target imaging into a plurality of pixel units, wherein the number of pixel points in each pixel unit is the same;
respectively calculating the pixel value sum of pixel points in each pixel unit;
and calculating the average pixel value of the pixel points in each target region according to the number of all the pixel units in each target region and the sum of the pixel values corresponding to all the pixel units.
Preferably, the backlight detection method further comprises: performing rotation processing on the target image;
the step of identifying the shooting scene of the target imaging is specifically configured to identify the shooting scenes of the target imaging after the rotation processing, respectively.
Preferably, the step of dividing the target image into a plurality of target areas according to the recognition result of the shooting scene comprises:
determining a target object in the shooting scene according to the recognition result of the shooting scene;
and according to the target object, dividing the target image into a plurality of target areas.
Preferably, the step of identifying the shooting scene imaged by the target comprises:
identifying the depth of field parameter of each object to be determined in the shooting scene of the target imaging; the depth of field parameter is used for representing the space distance between the object to be determined and the shooting lens;
the step of determining the target object in the shooting scene according to the recognition result of the shooting scene comprises the following steps:
and determining the target object in the shooting scene according to the depth of field parameter of each object to be determined.
Preferably, the backlight reference weight corresponding to the target area is determined according to the target object.
Preferably, the step of dividing the target image into a plurality of target areas according to the recognition result of the shooting scene includes:
determining the light angle in the shooting scene according to the recognition result of the shooting scene;
And dividing the target image into a plurality of target areas according to the light angle.
The invention also provides a backlight detection system, comprising:
the scene recognition module is used for recognizing a shooting scene of the target imaging;
the imaging division module is used for dividing the target imaging into a plurality of target areas according to the recognition result of the shooting scene;
the backlight detection module is used for obtaining the backlight confidence of the target imaging based on the pixel values in the target areas; wherein the backlight confidence is used for representing the backlight degree of the shooting scene.
Preferably, the backlight detection module includes:
the region pixel value calculating unit is used for calculating the average pixel value of the pixel points in each target region;
the average pixel value calculating unit is used for calculating the average pixel value of the target imaging according to the backlight reference weight corresponding to each target area and the average pixel value;
the region confidence degree calculation unit is used for determining the backlight confidence degree corresponding to each target region according to the average pixel value of each target region and the average pixel value of the target imaging;
And the backlight confidence determining unit is used for determining the backlight confidence of the target imaging according to the backlight confidence of each target area.
Preferably, the area pixel value calculating unit is specifically configured to:
regularly and equally dividing the target imaging into a plurality of pixel units, wherein the number of pixel points in each pixel unit is the same;
respectively calculating the pixel value sum of the pixel points in each pixel unit;
and calculating the average pixel value of the pixel points in each target region according to the number of all the pixel units in each target region and the sum of the pixel values corresponding to all the pixel units.
Preferably, the backlight detection system further comprises a rotation processing module, configured to perform rotation processing on the target image;
the scene recognition module is used for respectively recognizing the shooting scenes of the target images after the rotation processing.
Preferably, the imaging dividing module includes:
the target object determining unit is used for determining a target object in the shooting scene according to the recognition result of the shooting scene;
the first dividing unit is used for dividing the target imaging into a plurality of target areas according to the target object.
Preferably, the scene recognition module is specifically configured to: identifying the depth of field parameter of each object to be determined in the shooting scene of the target imaging; the depth of field parameter is used for representing the space distance between the object to be determined and the shooting lens;
the target object determination unit is specifically configured to: and determining the target object in the shooting scene according to the depth of field parameter of each object to be determined.
Preferably, the backlight reference weight corresponding to the target area is determined according to the target object.
Preferably, the imaging dividing module includes:
the light angle determining unit is used for determining the light angle in the shooting scene according to the recognition result of the shooting scene;
and the second dividing unit is used for dividing the target image into a plurality of target areas according to the light angle.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the backlight detection method.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the backlight detection method described above.
The positive progress effects of the invention are as follows: by providing the backlight detection method and system, the electronic device and the storage medium, the backlight detection based on the combination of the shooting scene of the target imaging and the image processing of the target imaging in two different dimensions is realized, the accuracy and the referential performance of the backlight detection are improved, the backlight correction of the target imaging is facilitated on the basis, a better imaging effect is obtained, and the industrial popularization value is achieved.
Drawings
Fig. 1 is a flowchart of a backlight detection method according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of a specific example of the backlight detection method according to embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of target imaging of a specific example of the backlight detection method according to embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a target area of a specific example of the backlight detection method according to embodiment 1 of the present invention.
Fig. 5 is a schematic block diagram of a backlight detection system according to embodiment 2 of the present invention.
Fig. 6 is a schematic block diagram of a specific implementation of a backlight detection system in embodiment 2 of the present invention.
Fig. 7 is a block diagram illustrating another embodiment of a backlight detection system according to embodiment 2 of the present invention.
Fig. 8 is an electronic block diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the invention thereto.
Example 1
Referring to fig. 1, the embodiment specifically provides a backlight detection method, which includes:
s1, identifying a shooting scene of target imaging;
s2, dividing the target image into a plurality of target areas according to the recognition result of the shooting scene;
s3, acquiring backlight confidence of target imaging based on pixel values in a plurality of target areas; the backlight confidence is used for representing the backlight degree of the shooting scene.
The target imaging in this embodiment includes, but is not limited to, a pixel dot-matrix image obtained by processing the output signal of an image sensor with an Image Signal Processor (ISP), or an image still to be output during that processing. Step S1 identifies the shooting scene of the target imaging, that is, it models the target imaging according to the scene. Specifically, the modeling may train a convolutional network model by deep neural network training on a pre-labeled scene data set, and then recognize the target imaging through the convolutional network model. The data selected for the scene data set may include a variety of pre-collected simulated scenes, comprehensively considering variables such as contrast, brightness, sharpness, and resolution, so that the scene recognition model suits most possible application environments. For example, a spindle-shaped fully convolutional network containing deconvolution operations is used to train the scene recognition model, and different simulated scenes such as portrait, landscape, and macro are set in detail; the scene in the target imaging can thus be recognized accurately for subsequent processing, and the recognition result includes, but is not limited to, a mask image. It will be appreciated that scene recognition for the target imaging may also be implemented by other models and algorithms and is not limited to convolutional network models.
In an optional implementation, step S1 specifically includes: identifying the depth-of-field parameter of each object to be determined in the shooting scene of the target imaging, wherein the depth-of-field parameter is used to represent the spatial distance between the object to be determined and the shooting lens. Specifically, this embodiment implements scene recognition by distinguishing the close shot from the background in the target imaging, and the depth-of-field data are the actual spatial distances between each object to be determined in the target imaging and the shooting lens. This can be achieved by, but is not limited to, binocular measurement: according to the stereoscopic vision principle, two cameras acquire three-dimensional coordinate information of the target space, the imaging parallax of each target point between the two cameras is obtained through image matching, and the depth-of-field parameter is then calculated.
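The binocular measurement described above follows the standard pinhole-stereo relation: once image matching yields the disparity of a point between the two cameras, its depth follows from the focal length and the camera baseline. A minimal sketch, using the textbook formula rather than anything specified in the patent (all names here are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (metres) of a matched point from binocular stereo.

    Standard pinhole-stereo relation Z = f * B / d, where d is the
    horizontal disparity (in pixels) of the point between the two
    camera images, f the focal length in pixels, and B the baseline.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Objects closer to the lens produce larger disparity and hence smaller depth, which is exactly the property the method uses to separate the close shot from the background.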
Step S2 divides the target image into several target areas according to the recognition result of the shooting scene. Those skilled in the art will appreciate that the same target image may be divided into different combinations of target regions based on different recognition manners in step S1, and the backlight detection results, i.e. backlight confidence, obtained in step S3 are different based on the different combinations of target regions.
For example, for the above object scene identification manner based on the depth-of-field parameter, an optional implementation of step S2 specifically includes: determining a target object in the shooting scene according to the recognition result of the shooting scene; according to the target object, the target imaging is divided into a plurality of target areas. The target object in the shooting scene is determined, namely, the target object is determined according to the depth of field parameter of each object to be determined. Assuming target imaging includes a person standing in front of a window, the person, window, and other areas of the room can be identified as target objects with different depth of field parameters.
As another alternative, step S2 specifically includes:
determining the light angle in the shooting scene according to the recognition result of the shooting scene;
and dividing the target image into a plurality of target areas according to the light angle.
In this embodiment, the direction and angle of the incident light can be determined by, but not limited to, determining the local shadows and the concave-convex surfaces in the target imaging scene. Specifically, assuming that the scene recognition result of the target imaging includes a person standing in front of a window, the incidence direction and angle of the light can be determined by analyzing shadows and concave-convex surfaces of various parts of the body, such as the face of the person. And then, the division models corresponding to different backlight scenes are distinguished through light analysis, and the target imaging is divided based on the different backlight scenes. If the light ray is incident from the upper part of the image, the upper part of the division model corresponding to the backlight scene is set as one area, the middle upper part is set as one area, the central part is set as one area, and the bottom part is set as the other area. And when calculating the backlight confidence, different weights are correspondingly set in different areas.
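The light-angle division described above can be sketched as follows. The band boundaries and the weight values are assumptions made for the sketch, not values from the patent; the point is only that a "light from above" model maps the frame to horizontal bands, each carrying its own backlight reference weight:

```python
def divide_by_light_angle(height, width):
    """Illustrative division for light incident from the top of the frame:
    four horizontal bands (top, upper-middle, centre, bottom), each paired
    with a hypothetical backlight reference weight. Boundaries and weights
    are assumed for illustration only."""
    bands = {
        "top":    (0, height // 4),
        "upper":  (height // 4, height // 2),
        "centre": (height // 2, 3 * height // 4),
        "bottom": (3 * height // 4, height),
    }
    # Centre weighted highest on the assumption the subject sits there.
    weights = {"top": 1.0, "upper": 2.0, "centre": 4.0, "bottom": 1.0}
    return bands, weights
```

A different recognized light direction would select a different division model with its own bands and weights.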
Step S3 performs backlight detection analysis on the target regions based on pixel values; combined with the spatial scene attributes carried by the target regions, this is essentially a comprehensive decision across the two dimensions.
As an alternative embodiment, step S3 includes:
s31, respectively calculating the average pixel value of the pixel points in each target area;
s32, calculating an average pixel value of target imaging according to the backlight reference weight and the average pixel value corresponding to each target area;
s33, determining the backlight confidence corresponding to each target area according to the average pixel value of each target area and the average pixel value of target imaging;
and S34, determining the backlight confidence of target imaging according to the backlight confidence of each target area.
In calculating pixel values, for pixel points containing color information, the pixel value can be determined by first converting the color values of the RGB (red, green, blue primary color) channels. Step S31 calculates the average pixel value of the pixel points in each target region according to the division result, and step S32 calculates the average pixel value of the whole target imaging based on the preset backlight reference weight of each target region, for example by weighted summation: the sum of the products of each region's average pixel value and its backlight reference weight. It will be appreciated that the backlight reference weights may be set based on, but not limited to, imaging processing requirements. Preferably, for the scene recognition mode based on target objects, the backlight reference weight may be determined according to the target object corresponding to each target area. For example, when the final image is a portrait, a higher backlight reference weight is set for the target regions containing the human body, the face, and other parts on which focus is concentrated; when the final image is a macro shot, the target object judged from the depth-of-field parameter to be the focused subject may likewise be given a higher backlight reference weight.
Step S33 compares the average pixel value of each target area with the average pixel value of the target imaging to determine the backlight confidence corresponding to each target area; step S34 selects an extreme value, generally the maximum, among the backlight confidences of the target areas as the backlight confidence of the target imaging. Preferably, different backlight confidences of the same target imaging may be obtained under different scene recognition modes, in which case the highest of all the obtained backlight confidences may be taken as the final backlight detection result.
Similarly, besides the recognition modes based on different target scenes, to account for the possibility that the lens is flipped or inverted during shooting, in a preferred embodiment the backlight detection method further includes performing rotation processing on the target imaging; accordingly, when the shooting scene of the target imaging is recognized in step S1, the shooting scenes of the rotated target imaging are specifically recognized. For example, the target imaging is rotated by 90°, 180°, and 270°, and processing proceeds on the rotated imaging, which helps cover most application scenes. Preferably, when the target imaging passes through at least two scene recognition modes and also through the rotation processing, the backlight confidences obtained under all combinations should be compared to determine the final backlight detection result.
To facilitate actual modeling and calculation, the target imaging can be preprocessed. That is, in step S31 the target imaging can be regularly and equally divided into a plurality of pixel units so that the number of pixel points in each pixel unit is the same; the sum of the pixel values of the pixel points in each pixel unit is calculated respectively; and the average pixel value of the pixel points in each target region is then calculated from the number of pixel units in the region and the sum of the pixel values of those units.
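The preprocessing above can be sketched with NumPy: partition the image into an equal grid of pixel units, keep only the per-unit sums, and compute a region's average from the sums of the units it covers. This is a minimal sketch assuming the image dimensions divide evenly into the grid, as the equal-pixel-count requirement implies; the function names are illustrative:

```python
import numpy as np

def block_sums(gray, m, n):
    """Regularly partition a grayscale image into an m x n grid of pixel
    units and return the per-unit sums as an m x n array. Assumes the
    image dimensions divide evenly, so every unit holds the same number
    of pixel points."""
    h, w = gray.shape
    assert h % m == 0 and w % n == 0, "grid must divide the image evenly"
    return gray.reshape(m, h // m, n, w // n).sum(axis=(1, 3))

def region_average(sums, unit_mask, pixels_per_unit):
    """Average pixel value of a target region: the total of the unit sums
    the region covers, divided by the number of pixels in those units."""
    return sums[unit_mask].sum() / (unit_mask.sum() * pixels_per_unit)
```

Working at pixel-unit granularity means each region average needs only a handful of additions over precomputed sums instead of a pass over every pixel.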
Referring to fig. 2, the method for detecting the backlight of the present embodiment is described below by using a specific example, which does not limit the present invention.
S61, converting the RGB value of the color information in the target imaging output by equipment such as a digital camera or a video camera into brightness information.
The specific conversion formula is: Y = 0.299R + 0.587G + 0.114B, where Y is the calculated brightness and R, G, B are the pixel values of the red, green, and blue channels of the pixel point; these conversion coefficients are the standard ones used throughout the industry. Applying this operation to every pixel point in the image yields the brightness value of each pixel point and converts the color image into a brightness image. If the camera outputs a grayscale image containing no color information, this step can be omitted.
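Step S61 can be written as a one-line vectorized conversion; the coefficients are the BT.601 luma weights stated in the formula above (the function name is illustrative):

```python
import numpy as np

def rgb_to_luma(rgb):
    """Per-pixel luma conversion of step S61: Y = 0.299R + 0.587G + 0.114B.
    `rgb` is an H x W x 3 array; a camera that already outputs grayscale
    can skip this step entirely."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Since the three coefficients sum to 1, a uniform gray pixel keeps its value, and pure white (255, 255, 255) maps to a luma of 255.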
S62, dividing the image into m x n image blocks, namely pixel units, and adding all the brightness values Y within each pixel unit to obtain the brightness information of each pixel unit, yielding an m x n image.
And S63, modeling is carried out according to the actual backlight scene, and the image is divided into different target areas.
The purpose of modeling is to combine the brightness information of the pixel points of the target imaging with their position information, rather than detecting the backlit scene from brightness information alone. The modeling can follow the actual backlit scene; see fig. 3. If a person is photographed in front of a window, the backlit scene shows the region where the person stands, at the center of the target imaging, as darker, and the surrounding region as brighter. Then, referring to fig. 4, the target imaging can be divided into five regions; the white region is not considered, and the brightness of the remaining regions is ordered as: first target region (T1) > second target region (T2) > third target region (T3) > fourth target region (T4). Of course, the actual number of target regions can be set flexibly according to the scene. In this example the target regions correspond to different backlight reference weights.
And S64, calculating the sum of pixel values in each target area.
Specifically, the sums of pixel values in the target areas are denoted sum_b (first target area), sum_o (second target area), sum_br (third target area), and sum_blk (fourth target area), and the numbers of pixel points in the target areas are num_b, num_o, num_br, and num_blk respectively.
S65, calculating an average value of pixel values of pixel points in each target area:
avg_b=sum_b/num_b,
avg_o=sum_o/num_o,
avg_br=sum_br/num_br,
avg_blk=sum_blk/num_blk。
S66, calculating the average pixel value avg_img of the whole target imaging according to the backlight reference weight of each target area:
avg_img=(avg_b*weight_b+avg_o*weight_o+avg_br*weight_br+avg_blk*weight_blk)/(weight_b+weight_o+weight_br+weight_blk),
in the formula, weight_b, weight_o, weight_br and weight_blk respectively represent the backlight reference weights corresponding to the target areas.
S67, calculating a backlight confidence value in each region respectively:
cfd_b=(avg_b-avg_img)/avg_img,
cfd_o=(avg_o-avg_img)/avg_img,
cfd_br=(avg_br-avg_img)/avg_img,
cfd_blk=(avg_blk-avg_img)/avg_img。
S68, acquiring the backlight confidence cfd_mode1 of the target imaging:
cfd_mode1=max(cfd_b,cfd_o,cfd_br,cfd_blk)。
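Steps S66 through S68 can be condensed into one function: a weighted global average over the region averages, a relative deviation per region, and the maximum deviation as the imaging's backlight confidence. A minimal sketch with the regions and weights passed as dicts (the function name and input shapes are illustrative, not from the patent):

```python
def backlight_confidence(region_avgs, region_weights):
    """Steps S66-S68: weighted average of the region pixel averages
    (avg_img), per-region relative deviation (cfd_x), and the maximum
    deviation as the backlight confidence of the target imaging."""
    total_w = sum(region_weights.values())
    avg_img = sum(region_avgs[k] * region_weights[k]
                  for k in region_avgs) / total_w
    cfd = {k: (region_avgs[k] - avg_img) / avg_img for k in region_avgs}
    return max(cfd.values()), cfd
```

A strongly backlit frame has at least one region far brighter than the weighted average, so its relative deviation, and hence the returned confidence, is large; a uniformly lit frame keeps every deviation near zero.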
Specifically, the backlight confidence value may be normalized to a certain range, with different values representing different degrees of backlight; for example, normalized to 0-1, a confidence of 0 means the scene is not backlit, a confidence of 1 means it is, and the larger the value, the greater the degree of backlight. The maximum of the backlight confidences of the target areas is then taken as the final backlight confidence. Further, after the backlight confidence is calculated, whether the current scene is a backlit scene can be judged in combination with the brightness of the actual scene.
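One simple way to realize the 0-1 normalization mentioned above is to clamp the raw relative deviation; this is only an assumption for illustration, since the text says the value "may be normalized" without fixing a method:

```python
def normalize_confidence(cfd):
    """Clamp a raw backlight confidence into [0, 1], so 0 reads as
    'not backlit' and 1 as 'fully backlit'. The clamping choice is an
    assumption; the patent leaves the normalization method open."""
    return min(max(cfd, 0.0), 1.0)
```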
The backlight detection method of this embodiment realizes backlight detection by combining two different dimensions, the shooting scene of the target imaging and image processing of the target imaging, improving the accuracy and reference value of backlight detection; on this basis it facilitates backlight correction of the target imaging, obtains a better imaging effect, and has industrial popularization value.
Example 2
Referring to fig. 5 to 7, the embodiment specifically provides a backlight detection system, which includes:
a scene recognition module 51 for recognizing a shooting scene of the target imaging;
the imaging division module 52 is used for dividing the target imaging into a plurality of target areas according to the recognition result of the shooting scene;
a backlight detection module 53, configured to obtain a backlight confidence of the target imaging based on pixel values in the plurality of target regions; the backlight confidence is used for representing the backlight degree of the shooting scene.
The target imaging in this embodiment includes, but is not limited to, a pixel dot-matrix image obtained by processing the output signal of an image sensor with an image signal processor, or an image still to be output during that processing. The scene recognition module 51 recognizes the shooting scene of the target imaging, that is, it models the target imaging according to the scene. Specifically, the modeling may train a convolutional network model by deep neural network training on a pre-labeled scene data set, and then recognize the target imaging through the convolutional network model. The data selected for the scene data set may include a variety of pre-collected simulated scenes, comprehensively considering variables such as contrast, brightness, sharpness, and resolution, so that the scene recognition model suits most possible application environments. For example, a spindle-shaped fully convolutional network containing deconvolution operations is used to train the scene recognition model, and different simulated scenes such as portrait, landscape, and macro are set in detail; the scene in the target imaging can thus be recognized accurately for subsequent processing, and the recognition result includes, but is not limited to, a mask image. It will be appreciated that scene recognition for the target imaging may also be implemented by other models and algorithms and is not limited to convolutional network models.
In an optional embodiment, the scene recognition module 51 is specifically configured to recognize a depth of field parameter of each object to be determined in the shooting scene of the target imaging, where the depth of field parameter characterizes the spatial distance between the object to be determined and the shooting lens. Specifically, this embodiment realizes scene recognition by distinguishing the close shot from the background in the target imaging, the depth of field data being the actual spatial distance between each object to be determined and the shooting lens. This may be achieved by, but is not limited to, binocular measurement: according to the stereoscopic vision principle, two cameras acquire three-dimensional coordinate information of the target space, the imaging parallax of each target point between the two cameras is obtained through image matching, and the depth of field parameter is then calculated from that parallax.
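Under the stereoscopic vision principle described above, once the per-point parallax (disparity) has been obtained by image matching, the depth follows from the standard triangulation relation depth = focal length × baseline / disparity. A minimal sketch; the numeric parameters below are illustrative, not taken from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the depth (metres) of a point seen by two parallel cameras.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point with a 50-pixel disparity, cameras 10 cm apart, f = 1000 px:
print(depth_from_disparity(1000, 0.10, 50))  # 2.0 (metres)
```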
The imaging division module 52 divides the target imaging into a plurality of target areas according to the recognition result of the shooting scene. Those skilled in the art will understand that the same target imaging may be divided into different combinations of target areas depending on the recognition mode used by the scene recognition module 51, and the backlight detection result, i.e. the backlight confidence, obtained by the backlight detection module 53 on that basis will differ accordingly.
For example, for the above target scene recognition manner based on the depth of field parameter, the imaging division module 52 optionally includes a target object determining unit 521, configured to determine the target object in the shooting scene according to the recognition result of the shooting scene, and a first dividing unit 522, configured to divide the target imaging into several target areas according to the target object. Determining the target object in the shooting scene means determining it according to the depth of field parameter of each object to be determined. Assuming the target imaging shows a person standing in front of a window, the person, the window and the other areas of the room can each be identified as target objects with different depth of field parameters.
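Continuing the window example, a depth-of-field map can be split into target areas by simple thresholding: pixels closer than a cut-off form the foreground (the person), the rest the background. This is a hedged sketch of one possible division; the threshold and depth values are hypothetical:

```python
def split_by_depth(depth_map, near_m=2.0):
    """Partition pixel coordinates into foreground/background target areas by
    comparing each pixel's depth-of-field parameter to a threshold (metres)."""
    areas = {"foreground": [], "background": []}
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            key = "foreground" if d < near_m else "background"
            areas[key].append((y, x))
    return areas

# 2x3 depth map: the person at ~1.5 m, the window/wall at ~5 m
depth = [[1.5, 1.4, 5.0],
         [1.6, 5.2, 5.1]]
areas = split_by_depth(depth)
print(len(areas["foreground"]), len(areas["background"]))  # 3 3
```

A real system would derive the threshold from the distribution of depth values rather than fix it in advance.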
As another alternative, the imaging dividing module 52 specifically includes:
a light angle determining unit 523 configured to determine a light angle in the shooting scene according to the recognition result of the shooting scene;
and a second dividing unit 524, configured to divide the target image into a plurality of target areas according to the light angles.
In the present embodiment, the direction and angle of the incident light can be determined by, but not limited to, identifying local shadows and concave-convex surfaces in the shooting scene of the target imaging. Specifically, assuming the scene recognition result of the target imaging includes a person standing in front of a window, the incidence direction and angle of the light can be determined by analyzing the shadows and concave-convex surfaces of parts of the body such as the person's face. Division models corresponding to different backlight scenes are then distinguished through this light analysis, and the target imaging is divided on the basis of the backlight scene. If, for instance, the light enters from the upper part of the image, the division model corresponding to that backlight scene sets the top of the image as one area, the upper-middle part as a second, the central part as a third and the bottom as a fourth; when the backlight confidence is calculated, different weights are set correspondingly for the different areas.
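The band-wise division model just described, for a scene lit from above, can be sketched as a split of the image rows into top / upper-middle / centre / bottom bands, each carrying its own backlight weight. The band boundaries and weights below are illustrative assumptions, not values from the patent:

```python
def divide_into_bands(height, bounds=(0.15, 0.40, 0.75),
                      weights=(0.4, 0.25, 0.2, 0.15)):
    """Split image rows into horizontal bands for a top-lit backlight scene.

    Returns (first_row, last_row_exclusive, weight) tuples; the top band is
    given the largest weight because the light enters from above.
    """
    edges = [0] + [int(height * b) for b in bounds] + [height]
    return [(edges[i], edges[i + 1], weights[i]) for i in range(len(weights))]

bands = divide_into_bands(100)
print(bands)  # [(0, 15, 0.4), (15, 40, 0.25), (40, 75, 0.2), (75, 100, 0.15)]
```

For a scene lit from the side, the same idea would apply with vertical bands and a mirrored weight profile.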
The backlight detection module 53 performs backlight detection analysis on the target areas based on their pixel values while taking into account the spatial scene attributes carried by those areas; the backlight detection is thus, in essence, a comprehensive decision across these two dimensions.
As an alternative embodiment, the backlight detecting module 53 includes:
an area pixel value calculating unit 531 for calculating an average pixel value of pixel points in each target area, respectively;
an average pixel value calculating unit 532, configured to calculate the average pixel value of the target imaging according to the backlight reference weight and the average pixel value corresponding to each target region;
a region confidence calculating unit 533, configured to determine, according to the average pixel value of each target region and the average pixel value of the target imaging, a backlight confidence corresponding to each target region;
and the backlight confidence determining unit 534 is used for determining the backlight confidence of the target imaging according to the backlight confidence of each target area.
In the course of calculating pixel values, for a pixel point containing color information, the pixel value may be determined by first converting the color values of its RGB channels. The region pixel value calculating unit 531 calculates the average pixel value of the pixel points in each target region according to the division result, and the average pixel value calculating unit 532 calculates the average pixel value of the whole target imaging based on the backlight reference weight preset for each target region, for example by weighted summation, i.e. the sum over all target regions of the product of each region's average pixel value and its backlight reference weight. It will be appreciated that the backlight reference weight may be set based on, but not limited to, the imaging processing requirements. Preferably, for the above scene recognition manner based on the target object, the backlight reference weight may be determined according to the target object corresponding to each target region. For example, when the final imaging is a portrait, a high backlight reference weight is set for the target regions on which attention is focused, such as the human body or the face; or, when the final imaging is macro photography, the target object judged from the depth of field parameter to be the focal subject may be given a higher backlight reference weight.
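The weighted summation performed by the unit 532, i.e. the imaging's average pixel value as the sum of each region's average multiplied by its backlight reference weight, reduces to a few lines. The weights and region means below are placeholders, not prescribed values:

```python
def imaging_average(region_means, ref_weights):
    """Average pixel value of the whole imaging, as the weighted sum of each
    target region's average pixel value and its backlight reference weight."""
    if len(region_means) != len(ref_weights):
        raise ValueError("one backlight reference weight per target region")
    return sum(m * w for m, w in zip(region_means, ref_weights))

# Three target regions; the face region (mean 90) carries the largest weight.
print(imaging_average([90, 180, 210], [0.5, 0.3, 0.2]))  # 141.0
```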
The region confidence calculating unit 533 determines the backlight confidence corresponding to each target region by comparing the average pixel value of that region with the average pixel value of the target imaging; the backlight confidence determining unit 534 then selects, generally, the maximum of the per-region backlight confidences as the backlight confidence of the target imaging. Preferably, different backlight confidences of the target imaging may be obtained under different scene recognition modes, so that for the same target imaging, the highest of all the backlight confidences thus obtained can be taken as the final backlight detection result.
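The comparison made by unit 533 and the maximum selection made by unit 534 can be sketched together. The text does not fix the exact comparison formula, so the relative-darkness ratio below is a hypothetical choice; any monotone measure of how far a region falls below the overall average would serve:

```python
def region_confidences(region_means, overall_mean):
    """One hypothetical per-region backlight confidence: the relative amount
    by which a region's average falls below the imaging's overall average,
    clamped to [0, 1]. Darker-than-average regions score higher."""
    return [max(0.0, (overall_mean - m) / overall_mean) for m in region_means]

def imaging_confidence(region_means, overall_mean):
    """Backlight confidence of the imaging: the maximum over all regions."""
    return max(region_confidences(region_means, overall_mean))

# Using the region means and weighted overall average from the text's example:
print(round(imaging_confidence([90, 180, 210], 141.0), 3))  # 0.362
```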
Similarly, in addition to the different scene recognition modes above, in order to allow for the possibility that the lens is turned over or inverted during shooting, in a preferred embodiment the backlight detection system further includes a rotation processing module 54 for performing rotation processing on the target imaging; accordingly, when the scene recognition module 51 recognizes the shooting scene of the target imaging, it specifically recognizes the shooting scene of the rotated target imaging. Specifically, processing is performed on the rotated target imaging, for example after rotating it by 90°, 180° or 270°, which helps cover most application scenarios. Preferably, when the target imaging is processed by at least two scene recognition modes simultaneously and additionally subjected to rotation processing, the backlight confidences obtained under all combinations should be compared to determine the final backlight detection result.
To facilitate actual modeling and calculation, the target imaging may be preprocessed: the region pixel value calculating unit 531 may evenly divide the target imaging into a plurality of pixel units so that each pixel unit contains the same number of pixel points, calculate the sum of the pixel values of the pixel points in each pixel unit, and then calculate the average pixel value of the pixel points in each target region from the number of pixel units in that region and the sum of the pixel values of those units.
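This preprocessing, summing once per equal-sized pixel unit and then averaging per target region from the unit sums, can be sketched for a grayscale image as follows. The unit size and pixel values are illustrative:

```python
def unit_sums(img, unit):
    """Split an H x W grayscale image into unit x unit cells (H and W assumed
    divisible by `unit`) and return the per-cell pixel sums as a 2-D list."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(unit) for dx in range(unit))
             for x in range(0, w, unit)]
            for y in range(0, h, unit)]

def area_average(sums, cells, unit):
    """Average pixel value of a target region given the unit cells it covers."""
    total = sum(sums[cy][cx] for cy, cx in cells)
    return total / (len(cells) * unit * unit)

img = [[10, 10, 30, 30],
       [10, 10, 30, 30],
       [50, 50, 70, 70],
       [50, 50, 70, 70]]
s = unit_sums(img, 2)                        # [[40, 120], [200, 280]]
print(area_average(s, [(0, 0), (0, 1)], 2))  # 20.0 (top half of the image)
```

The per-unit sums are computed once, so every target region's average is obtained without revisiting individual pixels.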
The backlight detection system of this embodiment realizes backlight detection by combining two different dimensions, the shooting scene of the target imaging and the image processing of the target imaging; this improves the accuracy and reference value of the backlight detection, facilitates backlight correction of the target imaging on that basis so as to obtain a better imaging effect, and has value for industrial popularization.
Example 3
The present embodiment provides an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the backlight detection method of embodiment 1.
It should be noted that the electronic device in this embodiment may specifically be a chip, a chip module or a network device, or may be a chip or chip module integrated in a network device; each module/unit included in the device may be a software module/unit, a hardware module/unit, or partly a software module/unit and partly a hardware module/unit. For a device or product applied to or integrated in a chip, each module/unit it includes may be implemented by hardware such as a circuit, or at least some modules/units may be implemented by a software program running on a processor integrated within the chip, with the remaining (if any) modules/units implemented by hardware such as a circuit. For a device or product applied to or integrated in a chip module, each module/unit it includes may likewise be implemented by hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip or a circuit module) or in different components of the chip module; alternatively, at least some modules/units may be implemented by a software program running on a processor integrated within the chip module, with the remaining (if any) modules/units implemented by hardware such as a circuit. The same applies to a device or product applied to or integrated in a terminal: each module/unit may be implemented by hardware such as a circuit, different modules/units may be located in the same component (e.g., a chip or a circuit module) or in different components of the terminal, or at least some modules/units may be implemented by a software program running on a processor integrated in the terminal, with the rest (if any) implemented by hardware such as a circuit.
As shown in fig. 8, as a preferred embodiment, this embodiment specifically provides an electronic device 30 comprising a processor 31, a memory 32, and a computer program stored in the memory 32 and executable on the processor 31; when the processor 31 executes the program, the backlight detection method of embodiment 1 is implemented. The electronic device 30 shown in fig. 8 is only an example and does not limit the functions or scope of use of the embodiments of the present invention.
The electronic device 30 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 that couples various system components including the memory 32 and the processor 31.
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as Random Access Memory (RAM)321 and/or cache memory 322, and may further include Read Only Memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as the backlight detection method in embodiment 1 of the present invention, by running the computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard or a pointing device). Such communication may take place through an input/output (I/O) interface 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) via a network adapter 36, which communicates with the other modules of the electronic device 30 over the bus 33. Other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium on which a computer program is stored, the program implementing the backlight detection method in embodiment 1 when executed by a processor.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation manner, the present invention can also be implemented in the form of a program product including program code which, when the program product runs on a terminal device, causes the terminal device to carry out the backlight detection method of embodiment 1.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (18)

1. A backlight detection method is characterized by comprising the following steps:
recognizing a shooting scene of the target imaging;
dividing the target image into a plurality of target areas according to the recognition result of the shooting scene;
obtaining a backlight confidence of the target imaging based on the pixel values in the target areas; wherein the backlight confidence is used for representing the backlight degree of the shooting scene.
2. The backlight detection method of claim 1, wherein the step of obtaining the backlight confidence of the target imaging based on the pixel values in the plurality of target regions comprises:
respectively calculating the average pixel value of the pixel points in each target region;
calculating an average pixel value of the target imaging according to the backlight reference weight corresponding to each target area and the average pixel value;
determining the backlight confidence corresponding to each target area according to the average pixel value of each target area and the average pixel value of the target imaging;
and determining the backlight confidence of the target imaging according to the backlight confidence of each target area.
3. The backlight detection method according to claim 2, wherein the step of calculating the average pixel value of the pixel points in each of the target regions respectively comprises:
equally dividing the target imaging into a plurality of pixel units, wherein the number of pixel points in each pixel unit is the same;
respectively calculating the pixel value sum of pixel points in each pixel unit;
and calculating the average pixel value of the pixel points in each target region according to the number of all the pixel units in each target region and the sum of the pixel values corresponding to all the pixel units.
4. The backlight detection method according to claim 1, further comprising: performing rotation processing on the target imaging;
the step of recognizing the shooting scene of the target imaging specifically recognizes, respectively, the shooting scenes of the target imaging after the rotation processing.
5. The backlight detection method according to claim 1 or 2, wherein the step of dividing the target image into a plurality of target areas according to the recognition result of the photographing scene comprises:
determining a target object in the shooting scene according to the recognition result of the shooting scene;
and according to the target object, dividing the target image into a plurality of target areas.
6. The backlight detection method according to claim 5, wherein the step of recognizing the photographed scene imaged by the target comprises:
identifying the depth of field parameter of each object to be determined in the shooting scene of the target imaging; the depth of field parameter is used for representing the spatial distance between the object to be determined and the shooting lens;
the step of determining the target object in the shooting scene according to the recognition result of the shooting scene comprises the following steps:
and determining the target object in the shooting scene according to the depth of field parameter of each object to be determined.
7. The backlight detection method according to claim 5, wherein the backlight reference weight corresponding to each target area is determined based on the target object.
8. The backlight detection method according to claim 1, wherein the step of dividing the target image into a plurality of target areas according to the recognition result of the photographing scene comprises:
determining the light angle in the shooting scene according to the recognition result of the shooting scene;
and according to the light angle, dividing the target image into a plurality of target areas.
9. A backlighting detection system, comprising:
the scene recognition module is used for recognizing a shooting scene of the target imaging;
the imaging division module is used for dividing the target imaging into a plurality of target areas according to the recognition result of the shooting scene;
The backlight detection module is used for obtaining the backlight confidence of the target imaging based on the pixel values in the target areas; wherein the backlight confidence is used for representing the backlight degree of the shooting scene.
10. The backlighting detection system of claim 9 wherein the backlighting detection module comprises:
the region pixel value calculating unit is used for calculating the average pixel value of the pixel points in each target region;
the average pixel value calculating unit is used for calculating the average pixel value of the target imaging according to the backlight reference weight corresponding to each target area and the average pixel value;
the region confidence degree calculation unit is used for determining the backlight confidence degree corresponding to each target region according to the average pixel value of each target region and the average pixel value of the target imaging;
and the backlight confidence determining unit is used for determining the backlight confidence of the target imaging according to the backlight confidence of each target area.
11. The backlighting detection system of claim 10, wherein the regional pixel value computation unit is specifically configured to:
equally dividing the target imaging into a plurality of pixel units, wherein the number of pixel points in each pixel unit is the same;
Respectively calculating the pixel value sum of pixel points in each pixel unit;
and calculating the average pixel value of the pixel points in each target region according to the number of all the pixel units in each target region and the sum of the pixel values corresponding to all the pixel units.
12. The backlighting detection system of claim 9 further comprising a rotation processing module for performing rotation processing on the target image;
the scene recognition module is used for respectively recognizing the shooting scenes of the target images after the rotation processing.
13. The backlighting detection system of claim 9 or 10 wherein the image-dividing module comprises:
a target object determining unit, configured to determine a target object in the shooting scene according to a recognition result of the shooting scene;
the first dividing unit is used for dividing the target imaging into a plurality of target areas according to the target object.
14. The backlighting detection system of claim 13, wherein the scene recognition module is specifically configured to: identify the depth of field parameter of each object to be determined in the shooting scene of the target imaging; the depth of field parameter is used for representing the spatial distance between the object to be determined and the shooting lens;
The target object determination unit is specifically configured to: determine the target object in the shooting scene according to the depth of field parameter of each object to be determined.
15. The backlighting detection system of claim 13 wherein a backlighting reference weight for the target area is determined based on the target object.
16. The backlighting detection system of claim 9 wherein the image-dividing module comprises:
the light angle determining unit is used for determining the light angle in the shooting scene according to the recognition result of the shooting scene;
and the second dividing unit is used for dividing the target image into a plurality of target areas according to the light angle.
17. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the backlight detection method according to any one of claims 1-8 when executing the computer program.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the backlight detection method according to any one of claims 1 to 8.
CN202210279531.3A 2022-03-21 2022-03-21 Backlight detection method and system, electronic equipment and storage medium Pending CN114760422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210279531.3A CN114760422A (en) 2022-03-21 2022-03-21 Backlight detection method and system, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114760422A true CN114760422A (en) 2022-07-15

Family

ID=82327755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210279531.3A Pending CN114760422A (en) 2022-03-21 2022-03-21 Backlight detection method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114760422A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102196182A (en) * 2010-03-09 2011-09-21 株式会社理光 Backlight detection equipment and method
CN105245786A (en) * 2015-09-09 2016-01-13 厦门美图之家科技有限公司 Self-timer method based on intelligent light measurement, self-timer system and photographing terminal
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
WO2020001197A1 (en) * 2018-06-29 2020-01-02 Oppo广东移动通信有限公司 Image processing method, electronic device and computer readable storage medium
CN109005361A (en) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 Control method, device, imaging device, electronic equipment and readable storage medium storing program for executing
CN110111281A (en) * 2019-05-08 2019-08-09 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112312001A (en) * 2019-07-30 2021-02-02 北京百度网讯科技有限公司 Image detection method, device, equipment and computer storage medium
US20210056917A1 (en) * 2019-08-20 2021-02-25 Beijing Boe Optoelectronics Technology Co., Ltd. Method and device for backlight control, electronic device, and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination