CN111711767B - Automatic exposure control method and electronic equipment - Google Patents

Automatic exposure control method and electronic equipment

Info

Publication number
CN111711767B
CN111711767B (application CN202010586862.2A)
Authority
CN
China
Prior art keywords
brightness
target
current image
targetluma
curluma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010586862.2A
Other languages
Chinese (zh)
Other versions
CN111711767A (en
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhigan Electronic Technology Co ltd
Original Assignee
Suzhou Zhigan Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhigan Electronic Technology Co ltd filed Critical Suzhou Zhigan Electronic Technology Co ltd
Priority to CN202010586862.2A priority Critical patent/CN111711767B/en
Publication of CN111711767A publication Critical patent/CN111711767A/en
Application granted granted Critical
Publication of CN111711767B publication Critical patent/CN111711767B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an automatic exposure control method and an electronic device. The automatic exposure control method comprises the following steps: calculating the current image brightness CurLuma; dynamically calculating the target brightness TargetLuma from the 256-bin histogram, with the criterion that the number of image saturation points does not exceed a preset reference saturation point number SaturateRefNum; calculating an adjustment rate AdjustRatio from the current image brightness CurLuma, the target brightness TargetLuma and the degree of change of the image content; and correcting the target exposure amount TargetExpIndex according to the adjustment rate AdjustRatio and allocating exposure time and gain. The technical scheme of the invention overcomes various defects of prior-art automatic exposure methods; in particular it solves automatic exposure control in scenes of different brightness, keeps the exposure of the final image reasonably controlled, and noticeably improves the imaging quality.

Description

Automatic exposure control method and electronic equipment
Technical Field
The invention relates to the technical field of video processing, in particular to an automatic exposure control method and electronic equipment.
Background
The purpose of automatic exposure is to recognize the brightness level under different lighting conditions and scenes and to adjust the exposure parameters in real time, so that the captured video or image appears appropriately bright to the human eye. To achieve this, the lens aperture, the sensor exposure time, and the sensor analog and digital gains are adjusted. This adjustment process is called automatic exposure (Auto Exposure, AE).
A common automatic exposure control algorithm uses a fixed target brightness value as the adjustment target. It counts the brightness information of the current picture and, combined with the light-response characteristic of the CMOS image sensor, linearly adjusts the exposure time and gain of the front-end sensor, forming a feedback control loop. The steady state is reached when the brightness of the current picture converges into a certain interval around the set target brightness, so that the scene brightness is moderate.
The exposure control methods in the prior art adapt poorly to different scenes, so they cannot achieve the optimal brightness for each individual scene. In a traditional automatic exposure control algorithm the fixed target brightness is mostly the result of tuning and experience. It basically meets the requirements of most common scenes, but the uniqueness of each scene inevitably means that the overall average brightness of different scenes is not fixed; if all scenes are forced to converge to the same fixed target brightness, overexposure or underexposure easily degrades the picture quality.
In addition, automatic exposure usually adopts step-wise control, computing a step length by some rule so that the brightness of the current image gradually approaches the target brightness. If the adjustment step is too large, the image brightness approaches the target quickly but flicker appears; if the step is too small, the brightness transitions smoothly to the target but the adjustment is slow, and in a rapidly changing scene the late adjustment leaves the image overexposed or too dark.
In view of the above, there is a need to improve the exposure control method in the prior art to solve the above problems.
Disclosure of Invention
The invention aims to disclose an automatic exposure control method and an electronic device that overcome various defects of prior-art automatic exposure methods, and in particular solve the problem of automatic exposure control in scenes of different brightness, so that the exposure of the final image is reasonably controlled and the imaging quality is improved.
To achieve the first object, the present invention provides an automatic exposure control method, including the steps of:
s1, calculating the current image brightness CurLuma;
s2, dynamically calculating the target brightness TargetLuma according to the 256-bin histogram Hist_j, with the criterion that the number of image saturation points does not exceed a preset reference saturation point number SaturateRefNum;
s3, calculating an adjustment rate AdjustRatio according to the current image brightness CurLuma, the target brightness TargetLuma and the image content change degree;
s4, correcting the target exposure amount TargetExpIndex according to the adjustment rate AdjustRatio, and distributing the exposure time and gain.
As a further improvement of the present invention, the step S1 includes the steps of:
s101, dividing an image to be processed into N blocks;
s102, calculating the R_i, G_i, B_i component means of each block, and calculating the brightness Luma_i of each block from the R_i, G_i, B_i component means;
S103, assigning a weight Weight_i to each block and performing a weighted average over all blocks to obtain the current image brightness CurLuma.
As a further improvement of the present invention, in step S103 a weight Weight_i is assigned to each block and the current image brightness CurLuma is obtained by a weighted average over all blocks according to the following formula:
CurLuma = Σ_{i=0}^{N-1} (Weight_i * Luma_i) / Σ_{i=0}^{N-1} Weight_i
as a further improvement of the present invention, step S2 includes the steps of:
s201, judging whether the current image brightness CurLuma is smaller than a difference value between the lower limit of the preset target brightness and the lower limit of the stable range; if so, executing step S202, assigning the lower limit of the preset target brightness to the target brightness TargetLuma, otherwise, executing step S203;
s203, judging whether the current image brightness CurLuma is larger than the sum of the upper limit of the preset target brightness and the upper limit of the stable range; if yes, executing step S204, and assigning the upper limit of the preset target brightness to the target brightness TargetLuma; if not, go to step S205;
s205, calculating the total number histPixelNum of pixels participating in the statistics according to the 256-bin histogram Hist_j;
s206, calculating the preset reference saturation point number SaturateRefNum according to the preset reference saturation point number percentage SaturateRefPer;
s207, counting the current image saturation point number SaturateNum according to a preset reference saturation point pixel value SaturateRefBin;
s208, judging whether the current image saturation point number SaturateNum is more than a preset reference saturation point number SaturateRefNum; if yes, executing step 209, reducing the target brightness TargetLuma and calculating the target brightness TargetLuma; if not, go to step 210;
s210, estimating the estimated target brightness TmpTargetLuma that needs to be reached;
s211, when the estimated target brightness TmpTargetLuma is reached, judging whether the estimated saturation point number TmpSaturateNum is greater than the preset reference saturation point number SaturateRefNum;
if yes, skipping to execute the step S209;
if not, go to step S212;
s212, the estimated target luminance TmpTargetLuma is set as the target luminance TargetLuma.
As a further improvement of the present invention, the calculation formula of step S206 is as follows: SaturateRefNum = SaturateRefPer * histPixelNum;
the preset reference saturation point number SaturateRefNum is an upper limit on the number of image saturation points, and the image saturation point number SaturateNum must not be greater than the preset reference saturation point number SaturateRefNum.
As a further improvement of the present invention, the calculation formula of step S207 is as follows:
SaturateNum = Σ_{j=SaturateRefBin}^{255} Hist_j
where the parameter j represents the pixel value and j ∈ [SaturateRefBin, 255], and the parameter Hist_j represents the number of pixels whose pixel value is j.
As a further improvement of the present invention, the calculation formula for calculating the target brightness TargetLuma in step S209 is as follows:
TargetLuma=CurLuma*(SaturateRefBin/TmpSaturateRefBin);
wherein TmpSaturateRefBin is the estimated reference saturation point pixel value.
As a further improvement of the present invention, in the step S210, the estimated target brightness TmpTargetLuma is the sum of the upper limit of the preset target brightness and the upper limit of the stable range.
As a further improvement of the present invention, the step S3 includes the steps of:
s31, calculating the brightness difference between the current image brightness CurLuma and the target brightness TargetLuma to obtain a preliminary adjustment rate InitAdjustRatio;
s32, detecting the change degree of the image content to obtain a motion coefficient MotionFactor;
and S33, superimposing the motion coefficient on the preliminary adjustment rate to obtain the adjustment rate AdjustRatio.
As a further improvement of the present invention, in step S4 the calculation formula for correcting the target exposure amount TargetExpIndex according to the adjustment rate is:
TargetExpIndex=AdjustRatio*CurExposureTime*CurGain;
wherein CurExposureTime is the exposure time of the current image, and CurGain is the gain of the current image.
Based on the same inventive concept, the present application also discloses an electronic device, comprising:
a processor, at least one memory connected with the processor;
the memory stores computer program instructions which, when read and executed by a processor, perform the steps of the automatic exposure control method according to any of the above inventions.
Compared with the prior art, the invention has the beneficial effects that:
the invention overcomes various defects in the automatic exposure method in the prior art, particularly solves the automatic exposure control in scenes with different brightness, and improves the reasonable control of the exposure degree in the final imaging photo, thereby obviously improving the imaging quality.
Drawings
FIG. 1 is a general flowchart of an automatic exposure control method according to the present invention;
FIG. 2 is a flow chart of calculating the current image luminance CurLuma;
fig. 3 is a detailed flowchart of dynamically calculating the target luminance TargetLuma;
FIG. 4 is an overall flow chart of the calculation of the adjustment rate AdjustRatio;
FIG. 5 is a detailed flow chart of calculating the adjustment ratio AdjustRatio;
FIG. 6 is a schematic diagram of the 256-bin histogram used to calculate the target luminance TargetLuma;
FIG. 7 is a luminance matrix of a current image divided into 16 × 12 blocks;
FIG. 8 is a schematic diagram of calculating LBP values for the luminance matrix of the current image shown in FIG. 7 based on the LBP algorithm with a 3 × 3 operator, yielding a 14 × 10 LBP matrix;
FIG. 9 is a schematic diagram of the 14 × 10 LBP matrix of the current image, obtained by omitting the outermost ring of luminance blocks in FIG. 8 and applying the 3 × 3 operator to each remaining block;
FIG. 10 is a topology diagram of an electronic device of the present invention.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
The applicant first clarifies the meaning of several terms and symbols appearing in the present application.
The term "logic" includes any physical and tangible functionality for performing a task. For example, each operation illustrated in the flowcharts corresponds to a logic component for performing that operation. Operations may be performed using, for example, software running on a computing device, hardware (e.g., chip-implemented logic functions), etc., and/or any combination thereof. When implemented by a computing device, a logic component represents an electrical component that is a physical part of the computer system, however implemented.
The phrases "is configured as" and "is configured to" include any manner in which any kind of physical and tangible functionality may be constructed to perform the identified operation. The functionality may be configured to perform an operation using, for example, software running on a computing device, hardware (e.g., chip-implemented logic functions), etc., and/or any combination thereof.
The symbol "*" or "×" denotes the mathematical multiplication operator, and "/" denotes the mathematical division operator.
"Y" is "YES" in the judgment logic, and "N" is "NO" in the judgment logic.
Embodiment one:
referring to fig. 1 to 9, an embodiment of an automatic exposure control method (hereinafter referred to as "method") according to the present invention is disclosed.
The automatic exposure control method, referring to fig. 1, includes the following steps. Step S1, calculating the current image brightness CurLuma. Step S2, dynamically calculating the target brightness TargetLuma according to the 256-bin histogram Hist_j, with the criterion that the number of image saturation points does not exceed a preset reference saturation point number SaturateRefNum. Step S3, calculating the adjustment rate AdjustRatio according to the current image brightness CurLuma, the target brightness TargetLuma and the degree of change of the image content. Step S4, correcting the target exposure amount TargetExpIndex according to the adjustment rate AdjustRatio, and allocating exposure time and gain.
Referring to fig. 2, the step S1 includes the following steps:
step S101, dividing an image to be processed into N blocks.
For example, referring to fig. 7, when the image to be processed is divided into blocks on a 16 × 12 grid, N is 192; when it is divided into blocks on a 17 × 15 grid, N is 255. The parameter i denotes the index of the current block, i ∈ [0, N-1]. R_i, G_i, B_i denote the mean values of the R, G, B components of the i-th block.
Step S102, calculating the R_i, G_i, B_i component means of each block, and calculating the brightness Luma_i of each block from the R_i, G_i, B_i component means.
Specifically, there are two ways to calculate the block brightness from the R, G, B components. Method one: take the maximum of R_i, G_i, B_i as the block brightness Luma_i. This is a simple approximation that treats the largest component as the brightness value. Method two: calculate the brightness value Y by the RGB-to-YUV (image color space) conversion formula. To facilitate software calculation (e.g., in MATLAB), the following fixed-point formula can be used: Luma_i = (54*R_i + 183*G_i + 19*B_i) >> 8.
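For illustration, a minimal Python/NumPy sketch of steps S101 and S102 under method two is given below; the 16 × 12 grid and the fixed-point formula follow this embodiment, while the function and variable names are only illustrative:

import numpy as np

def block_lumas(image_rgb, blocks_x=16, blocks_y=12):
    # Step S101: divide the image to be processed into N = blocks_x * blocks_y blocks.
    h, w, _ = image_rgb.shape
    luma = np.zeros((blocks_y, blocks_x), dtype=np.int64)
    for by in range(blocks_y):
        for bx in range(blocks_x):
            block = image_rgb[by * h // blocks_y:(by + 1) * h // blocks_y,
                              bx * w // blocks_x:(bx + 1) * w // blocks_x]
            # Step S102: per-block component means R_i, G_i, B_i ...
            r = block[..., 0].mean()
            g = block[..., 1].mean()
            b = block[..., 2].mean()
            # ... and the fixed-point brightness Luma_i = (54*R + 183*G + 19*B) >> 8.
            luma[by, bx] = int(54 * r + 183 * g + 19 * b) >> 8
    return luma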
Step S103, assigning a weight Weight_i to each block and performing a weighted average over all blocks to obtain the current image brightness CurLuma.
In step S103, the weight Weight_i is assigned to each block and the current image brightness CurLuma is obtained by a weighted average over all blocks according to the following formula:
CurLuma = Σ_{i=0}^{N-1} (Weight_i * Luma_i) / Σ_{i=0}^{N-1} Weight_i
weight is given to each blockiThe method comprises the following three steps:
(1) each block is given the same weight; this method is called "average metering";
(2) blocks in the center of the image are given relatively higher weights and blocks at the periphery relatively lower weights; this method is called "center-weighted metering";
(3) the weights are dynamically assigned according to the brightness Luma_i of each block.
In this embodiment, any one of the above block weight assignment methods may be selected as desired.
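A sketch of the weighted average of step S103 follows; the uniform and centre-weighted masks correspond to methods (1) and (2) above, and the particular centre-weight profile is only an assumed example:

import numpy as np

def current_luma(luma_blocks, weights=None):
    # CurLuma = sum_i(Weight_i * Luma_i) / sum_i(Weight_i)
    if weights is None:
        weights = np.ones_like(luma_blocks)          # method (1): average metering
    return float((weights * luma_blocks).sum()) / float(weights.sum())

def center_weights(blocks_y=12, blocks_x=16):
    # Method (2): larger weight near the image centre, smaller towards the border.
    dy = np.abs(np.arange(blocks_y) - (blocks_y - 1) / 2.0)
    dx = np.abs(np.arange(blocks_x) - (blocks_x - 1) / 2.0)
    dist = dy[:, None] + dx[None, :]
    return dist.max() + 1.0 - dist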
Referring to fig. 3, a detailed implementation of step S2 in this embodiment is described in detail below, and step S2 includes the following steps.
Start.
Step S201, judging whether the current image brightness CurLuma is smaller than a difference value between the lower limit TargetLumaLow of the preset target brightness and the lower limit StableRangeLow of the stable range;
if yes, executing step S202, and assigning a lower limit TargetLumaLow of the preset target brightness to the target brightness TargetLuma;
if not, go to step S203.
In the present embodiment, the lower limit TargetLumaLow of the preset target brightness is assigned to the target brightness TargetLuma in order to avoid frequent and useless adjustment of the current image brightness between the lower limit StableRangeLow of the stable range and the lower limit TargetLumaLow of the preset target brightness, which helps to increase the adjustment speed of the automatic exposure and improves adaptability to the scene.
Step S203, judging whether the current image brightness CurLuma is larger than the sum of the upper limit TargetLumaHigh of the preset target brightness and the upper limit StableRangeHigh of the stable range; if yes, executing step S204 and assigning the upper limit TargetLumaHigh of the preset target brightness to the target brightness TargetLuma; if not, executing step S205. The upper limit TargetLumaHigh of the preset target brightness is the maximum value of the target brightness, and the target brightness TargetLuma must not be larger than the upper limit TargetLumaHigh of the preset target brightness.
Steps S201 to S204 perform a preliminary screening of the current image brightness: if the current image is too bright or too dark, the subsequent processing steps are not executed and the target brightness is directly set to the upper limit TargetLumaHigh or the lower limit TargetLumaLow of the preset target brightness, which reduces the overall computational overhead and helps to speed up the automatic exposure adjustment of the current image.
Step S205, calculating the total number histPixelNum of pixels participating in the statistics according to the 256-bin histogram Hist_j. The calculation formula of step S205 is as follows:
histPixelNum = Σ_{j=0}^{255} Hist_j
where the parameter j represents the pixel value and j ∈ [0, 255], and Hist_j represents the number of pixels whose pixel value is j.
Step S206, calculating the preset reference saturation point number SaturateRefNum according to the preset reference saturation point number percentage SaturateRefPer.
The calculation formula involved in step S206 is:
SaturateRefNum=SaturateRefPer*histPixelNum。
In this embodiment, the preset reference saturation point number SaturateRefNum is an upper limit on the number of image saturation points, i.e. it is always ensured that the image saturation point number SaturateNum is not greater than the preset reference saturation point number SaturateRefNum.
Step S207, counting the current image saturation point number SaturateNum according to a preset reference saturation point pixel value SaturateRefBin; wherein,
the calculation formula of step S207 is as follows:
SaturateNum = Σ_{j=SaturateRefBin}^{255} Hist_j
where the parameter j represents the pixel value and j ∈ [SaturateRefBin, 255], and the parameter Hist_j represents the number of pixels whose pixel value is j.
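A sketch of steps S205 to S207 over the 256-bin histogram Hist_j; the percentage SaturateRefPer and the pixel value SaturateRefBin are tuning parameters, and the default values shown are only placeholders:

def histogram_statistics(hist, saturate_ref_per=0.01, saturate_ref_bin=250):
    # hist[j] = number of pixels whose pixel value is j, for j in [0, 255].
    assert len(hist) == 256
    # S205: total number of pixels that took part in the statistics.
    hist_pixel_num = sum(hist)
    # S206: SaturateRefNum = SaturateRefPer * histPixelNum.
    saturate_ref_num = saturate_ref_per * hist_pixel_num
    # S207: SaturateNum = sum of Hist_j for j in [SaturateRefBin, 255].
    saturate_num = sum(hist[saturate_ref_bin:])
    return hist_pixel_num, saturate_ref_num, saturate_num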
Step S208, judging whether the current image saturation point number SaturateNum is greater than the preset reference saturation point number SaturateRefNum; if yes, executing step S209, reducing the target brightness TargetLuma and calculating the target brightness TargetLuma; if not, executing step S210.
In this embodiment, if the current image saturation point number SaturateNum is greater than the preset reference saturation point number SaturateRefNum, the number of saturation points in the current image is too high, which shows that the current image is already too bright (i.e. overexposed) and the target brightness TargetLuma must be reduced. The criterion for the magnitude of the reduction is: after the current image brightness CurLuma has been reduced to the target brightness TargetLuma, the number of saturation points contained in the image should not exceed the preset reference saturation point number.
In step S209, the calculation formula for calculating the target luminance TargetLuma is as follows:
TargetLuma=CurLuma*(SaturateRefBin/TmpSaturateRefBin)。
In the above calculation formula, TmpSaturateRefBin is the estimated reference saturation point pixel value.
As shown in fig. 6, in steps S205 to S209, during image acquisition and formation by a CMOS sensor or a CCD sensor, the image brightness and the 256-bin histogram Hist_j form an approximately linear relationship, so that the target brightness TargetLuma can be calculated relatively accurately. In fig. 6, the horizontal axis represents the gray value and the vertical axis represents the number of pixels at that gray value. When the current image is darker, the 256-bin histogram Hist_j is shifted further to the left and there are fewer bright pixels; when the current image is brighter, the histogram is shifted further to the right and there are more bright pixels.
Since the image brightness and the histogram statistics have an approximately linear relationship, it is possible to estimate what the histogram statistics would be if the current image brightness CurLuma were reduced to the target brightness TargetLuma, without actually performing that adjustment. First, while the brightness is still the current image brightness CurLuma, the estimated reference saturation point pixel value TmpSaturateRefBin is recomputed with the screening criterion that the image saturation point number SaturateNum just does not exceed the preset reference saturation point number SaturateRefNum.
As shown in fig. 6, pixels are counted starting from the pixel value 255 and moving to the left, as long as the screening criterion above is satisfied. If the pixel value J of the current image satisfies both of the following formulas:
Σ_{j=J}^{255} Hist_j ≤ SaturateRefNum
Σ_{j=J-1}^{255} Hist_j > SaturateRefNum
then, under the screening criterion that the estimated image saturation point number TmpSaturateNum just does not exceed the preset reference saturation point number SaturateRefNum, the estimated reference saturation point pixel value TmpSaturateRefBin should be changed to the pixel value J, that is, TmpSaturateRefBin = J. Here the uppercase pixel value J is distinguished from the lowercase pixel value j: J denotes the particular value in [0, 255] that satisfies both formulas above, while j runs over the range from the preset reference saturation point pixel value SaturateRefBin to 255, as defined in step S207.
Specifically, in fig. 6, the sum of the pixels in box 1 is less than the preset reference saturation point number SaturateRefNum, while the sum of the pixels in box 2 is greater than the preset reference saturation point number SaturateRefNum.
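A sketch of the reduction performed in step S209: the histogram is scanned from pixel value 255 to the left to find the estimated reference saturation point pixel value TmpSaturateRefBin (the largest value at which the accumulated count still does not exceed SaturateRefNum), and the target brightness is then rescaled; the guard against a zero divisor is an added safety assumption:

def reduce_target_luma(hist, cur_luma, saturate_ref_bin, saturate_ref_num):
    accumulated = 0
    tmp_saturate_ref_bin = 255
    # Count pixels from value 255 leftwards while the accumulated count stays
    # no greater than the preset reference saturation point number.
    for j in range(255, -1, -1):
        if accumulated + hist[j] > saturate_ref_num:
            break
        accumulated += hist[j]
        tmp_saturate_ref_bin = j
    tmp_saturate_ref_bin = max(tmp_saturate_ref_bin, 1)   # avoid division by zero
    # TargetLuma = CurLuma * (SaturateRefBin / TmpSaturateRefBin)
    return cur_luma * saturate_ref_bin / tmp_saturate_ref_bin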
Step S210, estimating the estimated target brightness TmpTargetLuma that needs to be reached; step S210 determines the specific value TmpTargetLuma to which the brightness of the current image would be adjusted.
The estimated target brightness TmpTargetLuma is the sum of the upper limit TargetLumaHigh of the preset target brightness and the upper limit StableRangeHigh of the stable range; the calculation formula of step S210 is: TmpTargetLuma = TargetLumaHigh + StableRangeHigh.
step S211, when the estimated target brightness TmpTargetLuma is reached, judging whether the estimated saturation point number TmpDaturateNum is larger than a preset reference saturation point number SaturateRefNum;
if yes, skipping to execute the step S209;
if not, go to step S212;
In step S212, the estimated target brightness TmpTargetLuma is set as the target brightness TargetLuma.
Through step S212, the target brightness TargetLuma of the current image approaches the estimated target brightness TmpTargetLuma, so that the target brightness produced by the automatic exposure control method disclosed by the invention tends toward its true value and the defects of overexposure and/or underexposure are prevented.
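For completeness, the tail of step S2 (steps S210 to S212) can be sketched as follows. The text does not spell out how TmpSaturateNum is predicted once the brightness is raised to TmpTargetLuma, so the proportional rescaling of the saturation threshold used here is an assumption based on the approximately linear brightness/histogram relationship described above:

def estimated_target_luma(hist, cur_luma, target_luma_high, stable_range_high,
                          saturate_ref_bin, saturate_ref_num):
    # S210: TmpTargetLuma = TargetLumaHigh + StableRangeHigh.
    tmp_target_luma = target_luma_high + stable_range_high
    # Assumed estimate: brightening by tmp_target_luma / cur_luma moves every
    # pixel above this threshold into the saturated range [SaturateRefBin, 255].
    threshold = int(saturate_ref_bin * cur_luma / tmp_target_luma)
    tmp_saturate_num = sum(hist[max(threshold, 0):])
    # S211: accept the estimate only if it would not create too many saturation points.
    if tmp_saturate_num > saturate_ref_num:
        return None            # fall back to the reduction of step S209
    # S212: the estimated target brightness becomes the target brightness.
    return tmp_target_luma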
In step S210, the estimated target brightness TmpTargetLuma is the sum of the upper limit of the preset target brightness and the upper limit of the stable range. In step S2 of this embodiment, the target brightness TargetLuma is calculated dynamically on the basis of the 256-bin histogram statistics Hist_j of the image, with the judgment criterion being whether the current image saturation point number SaturateNum is greater than the preset reference saturation point number SaturateRefNum. This ensures that the image brightness is raised as much as possible without overexposing the image, so that the dynamic range of the sensor acquiring the image (for example a CMOS sensor or a CCD sensor) is used as fully as possible to show image details.
In the embodiment, 256 pieces of histogram statistical information Hist of the imagejProvided by a hardware module, reflecting a gray value [0,255]]The number of pixels per pixel value in the range. The hardware module is specifically an AE statistical information unit built in an integrated circuit chip such as a camera device DSP, and the AE statistical information unit is used for acquiring 256 sections of histogram statistical information HistjThe operation of (2). The AE-based statistical information unit is a prior art and is not described in detail in this application.
After steps S1 and S2 are completed, the automatic exposure of the current image no longer needs to converge to one and the same fixed target brightness, and underexposure or overexposure can be avoided.
As shown in fig. 4, in the present embodiment, the step S3 includes the following steps:
S31, calculating the brightness difference between the current image brightness CurLuma and the target brightness TargetLuma to obtain the preliminary adjustment rate InitAdjustRatio. S32, detecting the degree of change of the image content to obtain the motion coefficient MotionFactor. S33, superimposing the motion coefficient on the preliminary adjustment rate to obtain the adjustment rate AdjustRatio. By taking into account the current image brightness CurLuma, the target brightness TargetLuma and the change of the image content, step S3 resolves the overexposure or underexposure that prior-art automatic exposure control methods suffer in rapidly changing scenes (for example, a scene whose brightness changes constantly because of moving clouds).
The adjustment rate AdjustRatio is expressed on a base of 256 and indicates the ratio of the target exposure amount TargetExpIndex to the current exposure amount CurExpIndex. If the adjustment rate AdjustRatio is 256, the ratio of the target exposure amount to the current exposure amount is 1, i.e. the brightness is not adjusted; if the adjustment rate AdjustRatio is 512, the ratio of the target exposure amount to the current exposure amount is 2, i.e. the brightness is increased at double speed; if the adjustment rate AdjustRatio is 128, the ratio of the target exposure amount to the current exposure amount is 1/2, i.e. the brightness is reduced by half.
By analogy, when the adjustment rate AdjustRatio is greater than 256, the larger the difference, the faster the brightness increases; when the adjustment rate AdjustRatio is less than 256, the larger the difference, the faster the brightness decreases.
The core idea of calculating the preliminary adjustment rate from the brightness difference between the current image brightness CurLuma and the target brightness TargetLuma is as follows: when the difference between the current image brightness CurLuma and the target brightness TargetLuma is large, the adjustment rate AdjustRatio is increased, indicating fast adjustment; when the difference between the two is small, the adjustment rate AdjustRatio is reduced, indicating slow adjustment. Parameters of different sizes are used to indicate different brightness differences, and different parameters indicate different step lengths.
The detection of the degree of change of the image content is performed by using the luminance information of the blocks obtained in step S1, and the specific implementation process is shown in fig. 7 to 9.
Fig. 7 shows a luminance matrix of a current image, which is divided into 16 × 12 blocks.
Referring to fig. 8, the LBP algorithm is used with a 3 × 3 operator. When the LBP matrix is calculated from the 16 × 12 luminance matrix, the luminance blocks in the outermost ring are ignored in order to simplify the calculation, and the LBP values are calculated starting from the block in the second row and second column, so that the resulting LBP matrix is 14 × 10.
Within the 3 × 3 detection window, the LBP algorithm takes the gray value of the center pixel of the window as the threshold and compares the gray values of the 8 neighbouring pixels with it: if a surrounding pixel is greater than the center pixel, its position is marked 1; otherwise it is marked 0. The comparison of the 8 surrounding pixels in the 3 × 3 neighbourhood thus produces an 8-bit binary number, as shown in fig. 8. Fig. 9 shows the decimal number, i.e. the LBP value, corresponding to the 8-bit binary number of fig. 8. The 8 bits are arranged clockwise starting from the upper-left corner of the detection window, and the resulting binary number is converted to decimal to obtain the LBP value (local binary pattern value), e.g. (01111100) in binary equals 124. Ignoring the outermost ring of luminance blocks and applying the 3 × 3 operator to each remaining block finally yields the 14 × 10 LBP matrix of the current image.
The calculation of LBPMatrix (i.e. the LBP matrix) is controlled by the detection phase DetectPhase. The detection phase DetectPhase controls how many image frames apart the detection of the image content is performed, i.e. the detection frequency. If DetectPhase is set too large, image-content change detection is performed between frames that are far apart, so the change of the image content has little influence on the adjustment rate; if DetectPhase is set too small, the detection is performed between frames that are close together, so the change of the image content has a large influence on the adjustment rate. For example, with a detection phase DetectPhase of 5, the degree of change of the image content is detected every 5 frames.
According to the detection phase DetectPhase, the image frames are divided into reference frames, skipped frames and detection frames. LBPMatrix is calculated for both the reference frame and the detection frame; at the detection frame the degree of change relative to the reference frame is calculated, while skipped frames are not processed at all. For example, if the detection phase DetectPhase is 5, the 0th frame is a reference frame, the 1st to 4th frames are skipped frames and the 5th frame is a detection frame; the 6th frame is a reference frame, the 7th to 9th frames are skipped frames, the 10th frame is a detection frame, and so on.
The degree of change between the two detected frames can be characterized by the Euclidean distance LBPDistance between their two LBPMatrix values. The Euclidean distance reflects the degree of difference between the two matrices: if it is large, the two matrices differ greatly and the image content has changed greatly; if it is small, the matrices differ little and the image content has changed little.
The Euclidean distance LBPDistance is compared with the motion detection threshold MotionDetectThr to quantify the degree of change of the image content. If LBPDistance ≤ MotionDetectThr, the image content is considered essentially unchanged and the motion coefficient MotionFactor is 1; if LBPDistance > MotionDetectThr, the image content is considered to have changed greatly, and the motion coefficient MotionFactor is then calculated as:
MotionFactor=1+(LBPDistance–MotionDetectThr)/MotionDetectThr;
The motion coefficient MotionFactor in the above formula is used to calculate the adjustment rate AdjustRatio.
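A sketch of the content-change detection described above: a 3 × 3 LBP operator applied to the block-luminance matrix (ignoring the outermost ring), the Euclidean distance between two LBP matrices, and the motion coefficient. The clockwise bit order starting at the upper-left neighbour follows the text; MotionDetectThr is a tuning parameter:

import numpy as np

def lbp_matrix(luma_blocks):
    # A 16 x 12 block-luminance matrix (12 rows x 16 columns) yields a 14 x 10 LBP matrix.
    rows, cols = luma_blocks.shape
    # Clockwise neighbour offsets starting at the upper-left corner of the 3 x 3 window.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros((rows - 2, cols - 2), dtype=np.int32)
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            center = luma_blocks[y, x]
            code = 0
            for dy, dx in offsets:
                # Neighbour greater than the centre -> bit 1, otherwise bit 0.
                code = (code << 1) | int(luma_blocks[y + dy, x + dx] > center)
            lbp[y - 1, x - 1] = code
    return lbp

def motion_coefficient(lbp_reference, lbp_detected, motion_detect_thr):
    # Euclidean distance LBPDistance between the two LBP matrices.
    lbp_distance = float(np.linalg.norm(lbp_reference.astype(float) - lbp_detected.astype(float)))
    if lbp_distance <= motion_detect_thr:
        return 1.0                                     # content essentially unchanged
    # MotionFactor = 1 + (LBPDistance - MotionDetectThr) / MotionDetectThr
    return 1.0 + (lbp_distance - motion_detect_thr) / motion_detect_thr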
As shown in fig. 5, in the present embodiment, the detailed process of calculating the adjustment rate AdjustRatio in step S3 includes the following steps.
Start.
Step S301, judging whether the sum of the current image brightness CurLuma and the parameter BigGapRange is smaller than the target brightness TargetLuma; if yes, go to step S302; if not, executing step S303;
Step S302, calculating the preliminary adjustment rate InitAdjustRatio, where the calculation formula of the preliminary adjustment rate InitAdjustRatio is as follows:
InitAdjustRatio = 256 + (TargetLuma - CurLuma) * FastStep / CurLuma,
wherein BigGapRange characterizes a large difference between the current image brightness CurLuma and the target brightness TargetLuma, and FastStep is the step length used when the difference between the current image brightness CurLuma and the target brightness TargetLuma is large. A larger BigGapRange represents a larger difference between the current image brightness CurLuma and the target brightness TargetLuma; conversely, a smaller BigGapRange represents a smaller difference between them. The difference between the current image brightness CurLuma and the target brightness TargetLuma is the absolute value of the difference of the image brightness values.
Step S303, judging whether the sum of the current image brightness CurLuma and the target range low end TargetLowRange is smaller than the target brightness TargetLuma, if so, executing step S304; if not, go to step S305.
Step S304, the preliminary adjustment rate InitAdjustRatio is calculated as follows:
InitAdjustRatio = 256 + (TargetLuma - CurLuma) * SlowStep / CurLuma,
wherein TargetLowRange is a parameter characterizing a small difference between the current image brightness and the target brightness.
If the sum of the current image brightness CurLuma and TargetLowRange is smaller than the target brightness TargetLuma, the current image brightness is below the target brightness but the gap is small. In this case the brightness should be increased at a slower speed during automatic exposure, so a slower preliminary adjustment rate greater than 256 is configured. The parameter SlowStep is the step length used when the difference between the current image brightness and the target brightness TargetLuma is small, and is used to calculate the slow adjustment rate.
Step S305, judging whether the current image brightness CurLuma is smaller than the sum of the target-range high end TargetHighRange and the target brightness TargetLuma; if yes, executing step S306; if not, executing step S308.
S306, judging whether the sum of the current image brightness CurLuma and the tolerance is smaller than the target brightness; if yes, go to step S307; if not, go to step S310.
The parameter Tolerance represents the tolerance. If the current image brightness CurLuma lies within plus or minus Tolerance of the target brightness TargetLuma (i.e. within the interval formed by the positive and negative tolerance), InitAdjustRatio is 256, indicating that the current image brightness is stable and the preliminary adjustment rate need not change. If the current image brightness CurLuma lies outside the positive and negative tolerance of the target brightness, InitAdjustRatio equals FineRatioLow or FineRatioHigh. The parameter FineRatioLow denotes the low-side fine ratio and the parameter FineRatioHigh the high-side fine ratio, both used for fine adjustment. If the parameter FineRatioLow is greater than 256, the brightness of the current image is raised; if the parameter FineRatioHigh is less than 256, the brightness of the current image is lowered.
Step S308, judging whether the brightness of the current image is smaller than the sum of the target brightness TargetLuma and BigGapRange; if yes, executing step S313; if not, executing step S309.
If the sum of the current image brightness CurLuma and the tolerance is less than the target brightness TargetLuma, step S307 is executed and the preliminary adjustment rate InitAdjustRatio is set to FineRatioLow; if the sum of the current image brightness CurLuma and the tolerance is greater than or equal to the target brightness TargetLuma, step S310 is executed to judge whether the current image brightness CurLuma is greater than the sum of the target brightness TargetLuma and the tolerance;
if yes, step S311 is executed and the preliminary adjustment rate InitAdjustRatio is set to FineRatioHigh; if not, step S312 is executed and the preliminary adjustment rate InitAdjustRatio is set to 256.
Step S309, InitAdjustRatio = 256 - (CurLuma - TargetLuma) * FastStep / CurLuma.
Step S313, InitAdjustRatio = 256 - (CurLuma - TargetLuma) * SlowStep / CurLuma.
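The branch structure of steps S301 to S313 can be condensed into the following sketch; parameter names mirror those above and the branch ordering follows fig. 5:

def preliminary_adjust_ratio(cur_luma, target_luma, big_gap_range, target_low_range,
                             target_high_range, tolerance, fast_step, slow_step,
                             fine_ratio_low, fine_ratio_high):
    # Base of 256: 256 means the brightness is kept unchanged.
    if cur_luma + big_gap_range < target_luma:                 # S301 -> S302
        return 256 + (target_luma - cur_luma) * fast_step // cur_luma
    if cur_luma + target_low_range < target_luma:              # S303 -> S304
        return 256 + (target_luma - cur_luma) * slow_step // cur_luma
    if cur_luma < target_high_range + target_luma:             # S305
        if cur_luma + tolerance < target_luma:                 # S306 -> S307
            return fine_ratio_low
        if cur_luma > target_luma + tolerance:                 # S310 -> S311
            return fine_ratio_high
        return 256                                             # S312: brightness stable
    if cur_luma < target_luma + big_gap_range:                 # S308 -> S313
        return 256 - (cur_luma - target_luma) * slow_step // cur_luma
    return 256 - (cur_luma - target_luma) * fast_step // cur_luma   # S309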
In this embodiment, as shown in fig. 5, steps S309, S313, S312, S311, S307, S304 and S302 all proceed to step S314, which judges whether the current image frame is a skipped frame; if yes, step S316 is executed and the motion coefficient is kept unchanged; if not, step S317 is executed to judge whether the current image frame is a reference frame; if yes, step S318 is executed, LBPMatrix is calculated and the motion coefficient is kept unchanged; if not, step S319 is executed, LBPMatrix is calculated and the motion coefficient is calculated. Steps S318 and S319 both proceed to step S320, which calculates the adjustment rate AdjustRatio.
End.
Through steps S301 to S320 of step S3 in this embodiment, the calculation of the adjustment rate AdjustRatio refers to the brightness difference between the current image brightness CurLuma and the target brightness TargetLuma and combines it with the change of the image content. This effectively meets the need for fast automatic exposure adjustment in different scenes, and in particular adapts to scenes whose ambient brightness changes rapidly, so that the exposure control during automatic exposure of the current image converges better.
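How the motion coefficient is superimposed on the preliminary adjustment rate is not given as a formula in the text, so scaling the deviation from the neutral value 256 in the sketch below is an assumption; the reference/skipped/detection frame handling follows steps S314 to S320, with the cycle length DetectPhase + 1 also assumed. The sketch reuses lbp_matrix and motion_coefficient from the earlier sketch:

def adjust_ratio_for_frame(frame_index, detect_phase, luma_blocks, state,
                           init_adjust_ratio, motion_detect_thr):
    phase = frame_index % (detect_phase + 1)      # assumed scheduling cycle
    if phase == 0:                                # reference frame (S318)
        state['lbp_reference'] = lbp_matrix(luma_blocks)
    elif phase == detect_phase:                   # detection frame (S319)
        lbp_detected = lbp_matrix(luma_blocks)
        state['motion_factor'] = motion_coefficient(state['lbp_reference'],
                                                    lbp_detected, motion_detect_thr)
    # skipped frames (S316): the motion coefficient is kept unchanged
    motion_factor = state.get('motion_factor', 1.0)
    # S320 (assumed combination): scale the deviation from the neutral value 256.
    return 256 + (init_adjust_ratio - 256) * motion_factor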
Finally, in step S4, the calculation formula for correcting the target exposure amount targetexplndex according to the adjustment rate AdjustRatio is:
TargetExpIndex=AdjustRatio*CurExposureTime*CurGain;
wherein CurExposureTime is the exposure time of the current image and CurGain is the gain of the current image. After the target exposure amount TargetExpIndex has been calculated, the target exposure time TargetExposureTime and the target gain TargetGain must be calculated. Once the target exposure time and target gain have been calculated, they are issued to the image sensor, and the new exposure time and gain take effect on that sensor. The method of calculating the target exposure time TargetExposureTime and the target gain TargetGain from the target exposure amount TargetExpIndex is prior art; in this embodiment an exposure table is used, shown in Table 1 below.
Exposure time (ExposureTime) Gain (Gain)
100 256
40000 256
40000 256000
Table one: exposure meter
In Table 1, the product of the exposure time (ExposureTime) and the gain (Gain) of a row is called an exposure node, so Table 1 contains three exposure nodes. Depending on between which two nodes the target exposure amount TargetExpIndex falls, the corresponding target exposure time TargetExposureTime and target gain TargetGain are calculated. The exposure time and gain values in Table 1 are abstract values representing relative, not actual, magnitudes.
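A sketch of step S4 using the exposure table of Table 1. Two details are assumptions rather than statements of the text: the division by 256 that removes the 256 base of AdjustRatio, and the node-walking rule that keeps one of the two quantities fixed inside a segment (which is how the three nodes of Table 1 are laid out):

EXPOSURE_TABLE = [(100, 256), (40000, 256), (40000, 256000)]   # (ExposureTime, Gain) nodes

def allocate_exposure(target_exp_index, table=EXPOSURE_TABLE):
    # Split TargetExpIndex = TargetExposureTime * TargetGain along the table nodes.
    first_time, first_gain = table[0]
    if target_exp_index <= first_time * first_gain:
        return first_time, first_gain                      # clamp below the first node
    for (t0, g0), (t1, g1) in zip(table, table[1:]):
        if target_exp_index <= t1 * g1:
            if g0 == g1:                                   # gain fixed, stretch exposure time
                return target_exp_index // g0, g0
            return t0, target_exp_index // t0              # time fixed, raise gain
    return table[-1]                                       # clamp above the last node

def target_exposure(adjust_ratio, cur_exposure_time, cur_gain):
    # TargetExpIndex = AdjustRatio * CurExposureTime * CurGain (AdjustRatio on a base
    # of 256, hence the assumed division by 256 here).
    target_exp_index = adjust_ratio * cur_exposure_time * cur_gain // 256
    return allocate_exposure(target_exp_index)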
Embodiment two:
With reference to fig. 10 and in combination with the automatic exposure control method disclosed in embodiment one, the present embodiment further discloses an embodiment of an electronic device based on that automatic exposure control method.
In the present embodiment, an electronic device 400 includes:
a processor 41, and at least one memory 42 connected to the processor 41.
The memory 42 stores computer program instructions that, when read and executed by a processor 41, perform the steps of an automatic exposure control method as disclosed in one embodiment.
In practice the electronic device 400 may be configured as a camera, a mobile camera, an electronic computer or wearable device with a camera unit, or even as a computer, a cluster server, a cloud platform, a data center or a hyper-converged appliance.
It should be noted that the memory 42 in this embodiment may consist of one or more memory units, and each memory unit may be configured as a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), RAID 0 to RAID 7, NAND flash, or the like.
The electronic device disclosed in this embodiment and the automatic exposure control method disclosed in the first embodiment have the same technical solutions, please refer to the description of the first embodiment, and are not described herein again.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (10)

1. An automatic exposure control method is characterized by comprising the following steps:
s1, calculating the current image brightness CurLuma;
s2, dynamically calculating the target brightness TargetLuma according to the 256-bin histogram Hist_j, with the criterion that the number of image saturation points does not exceed a preset reference saturation point number SaturateRefNum;
s3, calculating an adjustment rate AdjustRatio according to the current image brightness CurLuma, the target brightness TargetLuma and the image content change degree;
s4, correcting the target exposure amount TargetExpIndex according to the adjustment rate AdjustRatio, and distributing exposure time and gain;
step S2 includes the following steps:
s201, judging whether the current image brightness CurLuma is smaller than a difference value between the lower limit of the preset target brightness and the lower limit of the stable range; if so, executing step S202, assigning the lower limit of the preset target brightness to the target brightness TargetLuma, otherwise, executing step S203;
s203, judging whether the current image brightness CurLuma is larger than the sum of the upper limit of the preset target brightness and the upper limit of the stable range; if yes, executing step S204, and assigning the upper limit of the preset target brightness to the target brightness TargetLuma; if not, go to step S205;
s205, calculating the total number histPixelNum of pixels participating in the statistics according to the 256-bin histogram Hist_j;
s206, calculating the preset reference saturation point number SaturateRefNum according to the preset reference saturation point number percentage SaturateRefPer;
s207, counting the current image saturation point number SaturateNum according to a preset reference saturation point pixel value SaturateRefBin;
s208, judging whether the current image saturation point number SaturateNum is more than a preset reference saturation point number SaturateRefNum; if yes, executing step 209, reducing the target brightness TargetLuma and calculating the target brightness TargetLuma; if not, go to step 210;
s210, estimating the estimated target brightness TmpTargetLuma that needs to be reached;
s211, when the estimated target brightness TmpTargetLuma is reached, judging whether the estimated saturation point number TmpSaturateNum is greater than the preset reference saturation point number SaturateRefNum;
if yes, skipping to execute the step S209;
if not, go to step S212;
s212, the estimated target luminance TmpTargetLuma is set as the target luminance TargetLuma.
2. The automatic exposure control method according to claim 1, wherein the step S1 includes the steps of:
s101, dividing an image to be processed into N blocks;
s102, calculating the R_i, G_i, B_i component means of each block, and calculating the brightness Luma_i of each block from the R_i, G_i, B_i component means;
S103, assigning a weight Weight_i to each block and performing a weighted average over all blocks to obtain the current image brightness CurLuma.
3. The automatic exposure control method according to claim 2, wherein in step S103 a weight Weight_i is assigned to each block and the current image brightness CurLuma is obtained by a weighted average over all blocks according to the following formula:
CurLuma = Σ_{i=0}^{N-1} (Weight_i * Luma_i) / Σ_{i=0}^{N-1} Weight_i
4. The automatic exposure control method according to claim 1, wherein the calculation formula of step S206 is as follows: SaturateRefNum = SaturateRefPer * histPixelNum;
wherein the preset reference saturation point number SaturateRefNum is an upper limit on the number of image saturation points.
5. The automatic exposure control method according to claim 1, wherein the calculation formula of step S207 is as follows:
SaturateNum = Σ_{j=SaturateRefBin}^{255} Hist_j
where the parameter j represents the pixel value and j ∈ [SaturateRefBin, 255], and the parameter Hist_j represents the number of pixels whose pixel value is j.
6. The automatic exposure control method according to claim 1, wherein the calculation formula of the target brightness TargetLuma in step S209 is as follows:
TargetLuma=CurLuma*(SaturateRefBin/TmpSaturateRefBin);
wherein TmpSaturateRefBin is the estimated reference saturation point pixel value.
7. The automatic exposure control method according to claim 1, wherein the estimated target luminance TmpTargetLuma is a sum of an upper limit of a preset target luminance and an upper limit of a stable range in the step S210.
8. The automatic exposure control method according to claim 1, wherein the step S3 includes the steps of:
s31, calculating the brightness difference between the current image brightness CurLuma and the target brightness TargetLuma to obtain a preliminary adjustment rate InitAdjustRatio;
s32, detecting the change degree of the image content to obtain a motion coefficient MotionFactor;
and S33, superimposing the motion coefficient on the preliminary adjustment rate to obtain the adjustment rate AdjustRatio.
9. The automatic exposure control method according to claim 1, wherein in step S4 the calculation formula for correcting the target exposure amount TargetExpIndex according to the adjustment rate is:
TargetExpIndex=AdjustRatio*CurExposureTime*CurGain;
wherein CurExposureTime is the exposure time of the current image, and CurGain is the gain of the current image.
10. An electronic device (400), comprising:
a processor (41), at least one memory (42) connected to the processor (41);
the memory (42) stores computer program instructions which, when read and executed by a processor (41), perform the steps of the automatic exposure control method according to any one of claims 1 to 9.
CN202010586862.2A 2020-06-24 2020-06-24 Automatic exposure control method and electronic equipment Active CN111711767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010586862.2A CN111711767B (en) 2020-06-24 2020-06-24 Automatic exposure control method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010586862.2A CN111711767B (en) 2020-06-24 2020-06-24 Automatic exposure control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111711767A CN111711767A (en) 2020-09-25
CN111711767B true CN111711767B (en) 2021-11-02

Family

ID=72542361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010586862.2A Active CN111711767B (en) 2020-06-24 2020-06-24 Automatic exposure control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111711767B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112312036A (en) * 2020-10-30 2021-02-02 天津天地伟业智能安全防范科技有限公司 Automatic exposure method in photographing field
CN112788250B (en) * 2021-02-01 2022-06-17 青岛海泰新光科技股份有限公司 Automatic exposure control method based on FPGA
CN113347369B (en) * 2021-06-01 2022-08-19 中国科学院光电技术研究所 Deep space exploration camera exposure adjusting method, adjusting system and adjusting device thereof
CN114449175A (en) * 2022-01-13 2022-05-06 瑞芯微电子股份有限公司 Automatic exposure adjusting method, automatic exposure adjusting device, image acquisition method, medium and equipment
CN116709003A (en) * 2022-10-09 2023-09-05 荣耀终端有限公司 Image processing method and electronic equipment
CN116528054A (en) * 2023-05-17 2023-08-01 珠海凌烟阁芯片科技有限公司 Self-adjusting method and device for imaging brightness of camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101529891A (en) * 2006-08-22 2009-09-09 高通股份有限公司 Dynamic automatic exposure compensation for image capture devices
CN103826066A (en) * 2014-02-26 2014-05-28 芯原微电子(上海)有限公司 Automatic exposure adjusting method and system
CN109889733A (en) * 2019-03-25 2019-06-14 福州瑞芯微电子股份有限公司 A kind of automatic exposure compensation method, storage medium and computer
CN110519523A (en) * 2019-08-24 2019-11-29 苏州酷豆物联科技有限公司 A kind of exposure regulating method and its acquisition device based on target area analysis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713778B (en) * 2016-12-28 2019-04-23 上海兴芯微电子科技有限公司 Exposal control method and device
US10602075B2 (en) * 2017-09-12 2020-03-24 Adobe Inc. Automatically determining a set of exposure values for a high dynamic range image capture device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101529891A (en) * 2006-08-22 2009-09-09 高通股份有限公司 Dynamic automatic exposure compensation for image capture devices
CN103826066A (en) * 2014-02-26 2014-05-28 芯原微电子(上海)有限公司 Automatic exposure adjusting method and system
CN109889733A (en) * 2019-03-25 2019-06-14 福州瑞芯微电子股份有限公司 A kind of automatic exposure compensation method, storage medium and computer
CN110519523A (en) * 2019-08-24 2019-11-29 苏州酷豆物联科技有限公司 A kind of exposure regulating method and its acquisition device based on target area analysis

Also Published As

Publication number Publication date
CN111711767A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
CN111711767B (en) Automatic exposure control method and electronic equipment
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN110099222B (en) Exposure adjusting method and device for shooting equipment, storage medium and equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109936698B (en) Automatic exposure control method and device, electronic equipment and storage medium
CN110839129A (en) Image processing method and device and mobile terminal
CN110248112B (en) Exposure control method of image sensor
CN101739672B (en) A kind of histogram equalizing method based on sub-regional interpolation and device
CN106412447A (en) Exposure control system and method thereof
CN108337447A (en) High dynamic range images exposure compensating value-acquiring method, device, equipment and medium
CN111225162B (en) Image exposure control method, system, readable storage medium and camera equipment
CN112738411B (en) Exposure adjusting method, exposure adjusting device, electronic equipment and storage medium
KR101972032B1 (en) Adaptive exposure control apparatus for a camera
CN110881108B (en) Image processing method and image processing apparatus
CN110445986B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112565636A (en) Image processing method, device, equipment and storage medium
TW202022799A (en) Metering compensation method and related monitoring camera apparatus
CN111953893B (en) High dynamic range image generation method, terminal device and storage medium
CN114449175A (en) Automatic exposure adjusting method, automatic exposure adjusting device, image acquisition method, medium and equipment
CN111405185B (en) Zoom control method and device for camera, electronic equipment and storage medium
CN114666512A (en) Adjusting method and system for rapid automatic exposure
CN106686320A (en) Tone mapping method based on numerical density balance
CN112598609A (en) Dynamic image processing method and device
CN111970501A (en) Pure color scene AE color processing method and device, electronic equipment and storage medium
CN112839182B (en) Automatic exposure control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant