CN117294947A - Automatic exposure method based on face and head positioning tracking - Google Patents

Automatic exposure method based on face and head positioning tracking

Info

Publication number
CN117294947A
CN117294947A · Application CN202210686216.2A
Authority
CN
China
Prior art keywords
face, information, head, mean, human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210686216.2A
Other languages
Chinese (zh)
Inventor
吴慎华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Ingenic Technology Co., Ltd.
Original Assignee
Hefei Ingenic Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Ingenic Technology Co., Ltd.
Priority to CN202210686216.2A
Publication of CN117294947A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)
  • Exposure Control For Cameras (AREA)

Abstract

The invention provides an automatic exposure method based on face and head positioning and tracking, comprising the following steps. S1, face and head positioning: face and head positioning is introduced to solve the problem that face information is lost under dim light, weak light, backlight and strong backlight in the traditional automatic exposure algorithm. S2, screening of face and head information: when both face information and head information exist, only the face information is considered. S3, screening of statistical information: hardware statistics are acquired, and according to the face or head information the largest brightness value among the face or head regions in the statistics is taken and recorded as face_mean; this strategy effectively prevents the face region from being overexposed when multiple faces of different brightness are present. S4, control strategy: a control weight face_weight for the face or head statistic is introduced, and empirical values ae_face_min and ae_face_max are introduced to control the adjustable range of the face or head statistic. S5, calculation of face or head statistical information: the current scene brightness value ae_face_mean is calculated from the information obtained in S3 and S4. The method collects face images under low-illumination, weak-light, backlight and similar environments and preserves the face information.

Description

Automatic exposure method based on face and head positioning tracking
Technical Field
The invention belongs to the technical field of intelligent monitoring video processing, and particularly relates to an automatic exposure method based on face and head positioning tracking.
Background
At present, visual perception environments are complex and changeable, and the imaging quality of image signal processing faces challenges. Visual perception scenes are diverse and lighting changes are complicated, for example at tunnel or underground garage entrances, in bad weather, or in underground mining. A large amount of valuable image information is collected under low-illumination, weak-light, backlight, rain, snow, fog, strong-wind and similar environments; however, image quality is greatly affected by low illumination and harsh environments. In particular, when face information is collected under low-light, weak-light, backlight and similar conditions, the imaging quality is strongly affected by the lighting, and the face information is difficult to preserve.
Furthermore, the common terminology used in the prior art is as follows:
Image signal processing: Image Signal Processor, abbreviated ISP. Its main function is post-processing of the signal output by the image sensor, such as auto focus, auto exposure, black level correction, lens shading correction, defective pixel correction, noise reduction, white balance, demosaicing, color correction, gamma correction, wide dynamic range, defogging, and brightness, contrast, saturation and hue control. Only with the ISP can scene details be well restored under different optical conditions, and ISP technology determines the imaging quality of a camera to a great extent.
Automatic exposure: Auto Exposure (AE) is a mechanism by which a camera automatically adjusts the exposure amount and gain according to the intensity of external light, to prevent overexposure or underexposure.
Overexposure: the picture receives too much exposure and is too bright overall.
Underexposure: the picture receives too little exposure and is too dark overall.
Disclosure of Invention
In order to solve the above problems, an object of the present application is to collect face images under low-illumination, weak-light, backlight and similar environments and to preserve the face information.
Specifically, the invention provides an automatic exposure method based on positioning and tracking of the face and head, which comprises the following steps:
S1, face and head positioning:
the positions of the face and head are located by a face and head detection algorithm; face and head positioning is introduced to solve the problem that face information is lost under dim light, weak light, backlight and strong backlight in the traditional automatic exposure algorithm;
S2, screening of face and head information:
when both face information and head information exist, only the face information is considered;
S3, screening of statistical information:
hardware statistics are acquired, and according to the face or head information the largest brightness value among the face or head regions in the hardware statistics is taken and recorded as face_mean; this strategy effectively prevents the face region from being overexposed when multiple faces of different brightness are present;
S4, control strategy:
a control weight face_weight for the face or head statistic is introduced;
empirical values ae_face_min and ae_face_max are introduced to control the adjustable range of the face or head statistic;
S5, calculation of face or head statistical information:
from the information obtained in steps S3 and S4, calculation is performed according to formulas (1) and (2):
ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight    formula (1)
where ae_face_mean is the current scene brightness value calculated by the automatic exposure hardware statistics module from the face or head statistics.
The acquisition of hardware statistics in step S3 is implemented by traversing an AE hardware statistics module. The AE hardware statistics module is part of the automatic exposure (AE) module; the AE module comprises software and hardware, the hardware part producing the statistics and the software part implementing the AE control strategy. The AE statistics comprise per-partition brightness information, namely a brightness value and a pixel count, and a global 256-segment brightness histogram. By analysing the current scene, the AE algorithm calculates the appropriate sensitivity, aperture and exposure time to configure for the image sensor and the image signal processing, so that the image output by the image sensor, or by the image signal processing stage before automatic exposure metering, reaches a certain brightness.
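For illustration only, the zone statistics and the exposure relation described above can be sketched as follows. This is a minimal sketch assuming a 15 × 15 partition grid and a 256-segment global histogram as stated; the class and field names are hypothetical and not part of the claimed hardware interface.

```python
from dataclasses import dataclass, field
import numpy as np

GRID = 15          # 15 x 15 statistics partitions, as described above
HIST_BINS = 256    # global 256-segment luminance histogram


@dataclass
class AeHwStats:
    """Hypothetical container mirroring the AE hardware statistics described above."""
    # Per-zone mean of the four Bayer components R / Gr / Gb / B.
    zone_means: np.ndarray = field(
        default_factory=lambda: np.zeros((GRID, GRID, 4), dtype=np.float32))
    # Per-zone pixel count (per-partition brightness value plus pixel count).
    zone_pixels: np.ndarray = field(
        default_factory=lambda: np.zeros((GRID, GRID), dtype=np.int32))
    # Global 256-bin luminance histogram.
    histogram: np.ndarray = field(
        default_factory=lambda: np.zeros(HIST_BINS, dtype=np.int64))


def exposure_amount(sensitivity: float, aperture: float, exposure_time: float) -> float:
    """Exposure = sensitivity x aperture x exposure time, as used by the AE algorithm."""
    return sensitivity * aperture * exposure_time
```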
The method further comprises:
S1, face and head positioning, which comprises the following steps:
S1.1, locating the face position through the yolov5 face detection algorithm;
S1.2, adding Kalman filtering to track the face position, improving the stability of the detection algorithm;
S1.3, adding yolov5 head detection, improving the robustness of the detection algorithm;
S2, screening of face and head information, including:
S2.1, face or head information acquisition: when both face information and head information exist, only the face information is considered;
S2.2, face or head information transfer:
the face or head information is transferred to the AE module;
S2.3, judging the face or head information:
determine whether valid face or head information exists;
if not, the traditional automatic exposure module is executed, and a new set of exposure parameters is obtained directly by decomposing the target brightness, the new exposure parameters comprising combinations of image sensor analog gain, image sensor digital gain, image signal processing digital gain, aperture and shutter speed; if it exists, hardware statistics are acquired according to the face or head information, and the highlight suppression and exposure compensation modes of the traditional AE module are turned off at the same time;
S3, screening of statistical information:
the hardware statistic face_mean is acquired according to the face information faceInfo or the head information headInfo: the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
when face information is obtained, with the face position given by the upper-left and lower-right corners faceInfo (x0, y0), (x1, y1):
face_mean is the statistical average brightness value of the face region;
y(xi, yi) is the average luminance value of each hardware partition;
when head information is obtained, with the head position given by the upper-left and lower-right corners headInfo (x2, y2), (x3, y3):
face_mean is then the statistical average brightness value of the head region;
y(xi, yi) is the average luminance value of each hardware partition;
S4, information acquisition and control strategy:
S4.1, hardware information is acquired using the traditional automatic exposure algorithm, and the calculated brightness value is ae_mean:
the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
where ae_mean is the statistical average luminance value over all hardware partitions,
and y(i, j) is the average luminance value of each hardware partition;
S4.2, the weight of the face or head statistic is obtained as face_weight;
S4.3, the adjustable range [ae_face_min, ae_face_max] of the face or head statistic is obtained;
S5, the face or head statistical information is calculated.
The value of ae_mean in S4.1 ranges from 0 to 255; the brightness values of different scenes differ, the brightness value of an overexposed scene is larger and close to 255, and that of an extremely dark scene is smaller and close to 0.
The value of face_weight in S4.2 ranges from 0 to 8, where 0 means the brightness value counted by the traditional automatic exposure algorithm is used, 8 means the brightness value counted from the face or head information is used, and 1 to 7 lie in between.
The value of ae_face_min in S4.3 is the automatic exposure target brightness value minus 15, and the value of ae_face_max is the automatic exposure target brightness value plus 10, where the automatic exposure target brightness value is controlled by the user.
The step S2.1 further comprises:
S2.1.1, determine whether a face is present; if so, proceed to step S2.1.2; if not, go to step S2.1.3;
S2.1.2, obtain the upper-left and lower-right corners (x0, y0), (x1, y1) of the face position information faceInfo through the yolov5 face detection algorithm; then perform step S2.1.5;
S2.1.3, determine whether a head is present; if so, proceed to step S2.1.4; if not, go to step S2.1.5;
S2.1.4, obtain the upper-left and lower-right corners (x2, y2), (x3, y3) of the head position information headInfo through the yolov5 head detection algorithm; then perform step S2.1.5;
S2.1.5, turn off the highlight suppression mode of the conventional automatic exposure; turn off the exposure compensation mode of the conventional automatic exposure.
In step S4.2, the weight of the face or head statistic is controlled by a software strategy, which users can adjust for different scenes and different application requirements; the controllable range is set to 0 to 8, where 0 means the brightness value counted by the traditional automatic exposure algorithm is used, 8 means the brightness value counted from the face or head information is used, and 1 to 7 lie in between. In step S4.3, the adjustable range of the face or head statistic is obtained as an empirical value.
The method further comprises steps S6 and S7:
S6, information transfer:
ae_face_mean is transferred to the traditional automatic exposure algorithm and overrides the brightness value ae_mean calculated by the traditional automatic exposure algorithm, namely ae_mean = ae_face_mean;
S7, traditional automatic exposure:
the conventional automatic exposure algorithm is performed.
The step S5 further includes:
obtaining ae_mean and face_weight according to steps S3 and S4;
S5.1, determine whether ae_face_mean < ae_mean holds; if yes, proceed to step S5.2; if not, proceed to step S7;
S5.2, ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight; then perform the judgments of steps S5.3 and S5.4 respectively;
S5.3, determine whether ae_face_mean < ae_face_min holds; if so, ae_mean = ae_face_min, followed by step S7; if not, proceed to step S6;
S5.4, determine whether ae_face_mean > ae_face_max holds; if so, ae_mean = ae_face_max, followed by step S7; if not, proceed to step S6.
Thus, the present application has the following advantages:
1. The stability and robustness of the face detection algorithm are improved by using a Kalman-filter tracking algorithm and a yolov5 head detection algorithm.
2. Face information is collected under low-illumination, weak-light, backlight and similar environments, the exposure and gain are automatically adjusted according to the face information, overexposure or underexposure of the face is effectively resolved, and the face information is preserved.
3. Strategy control of the face adjustment range, based on the empirical values of the traditional automatic exposure algorithm, compensates to a certain extent for the latency of the face positioning algorithm.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate the invention and together with the description serve to explain it.
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the operation of the algorithm of the present invention constructing an automatic exposure module.
Fig. 3 is a flow chart of an embodiment of an automatic exposure method based on face and head positioning tracking.
Detailed Description
In order that the technical content and advantages of the present invention may be more clearly understood, a further detailed description of the present invention will now be made with reference to the accompanying drawings.
The invention relates to a method for automatically exposing a face scene according to the intensity of external light, and in particular to an automatic exposure method based on positioning and tracking of the face and head, as shown in fig. 1. The overall idea of the method is to introduce face positioning to solve the problem that face information is lost under dim light, weak light, backlight and strong backlight in the traditional automatic exposure algorithm, and the method comprises the following steps:
S1, face and head positioning:
face and head positioning is introduced to solve the problem that face information is lost under dim light, weak light, backlight and strong backlight in the traditional automatic exposure algorithm;
S2, screening of face or head information:
when both face information and head information exist, only the face information is considered;
S3, screening of statistical information:
hardware statistics are acquired by traversing the AE hardware statistics module, which is part of the automatic exposure (AE) module. The AE module comprises software and hardware: the hardware part produces the statistics and the software part implements the AE control strategy. The AE statistics comprise per-partition brightness information (brightness value and pixel count) and a global 256-segment brightness histogram. According to the face or head information, the largest brightness value among the face or head regions in the hardware statistics is taken and recorded as face_mean; this strategy effectively prevents the face region from being overexposed when multiple faces of different brightness are present. By analysing the current scene, the AE algorithm calculates, according to exposure = sensitivity × aperture × exposure time, the appropriate sensitivity, aperture and exposure time to configure for the image sensor (sensor) and the image signal processing (ISP), so that the image output by the image sensor, or by the image signal processing stage before automatic exposure metering, reaches a certain brightness, as shown in fig. 2;
further, the working principle of the automatic exposure module's algorithm is as follows: by analysing the current scene, the appropriate exposure amount to allocate to the image sensor (sensor) and the image signal processing (ISP) is calculated, obtaining the best image effect. That is, information such as the image brightness is obtained and compared with the required target brightness, and the exposure and gain of the image sensor are then adjusted. Automatic exposure is a feedback computation process between the image signal processing (ISP) and the image sensor (sensor), including the configuration and coordination of exposure time and gain.
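The feedback computation described above can be illustrated with the following sketch. The proportional update and the parameter names are assumptions used only to show the loop between measured and target brightness; the patent does not specify the control law.

```python
def ae_feedback_step(measured_mean: float,
                     target_mean: float,
                     exposure: float,
                     min_exposure: float = 1e-4,
                     max_exposure: float = 1.0) -> float:
    """One illustrative iteration of the AE feedback loop between ISP statistics and sensor.

    measured_mean : current scene brightness reported by the statistics module (0-255)
    target_mean   : user-controlled target brightness (0-255)
    exposure      : current total exposure amount (sensitivity x aperture x time)
    """
    if measured_mean <= 0:
        return max_exposure                      # fully dark frame: open up
    # Scale the exposure toward the target; the ratio form is a common simplification.
    new_exposure = exposure * (target_mean / measured_mean)
    return float(min(max(new_exposure, min_exposure), max_exposure))
```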
S4, control strategy:
a control weight face_weight for the face or head statistic is introduced;
empirical values ae_face_min and ae_face_max for the adjustable range of the face or head statistic are introduced;
S5, calculation of face or head statistical information:
from the information obtained in steps S3 and S4, calculation is performed according to formulas (1) and (2):
ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight    formula (1)
where ae_face_mean is the current scene brightness value calculated by the hardware from the face or head statistics.
In the present application, by analysing the current scene, the appropriate exposure amount configured for the image sensor and the image signal processing is calculated, obtaining the best image effect. That is, information such as the image brightness is obtained and compared with the required target brightness, and the exposure and gain of the image sensor are then adjusted. Automatic exposure is a feedback computation process between the image signal processing and the image sensor, including the configuration and coordination of exposure time and gain.
Further, the implementation step comprises the following steps:
s1, face and head positioning, which comprises the following steps:
s1.1, positioning the position of a human head of a human face through a yolov5 human face detection algorithm;
s1.2, a plurality of frames in the video stream are detected but are not detected, and in order to solve the instability of a detection algorithm, a kalman filter is added to track the human face or the human head, so that the stability of the detection algorithm is improved;
s1.3, in some backlight and dim light scenes, face information is basically lost, the detection effect is poor, the robustness of a face detection algorithm is improved, yolov5 head detection is increased, and the robustness of the detection algorithm is improved;
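An illustrative sketch of steps S1.1-S1.3 follows. The yolov5 models are assumed to be custom-trained face and head detectors (the weight file names are hypothetical), and the constant-velocity Kalman filter on the box centre is one simple way to realise the tracking described in S1.2; the noise settings are illustrative only.

```python
import numpy as np
import torch

# Hypothetical custom weights; yolov5 itself is loaded through torch.hub.
face_model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_face.pt")
head_model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_head.pt")


def detect_boxes(model, frame):
    """Return [x0, y0, x1, y1] boxes from a yolov5 model for one frame."""
    results = model(frame)
    return results.xyxy[0][:, :4].cpu().numpy()   # drop confidence / class columns


class BoxCenterKalman:
    """Constant-velocity Kalman filter smoothing the box centre between detections."""

    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])                # state: cx, cy, vx, vy
        self.P = np.eye(4) * 10.0                            # state covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01                            # process noise (illustrative)
        self.R = np.eye(2) * 1.0                             # measurement noise (illustrative)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                    # predicted centre

    def update(self, cx, cy):
        z = np.array([cx, cy])
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                    # filtered centre
```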
S2, screening of face and head information, including:
S2.1, face or head information acquisition;
S2.2, face or head information transfer:
the face or head information is transferred to the automatic exposure (AE) module;
S2.3, judging the face or head information:
determine whether valid face or head information exists; if not, the algorithm of the traditional exposure module is executed, and a new set of exposure parameters is obtained directly by decomposing the target brightness, the new exposure parameters comprising combinations of image sensor analog gain, image sensor digital gain, image signal processing digital gain, aperture and shutter speed;
if it exists, hardware statistics are acquired according to the face or head information, and the highlight suppression and exposure compensation modes of the traditional automatic exposure module are turned off at the same time;
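Steps S2.1-S2.3 amount to the selection and fallback logic sketched below. The ae object and its method names (run_traditional, disable_highlight_suppression, disable_exposure_compensation) are hypothetical placeholders for the traditional AE module's controls, not an API defined by the patent.

```python
def select_roi_info(face_boxes, head_boxes):
    """S2.1/S2.2: prefer face info over head info; return (kind, box) or (None, None)."""
    if len(face_boxes) > 0:
        return "face", face_boxes[0]          # only face info is used when both exist
    if len(head_boxes) > 0:
        return "head", head_boxes[0]
    return None, None


def judge_and_configure(face_boxes, head_boxes, ae):
    """S2.3: fall back to traditional AE when no valid info; otherwise prepare faceAE."""
    kind, box = select_roi_info(face_boxes, head_boxes)
    if kind is None:
        ae.run_traditional()                  # decompose exposure from the target brightness
        return None
    ae.disable_highlight_suppression()        # turn off traditional AE highlight suppression
    ae.disable_exposure_compensation()        # turn off traditional AE exposure compensation
    return kind, box
```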
S3, screening of statistical information:
the hardware statistic face_mean is acquired according to the face information faceInfo or the head information headInfo: the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
taking face information as an example, with the face position given by the upper-left and lower-right corners faceInfo (x0, y0), (x1, y1):
face_mean is the statistical average brightness value of the face region,
and y(xi, yi) is the average luminance value of each hardware partition;
taking head information as an example, with the head position given by the upper-left and lower-right corners headInfo (x2, y2), (x3, y3):
face_mean is then the statistical average brightness value of the head region,
and y(xi, yi) is the average luminance value of each hardware partition;
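The statistics screening of step S3 (and the global ae_mean of step S4.1) can be sketched as follows, assuming per-partition R/Gr/Gb/B means as input. The BT.601 luma weights are standard; the mapping from pixel coordinates to partition indices and the handling of partially covered partitions are assumptions, since the patent's own formulas are given as figures and are not reproduced in this text. For the multi-face case the brightest region mean is taken as face_mean, in line with the strategy described in S3.

```python
import numpy as np

GRID = 15  # the image is divided into 15 x 15 statistics partitions


def bt601_luma(r, gr, gb, b):
    """BT.601 conversion of per-partition R/Gr/Gb/B means to luminance Y."""
    g = 0.5 * (gr + gb)
    return 0.299 * r + 0.587 * g + 0.114 * b


def zone_luma(zone_means):
    """zone_means: (15, 15, 4) array of per-zone R/Gr/Gb/B means -> (15, 15) luminance."""
    r, gr, gb, b = (zone_means[..., k] for k in range(4))
    return bt601_luma(r, gr, gb, b)


def region_mean(zone_y, box, img_w, img_h):
    """Average luminance of the partitions covered by box = (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = box
    ix0 = int(np.clip(x0 * GRID // img_w, 0, GRID - 1))
    ix1 = int(np.clip(x1 * GRID // img_w, 0, GRID - 1))
    iy0 = int(np.clip(y0 * GRID // img_h, 0, GRID - 1))
    iy1 = int(np.clip(y1 * GRID // img_h, 0, GRID - 1))
    return float(zone_y[iy0:iy1 + 1, ix0:ix1 + 1].mean())


def compute_means(zone_means, boxes, img_w, img_h):
    """ae_mean over all partitions; face_mean as the brightest region mean (multi-face case)."""
    zone_y = zone_luma(zone_means)
    ae_mean = float(zone_y.mean())
    if len(boxes) == 0:
        return ae_mean, ae_mean          # no face/head: fall back to the global mean
    face_mean = max(region_mean(zone_y, b, img_w, img_h) for b in boxes)
    return ae_mean, face_mean
```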
S4, information acquisition and control strategy:
S4.1, hardware information is acquired using the traditional automatic exposure algorithm, and the calculated brightness value is ae_mean:
the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
where ae_mean is the statistical average luminance value over all hardware partitions,
and y(i, j) is the average luminance value of each hardware partition;
S4.2, the weight of the face or head statistic is obtained as face_weight;
S4.3, the adjustable range [ae_face_min, ae_face_max] of the face or head statistic is obtained;
in S4.1, the brightness value ae_mean calculated by the traditional automatic exposure algorithm ranges from 0 to 255; the brightness values of different scenes differ, for example the brightness value of an overexposed scene is larger and generally close to 255, while that of an extremely dark scene is smaller and generally close to 0;
in S4.2, the weight face_weight of the face or head statistic ranges from 0 to 8, where 0 means the brightness value counted by the traditional automatic exposure algorithm is used, 8 means the brightness value counted from the face or head information is used, and 1 to 7 lie in between;
in S4.3, the adjustable range ae_face_min, ae_face_max of the face or head statistic is obtained as ae_face_min = automatic exposure target brightness value minus 15 and ae_face_max = automatic exposure target brightness value plus 10, where the automatic exposure target brightness value is controlled by the user.
S5, calculation of face or head statistical information:
from the information obtained in steps S3 and S4, calculation is performed according to formulas (1) and (2):
ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight    formula (1)
where ae_face_mean is the current scene brightness value calculated by the hardware from the face or head statistics. The traditional automatic exposure algorithm calculates the current scene brightness value from the whole image, whereas ae_face_mean is calculated from the face or head region of the image.
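A minimal sketch of steps S5-S6 follows. Two points are assumptions: the printed formula (1) does not show any scaling of face_weight, but since the text states that 8 means the face statistics are used entirely, the sketch divides the weight by 8; and formula (2) is not reproduced in the text, so the clamping to [ae_face_min, ae_face_max] is written directly from steps S5.3 and S5.4. The S5.1 check is written here as face_mean < ae_mean, which is equivalent to the printed condition whenever the weight is positive.

```python
def face_ae_mean_override(ae_mean, face_mean, face_weight, ae_target):
    """S5-S6: blend face/head statistics into the scene brightness fed to traditional AE.

    face_weight is 0..8 (0 = traditional statistics only, 8 = face statistics only);
    dividing by 8 is an assumption, since the printed formula does not show the scaling.
    """
    ae_face_min = ae_target - 15          # empirical lower bound (S4.3)
    ae_face_max = ae_target + 10          # empirical upper bound (S4.3)

    # S5.1: only adjust when the face/head region is darker than the overall scene.
    if not face_mean < ae_mean:
        return ae_mean                    # fall through to traditional AE unchanged (S7)

    # Formula (1): pull the scene brightness toward the face brightness.
    ae_face_mean = ae_mean - (ae_mean - face_mean) * (face_weight / 8.0)

    # S5.3 / S5.4: clamp to the empirical adjustable range.
    if ae_face_mean < ae_face_min:
        return ae_face_min
    if ae_face_mean > ae_face_max:
        return ae_face_max
    return ae_face_mean                   # S6: this value overrides ae_mean


# Example: dark face (80) in a brighter scene (140), full face weight, target 120.
# print(face_ae_mean_override(140.0, 80.0, 8, 120.0))  # -> 105 (target - 15 clamp)
```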
S6, information transfer:
ae_face_mean is transferred to the traditional automatic exposure algorithm and overrides the brightness value ae_mean calculated by the traditional automatic exposure algorithm;
S7, traditional automatic exposure:
the conventional automatic exposure algorithm is performed.
In summary, as shown in fig. 3, the steps of this embodiment are as follows:
S1, start; face and head positioning;
S2, screening of face and head information, including:
determining whether the face auto-exposure (faceAE) mode is enabled; if yes, proceed to step S2.1; if not, proceed to step S7;
S2.1, acquisition of face and head information:
S2.1.1, determine whether a face is present; if so, proceed to step S2.1.2; if not, go to step S2.1.3;
S2.1.2, obtain the upper-left and lower-right corners (x0, y0), (x1, y1) of the face position information faceInfo through the yolov5 face detection algorithm; then perform step S2.1.5;
S2.1.3, determine whether a head is present; if so, proceed to step S2.1.4; if not, go to step S2.1.5;
S2.1.4, obtain the upper-left and lower-right corners (x2, y2), (x3, y3) of the head position information headInfo through the yolov5 head detection algorithm; then perform step S2.1.5;
S2.1.5, turn off the highlight suppression mode of the conventional automatic exposure; turn off the exposure compensation mode of the conventional automatic exposure.
S2.2, face or head information transfer:
the face or head information is transferred to the automatic exposure module;
S2.3, judging the face or head information:
determine whether valid face or head information exists; if not, execute the traditional exposure algorithm; if it exists, acquire the hardware statistics according to the face or head information, and at the same time turn off the highlight suppression and exposure compensation modes of the traditional automatic exposure module;
s3, statistical information screening
Acquiring a hardware statistics face_mean according to face information faceInfo or head information headInfo;
s4, controlling strategy
Get ae_mean (traditional exposure algorithm statistics), face_weight (user input);
the control weight of the face or head statistic value is introduced as face_weight;
introducing face or head statistics value adjustable range experience values ae_face_min and ae_face_max;
s5, calculating statistical information of human faces or human heads
S5.1, determine ae_face_mean < ae_mean is true? If yes, go on step S5.2; if not, carrying out step S7;
s5.2, ae_face_mean=ae_mean- (ae_mean-face_mean) ×face_weight; then, respectively judging the steps S5.3 and S5.4;
s5.3, determine ae_face_mean < ae_face_min is true? If so, ae_mean=ae_face_min, followed by step S7; if not, carrying out step S6;
s5.4, determine ae_face_mean > ae_face_max is true? If so, ae_mean=ae_face_max, then step S7 is performed; if not, carrying out step S6;
s6, transmitting ae_face_mean to a traditional automatic exposure algorithm and covering the brightness value ae_mean calculated by the traditional automatic exposure algorithm in a statistics mode; namely: ae_mean=ae_face_mean;
step S7 is carried out;
s7, traditional automatic exposure: algorithms of conventional auto-exposure modules are performed.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention; those skilled in the art may make various modifications and variations to the embodiments. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. An automatic exposure method based on face and head positioning and tracking, characterized by comprising the following steps:
S1, face and head positioning:
the positions of the face and head are located by a face or head detection algorithm; face and head positioning is introduced to solve the problem that face information is lost under dim light, weak light, backlight and strong backlight in the traditional automatic exposure algorithm;
S2, screening of face and head information:
when both face information and head information exist, only the face information is considered;
S3, screening of statistical information:
hardware statistics are acquired, and according to the face or head information the largest brightness value among the face or head regions in the hardware statistics is taken and recorded as face_mean; this strategy effectively prevents the face region from being overexposed when multiple faces of different brightness are present;
S4, control strategy:
a control weight face_weight for the face or head statistic is introduced;
empirical values ae_face_min and ae_face_max are introduced to control the adjustable range of the face or head statistic;
S5, calculation of face or head statistical information:
from the information obtained in steps S3 and S4, calculation is performed according to formulas (1) and (2):
ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight    formula (1)
where ae_face_mean is the current scene brightness value calculated by the automatic exposure hardware statistics module from the face or head statistics.
2. The automatic exposure method based on face and head positioning and tracking according to claim 1, wherein the acquisition of hardware statistics in step S3 is achieved by traversing an AE hardware statistics module; the AE hardware statistics module is part of the automatic exposure (AE) module; the AE module comprises software and hardware, the hardware part producing the statistics and the software part implementing the AE control strategy; the AE statistics comprise per-partition brightness information, namely a brightness value and a pixel count, and a global 256-segment brightness histogram; and, by analysing the current scene, the AE algorithm calculates, according to exposure = sensitivity × aperture × exposure time, the appropriate sensitivity, aperture and exposure time to configure for the image sensor and the image signal processing, so that the image output by the image sensor, or by the image signal processing stage before automatic exposure metering, reaches a certain brightness.
3. The automatic exposure method based on face and head positioning and tracking according to claim 2, further comprising:
S1, face and head positioning, which comprises the following steps:
S1.1, locating the face position through the yolov5 face detection algorithm;
S1.2, adding Kalman filtering to track the face position, improving the stability of the detection algorithm;
S1.3, adding yolov5 head detection, improving the robustness of the detection algorithm;
S2, screening of face and head information, including:
S2.1, face or head information acquisition: when both face information and head information exist, only the face information is considered;
S2.2, face or head information transfer:
the face or head information is transferred to the AE module;
S2.3, judging the face or head information:
determine whether valid face or head information exists;
if not, the traditional automatic exposure module is executed, and a new set of exposure parameters is obtained directly by decomposing the target brightness, the new exposure parameters comprising combinations of image sensor analog gain, image sensor digital gain, image signal processing digital gain, aperture and shutter speed; if it exists, hardware statistics are acquired according to the face or head information, and the highlight suppression and exposure compensation modes of the traditional AE module are turned off at the same time;
S3, screening of statistical information:
the hardware statistic face_mean is acquired according to the face information faceInfo or the head information headInfo: the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
when face information is obtained, with the face position given by the upper-left and lower-right corners faceInfo (x0, y0), (x1, y1):
face_mean is the statistical average brightness value of the face region;
y(xi, yi) is the average luminance value of each hardware partition;
when head information is obtained, with the head position given by the upper-left and lower-right corners headInfo (x2, y2), (x3, y3):
face_mean is then the statistical average brightness value of the head region;
y(xi, yi) is the average luminance value of each hardware partition;
S4, information acquisition and control strategy:
S4.1, hardware information is acquired using the traditional automatic exposure algorithm, and the calculated brightness value is ae_mean:
the whole image is divided into 15 × 15 partitions, and each partition block counts the average of the four components R/Gr/Gb/B;
the BT.601 standard is used to convert the R/Gr/Gb/B values within the rectangular frame to luminance information Y,
and calculating:
where ae_mean is the statistical average luminance value over all hardware partitions,
and y(i, j) is the average luminance value of each hardware partition;
S4.2, the weight of the face or head statistic is obtained as face_weight;
S4.3, the adjustable range [ae_face_min, ae_face_max] of the face or head statistic is obtained;
S5, the face or head statistical information is calculated.
4. The automatic exposure method based on face and head positioning and tracking according to claim 3, wherein
the value of ae_mean in S4.1 ranges from 0 to 255; the brightness values of different scenes differ, the brightness value of an overexposed scene is larger and close to 255, and that of an extremely dark scene is smaller and close to 0;
the value of face_weight in S4.2 ranges from 0 to 8, where 0 means the brightness value counted by the traditional automatic exposure algorithm is used, 8 means the brightness value counted from the face or head information is used, and 1 to 7 lie in between;
the value of ae_face_min in S4.3 is the automatic exposure target brightness value minus 15, and the value of ae_face_max is the automatic exposure target brightness value plus 10, where the automatic exposure target brightness value is controlled by the user.
5. The automatic exposure method based on face and head positioning and tracking according to claim 3, wherein the step S2.1 further comprises:
S2.1.1, determine whether a face is present; if so, proceed to step S2.1.2; if not, go to step S2.1.3;
S2.1.2, obtain the upper-left and lower-right corners (x0, y0), (x1, y1) of the face position information faceInfo through the yolov5 face detection algorithm; then perform step S2.1.5;
S2.1.3, determine whether a head is present; if so, proceed to step S2.1.4; if not, go to step S2.1.5;
S2.1.4, obtain the upper-left and lower-right corners (x2, y2), (x3, y3) of the head position information headInfo through the yolov5 head detection algorithm; then perform step S2.1.5;
S2.1.5, turn off the highlight suppression mode of the conventional automatic exposure; turn off the exposure compensation mode of the conventional automatic exposure.
6. The automatic exposure method based on face and head positioning and tracking according to claim 5, wherein in step S4.2 the weight of the face or head statistic is controlled by a software strategy, which users can adjust for different scenes and different application requirements; the controllable range is set to 0 to 8, where 0 means the brightness value counted by the traditional automatic exposure algorithm is used, 8 means the brightness value counted from the face or head information is used, and 1 to 7 lie in between; and in step S4.3 the adjustable range of the face or head statistic is obtained as an empirical value.
7. The automatic exposure method based on face and head positioning and tracking according to claim 3, further comprising steps S6 and S7:
S6, information transfer:
ae_face_mean is transferred to the traditional automatic exposure algorithm and overrides the brightness value ae_mean calculated by the traditional automatic exposure algorithm, namely ae_mean = ae_face_mean;
S7, traditional automatic exposure:
the conventional automatic exposure algorithm is performed.
8. The automatic exposure method based on face and head positioning and tracking according to claim 7, wherein the step S5 further comprises:
obtaining ae_mean and face_weight according to steps S3 and S4;
S5.1, determine whether ae_face_mean < ae_mean holds; if yes, proceed to step S5.2; if not, proceed to step S7;
S5.2, ae_face_mean = ae_mean - (ae_mean - face_mean) × face_weight; then perform the judgments of steps S5.3 and S5.4 respectively;
S5.3, determine whether ae_face_mean < ae_face_min holds; if so, ae_mean = ae_face_min, followed by step S7; if not, proceed to step S6;
S5.4, determine whether ae_face_mean > ae_face_max holds; if so, ae_mean = ae_face_max, followed by step S7; if not, proceed to step S6.
CN202210686216.2A 2022-06-16 2022-06-16 Automatic exposure method based on face and head positioning tracking Pending CN117294947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210686216.2A CN117294947A (en) 2022-06-16 2022-06-16 Automatic exposure method based on face and head positioning tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210686216.2A CN117294947A (en) 2022-06-16 2022-06-16 Automatic exposure method based on face and head positioning tracking

Publications (1)

Publication Number Publication Date
CN117294947A true CN117294947A (en) 2023-12-26

Family

ID=89243082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210686216.2A Pending CN117294947A (en) 2022-06-16 2022-06-16 Automatic exposure method based on face and head positioning tracking

Country Status (1)

Country Link
CN (1) CN117294947A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination