CN112822409B - Exposure parameter adjusting method and device - Google Patents

Exposure parameter adjusting method and device

Info

Publication number
CN112822409B
Authority
CN
China
Prior art keywords
face
target
determining
target image
detection time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110197782.2A
Other languages
Chinese (zh)
Other versions
CN112822409A (en)
Inventor
管清岩
张思博
瞿二平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110197782.2A priority Critical patent/CN112822409B/en
Publication of CN112822409A publication Critical patent/CN112822409A/en
Application granted granted Critical
Publication of CN112822409B publication Critical patent/CN112822409B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an exposure parameter adjusting method and device. The method comprises: acquiring one or more face regions of a target image in a target video; determining, based on a temporal filtering mechanism, whether at least one of the one or more face regions belongs to a normal-face-density case; if so, determining a target face region of the target image based on a tracking filtering mechanism; and adjusting the exposure parameters of the target image according to the target face region. This solves the problem in the related art that, when the face density in a video is low or multiple faces differ in brightness, the image brightness changes repeatedly and degrades the overall effect of the video. The method keeps the video quality stable, without repeated brightness changes, in those cases, and ensures that at least one face is accurately exposed.

Description

Exposure parameter adjusting method and device
Technical Field
The invention relates to the field of image processing, in particular to an exposure parameter adjusting method and device.
Background
To improve the exposure effect of face regions in video, the related art provides an adaptive exposure method: it judges whether the brightness of a face region falls within a specific threshold range and, if the brightness is outside that range, adjusts the exposure parameters according to the weighted brightness of the face region.
This scheme does not handle the case of low face density in the image: when the face density is low, or a face appears only briefly, the image brightness changes repeatedly, which degrades the overall effect of the video. Nor does it distinguish the case of multiple faces in the image: because the illumination conditions may be complex, the faces may differ in brightness, again causing repeated brightness changes that degrade the overall effect of the video.
For the problem in the related art that the image brightness in a video changes repeatedly, degrading the overall effect of the video, when the face density is low or multiple faces differ in brightness, no solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide an exposure parameter adjusting method and device, which at least solve the problem in the related art that the image brightness in a video changes repeatedly, degrading the overall effect of the video, when the face density is low or multiple faces differ in brightness.
According to an embodiment of the present invention, there is provided an exposure parameter adjustment method including: acquiring one or more face regions of a target image in a target video; determining whether at least one face area in the one or more face areas belongs to a face density normal condition based on a time filtering mechanism; determining a target face area of the target image based on a tracking and filtering mechanism under the condition that the judgment result is yes; and adjusting the exposure parameters of the target image according to the target face area.
In an exemplary embodiment, determining whether the face region of the target image belongs to a normal-face-density case based on the temporal filtering mechanism comprises: acquiring the first detection time of the current face region in the target image; acquiring, from a pre-established circular queue storing the face-region detection times of the target video, the second detection time, i.e. the detection time whose difference from the first detection time is largest; and determining whether the face region of the target image belongs to the normal-face-density case according to the first detection time and the second detection time.
In an exemplary embodiment, determining whether the face region of the target image belongs to a normal-face-density case according to the first detection time and the second detection time includes: determining the time difference between the first detection time and the second detection time; judging whether the time difference is smaller than a preset time threshold; if the judgment result is yes, determining that the face region of the target image belongs to the normal-face-density case; and if the judgment result is no, determining that it does not.
In one exemplary embodiment, determining the target face region of the target image based on the tracking filter mechanism comprises: acquiring face information contained in the target image, wherein the face information at least comprises: a face region and a face ID; and determining the target face area according to the face information.
In one exemplary embodiment, determining the target face region from the face information comprises: selecting a face area according to the face information based on a preset rule; judging whether the face areas selected in the continuous T frames are the same target face or not; and under the condition that the judgment result is yes, determining the selected face area as the target face area.
In an exemplary embodiment, selecting a face region according to the face information based on a preset rule includes: acquiring the face region at a preset position in the face sequence; acquiring the face region closest to the center of the image; acquiring the face region whose movement direction points closest to the center of the image; and acquiring the face region with the highest occurrence frequency in a past preset time period.
In an exemplary embodiment, the determining whether the face regions selected in the consecutive T frames are the same target face includes: putting the face IDs detected by the continuous T frames into a preset circular queue with the length of T; judging whether the circular queue only contains one face ID; if the judgment result is yes, determining that the face areas selected in the continuous T frames are the same target face; and under the condition that the judgment result is negative, determining that the face areas selected in the continuous T frames are not the same target face.
In an exemplary embodiment, the determining whether the face regions selected in the consecutive T frames are the same target face includes: storing, in a set first register, the face ID of the face region currently used for stabilizing the exposure parameters; storing, in a set second register, the target face ID of the face region in the previous frame; and storing, in a set third register, the number of consecutive occurrences of that target face ID; when the face ID of the current face region is obtained, comparing it with the face ID in the second register; if the two IDs are the same, adding 1 to the count in the third register, and if they are not, setting the count in the third register to zero; in the case that the count in the third register is greater than a preset threshold, determining that the face regions selected in the consecutive T frames are the same target face; and in the case that the count is not greater than the preset threshold, determining that they are not.
According to still another embodiment of the present invention, there is also provided an exposure parameter adjustment apparatus including: the acquisition module is used for acquiring one or more face areas of a target image in a target video; the first determination module is used for determining whether at least one face area in the one or more face areas belongs to a face density normal condition or not based on a time filtering mechanism; the second determining module is used for determining a target face area of the target image based on a tracking and filtering mechanism under the condition that the judging result is yes; and the adjusting module is used for adjusting the exposure parameters of the target image according to the target face area.
In one exemplary embodiment, the first determining module includes: a first processing unit, configured to acquire the first detection time of the current face region in the target image and to acquire, from a pre-established circular queue storing the face-region detection times of the target video, the second detection time, i.e. the detection time whose difference from the first detection time is largest; and a first determining unit, configured to determine whether the face region of the target image belongs to the normal-face-density case according to the first detection time and the second detection time.
In an exemplary embodiment, the first determining unit is further configured to: determining a time difference value between the first detection time and the second detection time; judging whether the time difference value is smaller than a preset time threshold value or not; if the judgment result is yes, determining that the face area of the target image belongs to the face density normal condition; and under the condition that the judgment result is negative, determining that the face area of the target image does not belong to the face density normal condition.
In an exemplary embodiment, the second determining module includes: a second obtaining unit, configured to obtain face information included in the target image, where the face information at least includes: a face region and a face ID; and the second determining unit is used for determining the target face area according to the face information.
In an exemplary embodiment, the second determining module includes: a selecting unit, configured to select a face region according to the face information based on a preset rule; a judging unit, configured to judge whether the face regions selected in the consecutive T frames are the same target face; and a third determining unit, configured to determine, in the case that the judgment result is yes, that the selected face region is the target face region.
In an exemplary embodiment, the second determining module is further configured to: acquire the face region at a preset position in the face sequence; acquire the face region closest to the center of the image; acquire the face region whose movement direction points closest to the center of the image; and acquire the face region with the highest occurrence frequency in a past preset time period.
In an exemplary embodiment, the second determining module is further configured to: put the face IDs detected in the consecutive T frames into a preset circular queue of length T; judge whether the circular queue contains only one face ID; if the judgment result is yes, determine that the face regions selected in the consecutive T frames are the same target face; and if the judgment result is no, determine that they are not.
In an exemplary embodiment, the second determining module is further configured to: store, in a set first register, the face ID of the face region currently used for stabilizing the exposure parameters; store, in a set second register, the target face ID of the face region in the previous frame; and store, in a set third register, the number of consecutive occurrences of that target face ID; when the face ID of the current face region is obtained, compare it with the face ID in the second register; if the two IDs are the same, add 1 to the count in the third register, and if they are not, set the count in the third register to zero; in the case that the count in the third register is greater than a preset threshold, determine that the face regions selected in the consecutive T frames are the same target face; and in the case that the count is not greater than the preset threshold, determine that they are not.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, one or more face regions of a target image in a target video are obtained; determining whether at least one face area in the one or more face areas belongs to a face density normal condition based on a time filtering mechanism; if the judgment result is yes, determining a target face area of the target image based on a tracking filtering mechanism; the exposure parameters of the target image are adjusted according to the target face area, so that the problems that in the related technology, the brightness of the image in the video can be changed repeatedly under the condition that the density of the face in the video is low or the brightness of a plurality of faces is different, and the overall effect of the video is influenced can be solved, stable video quality is ensured without repeated brightness change under the condition that the density of the face in the video is low or the brightness of the plurality of faces is different, and accurate exposure of at least one face is ensured.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of an exposure parameter adjustment method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an exposure parameter adjustment method according to an embodiment of the present invention;
FIG. 3 is a flowchart (i) of an exposure parameter adjustment method according to an alternative embodiment of the present invention;
FIG. 4 is a flowchart (ii) of an exposure parameter adjustment method according to an alternative embodiment of the present invention;
FIG. 5 is a flowchart (iii) of an exposure parameter adjustment method according to an alternative embodiment of the present invention;
FIG. 6 is a flowchart (iv) of an exposure parameter adjustment method according to an alternative embodiment of the present invention;
FIG. 7 is a schematic diagram of a face exposure stabilization procedure based on a temporal filtering mechanism according to an alternative embodiment of the present invention;
FIG. 8 is a schematic diagram of a face exposure stabilization procedure based on a tracking filtering mechanism according to an alternative embodiment of the present invention;
fig. 9 is a flowchart of an exposure parameter adjustment method according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal implementing the exposure parameter adjustment method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data; optionally, the mobile terminal may further include a transmission device 106 for communication functions and an input/output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the exposure parameter adjustment method in the embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to execute various functional applications and control of remote login, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, an exposure parameter adjusting method operating in the mobile terminal or the network architecture is provided, and fig. 2 is a flowchart of the exposure parameter adjusting method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, one or more face regions of a target image in a target video are obtained;
step S204, determining whether at least one face area in the one or more face areas belongs to a face density normal condition or not based on a time filtering mechanism;
step S206, under the condition that the judgment result is yes, determining a target face area of the target image based on a tracking and filtering mechanism;
and S208, adjusting the exposure parameters of the target image according to the target face area.
Through steps S202 to S208, one or more face regions of a target image in a target video are obtained; whether at least one of the one or more face regions belongs to a normal-face-density case is determined based on a temporal filtering mechanism; if so, a target face region of the target image is determined based on a tracking filtering mechanism; and the exposure parameters of the target image are adjusted according to the target face region. This solves the problem in the related art that the image brightness in a video changes repeatedly, degrading the overall effect of the video, when the face density is low or multiple faces differ in brightness; the video quality remains stable without repeated brightness changes in those cases, and at least one face is accurately exposed.
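The four-step flow above can be sketched as a small pipeline. This is an illustrative Python sketch, not the patent's implementation: the three callables are hypothetical hooks standing in for the temporal filter (S204), the tracking filter (S206), and the camera's exposure control (S208).

```python
def adjust_exposure(faces, density_normal, select_target, apply_exposure):
    """Sketch of steps S202-S208.

    faces          -- face regions detected in the current frame (S202)
    density_normal -- callable; True if the temporal filter accepts the
                      frame as a normal-face-density case (S204)
    select_target  -- callable; picks the target face region via the
                      tracking filter (S206)
    apply_exposure -- callable; adjusts exposure from the target (S208)
    """
    if faces and density_normal():
        target = select_target(faces)
        if target is not None:
            apply_exposure(target)
            return target
    return None  # sparse faces or no stable target: leave exposure alone
```

A frame with a low-density verdict short-circuits before any exposure change, which is exactly how the patent avoids repeated brightness swings.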
Fig. 3 is a flowchart (i) of an exposure parameter adjustment method according to an alternative embodiment of the invention, and as shown in fig. 3, the step S204 includes:
step S302, acquiring first detection time of a current face region in the target image, and acquiring second detection time of the face region of the image with the largest difference value from the first detection time from a pre-established circular queue for storing the face region detection time of the target video;
step S304, determining whether the face area of the target image belongs to the face density normal condition according to the first detection time and the second detection time.
That is, based on the temporal filtering mechanism, the first detection time of the current face region in the target image is obtained first; then the second detection time, i.e. the detection time whose difference from the first detection time is largest, is obtained; and whether the face region of the target image belongs to the normal-face-density case is determined according to the first detection time and the second detection time.
Fig. 4 is a schematic flow chart (ii) of an exposure parameter adjusting method according to an alternative embodiment of the present invention, and as shown in fig. 4, the step S304 includes:
step S402, determining a time difference value between the first detection time and the second detection time;
step S404, judging whether the time difference value is smaller than a preset time threshold value;
step S406, determining that the face area of the target image belongs to a face density normal condition under the condition that the judgment result is yes;
step 408, in the case that the judgment result is no, determining that the face area of the target image does not belong to the face density normal condition.
That is, to determine whether the face region belongs to the normal-density case according to the first detection time and the second detection time, the time difference between the two is computed and compared with a preset time threshold.
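The comparison described above reduces to a single predicate. A minimal sketch (illustrative names; the timestamps use whatever unit the detector reports, e.g. seconds):

```python
def density_normal(t1, t2, T2):
    """True when the gap between the current detection time t1 and the
    earliest queued detection time t2 is below the preset threshold T2,
    i.e. faces appear often enough to be treated as normal density."""
    return abs(t1 - t2) < T2
```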
Fig. 5 is a schematic flow chart (iii) of an exposure parameter adjustment method according to an alternative embodiment of the invention, and as shown in fig. 5, the step S206 includes:
step S502, obtaining face information included in the target image, where the face information at least includes: a face region and a face ID;
step S504, the target face area is determined according to the face information.
That is, determining the target face region of the target image based on the tracking filter mechanism requires first obtaining the face region and the face ID of the target image, and then determining the target face region according to the face information.
Fig. 6 is a flowchart (iv) of an exposure parameter adjustment method according to an alternative embodiment of the invention, and as shown in fig. 6, the step S504 includes:
step S602, selecting a face area according to the face information based on a preset rule;
step S604, judging whether the face areas selected in the continuous T frames are the same target face or not;
step S606, in case that the determination result is yes, determining that the selected face area is the target face area.
That is, to determine the target face region according to the face information, a face region is first selected according to the face information based on a preset rule, and it is then judged whether the face regions selected in T consecutive frames belong to the same target face; if so, the selected face region is determined to be the target face region.
In an optional embodiment, the step S602 may specifically include: acquiring a face region at a preset position in a face sequence; acquiring a face region closest to the center of the image; acquiring a face area with the movement direction closest to the center of the image; and acquiring the face region with the largest occurrence frequency in the past preset time period.
That is, selecting a face region based on the preset rule involves: acquiring the face region at a preset position in the face sequence; acquiring the face region closest to the center of the image; acquiring the face region whose movement direction points closest to the center of the image; and acquiring the face region with the highest occurrence frequency in a past preset time period.
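One of the selection rules above, picking the face nearest the image center, can be sketched as follows. The `(x, y, w, h)` box format and the function name are illustrative assumptions, not taken from the patent:

```python
import math

def closest_to_center(faces, width, height):
    """Select the face region whose box center is nearest the image
    center; `faces` is a non-empty list of (x, y, w, h) boxes."""
    cx, cy = width / 2, height / 2
    return min(faces, key=lambda f: math.hypot(f[0] + f[2] / 2 - cx,
                                               f[1] + f[3] / 2 - cy))
```

The other rules (preset position in the sequence, movement direction, occurrence frequency) would be analogous scoring functions over the same box list.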
In an alternative embodiment, the step S604 includes: putting face IDs detected by continuous T frames into a preset circulating queue with the length of T; judging whether the circular queue only contains one face ID or not; if the judgment result is yes, determining that the face areas selected in the continuous T frames are the same target face; and under the condition that the judgment result is negative, determining that the face areas selected in the continuous T frames are not the same target face.
That is, to judge whether the face regions selected in T consecutive frames are the same face, the face IDs detected in those frames are first put into a preset circular queue, and the number of distinct face IDs in the queue is then checked: if the queue contains only one face ID, the regions are determined to be the same target face; otherwise they are different target faces.
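The queue-based check can be sketched with a fixed-length `deque` acting as the circular queue of length T (illustrative code; the patent does not prescribe a particular container):

```python
from collections import deque

def same_target_face(recent_ids):
    """True only if the face IDs selected in the last T consecutive
    frames all collapse to a single distinct ID."""
    return len(recent_ids) > 0 and len(set(recent_ids)) == 1

# Usage with T = 3: the deque overwrites its oldest entry when full,
# mimicking the circular queue of the text.
recent = deque(maxlen=3)
```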
In an alternative embodiment, the step S604 includes: storing a face ID in the face region currently used for stabilizing exposure parameters through a set first register, storing a target face ID in the face region in the previous frame through a set second register, and storing the continuous occurrence frequency of the target face ID in the face region in the previous frame through a set third register; when the face ID in the face area is obtained, comparing the face ID with the face ID in the second register; if the times in the third register are the same, adding 1 to the times in the third register, and if the times are not the same, setting the times in the third register to be zero; under the condition that the times in the third register are larger than a preset threshold value, the same target face is determined in the face area selected in the continuous T frames; and under the condition that the times in the third register are smaller than the preset threshold value, determining that the face areas selected in the continuous T frames are not the same target face.
That is, to judge whether the face regions selected in T consecutive frames are the same face, the face ID used for stabilizing exposure, the target face ID of the previous frame, and the number of consecutive occurrences of that target face ID are kept in registers; each newly acquired face ID is compared with the registered one and the registered count is updated in real time; finally, whether the frames show the same target face is determined according to the relation between the count and a preset threshold.
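The three-register scheme can be sketched as a small class. The attribute names (`stable_id`, `prev_id`, `count`) are illustrative stand-ins for the first, second, and third registers; the reset-to-zero and greater-than-threshold behaviour follows the description above:

```python
class TargetFaceTracker:
    """Register-based check that the same target face ID persists
    across consecutive frames before exposure locks onto it."""

    def __init__(self, threshold):
        self.threshold = threshold  # required consecutive occurrences
        self.stable_id = None       # first register: ID stabilizing exposure
        self.prev_id = None         # second register: previous frame's target ID
        self.count = 0              # third register: consecutive-occurrence count

    def update(self, face_id):
        """Feed the current frame's target face ID; return True once the
        same ID has persisted for more than `threshold` frames."""
        if face_id == self.prev_id:
            self.count += 1
        else:
            self.count = 0          # a different ID resets the count
            self.prev_id = face_id
        if self.count > self.threshold:
            self.stable_id = face_id
            return True
        return False
```

A brief face that appears for only a frame or two never reaches the threshold, so it cannot perturb the exposure parameters.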
Fig. 7 is a schematic diagram of a face exposure stabilization process based on a temporal filtering mechanism according to an alternative embodiment of the present invention, as shown in fig. 7, including:
step S1, establishing a circular queue for storing the detection time;
step S2, judging whether the human face detection frame rate is higher than a threshold value T1; if yes, go to step S3; if not, go to step S4;
step S3, setting the length of the circular queue to be T1;
step S4, setting the length of the circular queue as the frame rate of face detection;
step S5, obtaining the current face detection result;
step S6, judging whether a human face is detected in the picture; if yes, go to step S7; if not, returning to the step S5 for re-detection;
step S7, covering the circular queue head element with the time t1 of face detection;
step S8, obtaining the earliest face detection time t2 in the queue;
step S9, with the time difference threshold T2, judging whether |t1 - t2| < T2 holds; if yes, go to step S10; if not, returning to step S5 for re-detection;
step S10, adjusting the exposure parameters according to the current face region; and ending the flow.
The specific steps are described as follows:
s1, a circular queue for storing the time when the face was detected is established.
And S2, setting the length of the circular queue according to the frame rate of the current face detection and a preset threshold value T1. When the detection frame rate is less than T1, setting the length of the circular queue as the detection frame rate; when the detection frame rate is greater than T1, the circular queue length is set to T1.
And S5, acquiring the face detection result. Face-related information in the single-frame image is acquired by invoking an artificial-intelligence detection method, and includes the number and coordinates of the detected face regions, the detection time of each face region, the pixel brightness distribution of each region, and the like.
And S6, judging whether the current image frame contains a face region. If it does not, the flow returns to the previous step to acquire the face information of the next frame. If the current image does contain a face region, the temporal filtering mechanism needs to be started to keep the overall exposure of the video stable.
And S7, adding the detection time t1 of the current face region to the queue and dequeuing the element at the head of the queue. When a circular queue is used and the number of elements in the queue exceeds the length set in step S2, the newly enqueued element overwrites the oldest element.
And S8, searching the queue to acquire the earliest detection time t2 in the queue other than that of the current face region. When a circular queue is employed, the next non-zero element in the circular direction may be taken as t2.
And S9, comparing the L1 distance between t1 and t2, that is, comparing |t1 - t2| with the maximum interval time threshold T2. When |t1 - t2| > T2, the face region is considered to belong to the low face-density case and is treated as noise for face exposure. When |t1 - t2| < T2, the region is considered to belong to the normal face-density case and may be included in the calculation of the face exposure parameters.
And S10, when the face region is judged to be still valid after temporal filtering, adjusting the corresponding exposure parameters according to the brightness of the current face region.
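The queue logic of steps S1 through S9 can be sketched in Python as follows. This is a minimal illustration rather than the patented implementation: the names `TemporalFaceFilter`, `max_len` (playing the role of threshold T1) and `max_gap` (threshold T2) are assumptions. A `deque` with a `maxlen` stands in for the circular queue, since appending to a full deque discards the oldest element, mirroring the head-overwrite behaviour of step S7.

```python
from collections import deque


class TemporalFaceFilter:
    """Sketch of the circular-queue temporal filter (steps S1-S9).

    All names are hypothetical: `max_len` plays the role of T1 and
    `max_gap` the role of the time-difference threshold T2.
    """

    def __init__(self, detection_fps, max_len=25, max_gap=1.0):
        # S2-S4: queue length is the detection frame rate, capped at T1
        self.times = deque(maxlen=min(detection_fps, max_len))
        self.max_gap = max_gap

    def is_valid(self, t1):
        """Return True when the face detected at time t1 passes the filter."""
        if not self.times:
            # First detection: no earlier time to compare against yet.
            self.times.append(t1)
            return False
        t2 = self.times[0]       # S8: earliest detection time in the queue
        self.times.append(t1)    # S7: enqueue; a full deque drops the oldest
        return abs(t1 - t2) < self.max_gap  # S9: |t1 - t2| < T2
```

Under these assumptions, an isolated detection (large gap to the previous one) is rejected as low-density noise, while detections arriving in quick succession pass the filter and may drive exposure.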
Fig. 8 is a schematic diagram of a face exposure stabilization process based on a tracking filtering mechanism according to an alternative embodiment of the present invention, as shown in fig. 8, including:
step S1, judging whether the face exposure function is started; if yes, go to step S2; if not, ending the flow;
step S2, acquiring the face and ID contained in the current image;
step S3, determining a target face and recording an ID;
step S4, judging whether the target face IDs in the T frame are the same; if yes, go to step S5; if not, go to step S6;
step S5, adjusting exposure parameters based on the current face;
step S6, maintaining the existing exposure parameters;
step S7, issuing exposure parameters; and ending the flow.
The specific steps are described as follows:
s1, determine whether the face exposure stabilizing method mentioned in this proposal has been started. If the power is on, processing is performed according to the following steps. Otherwise the following steps are skipped.
And S2, acquiring the face data contained in the current image, including the number and coordinates of each detected face in the image, the pixel brightness distribution in each region, the position number in the detection sequence, and the recognition result ID of each face.
And S3, selecting one of the detected faces, together with its recognition result ID, as the target according to the acquired face information. Possible selection rules include: taking the face region and ID at a specific position in the acquired face sequence as the target; taking the face region and ID closest to the center of the image as the target; taking the face and ID whose motion direction is closest to the center of the image as the target; and taking the face and ID that has appeared most frequently over a past period of time as the target.
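One of the selection rules above, taking the face closest to the image center, can be sketched as follows. This is an illustrative Python fragment under assumed data shapes (each face as a dict with hypothetical keys `'id'` and `'box'` holding an `(x, y, w, h)` rectangle), not the patent's implementation.

```python
import math


def select_target_face(faces, image_size):
    """Pick the face whose box center is closest to the image center.

    `faces` is a hypothetical list of dicts with keys 'id' and
    'box' = (x, y, w, h); `image_size` is (width, height).
    Returns (face_id, box), or None when no face was detected.
    """
    if not faces:
        return None
    cx, cy = image_size[0] / 2, image_size[1] / 2

    def dist_to_center(face):
        x, y, w, h = face['box']
        # Distance from the box center to the image center.
        return math.hypot(x + w / 2 - cx, y + h / 2 - cy)

    best = min(faces, key=dist_to_center)
    return best['id'], best['box']
```

The other listed rules (fixed position in the sequence, motion toward the center, most frequent ID over a window) would only change the key function used by `min`.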
And S4, judging whether the target faces in the latest consecutive T frames are the same, where T is a preset threshold. A larger threshold makes the exposure change more stable, and the exposure parameters are retained longer after the target face disappears from the image. A smaller threshold makes the exposure change more sensitive, and the exposure parameter configuration switches more quickly when the target face is replaced. When the threshold is 0, the module is logically disabled and the face-exposure flicker problem cannot be solved. When the current face is judged to have appeared continuously for more than T frames, the exposure parameters are adjusted using that face; otherwise, the original exposure parameters are maintained, ensuring that the adaptive exposure of the image remains stable.
In addition, a circular queue of length T needs to be set, and the ID corresponding to each face region is denoted i. After the target face region is confirmed each time, i is enqueued and the head element of the queue is dequeued. If the queue contains only i, the target face is considered to have been present throughout the latest T consecutive frames, and the exposure parameters are calculated with target face i as the reference; otherwise, the target face is considered not to have been present long enough, and the original exposure parameters are kept.
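A minimal sketch of this length-T ID queue, again using `deque(maxlen=T)` as the circular queue; the class and method names are assumptions made for illustration.

```python
from collections import deque


class IDQueueFilter:
    """Sketch of the length-T circular queue of target-face IDs.

    The target is considered stable only when the latest T frames all
    produced the same ID (the queue is full and holds one distinct ID).
    """

    def __init__(self, t):
        self.ids = deque(maxlen=t)  # enqueueing when full drops the head
        self.t = t

    def update(self, face_id):
        """Record this frame's target ID; return True when it is stable."""
        self.ids.append(face_id)
        return len(self.ids) == self.t and len(set(self.ids)) == 1
```

A `True` return would authorize adjusting the exposure parameters to the current face; `False` keeps the existing parameters, as described above.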
Alternatively, three register variables S, L and C are set, indicating respectively the face ID currently used for stabilizing the exposure parameters, the target face ID of the previous frame, and the number of consecutive appearances of that ID. Each time a target face is acquired, its ID is compared with L: if they match, C is incremented by 1; otherwise, C is reset to zero. While C does not exceed the threshold, the current exposure parameters are maintained. Once C exceeds the threshold, the newly appeared face is considered stable, S is set to the new ID, and the exposure parameters are adjusted according to that target face.
In addition, the information of the face region needs to be sent to modules such as the automatic exposure module, which calculate the exposure parameters from the face region, ensuring that at least one face is accurately exposed.
Example 2
An embodiment of the present invention further provides an exposure parameter adjusting apparatus, and fig. 9 is a structural block diagram of an exposure parameter adjusting apparatus according to an embodiment of the present invention, as shown in fig. 9, including:
an obtaining module 92, configured to obtain one or more face regions of a target image in a target video;
a first determining module 94, configured to determine whether at least one face region of the one or more face regions belongs to a face density normality condition based on a temporal filtering mechanism;
a second determining module 96, configured to determine, if the determination result is yes, a target face region of the target image based on a tracking filtering mechanism;
and the adjusting module 98 is configured to adjust the exposure parameter of the target image according to the target face area.
This apparatus solves the problem in the related art that, when the face density in a video is low or multiple faces differ in brightness, the image brightness changes repeatedly and degrades the overall effect of the video. It ensures that the video quality remains stable, without repeated brightness changes, in those cases, and that at least one face is accurately exposed.
In an alternative embodiment, the first determining module 94 includes: the first processing unit is used for acquiring first detection time of a current face region in the target image and acquiring second detection time of the face region of the image with the largest difference value from the first detection time from a pre-established circular queue for storing the face region detection time of the target video; and the first determining unit is used for determining whether the face area of the target image belongs to the face density normal condition or not according to the first detection time and the second detection time.
That is, based on the temporal filtering mechanism, the first detection time of the current face region in the target image is obtained first, then the second detection time of the face region whose detection time differs most from the first detection time is obtained, and whether the face region of the target image belongs to the normal face-density case is determined from the first detection time and the second detection time.
In an optional embodiment, the first determining unit is further configured to: determining a time difference between the first detection time and the second detection time; judging whether the time difference value is smaller than a preset time threshold value or not; if the judgment result is yes, determining that the face area of the target image belongs to the face density normal condition; and under the condition that the judgment result is negative, determining that the face area of the target image does not belong to the face density normal condition.
That is, determining whether the face region belongs to the normal-density case from the first detection time and the second detection time requires computing the time difference between them and comparing that difference against a preset time threshold.
In an alternative embodiment, the second determining module 96 includes: a second obtaining unit, configured to obtain face information included in the target image, where the face information at least includes: a face region and a face ID; and the second determining unit is used for determining the target face area according to the face information.
That is, determining the target face region of the target image based on the tracking filter mechanism requires first obtaining the face region and the face ID of the target image, and then determining the target face region according to the face information.
In an alternative embodiment, the adjusting module 98 includes: the selecting unit is used for selecting a face area according to the face information based on a preset rule; the judging unit is used for judging whether the face areas selected in the continuous T frames are the same target face or not; and the third determining unit is used for determining the selected face area as the target face area under the condition that the judgment result is yes.
That is, determining the target face region from the face information requires selecting a face region according to a predetermined rule, then judging whether the face regions selected in the consecutive T frames are the same target face; if so, the selected face region is determined to be the target face region.
In an alternative embodiment, the adjusting module 98 is further configured to: acquiring a face region at a preset position in a face sequence; acquiring a face area closest to the center of the image; acquiring a face area with the movement direction closest to the center of the image; and acquiring the face region with the largest occurrence frequency in the past preset time period.
That is, selecting a face region based on a preset rule may use any of the above: the face region at a preset position in the face sequence, the face region closest to the image center, the face region whose motion direction is closest to the image center, or the face region appearing most frequently in a past preset time period.
In an alternative embodiment, the adjusting module 98 is further configured to: putting the face IDs detected by the continuous T frames into a preset circular queue with the length of T; judging whether the circular queue only contains one face ID; if the judgment result is yes, determining that the face areas selected in the continuous T frames are the same target face; and under the condition that the judgment result is negative, determining that the face areas selected in the continuous T frames are not the same target face.
That is, judging whether the face regions selected in consecutive T frames are the same face requires first putting the face IDs detected in the consecutive frames into the preset circular queue, then counting the distinct face IDs in the queue; if only one face ID is present, the regions are determined to be the same target face; otherwise, they are different target faces.
In an alternative embodiment, the adjusting module 98 is further configured to: store, through a set first register, the face ID currently used for stabilizing the exposure parameters; store, through a set second register, the target face ID of the previous frame; and store, through a set third register, the number of consecutive occurrences of the target face ID of the previous frame. When the face ID of the face region is obtained, it is compared with the face ID in the second register; if they are the same, the count in the third register is incremented by 1, and if they are not the same, the count in the third register is reset to zero. Under the condition that the count in the third register is larger than a preset threshold value, the face regions selected in the consecutive T frames are determined to be the same target face; under the condition that the count is smaller than the preset threshold value, the face regions selected in the consecutive T frames are determined not to be the same target face.
That is, judging whether the face regions selected in consecutive T frames contain the same face requires registering the face ID currently in use, the target face ID of the previous frame, and the number of consecutive occurrences of that target face ID; comparing each newly acquired face ID with the registered one; updating the registered count in real time; and finally determining whether the same target face is present according to the relation between the count and the preset threshold value.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring one or more face regions of a target image in the target video;
s2, determining whether at least one face area in the one or more face areas belongs to a face density normal condition based on a time filtering mechanism;
s3, determining a target face area of the target image based on a tracking and filtering mechanism under the condition that the judgment result is yes;
and S4, adjusting the exposure parameters of the target image according to the target face area.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 4
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring one or more face regions of a target image in a target video;
s2, determining whether at least one face area in the one or more face areas belongs to a face density normal condition or not based on a time filtering mechanism;
s3, determining a target face area of the target image based on a tracking and filtering mechanism under the condition that the judgment result is yes;
and S4, adjusting the exposure parameters of the target image according to the target face area.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An exposure parameter adjustment method, comprising:
acquiring one or more face regions of a target image in a target video;
determining whether at least one face area in the one or more face areas belongs to a face density normal condition based on a time filtering mechanism;
if the judgment result is yes, determining a target face area of the target image based on a tracking filtering mechanism, wherein the determining comprises the following steps: judging whether the face areas selected in the continuous T frames are the same target face or not; if the judgment result is yes, determining the selected face area as the target face area;
adjusting exposure parameters of the target image according to the target face area;
wherein determining whether the face region of the target image belongs to a face density normal condition based on the temporal filtering mechanism comprises:
acquiring first detection time of a current face region in the target image, and acquiring second detection time of the face region of the image with the largest difference value from the first detection time from a pre-established circular queue for storing the face region detection time of the target video;
determining a time difference between the first detection time and the second detection time;
judging whether the time difference value is smaller than a preset time threshold value or not;
if the judgment result is yes, determining that the face area of the target image belongs to the face density normal condition;
and under the condition that the judgment result is negative, determining that the face area of the target image does not belong to the face density normal condition.
2. The method of claim 1, wherein determining a target face region of the target image based on the tracking filter mechanism comprises:
acquiring face information contained in the target image, wherein the face information at least comprises: a face region and a face ID;
and determining the target face area according to the face information.
3. The method of claim 1, wherein selecting a face region according to the face information based on a predetermined rule comprises:
acquiring a face region at a preset position in a face sequence;
acquiring a face area closest to the center of the image;
acquiring a face area with the movement direction closest to the center of the image;
and acquiring the face region with the largest occurrence frequency in the past preset time period.
4. The method of claim 1, wherein determining whether the face regions selected in consecutive T frames are the same target face comprises:
putting the face IDs detected by the continuous T frames into a preset circular queue with the length of T;
judging whether the circular queue only contains one face ID;
if the judgment result is yes, determining that the face areas selected in the continuous T frames are the same target face;
and under the condition that the judgment result is negative, determining that the face areas selected in the continuous T frames are not the same target face.
5. The method of claim 1, wherein determining whether the face regions selected in consecutive T frames are the same target face comprises:
storing, through a set first register, the face ID currently used for stabilizing the exposure parameters, storing, through a set second register, the target face ID of the face region in the previous frame, and storing, through a set third register, the number of consecutive occurrences of the target face ID of the face region in the previous frame;
when the face ID of the face region is obtained, comparing the face ID with the face ID in the second register;
if they are the same, adding 1 to the number in the third register, and if they are not the same, setting the number in the third register to zero;
under the condition that the number in the third register is larger than a preset threshold value, determining that the face regions selected in the consecutive T frames are the same target face;
and under the condition that the number in the third register is smaller than the preset threshold value, determining that the face regions selected in the consecutive T frames are not the same target face.
6. An exposure parameter adjustment apparatus, comprising:
the acquisition module is used for acquiring one or more face regions of a target image in a target video;
the first determination module is used for determining whether at least one face area in the one or more face areas belongs to a face density normal condition or not based on a time filtering mechanism;
a second determining module, configured to determine a target face region of the target image based on a tracking filtering mechanism if the determination result is yes, where the second determining module includes: judging whether the face areas selected in the continuous T frames are the same target face or not; if the judgment result is yes, determining the selected face area as the target face area;
the adjusting module is used for adjusting the exposure parameters of the target image according to the target face area;
the first determining module comprises: the first processing unit is used for acquiring first detection time of a current face region in the target image and acquiring second detection time of the face region of the image with the largest difference value from the first detection time from a pre-established circular queue for storing the face region detection time of the target video; the first determining unit is used for determining whether the face area of the target image belongs to the face density normal condition or not according to the first detection time and the second detection time;
the first determination unit is further configured to: determining a time difference between the first detection time and the second detection time; judging whether the time difference value is smaller than a preset time threshold value or not; if the judgment result is yes, determining that the face area of the target image belongs to the face density normal condition; and under the condition that the judgment result is negative, determining that the face area of the target image does not belong to the face density normal condition.
7. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 5 when executed.
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
CN202110197782.2A 2021-02-22 2021-02-22 Exposure parameter adjusting method and device Active CN112822409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110197782.2A CN112822409B (en) 2021-02-22 2021-02-22 Exposure parameter adjusting method and device


Publications (2)

Publication Number Publication Date
CN112822409A CN112822409A (en) 2021-05-18
CN112822409B true CN112822409B (en) 2022-06-24

Family

ID=75864666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110197782.2A Active CN112822409B (en) 2021-02-22 2021-02-22 Exposure parameter adjusting method and device

Country Status (1)

Country Link
CN (1) CN112822409B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247480A (en) * 2008-03-26 2008-08-20 北京中星微电子有限公司 Automatic exposure method based on objective area in image
CN105791673A (en) * 2015-01-09 2016-07-20 佳能株式会社 Exposure control apparatus, control method therefor, and image pickup apparatus
CN106210523A (en) * 2016-07-22 2016-12-07 浙江宇视科技有限公司 A kind of exposure adjustment method and device
CN110913147A (en) * 2018-09-14 2020-03-24 浙江宇视科技有限公司 Exposure adjusting method and device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4945722B2 (en) * 2006-02-15 2012-06-06 Hoya株式会社 Imaging device
US8233789B2 (en) * 2010-04-07 2012-07-31 Apple Inc. Dynamic exposure metering based on face detection




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant