CN111654643B - Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium - Google Patents


Info

Publication number
CN111654643B
CN111654643B (application CN202010715066.4A)
Authority
CN
China
Prior art keywords
current image
exposure
determining
face
statistical information
Prior art date
Legal status
Active
Application number
CN202010715066.4A
Other languages
Chinese (zh)
Other versions
CN111654643A (en)
Inventor
Inventor not disclosed
Current Assignee
Suzhou Zhendi Intelligent Technology Co Ltd
Original Assignee
Suzhou Zhendi Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhendi Intelligent Technology Co Ltd filed Critical Suzhou Zhendi Intelligent Technology Co Ltd
Priority to CN202010715066.4A priority Critical patent/CN111654643B/en
Publication of CN111654643A publication Critical patent/CN111654643A/en
Application granted granted Critical
Publication of CN111654643B publication Critical patent/CN111654643B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an exposure parameter determination method and device, an unmanned aerial vehicle, and a computer-readable storage medium. The method includes: detecting a current image to determine an attention object in the current image, where the current image is an image acquired by the camera equipment to be adjusted; determining a target exposure mode according to the type of the attention object; and determining the required exposure parameters according to the target exposure mode and the information of the attention object in the current image. The method of the embodiments of the application can meet exposure requirements in more scenes.

Description

Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
Technical Field
The invention relates to the technical field of imaging, and in particular to an exposure parameter determination method and device, an unmanned aerial vehicle, and a computer-readable storage medium.
Background
Cameras are used to record memorable moments, which places high demands on imaging quality. With the development of camera hardware, overall imaging quality has improved greatly. However, in many complex scenes the camera's exposure parameters still need to be adjusted manually, and the quality of automatic-exposure imaging in such scenes is poor.
Disclosure of Invention
The invention aims to provide an exposure parameter determination method, an exposure parameter determination device, an unmanned aerial vehicle and a computer readable storage medium, which can meet the exposure requirements in more scenes.
In a first aspect, an embodiment of the present invention provides an exposure parameter determining method, including:
detecting a current image to determine an attention object in the current image, wherein the current image is an image acquired by the camera equipment to be adjusted;
determining a target exposure mode according to the type of the attention object;
and determining required exposure parameters according to the target exposure mode and the information of the attention object in the current image.
In an optional implementation manner, the determining, according to the target exposure mode and the attention object information in the current image, a required exposure parameter includes:
obtaining first histogram statistical information of the attention object according to the current image;
and determining the required exposure parameters according to the statistical information of the first histogram.
In the method of this embodiment, the histogram statistical information describes the light-dark distribution of the current image, so the required exposure parameter can be determined based on the current brightness distribution, and imaging under the resulting parameter can be better.
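As an illustration of this idea (not code from the patent), the sketch below computes a face-region histogram with NumPy and derives a multiplicative gain that would pull the region's mean brightness toward a target mean for well-exposed skin. The target value 110 and the function names are assumptions.

```python
import numpy as np

def face_histogram_stats(gray, box, bins=256):
    """Histogram of pixel intensities inside a face bounding box.

    gray: 2-D uint8 image; box: (x, y, w, h) of the attention object.
    """
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
    return hist

def exposure_gain_from_hist(hist, target_mean=110.0):
    """Multiplicative gain that would move the region's mean brightness
    toward a hypothetical target mean for well-exposed skin."""
    levels = np.arange(len(hist))
    mean = float((hist * levels).sum()) / max(hist.sum(), 1)
    return target_mean / max(mean, 1e-6)
```

A real auto-exposure loop would then split this gain into exposure time and sensor gain subject to hardware limits.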
In an alternative embodiment, the current image comprises: the camera equipment to be adjusted collects multiple frames of pictures; the method further comprises the following steps:
predicting the track of the attention object according to the multi-frame images to obtain a target track;
the determining the required exposure parameter according to the statistical information of the first histogram includes:
and determining the required exposure parameters according to the target track and the statistical information of the first histogram.
In an optional implementation manner, the performing, according to the multiple frames of images, trajectory prediction on the attention object to obtain a target trajectory includes:
and carrying out face detection and head detection on the multi-frame pictures to carry out track prediction on the attention object so as to obtain a target track.
In the method in the embodiment of the application, the trajectory of the attention object is predicted, so that the attention object can be better tracked during shooting, and the imaging effect is better.
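The trajectory prediction step can be as simple as extrapolating the detected face/head centers across frames. The sketch below is an assumption, not the patent's algorithm: it fits a straight line to the per-frame centers by least squares and extrapolates ahead.

```python
import numpy as np

def predict_next_center(centers, steps=1):
    """Linear extrapolation of the attention object's track.

    centers: list of (x, y) face/head centers from consecutive frames.
    Fits x(t) and y(t) with degree-1 least-squares polynomials and
    evaluates them `steps` frames past the last observation.
    """
    t = np.arange(len(centers))
    xs, ys = zip(*centers)
    fx = np.polyfit(t, xs, 1)
    fy = np.polyfit(t, ys, 1)
    t_next = len(centers) - 1 + steps
    return float(np.polyval(fx, t_next)), float(np.polyval(fy, t_next))
```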
In an optional embodiment, when the current image includes a plurality of faces, the obtaining, according to the current image, first histogram statistical information of the attention object includes:
respectively determining first histogram statistical information of each face according to the current image to obtain joint histogram statistical information;
the determining the required exposure parameter according to the statistical information of the first histogram includes:
and respectively determining required exposure parameters according to the statistical information of the combined histogram.
In the method of this embodiment, when multiple faces are shot, histogram statistical information can be gathered for each face separately, so the determined exposure parameters can better meet the exposure requirements of the multiple faces, and the imaging effect for multiple faces can be better.
In an alternative embodiment, the current image comprises: the camera equipment to be adjusted acquires multi-frame exposure pictures under different exposure parameters; the determining the first histogram statistical information of each face according to the current image to obtain the joint histogram statistical information includes:
carrying out face key point detection on the target face in the multi-frame exposure picture in the current image aiming at any target face in the current image so as to determine the position of facial features;
carrying out image segmentation on the multi-frame exposure picture according to the positions of the facial features to obtain a target facial skin area;
obtaining multi-stage histogram statistical information of the target face according to the target face skin area in each frame of the multi-frame exposure pictures, and forming joint histogram statistical information from the multi-stage histogram statistical information corresponding to all faces in the current image;
the determining the required exposure parameters according to the statistical information of the joint histogram respectively comprises the following steps:
determining the face skin colors corresponding to all faces in the current image according to the multi-stage histogram statistical information corresponding to each face;
and determining the required exposure parameters according to the face complexion corresponding to all the faces in the current image.
In the method in the embodiment of the application, when the exposure parameter is determined, the skin color of the human face is detected, and the corresponding imaging effect is determined based on different skin colors, so that the determined exposure parameter can meet the imaging requirements of human faces with different skin colors, and the imaging effect is better.
In an optional embodiment, the detecting the current image and determining the object of interest in the current image includes:
classifying the current image through a pre-trained classification model to determine an attention object corresponding to the current image and a target scene corresponding to the attention object according to a classification result;
the determining the required exposure parameter according to the target exposure mode and the attention object information in the current image includes:
acquiring a preset exposure parameter rule according to the target scene;
and determining exposure parameters matched with the target scene according to the exposure parameter rules.
In the method in the embodiment of the application, different exposure rules can be preset for different scenes, so that the required exposure parameters can be determined relatively quickly.
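A minimal sketch of such preset rules, with entirely illustrative scene names and parameter values (the patent does not specify them):

```python
# Hypothetical mapping from a classified target scene to preset exposure rules.
PRESET_RULES = {
    "sunrise":   {"exposure_time_ms": 8.0,  "gain_db": 0.0, "ev_bias": -0.3},
    "waterfall": {"exposure_time_ms": 1.0,  "gain_db": 6.0, "ev_bias":  0.0},
    "portrait":  {"exposure_time_ms": 16.6, "gain_db": 3.0, "ev_bias":  0.3},
}

def exposure_for_scene(scene):
    """Look up the preset exposure parameters matched to the target scene,
    falling back to a neutral default for unknown scenes."""
    default = {"exposure_time_ms": 10.0, "gain_db": 0.0, "ev_bias": 0.0}
    return PRESET_RULES.get(scene, default)
```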
In an optional embodiment, the determining a required exposure parameter according to the target exposure mode and the attention object information in the current image includes:
calculating the importance of each region in the current image by using a saliency detection algorithm, so as to determine a salient region in the current image according to the importance of each region;
determining second histogram statistical information corresponding to the salient region;
and determining the required exposure parameters according to the second histogram statistical information.
In the method in the embodiment of the application, the salient region is determined through the saliency detection, so that the required exposure parameter can be determined based on the salient region which is possibly concerned, the determined required exposure parameter can adapt to the requirement of the current scene, the imaging effect can be better, and the requirement of a user can be better adapted.
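The patent does not name a specific saliency algorithm. As a hedged stand-in, the sketch below scores grid cells by their contrast against the global mean brightness and keeps the above-median cells as the "salient region"; real detectors are far more sophisticated.

```python
import numpy as np

def salient_region_mask(gray, grid=8):
    """Toy saliency: score each grid cell by its absolute contrast against
    the global mean brightness, then keep cells above the median score."""
    h, w = gray.shape
    gh, gw = h // grid, w // grid
    scores = np.zeros((grid, grid))
    gmean = gray.mean()
    for i in range(grid):
        for j in range(grid):
            cell = gray[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            scores[i, j] = abs(cell.mean() - gmean)
    return scores > np.median(scores)
```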
In an optional implementation manner, the determining, according to the target exposure mode and the attention object information in the current image, a required exposure parameter includes:
calculating the importance of each region in the current image by using a saliency detection algorithm, so as to determine a salient region in the current image according to the importance of each region;
determining third histogram statistical information corresponding to the salient region;
determining fourth histogram statistical information of the attention object from the current image;
performing weighting processing on the third histogram statistical information and the fourth histogram statistical information to obtain mixed histogram statistical information;
and determining the required exposure parameters according to the statistical information of the mixed histogram.
In the method in the embodiment of the application, the salient region is determined through the saliency detection, the histogram statistical information of the face part is determined through the attention of the face, and the required exposure parameter is determined based on the salient region and the face region, so that the determined required exposure parameter can adapt to the exposure requirements of various types of information in a picture, and the imaging effect is better.
In an alternative embodiment, the method further comprises:
and if the concerned object is out of the acquisition range of the camera equipment to be adjusted, determining exposure parameters of each time period in a specified time period according to a set change trend, wherein the specified time period is a time period from a current time node to a time node after the current time.
In the method of this embodiment, when the attention object leaves the acquisition range of the camera equipment to be adjusted, the exposure parameters are adjusted gradually along the set change trend, which can reduce flicker in the camera's exposure.
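A minimal sketch of such a set change trend, assuming a linear ramp (the patent leaves the trend shape open):

```python
def ramped_exposure(current, target, steps):
    """Move an exposure parameter from its current value to a target
    value over a specified number of time steps, one value per step.
    A linear trend is used here purely for illustration."""
    return [current + (target - current) * k / steps
            for k in range(1, steps + 1)]
```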
In a second aspect, an embodiment of the present invention provides an exposure parameter determining apparatus, including:
the detection module is used for detecting a current image and determining an attention object in the current image, wherein the current image is an image acquired by the camera equipment to be adjusted;
the first determining module is used for determining a target exposure mode according to the type of the attention object;
and the second determining module is used for determining the required exposure parameters according to the target exposure mode and the attention object information in the current image.
In a third aspect, an embodiment of the present invention provides an unmanned aerial vehicle, including:
the camera shooting device is used for collecting images;
a processor;
a memory storing machine-readable instructions executable by the processor; when executed by the processor, the machine-readable instructions perform the steps of the method of any one of the preceding embodiments.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method according to any one of the foregoing embodiments.
The beneficial effects of the embodiments of the application are that, by adopting different exposure modes for different attention objects, the exposure parameter determination in the embodiments can adapt to the requirements of more and varied scenes, making it more broadly applicable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of an exposure parameter determining method according to an embodiment of the present application.
Fig. 3 is a detailed flowchart of step 203 of the exposure parameter determining method according to an embodiment of the present application.
Fig. 4 is another detailed flowchart of step 203 of the exposure parameter determining method according to an embodiment of the present disclosure.
Fig. 5 is a schematic functional block diagram of an exposure parameter determining apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
The inventor studied the exposure modes of current cameras. Automatic exposure today falls mainly into two types: exposure based on global image histogram statistics and exposure based on a local region. The first type, global exposure, controls parameters such as the camera's exposure time and exposure gain by computing the average brightness of the whole image; the second type, local exposure, is mainly exposure based on face-region information, setting exposure parameters from face brightness.
However, in studying imaging requirements against the current exposure modes, the inventor found several problems: 1. when switching between scenes with and without faces, camera exposure tends to flicker and image brightness jumps, so the exposure transition is not smooth enough; 2. with multiple faces, some faces may come out too dark and others too white, giving inconsistent imaging; 3. without usable face information (e.g., backlit or top-lit scenes), exposure is mostly based on the full image, so dark regions of the image become too dark (even if the camera supports a wide-dynamic mode, dark areas still render too dark and bright areas too bright); 4. exposure cannot be set for a specific target, yet people shoot different objects with different points of focus, and the region of interest is what most needs to be imaged clearly; 5. exposure is inconsistent across people with different skin colors.
In view of the situation research, embodiments of the present application provide an exposure parameter determination method, an apparatus, an unmanned aerial vehicle, and a computer-readable storage medium, which are described below with several embodiments.
Example one
To facilitate understanding of the present embodiment, an electronic device that executes an exposure parameter determination method disclosed in the embodiments of the present application will be described in detail first.
Fig. 1 shows a schematic block diagram of an electronic device. The electronic device 100 may include a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input-output unit 115, a display unit 116, and an acquisition unit 117. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely exemplary and is not intended to limit the structure of the electronic device 100. For example, electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The above-mentioned memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115, the display unit 116 and the acquisition unit 117 are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The memory 111 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program; the processor 113 executes the program after receiving an execution instruction. The method executed by the electronic device 100, as defined by the processes disclosed in any embodiment of the present application, may be applied to the processor 113 or implemented by the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The Processor 113 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other embodiments, they may each be implemented in a separate chip.
The input/output unit 115 is used for the user to provide input data; it may be used for inputting a signal. For example, when the electronic device 100 is a drone, the input/output unit 115 may be a key provided on the drone, a remote controller that controls the drone, or the like.
The display unit 116 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and the user or is used for displaying image data to the user for reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. Alternatively, the display unit 116 may be used to display the image data acquired by the acquisition unit 117.
The above-mentioned acquisition unit 117 is used for acquiring image data of the periphery of the electronic device.
Optionally, the electronic device 100 in this embodiment may be an unmanned aerial vehicle, or may be a handheld cradle head or the like with a camera device installed.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The implementation of the exposure parameter determination method is described in detail below by several embodiments.
Example two
Please refer to fig. 2, which is a flowchart illustrating an exposure parameter determining method according to an embodiment of the present disclosure. The specific process shown in fig. 2 will be described in detail below.
Step 201, detecting a current image, and determining an attention object in the current image.
And the current image is an image acquired by the camera equipment to be adjusted.
In an embodiment, the current image may be classified by a pre-trained classification model, so as to determine an attention object corresponding to the current image and a target scene corresponding to the attention object according to a classification result.
Alternatively, the classification model described above may be used to classify different types of images.
For example, the classification model may classify landscape-type images. The classification categories may then include scenic images of different scenes such as sunrise, sunset, clouds, waterfalls, and so on.
For example, when the classification model is used for classifying landscape images, the classification model may be obtained by training an initial classification model using image data sets of different scenes, such as sunrise, sunset, cloud, waterfall, and the like.
Illustratively, the classification model may classify human and non-human images. For example, the classification categories may then include: human images and non-human images.
For example, when the classification model is used to classify whether the image contains a portrait, the classification model may be obtained by training an initial classification model using an image set containing a portrait and an image set not containing a portrait.
In another embodiment, the current image may be detected by a face detection algorithm to determine the object of interest in the current image.
Step 202, determining a target exposure mode according to the type of the attention object.
Alternatively, the types of the attention object may include a human face, an animal, a landscape, and the like.
In this embodiment, different exposure modes may be adopted for different subjects. Illustratively, the exposure mode may include: the exposure is carried out based on the face information, the exposure is carried out based on the hot spot area, the exposure is carried out based on the face and the hot spot area, and the like.
Alternatively, the exposure based on the hot spot region detection may also be performed according to the difference of the hot spots, and the selection of the exposure parameters is also different.
Optionally, determining the target exposure mode may include: and when a switching instruction is received, determining a target exposure mode.
Optionally, determining the target exposure mode may further include: the electronic equipment automatically selects a corresponding target exposure mode according to the type of the attention object.
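A hypothetical dispatch from object type to exposure mode might look like the following; the mode names mirror the modes listed above, but the mapping itself is an assumption, not taken from the patent.

```python
def select_exposure_mode(object_type):
    """Pick a target exposure mode from the type of the attention object.

    The mapping below is illustrative: faces use face-based exposure,
    landscapes use hot-spot (salient-region) exposure, and anything
    unrecognized falls back to global exposure.
    """
    modes = {
        "face": "face_exposure",             # expose from face histogram
        "face_and_scene": "mixed_exposure",  # face + salient-region weighting
        "landscape": "hotspot_exposure",     # expose from the salient region
    }
    return modes.get(object_type, "global_exposure")
```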
Step 203, determining the required exposure parameters according to the target exposure mode and the information of the attention object in the current image.
Alternatively, the required exposure parameters may include exposure time, exposure gain, and the like.
Optionally, the type of object of interest comprises a human face. At this time, as shown in fig. 3, step 203 may include the following steps 2031 to 2033.
Step 2031, obtaining first histogram statistical information of the attention object according to the current image.
Illustratively, the first histogram statistical information may represent a shading distribution of pixels of the face image. For example, the abscissa of the histogram may represent a distribution of light and shade, and the ordinate represents the number of pixels corresponding to various light and shade.
Optionally, the first histogram statistical information of the attention object may be determined according to an image parameter in an area corresponding to the attention object in the current image. For example, the image parameters may include color values for individual pixels.
Alternatively, the subject being photographed may include a plurality of persons, and then a plurality of faces may be included in the current image. At this time, step 2031 may include: and respectively determining the first histogram statistical information of each face according to the current image so as to obtain the joint histogram statistical information.
Optionally, because the exposure parameters required by faces with different skin colors may differ, in this embodiment the skin color of each face in the picture may be detected in order to improve the quality of face exposure. If multiple faces are included in the current image, they may have different skin tones, such as dark, light, yellow-toned, and brown.
The corresponding face color can be determined according to the histogram statistical information of the multi-frame image. Illustratively, the current image may include: and acquiring multi-frame exposure pictures of the camera equipment to be adjusted under different exposure parameters. Step 2031 may comprise the following steps a to c.
Step a, aiming at any target face in the current image, carrying out face key point detection on the target face in the multi-frame exposure picture in the current image so as to determine the position of facial features.
Illustratively, face key point detection may use the LBF (Local Binary Features) algorithm, the ERT (Ensemble of Regression Trees) algorithm, the TCDCN (Tasks-Constrained Deep Convolutional Network) algorithm, the MTCNN (Multi-task Cascaded Convolutional Networks) algorithm, and the like.
The face key points may be points on a face key region, wherein the key region may include: eyebrows, nose, mouth, face, contours, etc.
And b, carrying out image segmentation on the multi-frame exposure picture according to the positions of the facial features to obtain a target facial skin area.
In this embodiment, the face key region is determined by the face key point, and the region on the face other than the face key region may represent a face skin region.
In this embodiment, the target face skin region can be obtained by segmenting the face key region from other parts of the face.
And c, obtaining multi-stage histogram statistical information of the target face according to the target face skin area in each frame of the multi-frame exposure pictures, and forming joint histogram statistical information from the multi-stage histogram statistical information corresponding to all faces in the current image.
Alternatively, the skin tone of the target face may be determined from multi-stage histogram statistics of the target face.
The accuracy of face exposure and the effect of actual exposure can be improved through the detection of the skin color of the face, and the camera exposure problem under different skin colors of people, such as the problem of inconsistent exposure of black people and white people, can be reduced.
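The segmentation part of steps a and b above can be sketched roughly as follows: starting from the face crop, mask out small disks around the detected key points (eyes, brows, nose, mouth) so that mostly skin pixels remain. The disk radius and the mask representation are assumptions for illustration.

```python
import numpy as np

def skin_mask_from_keypoints(face_shape, keypoints, radius=4):
    """Rough skin mask for a face crop of shape (h, w): start from the
    whole crop and remove a small disk around each (x, y) key point,
    leaving mostly skin pixels for histogram statistics."""
    h, w = face_shape
    mask = np.ones((h, w), dtype=bool)
    yy, xx = np.mgrid[0:h, 0:w]
    for (x, y) in keypoints:
        mask &= (xx - x) ** 2 + (yy - y) ** 2 > radius ** 2
    return mask
```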
Step 2033, determining the required exposure parameters according to the statistical information of the first histogram.
In one embodiment, step 2033 may comprise: and respectively determining required exposure parameters according to the statistical information of the combined histogram.
In this embodiment, the required exposure parameters may be determined according to the face brightness of the multiple persons and the brightness of the background behind the faces, which are determined by the joint histogram statistical information.
For example, if the joint histogram statistical information corresponding to the current multiple faces has a larger proportion of dark tones than the histogram statistics of a normally exposed image, i.e., the picture skews dark, the brightness may be increased.
In one embodiment, step 2033 may comprise: and determining the face complexion corresponding to all faces in the current image according to the multi-stage histogram statistical information corresponding to each face, and determining the required exposure parameters according to the face complexion corresponding to all faces in the current image.
For example, for a first target face, if the differences among the per-stage histogram statistical information in the multi-stage histogram statistical information of the first target face are smaller than a set value, this indicates that different exposures have little influence on the first target face, which may in turn indicate that the skin color of the first target face is dark.
For example, for a second target face, if the differences among the per-stage histogram statistical information in the multi-stage histogram statistical information of the second target face are greater than the set value, this indicates that different exposures have a large influence on the second target face, which may in turn indicate that the skin color of the second target face is light. For example, the skin color of the second target face may be white, yellow, etc.
Alternatively, the above set value may be chosen as needed; for example, it may be 30%, 26%, etc.
Optionally, because different exposure parameters have little effect on a dark-skinned face, when multiple faces are photographed, the exposure condition corresponding to the light-skinned faces may be given priority. Illustratively, the histogram statistical information corresponding to the light-skinned faces can be used as the basis for adjusting the exposure parameters to determine the required exposure parameters.
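The heuristic above (small variation across exposure levels suggests a dark skin tone) can be sketched as follows; the bin count, the choice of mean luminance as the variation metric, and the 30% threshold are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np

def histogram_stats(skin_pixels, bins=16):
    """Normalized luminance histogram of a face skin region at one exposure level."""
    hist, _ = np.histogram(skin_pixels, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def mean_luminance(hist, bins=16):
    """Mean luminance implied by a normalized histogram (bin centers weighted by counts)."""
    centers = (np.arange(bins) + 0.5) * (256 / bins)
    return float((hist * centers).sum())

def classify_skin_tone(multi_stage_hists, threshold=0.30):
    """Classify a face as dark- or light-skinned from its multi-stage histogram
    statistics (one histogram per exposure level): small variation across
    exposures suggests a dark skin tone, per the embodiment's heuristic."""
    means = [mean_luminance(h) for h in multi_stage_hists]
    spread = (max(means) - min(means)) / 255.0
    return "dark" if spread < threshold else "light"
```

The set value (here `threshold=0.30`, matching the 30% example in the text) controls how much the per-exposure statistics must differ before the face is treated as light-skinned.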
Optionally, the current image may include multiple frames captured by the image pickup device to be adjusted. Before step 2033, the method may further include: step 2032, performing trajectory prediction on the object of interest according to the multiple frames to obtain a target trajectory.
In this embodiment, in order to better determine the exposure parameter, a designated face may also be tracked.
Optionally, the designated face may be a face image input in advance, and the designated face within the acquisition range of the image pickup device to be adjusted is found out in a face matching manner. The object of interest may then be a designated face.
In an alternative embodiment, a Kalman filter algorithm may be used to dynamically predict the position of the object of interest across the multiple frames.
Optionally, when the current picture includes a plurality of faces, the trajectory of each face may also be predicted.
Alternatively, the exposure parameters required at the corresponding position can be determined from the current position of the face.
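A minimal constant-velocity Kalman filter for predicting a face center across frames might look like the following sketch; the state layout and the noise covariance values are illustrative assumptions, not part of the patent:

```python
import numpy as np

class FaceTracker:
    """Constant-velocity Kalman filter for a face center (x, y).
    State vector: [x, y, vx, vy]."""
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0], dtype=float)
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # motion model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                      # process noise (assumed)
        self.R = np.eye(2) * 1.0                       # measurement noise (assumed)

    def predict(self):
        """Advance the state one frame; returns the predicted (x, y)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Fuse a detected face center into the state estimate."""
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Predicting the next face position in this way lets the exposure logic weight the region where the face is expected to be, rather than where it was last detected.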
In an embodiment, step 2032 may further comprise: performing face detection and head detection on the multiple frames to perform trajectory prediction on the object of interest, so as to obtain the target trajectory.
Optionally, step 2033 may comprise: and determining the required exposure parameters according to the target track and the statistical information of the first histogram.
Further, different shooting scenarios may impose different exposure requirements. For example, when shooting a night scene, dark tones naturally dominate, so an uneven histogram luminance distribution is not necessarily caused by incorrect exposure. Likewise, when shooting a snowy mountain, bright tones dominate, and again the uneven luminance distribution need not indicate an exposure problem. Therefore, different preset exposure parameter rules can be configured for different shooting requirements.
For example, when the image to be captured is a landscape, the target of interest may not include a human face, and the target of interest may be the entire image or some local feature in the image. Alternatively, the object of interest in the current image may then be a salient region.
Alternatively, as shown in fig. 4, step 203 may include the following steps 2034 to 2036.
Step 2034, calculating the importance degree of each region in the current image by using a saliency detection algorithm, so as to determine a salient region in the current image according to the importance degree of each region.
The saliency detection algorithm used in the present embodiment may include, but is not limited to, an LC (Luminance Contrast) algorithm, an HC (Histogram Contrast) algorithm, an FT (Frequency-tuned) algorithm, and the like.
For example, the salient region may be an animal in an image including an animal, a sunset region in a sunset landscape image, a mountain tree region in an image including a mountain tree, or the like.
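Of the algorithms listed, LC is simple enough to sketch: each pixel's saliency is its summed gray-level distance to every other pixel, computable in linear time via the global histogram. This is a generic sketch of the published LC method, not the patent's specific implementation:

```python
import numpy as np

def lc_saliency(gray):
    """LC (Luminance Contrast) saliency map for a 2-D uint8 grayscale image.
    Saliency of a pixel with gray level v is sum over all pixels q of |v - I(q)|,
    computed via the 256-bin global histogram instead of a pairwise loop."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    # dist_table[v] = sum_u hist[u] * |v - u|
    dist_table = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = dist_table[gray]
    return sal / max(sal.max(), 1.0)  # normalize to [0, 1]
```

Thresholding the resulting map (e.g. keeping pixels above its mean) would yield a candidate salient region for step 2034.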
Step 2035, determining second histogram statistical information corresponding to the significant region.
In this embodiment, the second histogram statistical information may represent the light-dark distribution of the pixels in the salient region.
Step 2036, determining the required exposure parameters according to the second histogram statistical information.
Alternatively, when the dark pixel proportion in the salient region is greater than the first preset value, the required exposure parameter may be an exposure parameter for making imaging brighter.
Alternatively, when the bright pixel proportion in the salient region is greater than the second preset value, the required exposure parameter may be an exposure parameter for making imaging darker.
In this embodiment, the values of the first preset value and the second preset value may be set as required.
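The two threshold rules above can be sketched as a small decision function; the 16-bin layout, the threshold values, and the EV step size are hypothetical choices, since the embodiment leaves them configurable:

```python
def adjust_exposure(hist, current_ev, dark_frac=0.6, bright_frac=0.6, step=0.5):
    """Nudge exposure from the salient region's normalized 16-bin histogram:
    brighten if dark pixels exceed the first preset value, darken if bright
    pixels exceed the second preset value, otherwise leave exposure unchanged."""
    dark = sum(hist[:4])     # lowest quarter of luminance bins
    bright = sum(hist[-4:])  # highest quarter of luminance bins
    if dark > dark_frac:
        return current_ev + step   # imaging brighter
    if bright > bright_frac:
        return current_ev - step   # imaging darker
    return current_ev
```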
In one embodiment, the image to be captured may include a person and may also include some other object of interest. At this time, step 203 may include: calculating the importance degree of each region in the current image by using a significance detection algorithm so as to determine a significant region in the current image according to the importance degree of each region; determining third histogram statistical information corresponding to the significant region; determining fourth histogram statistical information of the attention object from the current image; performing weighting processing on the third histogram statistical information and the fourth histogram statistical information to obtain mixed histogram statistical information; and determining the required exposure parameters according to the statistical information of the mixed histogram.
For determining the relevant content of the significant region in the current image, reference may be made to the descriptions in step 2034 to step 2036, which are not described herein again.
Alternatively, the weight of the third histogram statistical information and the weight of the fourth histogram statistical information may be set to the same weight. Alternatively, the weight of the third histogram statistical information may be set to a value greater than the weight of the fourth histogram statistical information. Alternatively, the weight of the third histogram statistic may be set to a value smaller than the weight of the fourth histogram statistic.
In this embodiment, the weight of the third histogram statistical information and the weight of the fourth histogram statistical information may be set as required.
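The weighting step can be sketched as follows, with equal weights as a hypothetical default (the embodiment allows either histogram to be weighted more heavily):

```python
import numpy as np

def mixed_histogram(salient_hist, face_hist, w_salient=0.5, w_face=0.5):
    """Blend the salient-region histogram (third) with the object-of-interest
    histogram (fourth) into mixed histogram statistical information."""
    mixed = (w_salient * np.asarray(salient_hist, dtype=float)
             + w_face * np.asarray(face_hist, dtype=float))
    return mixed / max(mixed.sum(), 1e-12)  # renormalize to a distribution
```

The required exposure parameters would then be derived from `mixed_histogram(...)` in the same way as from a single-region histogram.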
In one embodiment, exposure parameter rules required by different scenes may be preset, and the exposure parameters are adaptively determined according to the rules.
Illustratively, step 203 may include the following steps 2037 to 2038.
Step 2037, acquiring a preset exposure parameter rule according to the target scene.
Optionally, the preset exposure parameter rule may be a corresponding exposure parameter value, or the light-dark distribution proportion of a standard histogram matched to each scene, etc.
For example, a parameter list of exposure parameter values corresponding to different scenes may be pre-stored.
Step 2038, determining the exposure parameters matched with the target scene according to the exposure parameter rules.
Optionally, the exposure parameter value corresponding to the target scene may be searched in the parameter list, so that the exposure parameter matched with the target scene may be obtained.
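A pre-stored parameter list and its lookup might be sketched as follows; the scene names and all numeric values here are purely illustrative, not from the patent:

```python
# Hypothetical pre-stored parameter list mapping scenes to exposure values.
SCENE_EXPOSURE_TABLE = {
    "night":    {"exposure_time_ms": 33.0, "iso": 1600, "ev_bias": 0.7},
    "snow":     {"exposure_time_ms": 4.0,  "iso": 100,  "ev_bias": -0.7},
    "portrait": {"exposure_time_ms": 8.0,  "iso": 200,  "ev_bias": 0.0},
}

def exposure_for_scene(scene, default_scene="portrait"):
    """Look up the preset exposure parameters matched to the target scene,
    falling back to a default scene when the scene is not in the table."""
    return SCENE_EXPOSURE_TABLE.get(scene, SCENE_EXPOSURE_TABLE[default_scene])
```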
Optionally, the histogram statistical information corresponding to the target scene may be determined from the current image and compared with the light-dark distribution proportion of the standard histogram corresponding to the target scene. If the bright-pixel proportion of the histogram statistical information is greater than the bright-pixel proportion of the standard histogram, the exposure parameters may be adjusted to make imaging darker; if the dark-pixel proportion is greater than the dark-pixel proportion in the light-dark distribution of the standard histogram, the exposure parameters may be adjusted to make imaging brighter.
Optionally, the method further comprises: step 204, if the object of interest moves out of the acquisition range of the image pickup device to be adjusted, determining the exposure parameters for each sub-period within a specified period according to a set change trend, where the specified period is the period from the current time node to a later time node.
For example, if no human face or head is detected, the set change trend may be a slowly decreasing expected average of the ISP (Image Signal Processor) luminance.
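Such a slowly decaying expected luminance mean can be sketched as an exponential decay toward a neutral floor value; the decay rate and the floor are illustrative assumptions:

```python
def decayed_luminance_target(initial_target, floor, elapsed_frames, decay=0.98):
    """After the face/head is lost, gradually lower the ISP expected luminance
    mean from the face-tracking target toward a neutral floor, one small step
    per frame, so exposure does not jump abruptly (which causes flicker)."""
    return floor + (initial_target - floor) * (decay ** elapsed_frames)
```

Because each frame changes the target only slightly, the auto-exposure loop transitions smoothly from face-priority metering back to scene-wide metering.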
The exposure parameter determination method of this embodiment adopts different exposure modes for different objects of interest, so it can adapt to the requirements of more varied scenes, improving the adaptability of exposure parameter determination.
Furthermore, the interest area can be determined based on face detection or saliency detection, so that exposure parameters can be adaptively adjusted based on the image condition of the interest area, and the determined exposure condition can be more adaptive to the exposure required by the interest area.
Further, when an object of interest such as a person is no longer detected, slowly decaying the expected mean of the ISP luminance can reduce exposure flicker.
EXAMPLE III
Based on the same application concept, an exposure parameter determining apparatus corresponding to the exposure parameter determining method is further provided in the embodiments of the present application, and since the principle of the apparatus in the embodiments of the present application for solving the problem is similar to that in the embodiments of the exposure parameter determining method, the apparatus in the embodiments of the present application may be implemented by referring to the description in the embodiments of the method, and repeated details are omitted.
Fig. 5 is a schematic diagram of the functional modules of an exposure parameter determining apparatus according to an embodiment of the present disclosure. Each module in the exposure parameter determining device in this embodiment is configured to perform each step in the above-described method embodiment. The exposure parameter determining device includes: a detection module 301, a first determining module 302, and a second determining module 303, wherein:
the detection module 301 is configured to detect a current image, and determine an object of interest in the current image, where the current image is an image acquired by a camera device to be adjusted;
a first determining module 302, configured to determine a target exposure mode according to the type of the attention object;
a second determining module 303, configured to determine a required exposure parameter according to the target exposure mode and the attention object information in the current image.
In a possible implementation, the type of the object of interest includes a human face, and the second determining module 303 includes: an information obtaining unit and a parameter determining unit;
an information obtaining unit, configured to obtain first histogram statistical information of the attention object according to the current image;
and the parameter determining unit is used for determining the required exposure parameter according to the statistical information of the first histogram.
In a possible embodiment, the current image comprises: the camera equipment to be adjusted collects multiple frames of pictures; the exposure parameter determining apparatus in this embodiment further includes: the track prediction module is used for carrying out track prediction on the attention object according to the multi-frame images so as to obtain a target track;
a parameter determination unit for:
and determining the required exposure parameters according to the target track and the statistical information of the first histogram.
In one possible embodiment, the trajectory prediction module is configured to:
and carrying out face detection and head detection on the multi-frame pictures to carry out track prediction on the attention object so as to obtain a target track.
In a possible implementation manner, when the current image includes a plurality of faces, the information obtaining unit is configured to respectively determine first histogram statistical information of each face according to the current image to obtain joint histogram statistical information;
and the parameter determining unit is used for respectively determining the required exposure parameters according to the statistical information of the combined histogram.
In a possible embodiment, the current image comprises: the camera equipment to be adjusted acquires multi-frame exposure pictures under different exposure parameters;
an information obtaining unit operable to:
for any target face in the current image, performing face key point detection on the target face in the multiple exposure frames of the current image to determine the positions of the facial features;
carrying out image segmentation on the multi-frame exposure picture according to the positions of the facial features to obtain a target facial skin area;
obtaining multi-stage histogram statistical information of the target face from the target face skin region in each of the multiple exposure frames, and combining the multi-stage histogram statistical information corresponding to all faces in the current image into joint histogram statistical information;
a parameter determination unit for:
determining the face skin colors corresponding to all faces in the current image according to the multi-stage histogram statistical information corresponding to each face;
and determining the required exposure parameters according to the face complexion corresponding to all the faces in the current image.
In a possible implementation manner, the detection module 301 is configured to classify the current image through a pre-trained classification model, so as to determine, according to a classification result, an attention object corresponding to the current image and a target scene corresponding to the attention object;
a second determining module 303, configured to:
acquiring a preset exposure parameter rule according to the target scene;
and determining exposure parameters matched with the target scene according to the exposure parameter rules.
In a possible implementation, the second determining module 303 is configured to:
calculating the importance degree of each region in the current image by using a significance detection algorithm so as to determine a significant region in the current image according to the importance degree of each region;
determining second histogram statistical information corresponding to the significant region;
and determining the required exposure parameters according to the second histogram statistical information.
In a possible implementation, the types of the object of interest include a human face and a salient region, and the second determining module 303 is configured to:
calculating the importance degree of each region in the current image by using a significance detection algorithm so as to determine a significant region in the current image according to the importance degree of each region;
determining third histogram statistical information corresponding to the significant region;
determining fourth histogram statistical information of the attention object from the current image;
performing weighting processing on the third histogram statistical information and the fourth histogram statistical information to obtain mixed histogram statistical information;
and determining the required exposure parameters according to the statistical information of the mixed histogram.
In a possible implementation manner, the exposure parameter determining apparatus in this embodiment further includes: and the third determining module is used for determining exposure parameters of each time period in a specified time period according to a set change trend if the attention object is out of the acquisition range of the camera equipment to be adjusted, wherein the specified time period is a time period from the current time node to a time node after the current time.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the exposure parameter determination method in the above method embodiment.
The computer program product of the exposure parameter determining method provided in the embodiment of the present application includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the exposure parameter determining method described in the above method embodiment, which may be specifically referred to in the above method embodiment and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. An exposure parameter determination method, comprising:
detecting a current image to determine an attention object in the current image, wherein the current image is an image acquired by a camera device to be adjusted, the type of the attention object comprises a human face, the current image comprises a plurality of human faces, and the current image comprises a plurality of frames of exposure pictures acquired by the camera device to be adjusted under different exposure parameters;
determining a target exposure mode according to the type of the attention object;
determining a required exposure parameter according to the target exposure mode and the information of the attention object in the current image, including:
according to the current image, respectively determining first histogram statistical information of each face to obtain joint histogram statistical information, wherein the joint histogram statistical information comprises the following steps: performing face key point detection on the target face in a multi-frame exposure picture in the current image aiming at any target face in the current image to determine the position of facial features, performing image segmentation on the multi-frame exposure picture according to the position of the facial features to obtain a target face skin area, obtaining multi-stage histogram statistical information of the target face according to the target face skin area in each frame exposure picture in the multi-frame exposure picture, and forming combined histogram statistical information according to the multi-stage histogram statistical information corresponding to all faces in the current image;
determining the face skin colors corresponding to all faces in the current image according to the multi-stage histogram statistical information corresponding to each face;
and determining the required exposure parameters according to the face complexion corresponding to all the faces in the current image.
2. The method of claim 1, wherein the current image comprises: the camera equipment to be adjusted collects multiple frames of pictures; the method further comprises the following steps:
carrying out face detection and head detection on the multi-frame pictures to carry out track prediction on the attention object so as to obtain a target track;
the determining the required exposure parameter according to the statistical information of the first histogram includes:
and determining the required exposure parameters according to the target track and the statistical information of the first histogram.
3. The method of claim 1, wherein the detecting the current image and determining the object of interest in the current image comprises:
classifying the current image through a pre-trained classification model to determine an attention object corresponding to the current image and a target scene corresponding to the attention object according to a classification result;
the determining the required exposure parameter according to the target exposure mode and the attention object information in the current image includes:
acquiring a preset exposure parameter rule according to the target scene;
and determining exposure parameters matched with the target scene according to the exposure parameter rules.
4. The method according to claim 1, wherein the determining a required exposure parameter according to the target exposure mode and the attention object information in the current image comprises:
calculating the importance degree of each region in the current image by using a significance detection algorithm so as to determine a significant region in the current image according to the importance degree of each region;
determining second histogram statistical information corresponding to the significant region;
and determining the required exposure parameters according to the second histogram statistical information.
5. An exposure parameter determination apparatus, characterized by comprising:
the detection module is used for detecting a current image and determining an attention object in the current image, wherein the current image is an image acquired by the camera equipment to be adjusted, the type of the attention object comprises a human face, the current image comprises a plurality of human faces, and the current image comprises a plurality of frames of exposure pictures acquired by the camera equipment to be adjusted under different exposure parameters;
the first determining module is used for determining a target exposure mode according to the type of the attention object;
the second determining module is used for determining the required exposure parameters according to the target exposure mode and the attention object information in the current image;
wherein the second determining module comprises: an information obtaining unit and a parameter determining unit;
the information obtaining unit is used for respectively determining the first histogram statistical information of each face according to the current image so as to obtain the joint histogram statistical information;
the information obtaining unit is further configured to perform face key point detection on any target face in the current image in a multi-frame exposure picture in the current image to determine a face five-sense organ position, perform image segmentation on the multi-frame exposure picture according to the face five-sense organ position to obtain a target face skin region, obtain multi-stage histogram statistical information of the target face according to the target face skin region in each frame exposure picture in the multi-frame exposure picture, and form joint histogram statistical information according to the multi-stage histogram statistical information corresponding to all faces in the current image;
and the parameter determining unit is used for determining the face complexion corresponding to all the faces in the current image according to the multi-stage histogram statistical information corresponding to each face, and determining the required exposure parameters according to the face complexion corresponding to all the faces in the current image.
6. An unmanned aerial vehicle, comprising:
the camera shooting device is used for collecting images;
a processor;
memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the steps of the method of any one of claims 1 to 4.
7. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 4.
CN202010715066.4A 2020-07-22 2020-07-22 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium Active CN111654643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010715066.4A CN111654643B (en) 2020-07-22 2020-07-22 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010715066.4A CN111654643B (en) 2020-07-22 2020-07-22 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111654643A CN111654643A (en) 2020-09-11
CN111654643B true CN111654643B (en) 2021-08-31

Family

ID=72350195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010715066.4A Active CN111654643B (en) 2020-07-22 2020-07-22 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111654643B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153300A (en) * 2020-09-24 2020-12-29 广州云从洪荒智能科技有限公司 Multi-view camera exposure method, device, equipment and medium
CN112598616B (en) * 2020-11-09 2023-11-24 联想(北京)有限公司 Method for determining exposure parameters of electronic equipment and imaging method
CN114727024A (en) * 2021-01-05 2022-07-08 广州汽车集团股份有限公司 Automatic exposure parameter adjusting method and device, storage medium and shooting equipment

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005333248A (en) * 2004-05-18 2005-12-02 Sumitomo Electric Ind Ltd Method and apparatus for adjusting luminance of picture in camera type vehicle sensor
CN1997113A (en) * 2006-12-28 2007-07-11 上海交通大学 Automatic explosion method based on multi-area partition and fuzzy logic
CN101013250A (en) * 2006-01-30 2007-08-08 索尼株式会社 Exposure control apparatus and image pickup apparatus
CN101904166A (en) * 2007-12-19 2010-12-01 伊斯曼柯达公司 Camera using preview image to select exposure
CN104580925A (en) * 2014-12-31 2015-04-29 安科智慧城市技术(中国)有限公司 Image brightness controlling method, device and camera
CN106210334A (en) * 2016-07-22 2016-12-07 惠州Tcl移动通信有限公司 A kind of smart flash lamp control method, system and mobile terminal
CN106534714A (en) * 2017-01-03 2017-03-22 南京地平线机器人技术有限公司 Exposure control method, device and electronic equipment
CN106791476A (en) * 2017-01-25 2017-05-31 北京图森未来科技有限公司 A kind of image-pickup method and device
CN107211096A (en) * 2015-01-22 2017-09-26 三菱电机株式会社 Camera device and method and program and recording medium
CN107592468A (en) * 2017-10-23 2018-01-16 维沃移动通信有限公司 A kind of shooting parameter adjustment method and mobile terminal
CN107592473A (en) * 2017-10-31 2018-01-16 广东欧珀移动通信有限公司 Exposure parameter method of adjustment, device, electronic equipment and readable storage medium storing program for executing
CN108200351A (en) * 2017-12-21 2018-06-22 深圳市金立通信设备有限公司 Image pickup method, terminal and computer-readable medium
CN111263074A (en) * 2020-03-13 2020-06-09 深圳市雄帝科技股份有限公司 Method, system and equipment for automatically adjusting brightness of camera and storage medium thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080101277A * 2007-05-16 2008-11-21 삼성테크윈 주식회사 Digital image processing apparatus for displaying a histogram and method thereof
CN104994306B * 2015-06-29 2019-05-03 厦门美图之家科技有限公司 Image capture method and photographing device that automatically adjust exposure based on face brightness
TWI697867B * 2018-12-12 2020-07-01 晶睿通訊股份有限公司 Metering compensation method and related monitoring camera apparatus
CN110691199A * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Automatic face exposure method and device, shooting equipment and storage medium

Also Published As

Publication number Publication date
CN111654643A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
TWI805869B (en) System and method for computing dominant class of scene
CN108197546B (en) Illumination processing method and device in face recognition, computer equipment and storage medium
CN108764370B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN111654643B (en) Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
US9516217B2 (en) Digital image processing using face detection and skin tone information
US8170350B2 (en) Foreground/background segmentation in digital images
US8948468B2 (en) Modification of viewing parameters for digital images using face detection information
US7466866B2 (en) Digital image adjustable compression and resolution using face detection information
US7693311B2 (en) Perfecting the effect of flash within an image acquisition devices using face detection
US7315630B2 (en) Perfecting of digital image rendering parameters within rendering devices using face detection
US7362368B2 (en) Perfecting the optics within a digital image acquisition device using face detection
US8055090B2 (en) Digital image processing using face detection information
US9070044B2 (en) Image adjustment
CN110580428A Image processing method, image processing device, computer-readable storage medium and electronic equipment
US20060204054A1 (en) Digital image processing composition using face detection information
US20100054549A1 (en) Digital Image Processing Using Face Detection Information
US20100039525A1 (en) Perfecting of Digital Image Capture Parameters Within Acquisition Devices Using Face Detection
CN110807759B (en) Method and device for evaluating photo quality, electronic equipment and readable storage medium
US9460521B2 (en) Digital image analysis
CN104182721A (en) Image processing system and image processing method capable of improving face identification rate
CN108737728A Image capturing method, terminal and computer storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN117119322A (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant