WO2021109409A1 - Image capturing method, apparatus, device, and storage medium - Google Patents

Image capturing method, apparatus, device, and storage medium Download PDF

Info

Publication number
WO2021109409A1
WO2021109409A1 (PCT/CN2020/084589)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
image
exposure
current
projection area
Prior art date
Application number
PCT/CN2020/084589
Other languages
English (en)
French (fr)
Inventor
徐琼
张文萍
Original Assignee
浙江宇视科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江宇视科技有限公司
Priority to US17/782,521 priority Critical patent/US20230005239A1/en
Priority to EP20896699.4A priority patent/EP4072124A4/en
Publication of WO2021109409A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/72Combination of two or more compensation controls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10144Varying exposure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Definitions

  • This application relates to the field of monitoring technology, for example, to an image shooting method, device, equipment, and storage medium.
  • If the collected image includes multiple types of target objects such as faces, human bodies, and license plates, it cannot be guaranteed that the different target objects meet the shooting requirements at the same time; usually only one type of target object can be guaranteed to achieve the required shooting effect, and the shooting effects of multiple target objects cannot all be taken into account.
  • the embodiments of the present application provide an image shooting method, device, equipment, and storage medium, so as to ensure that multiple types of target objects achieve the expected shooting effect during shooting.
  • An embodiment of the present application provides an image shooting method, including:
  • predicting the predicted projection area position, on an image sensor, of a target object in a current captured image at a new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
  • adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
  • collecting a new captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the new captured image and the current captured image include the target object.
  • An embodiment of the present application also provides an image capturing device, including:
  • an exposure brightness determination module, configured to predict the predicted projection area position, on the image sensor, of the target object in the current captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
  • an exposure parameter adjustment module, configured to adjust, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
  • an image exposure shooting module, configured to collect a new captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the new captured image and the current captured image include the target object.
  • An embodiment of the present application also provides an electronic device, including:
  • one or more processors;
  • a storage device, configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the image capturing method provided in any embodiment of the present application.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image capturing method provided in any embodiment of the present application is implemented.
  • FIG. 1 is a flowchart of an image shooting method provided in an embodiment of the present application.
  • FIG. 2 is a flowchart of another image shooting method provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of determining a predicted projection position provided in an embodiment of the present application.
  • FIG. 4 is a flowchart of another image shooting method provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of projection of a projection area on an image sensor provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of photometry of a photometry area provided in an embodiment of the present application.
  • FIG. 7 is a flowchart of another image shooting method provided in an embodiment of the present application.
  • FIG. 8 is a structural block diagram of an image shooting device provided in an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 1 is a flowchart of an image shooting method provided in an embodiment of the present application.
  • The embodiments of the present application are applicable to photographing a target object in a shooting picture, for example, photographing a target object when multiple target objects appear in the picture at the same time.
  • the method can be executed by an image capturing device, which can be implemented by software and/or hardware, and integrated on any electronic device with a network communication function.
  • the electronic equipment includes, but is not limited to, electronic photographing equipment, electronic police equipment, and the like.
  • the image capturing method provided in the embodiment of the present application includes the following steps:
  • S110 Determine the predicted projection area position, on the image sensor, of the target object in the current captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment.
  • the current captured image refers to the captured image obtained at the current collection moment.
  • the new collection time refers to the collection time point after the current collection time.
  • The electronic device includes an image sensor; an optical image of an external target object is generated through the lens assembly and projected onto the image sensor.
  • The image sensor converts the projected optical image into an electrical signal, which, after a series of signal processing, is converted into an image.
  • Since the target object in the current captured image is in most cases moving rather than stationary, the position of the target object's projection area on the image sensor of the electronic device will change even when the electronic device is fixed. Therefore, the projection area position of the target object on the image sensor at the new acquisition time needs to be determined as the predicted projection area position.
  • Because the current acquisition time has already occurred while the new acquisition time is about to arrive but has not yet occurred, the predicted projection area position is a projection area position obtained by prediction.
  • the current captured image includes a target object.
  • the target object may include a human face, a human body, a vehicle body, a license plate, and so on.
  • For example, in the daytime, a license plate in a strong front-light scene is easily overexposed, while in a strong backlight scene both the license plate brightness and the face brightness are very low; at night, the face brightness is very low and the license plate is easily overexposed.
  • When the optical image of the target object is projected onto the image sensor, the image sensor performs exposure.
  • the estimated exposure brightness information includes the estimated exposure brightness of the target object in the current captured image at the new acquisition time at the position of the predicted projection area.
  • the exposure brightness refers to the brightness of the target object at the position of the projection area after the optical image of the target object is projected to the position of the projection area of the image sensor and exposed.
  • the image shooting method of this embodiment further includes:
  • During shooting, detection processing is performed on the acquired current captured image; if the current captured image is detected to include the target object, the device is determined to be in the exposure adjustment mode, so as to perform the exposure parameter adjustment.
  • During shooting, if no target object appears in the shooting picture, overexposure or excessively low brightness generally does not occur during exposure shooting. Therefore, the current captured image obtained at the current acquisition time needs to be acquired in real time and detected. If the current captured image is detected to include the target object, indicating that a target object has appeared in the shooting picture, the exposure adjustment mode continues to be used, or the device switches from the normal exposure mode to the exposure adjustment mode, so that image shooting is in the exposure adjustment mode and the exposure parameter adjustment is performed.
  • If the current captured image is detected not to include the target object, indicating that the target object has disappeared from the shooting picture, the normal exposure mode continues to be used, or the device switches from the exposure adjustment mode to the normal exposure mode, so that image shooting is in the normal exposure mode. For example, when a target object is detected to appear in the shooting picture, the device switches to the exposure adjustment mode; when the target object is detected to have disappeared from the shooting picture, it switches back to the normal exposure mode.
  • Detection of the target object is performed in real time, and the exposure mode also switches in real time following the appearance and disappearance of the target object.
  • In the exposure adjustment mode, if the current captured image is detected not to include the target object, detection continues on the subsequently acquired captured images; if the target object is still not detected within a preset time, the device switches from the exposure adjustment mode to the normal exposure mode.
  • For example, in the exposure adjustment mode, when the target object in the shooting picture disappears, the electronic device continues to detect the shooting picture for a preset period Tn. If the target object still does not appear in the shooting picture within Tn, the exposure mode is switched from the exposure adjustment mode to the normal exposure mode, which ensures the overall effect of the picture when no target object is present. If the target object reappears in the shooting picture within Tn, the device remains in the exposure adjustment mode and continues to adjust the exposure parameters there; this loop realizes adaptive exposure shooting around the clock.
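  • As a concrete illustration of the mode-switching logic described above, the following Python sketch (not part of the patent; the class, method, and detector interface names are assumptions for illustration) implements a minimal state machine with the hold period Tn:

```python
# Minimal sketch of the exposure-mode switching with hold period Tn.
import time

NORMAL, ADJUST = "normal_exposure", "exposure_adjustment"

class ExposureModeController:
    def __init__(self, hold_seconds):
        self.mode = NORMAL
        self.hold_seconds = hold_seconds  # the preset period Tn
        self._last_seen = None            # when a target object was last detected

    def update(self, target_detected, now=None):
        now = time.monotonic() if now is None else now
        if target_detected:
            self._last_seen = now
            self.mode = ADJUST            # target appeared: use exposure adjustment mode
        elif self.mode == ADJUST:
            # Target absent: fall back to normal exposure only after Tn with no target,
            # which avoids frequent switching when the target disappears briefly.
            if self._last_seen is None or now - self._last_seen >= self.hold_seconds:
                self.mode = NORMAL
        return self.mode
```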
  • S120 According to the type of the target object in the currently captured image and the estimated exposure brightness information, adjust the exposure parameter of the target object at the position of the predicted projection area when the new acquisition time arrives.
  • The currently captured image may include one or more target objects, and each target object corresponds to a predicted projection area position at the new acquisition moment. If the current captured image includes multiple target objects, the estimated exposure brightness information of each target object at its predicted projection area position at the new acquisition time needs to be computed separately, and the exposure parameters used at the different predicted projection area positions of the image sensor need to be adjusted separately. In this way, different exposure parameters can be assigned to different areas of the image sensor.
  • Different target objects are of different types, and different types of target objects reach different exposure levels under the same lighting conditions; for example, at night, the brightness of a face is very low while a license plate is easily overexposed. Therefore, when adjusting the exposure parameters of a target object at the predicted projection area position at the new acquisition time, the adjustment must combine the type of the target object in the current captured image with the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition time.
  • the exposure parameter of the target object at the predicted projection area position at the new acquisition time refers to the exposure parameter to be used by the image sensor at the predicted projection area position at the new acquisition time before the exposure parameter adjustment is performed.
  • the type of the target object may be, for example, a license plate, a vehicle body, a human face, or a human body.
  • In this way, different exposure parameters can be set for different areas of the image sensor according to the type of each target object and its estimated exposure brightness information, so that exposure shooting at the new acquisition time uses the adjusted exposure parameters.
  • Different target objects can thus be exposed with exposure parameters adapted to them.
  • The solution of this embodiment can dynamically and independently adjust the exposure parameters of the projection areas of different target objects on the image sensor to achieve independent exposure. It can be applied in any time period without distinguishing between day and night: whenever a target object requires exposure adjustment, the different projection areas corresponding to different target objects are determined dynamically, and the exposure parameters of the different projection areas are adjusted independently to achieve independent exposure, ensuring that the different target objects in the captured image reach suitable shooting brightness.
  • S130 Collect a new captured image at the new acquisition time according to the adjusted exposure parameters, wherein both the newly captured image and the current captured image include the target object.
  • When the new acquisition time arrives, exposure can be performed at the predicted projection area position according to the adjusted exposure parameters, and the newly captured image is acquired.
  • The embodiment of this application provides an image shooting solution. The solution of this embodiment automatically detects the projection areas that different target objects will occupy on the image sensor at the subsequent acquisition moment, and allocates appropriate exposure parameters to the projection area where each target object is located, according to the exposure brightness of that projection area and the target exposure brightness that needs to be achieved. This ensures that multiple target objects can reach the desired shooting brightness during shooting, so that both target objects that tend to be too dark and target objects that tend to be too bright reach the desired brightness.
  • FIG. 2 is a flowchart of another image shooting method provided in an embodiment of the present application.
  • The embodiment of the present application refines step S110 of the foregoing embodiment on the basis of the foregoing embodiment.
  • The embodiment of the present application may be combined with the optional solutions in one or more of the foregoing embodiments.
  • the image shooting method provided in this embodiment includes the following steps:
  • S210 Determine the current projection area position of the target object in the currently captured image on the image sensor at the current acquisition moment.
  • Optionally, determining the current projection area position of the target object in the current captured image on the image sensor at the current acquisition time includes: determining the projection area position, on the image sensor, of the optical image of the target object when it is projected onto the image sensor at the current acquisition time, and recording it as the current projection area position. Since the current acquisition time has already occurred, the current projection area position is obtained by actual calculation from the current captured image rather than by prediction; the two are distinguished by the current acquisition time and the new acquisition time.
  • S220 Determine the predicted projection area position of the target object in the currently captured image on the image sensor at the new acquisition moment.
  • Optionally, determining the predicted projection area position of the target object in the current captured image on the image sensor at the new acquisition time includes: predicting the projection area position, on the image sensor, of the optical image of the target object when it is projected onto the image sensor at the new acquisition time, and recording it as the predicted projection area position.
  • determining the predicted projection area position of the target object in the current captured image on the image sensor at the new acquisition moment includes step A10 to step A30.
  • Step A10 Determine the current image position of the target object in the currently captured image in the current captured image at the current acquisition moment.
  • the current captured image contains the target object.
  • The image position of the target object in the current captured image can be determined by a target recognition algorithm; that is, the current image position of the target object in the current captured image at the current acquisition time is obtained.
  • Step A20 Predict the motion of the target object according to the current image position, and determine the predicted image position of the target object in the current captured image at the new acquisition moment.
  • The current image position is the position of the target object in the current captured image at the current acquisition time, but the target object is generally in motion. Therefore, motion prediction can be performed on the target object in the current captured image to determine to which position in the captured image the target object will move by the new acquisition time, yielding the predicted image position of the target object at the new acquisition time. Optionally, taking the current image position of the target object as the starting point, the position is estimated according to the movement direction and speed of the target object in the current captured image, so as to obtain the position in the captured image to which the target object will have moved at the new acquisition time.
  • Step A30 Determine the predicted projection area location of the target object on the image sensor at the new acquisition time according to the predicted image location.
  • The captured image and the imaging plane of the image sensor may use the same coordinate division information. If the two use the same coordinate division information, the predicted image position can be used directly as the predicted projection area position of the target object on the image sensor. If the two use different coordinate division information, the predicted projection area position of the target object on the image sensor must be determined from the predicted image position and preset position mapping information. Optionally, the predicted image position can be matched against the preset position mapping information to determine the predicted projection position of the optical image of the target object on the imaging plane of the image sensor at the new acquisition time.
  • For example, the captured image in which the target object is located is divided into M*N area blocks, and the imaging plane of the image sensor is divided into m*n area blocks, so that the position of the same target object in the captured image can be mapped to its position on the imaging plane of the image sensor.
  • FIG. 3 is a schematic diagram of determining a predicted projection position provided in an embodiment of the present application.
  • the coordinate information of the current image position of the target object (human face) in the current captured image at the current collection time is denoted as: X(x1, y1) and Y(x2, y2).
  • the coordinate information of the predicted image position of the target object in the current captured image at the subsequent new acquisition time can be obtained, denoted as: X'(x3, y3) and Y'(x4, y4).
  • According to the position conversion relationship included in the position mapping information, the coordinate information of the predicted projection area position of the target object on the image sensor at the new acquisition time can then be obtained, denoted as: X'(f1(x3), f2(y3)) and Y'(f1(x4), f2(y4)).
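  • The following Python sketch illustrates steps A10 to A30 under simplifying assumptions that the patent does not prescribe: constant-velocity motion between acquisition moments and a proportional mapping f1/f2 between the image grid and the sensor grid. All function and variable names are hypothetical:

```python
# Sketch of motion prediction plus image-to-sensor coordinate mapping.
def predict_image_position(box, velocity, dt):
    """box = (x1, y1, x2, y2) at the current moment; velocity = (vx, vy) in px/s."""
    x1, y1, x2, y2 = box
    vx, vy = velocity
    return (x1 + vx * dt, y1 + vy * dt, x2 + vx * dt, y2 + vy * dt)

def image_to_sensor(box, image_size, sensor_size):
    """f1/f2 modeled as proportional scaling between the M*N image grid
    and the m*n sensor grid."""
    (W, H), (w, h) = image_size, sensor_size
    sx, sy = w / W, h / H
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# Usage: predicted projection area position at the new acquisition moment.
current_box = (100.0, 80.0, 160.0, 170.0)          # X(x1, y1), Y(x2, y2)
predicted_box = predict_image_position(current_box, velocity=(30.0, 0.0), dt=0.2)
sensor_box = image_to_sensor(predicted_box, image_size=(1920, 1080),
                             sensor_size=(960, 540))
```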
  • determining the current projection area position of the target object in the currently captured image on the image sensor at the current acquisition time includes step B10 to step B20.
  • Step B10 Determine the current image position of the target object in the current captured image in the current captured image at the current acquisition moment.
  • Step B20 Determine the current projection area position of the target object on the image sensor at the current acquisition time according to the current image position.
  • the current image position can be matched with preset position mapping information to determine the current projection position of the optical image of the target object on the imaging plane of the image sensor at the current collection moment.
  • For the target object in FIG. 3, after the current image position X(x1, y1), Y(x2, y2) of the target object is determined, the coordinate information of the current projection area position of the target object on the image sensor at the current acquisition time can be obtained according to the position conversion relationship included in the position mapping information, denoted as: X(f1(x1), f2(y1)) and Y(f1(x2), f2(y2)).
  • S230 Determine the current exposure brightness information of the target object at the current projection area location and the predicted projection area location at the current acquisition time.
  • the optical image of the target object in the currently captured image is projected to the image sensor at the current acquisition moment, and the image sensor will expose the optical image of the target object according to the exposure parameters.
  • the brightness of the target object at the current projection area position on the image sensor at this time is recorded as the current exposure brightness information of the target object at the current projection area position at the current collection time.
  • the current exposure brightness information of the target object at the location of the predicted projection area at the current acquisition time can be obtained.
  • The current exposure brightness of the target object at the current projection area position at the current acquisition time can be computed as the average brightness over that area, Lr = (1/|R|) Σ_{(x,y)∈R} l(x,y), where R is the set of sensor coordinates in the current projection area.
  • The current exposure brightness of the target object at the predicted projection area position at the current acquisition time is computed analogously over the predicted area, where l(x, y) is the brightness at coordinate (x, y) of the image sensor's projection plane at the current acquisition time.
  • S240 Determine the predicted exposure brightness information according to the current exposure brightness information at the current projection area location and the predicted projection area location respectively.
  • The advantage of the above method is that the predicted projection area position is not necessarily accurate and carries a certain error. Therefore, when predicting the exposure brightness of the target object at the predicted projection area position, one cannot refer only to the brightness at the predicted projection area position; the brightness of the target object at the current projection area position on the image sensor must also be taken into account, and the brightnesses at the two projection area positions are combined to estimate an accurate exposure brightness of the target object at the predicted projection area position at the new acquisition time.
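  • A minimal sketch of this combination is shown below; the patent does not specify how the two brightnesses are combined, so the blend weight alpha and the use of a simple weighted average are assumptions:

```python
# Sketch of S230-S240: blend the brightness measured at the current and predicted
# projection areas to tolerate error in the predicted position.
import numpy as np

def region_mean_brightness(sensor_luma, box):
    """Mean of l(x, y) over a projection area; sensor_luma is a 2-D brightness map."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    return float(np.mean(sensor_luma[y1:y2, x1:x2]))

def estimate_exposure_brightness(sensor_luma, current_box, predicted_box, alpha=0.5):
    l_current = region_mean_brightness(sensor_luma, current_box)
    l_predicted = region_mean_brightness(sensor_luma, predicted_box)
    # Weighting both areas hedges against an inaccurate predicted position.
    return alpha * l_current + (1.0 - alpha) * l_predicted
```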
  • S250 According to the type of the target object in the currently captured image and the estimated exposure brightness information, adjust the exposure parameter of the target object at the position of the predicted projection area when the new acquisition time arrives.
  • The embodiment of this application provides an image shooting solution. The solution of this embodiment automatically detects the projection areas that different target objects will occupy on the image sensor at the subsequent acquisition moment, and allocates appropriate exposure parameters to the projection area where each target object is located, according to the exposure brightness of that projection area and the target exposure brightness that needs to be achieved. This ensures that multiple target objects can reach the desired shooting brightness during shooting, so that both target objects that tend to be too dark and target objects that tend to be too bright reach the desired brightness.
  • FIG. 4 is a flowchart of another image capturing method provided in an embodiment of the present application.
  • The embodiment of the present application refines steps S120 and S250 of the foregoing embodiments on the basis of the foregoing embodiments.
  • The embodiment of the present application may be combined with the optional solutions in one or more of the foregoing embodiments.
  • the image shooting method in this embodiment includes the following steps:
  • S410 Determine the predicted projection area position of the target object in the current captured image at the new acquisition time on the image sensor, and the predicted exposure brightness information of the target object in the current captured image at the predicted projection area location at the new acquisition time.
  • S420 Determine target exposure brightness information associated with the type of the target object in the currently captured image.
  • the current captured image may include multiple target objects, and each target object corresponds to a projection area on the image sensor, so that the image sensor includes multiple projection areas.
  • The target exposure brightness required by different types of target objects differs. Therefore, when the exposure of each target object is adjusted separately, the target exposure brightness information associated with the type of each target object needs to be determined, so that whether to adjust the exposure parameters is decided according to the target exposure brightness information.
  • S430 According to the target exposure brightness information and the estimated exposure brightness information, adjust the exposure parameters of the target object in the current captured image at the predicted projection area position when the new acquisition time arrives.
  • Optionally, the brightness difference information between the target exposure brightness information associated with the type of the target object and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition time is determined. If the brightness difference information is detected not to meet the preset difference information, the exposure parameters of the target object at the predicted projection area position when the new acquisition time arrives are adjusted according to the brightness difference information. If the brightness difference information is detected to meet the preset difference information, the exposure parameters of the target object at the predicted projection area position are not adjusted, and the original exposure parameters are maintained.
  • FIG. 5 is a schematic diagram of projection of a projection area on an image sensor provided in an embodiment of the present application.
  • the projection areas of all target objects on the image sensor in the current captured image include: R1, R2, and R3, and the exposure brightness of multiple target objects at the projection area positions to which they belong is denoted as: Lr1, Lr2, and Lr3, respectively.
  • Lr1, Lr2, and Lr3 are compared in turn with the target exposure brightness T1 associated with Lr1, the target exposure brightness T2 associated with Lr2, and the target exposure brightness T3 associated with Lr3; the automatic exposure (AE) is then adjusted in real time, and the corresponding exposure parameters are issued to the multiple projection areas of the image sensor until the exposure brightness of each target object in its projection area reaches the desired target exposure brightness value.
  • Optionally, adjusting the exposure parameters of the target object at the predicted projection area position when the new acquisition time arrives according to the brightness difference information includes: determining the exposure adjustment information of the target object at the predicted projection position according to the exposure brightness difference value included in the brightness difference information; and adjusting, according to the exposure adjustment information, the exposure parameters of the target object at the predicted projection area position of the image sensor when the new acquisition time arrives.
  • For example, if T1 > Lr1 + Δ, the exposure parameter at the predicted projection area needs to be adjusted by increasing the exposure; if T1 < Lr1 − Δ, the exposure parameter needs to be adjusted by decreasing the exposure. Here Δ is the allowed brightness error, and the amount of exposure to be increased or decreased is calculated from the difference between the target exposure brightness and the current brightness.
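  • The reconstructed comparison rule above might be sketched as follows; the multiplicative step size is an assumption, since the text only states the direction of the adjustment:

```python
# Per projection area: compare the target brightness Ti with the estimated
# brightness Lri within a tolerance delta, and step the exposure accordingly.
def adjust_exposure(exposure, target_t, measured_lr, delta, step=0.1):
    if target_t > measured_lr + delta:
        return exposure * (1.0 + step)   # too dark: increase exposure
    if target_t < measured_lr - delta:
        return exposure * (1.0 - step)   # too bright: decrease exposure
    return exposure                      # within tolerance: keep original parameters
```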
  • Before a newly captured image is acquired according to the adjusted exposure parameters, the image capturing method of this embodiment further includes the following steps C10 to C20.
  • Step C10 Remove the projection areas of the target objects on the image sensor from the photometry area, and perform photometry on the photometry area after the removal.
  • Step C20 Determine the exposure information of the photometry area according to the photometry result, and adjust the exposure parameters used at the projection area of the non-target object on the image sensor at the new acquisition time according to the exposure information of the photometry area.
  • FIG. 6 is a schematic diagram of photometry of a photometry area provided in an embodiment of the present application.
  • For example, the projection areas of the target objects on the image sensor are removed from the statistics, and the metering areas and weights are the same as when no target object is present; that is, the information of the target objects' projection areas on the image sensor is subtracted from the metering area, and likewise from the non-metering area. The brightness of the overall picture is then calculated according to the weights, and exposure parameters are issued for the projection areas of the non-target objects on the image sensor.
  • The weighted average brightness of the projection areas of the non-target objects on the image sensor can be expressed as the weighted mean over the pixels outside the n target-object projection areas: Lr0 = Σ ω(p,q)·pix(p,q) / Σ ω(p,q), where the sums run over the pixels (p,q) of the imaging plane that lie outside the target-object projection areas.
  • ω(p,q) is the weight of the pixel (p,q) in the imaging plane of the captured image or the image sensor; pix(p,q) is the brightness of pixel (p,q); Lr0 is the weighted average brightness of the projection areas of the non-target objects on the image sensor; L0 is the target brightness value of the projection areas of the non-target objects on the image sensor, which is a constant (exposure is adjusted until |Lr0 − L0| falls within the error range Δ1); P and Q represent the width and height of the captured image; and n represents the number of target objects in the captured image.
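  • The metering of steps C10 and C20 might be sketched as follows, using the reconstructed weighted-mean formula; numpy masking is an implementation choice, not something the patent specifies:

```python
# Mask out the n target projection areas, then take the omega-weighted mean of the
# remaining pixels to obtain Lr0 for the background (non-target) exposure.
import numpy as np

def weighted_background_brightness(pix, omega, target_boxes):
    """pix, omega: (Q, P) arrays of pixel brightness and metering weights."""
    mask = np.ones_like(pix, dtype=bool)
    for (x1, y1, x2, y2) in target_boxes:        # remove target projection areas
        mask[int(y1):int(y2), int(x1):int(x2)] = False
    w = omega[mask]
    return float(np.sum(w * pix[mask]) / np.sum(w))   # Lr0
```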
  • The exposure parameter gain_i for the exposure adjustment of the i-th projection area on the image sensor is obtained subject to the following constraints:
  • gain ∈ [gain0 − Δ1, gain0 + Δ1] represents the range of the gain parameter;
  • shutter ∈ [shutter0 − Δ2, shutter0 + Δ2] represents the range of the shutter parameter;
  • Lri represents the average brightness of the i-th object in its projection area (the current exposure brightness for a target object, or the weighted average brightness for the non-target area);
  • Li represents the target brightness value of the i-th object in the projection area, which is a constant; the exposure parameters are adjusted within the above ranges until Lri approaches Li.
  • Optionally, the exposure parameters can be adjusted in the following manner: adjust the shutter and other exposure parameters of the target object at the predicted projection area position of the image sensor. For the projection areas of non-target objects on the image sensor, most of these areas are background and do not contain target objects, so the exposure principle of preferentially slowing the shutter and minimizing the gain is chosen to adjust the exposure parameters.
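  • A sketch of the shutter-first principle, under the illustrative assumption (not stated in the patent) that brightness scales with shutter × gain:

```python
# Extend the shutter up to its bound before raising gain, keeping both within the
# stated ranges; this minimizes gain (and hence noise) in background areas.
def allocate_shutter_gain(lr, l_target, shutter0, gain0, shutter_max, gain_max):
    need = l_target / max(lr, 1e-6)              # required exposure multiplier
    shutter = min(shutter0 * need, shutter_max)  # prefer slowing the shutter
    remaining = need / (shutter / shutter0)      # multiplier still unmet
    gain = min(gain0 * remaining, gain_max)      # only then raise the gain
    return shutter, gain
```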
  • The above dynamic independent exposure of the projection areas of multiple target objects and of non-target objects on the image sensor can be applied in any time period without distinguishing between day and night; whenever a target object needs its exposure adjusted, each area is determined dynamically and its exposure adjusted.
  • The projection areas on the image sensor can also be several designated areas; the brightness statistics and exposure for such areas are the same as above. For example, an area can contain a face, a human body, a license plate, a vehicle body, or another target object.
  • The embodiment of this application provides an image shooting solution. The solution of this embodiment automatically detects the projection areas that different target objects will occupy on the image sensor at the subsequent acquisition moment, and allocates appropriate exposure parameters to the projection area where each target object is located, according to the exposure brightness of that projection area and the target exposure brightness that needs to be achieved. This ensures that multiple target objects can reach the desired shooting brightness during shooting, especially so that target objects that tend to be too dark or too bright can reach the desired brightness.
  • FIG. 7 is a flowchart of another image capturing method provided in an embodiment of the present application.
  • The embodiment of the present application is described on the basis of the foregoing embodiments and may be combined with the optional solutions in one or more of the foregoing embodiments.
  • the image shooting method provided in the embodiment of the present application includes the following steps:
  • S710 Determine the predicted projection area position of the target object in the current captured image on the image sensor at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area location at the new acquisition moment.
  • S720 According to the type of the target object and the estimated exposure brightness information, adjust the exposure parameter of the target object at the position of the predicted projection area when the new acquisition time arrives.
  • S730 Collect a new captured image at the new acquisition moment according to the adjusted exposure parameters.
  • S740 Determine the image quality and/or feature similarity of the target object included in the newly captured image.
  • After the newly captured image is acquired, online image quality evaluation can be performed on it to determine the image quality effect of the target object in the newly captured image.
  • For example, the image quality of the human face included in the newly captured image can be evaluated by at least one of the following: a face brightness evaluation value Y1, a face clarity evaluation value Ts, a face noise evaluation value Nn, and a face skin color evaluation value C.
  • For example, the image quality of the license plate included in the newly captured image can be evaluated by at least one of the following: a license plate brightness evaluation value Ylc, a license plate clarity evaluation value Tsc, a license plate noise evaluation value Nnc, and a license plate color evaluation value Cc.
  • The feature similarity of the newly captured image can also be computed to evaluate the similarity of the target object in the newly captured image.
  • For example, artificial intelligence (AI) feature similarity calculation is performed on the newly captured image. Taking face image quality as an example, the evaluation of face image quality and of facial feature similarity proceeds as follows.
  • Let Gray(p, q) represent the gray value of the pixel (p, q), and let P and Q represent the width and height of the face image, respectively.
  • The evaluation function of face brightness can then be taken as the mean gray value: Y1 = (1/(P·Q)) Σ_{p=1..P} Σ_{q=1..Q} Gray(p, q).
  • A low-pass filter is used to obtain a blurred image of the face in the newly captured image, and the blurred face image is compared with the original face image to obtain the edge image of the face, which is used to characterize the sharpness of the face.
  • t1, t2, t3, and t4 are demarcation thresholds, which can be adjusted according to actual needs.
  • A smoothing filter is used to obtain a denoised face image, and the denoised face image is compared with the original image to obtain a noise image.
  • If the original image is I and the smoothing filter is G, the noise image is N = I − G∗I.
  • The average value of the face noise can then be obtained as Nn = (1/(P·Q)) Σ_{p,q} |N(p, q)|.
  • p, q are the subscripts of the pixels in the image, and P and Q represent the width and height of the face image, respectively.
  • the evaluation function of the human face skin color C is expressed in terms of a threshold Tf and the color differences d_{i,j}.
  • the evaluation value of the face image quality and the evaluation value of the similarity of the facial features can be obtained respectively.
  • Through the evaluation value of image quality, it is determined whether the shooting effect of the target object in the newly captured image meets the requirements of the human eye; through the evaluation value of facial feature similarity, it is determined whether the shooting effect of the target object in the newly captured image meets the AI similarity recognition requirements.
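  • The face-quality measures above might be sketched as follows; the specific filters (Gaussian low-pass, median smoothing) are assumptions, since the text does not fix them, and the skin color measure is omitted because its definition is not recoverable:

```python
# Mean gray for brightness (Y1), the residue of a low-pass filter for sharpness
# (Ts), and the residue of a smoothing filter for noise (Nn).
import numpy as np
import cv2

def face_quality(gray_face):
    """gray_face: 8-bit grayscale face crop."""
    y1 = float(np.mean(gray_face))                        # brightness evaluation
    blurred = cv2.GaussianBlur(gray_face, (9, 9), 0)      # low-pass filter
    edges = cv2.absdiff(gray_face, blurred)               # edge image
    ts = float(np.mean(edges))                            # sharpness evaluation
    smoothed = cv2.medianBlur(gray_face, 3)               # smoothing filter G
    noise = cv2.absdiff(gray_face, smoothed)              # noise image N = I - G*I
    nn = float(np.mean(noise))                            # average face noise
    return y1, ts, nn
```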
  • S750 Adjust the image parameters of the newly captured image according to the image quality and/or feature similarity to obtain the adjusted newly captured image.
  • After the image quality and feature similarity are obtained, whether the target requirements are met can be determined from them, so that the image parameters of the newly captured image are adjusted in a targeted manner: the image quality is evaluated in real time, the image parameters are adjusted, the similarity is evaluated at the same time, and the image parameters are adjusted in a loop until both the image quality evaluation value and the feature similarity evaluation value meet the target requirements, at which point the adjusted newly captured image is obtained.
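  • A sketch of this evaluation-adjustment loop is given below; evaluate_quality, evaluate_similarity, and tune_parameters are hypothetical stand-ins for the ISP and AI components, which the patent does not define:

```python
# Re-evaluate quality and similarity after each parameter adjustment until both
# meet their targets or an iteration cap is reached.
def tune_until_ok(image, params, evaluate_quality, evaluate_similarity,
                  tune_parameters, q_target, s_target, max_iters=10):
    for _ in range(max_iters):
        q = evaluate_quality(image, params)
        s = evaluate_similarity(image, params)
        if q >= q_target and s >= s_target:
            break                         # both human-eye and AI requirements met
        params = tune_parameters(params, q, s)
    return params
```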
  • The embodiment of this application provides an image shooting solution. The solution of this embodiment automatically detects the projection areas that different target objects will occupy on the image sensor at the subsequent acquisition moment, and allocates appropriate exposure parameters to the projection area where each target object is located, according to the exposure brightness of that projection area and the target exposure brightness that needs to be achieved. This ensures that multiple target objects can reach the desired shooting brightness during shooting, especially so that target objects that tend to be too dark or too bright can reach the desired brightness.
  • Online image quality evaluation, real-time exposure adjustment, and related ISP processing further ensure that the final target image quality meets both the AI recognition requirements and the visual requirements of the human eye.
  • FIG. 8 is a structural block diagram of an image capturing device provided in an embodiment of the present application.
  • The embodiments of the present application are applicable to photographing a target object in a shooting picture, for example, photographing a target object when multiple target objects appear in the picture at the same time.
  • the device can be implemented in software and/or hardware and integrated on any electronic device with network communication functions.
  • the electronic equipment includes, but is not limited to, electronic photographing equipment, electronic police equipment, and the like.
  • the image shooting device provided in the embodiment of the present application includes: an exposure brightness determination module 810, an exposure parameter adjustment module 820, and an image exposure shooting module 830, wherein:
  • the exposure brightness determination module 810 is configured to determine the predicted projection area position of the target object in the current captured image on the image sensor at the new acquisition time, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition time;
  • the exposure parameter adjustment module 820 is configured to adjust, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition time arrives; the image exposure shooting module 830 is configured to collect a new captured image at the new acquisition time according to the adjusted exposure parameters, wherein both the newly captured image and the current captured image include the target object.
  • Optionally, the exposure brightness determination module 810 is further configured to determine the current projection area position of the target object on the image sensor at the current acquisition time.
  • the exposure parameter adjustment module 820 includes:
  • the target brightness determining unit is configured to determine target exposure brightness information associated with the type of the target object
  • the exposure parameter adjustment unit is configured to adjust the exposure parameter of the target object at the position of the predicted projection area when a new acquisition time arrives according to the target exposure brightness information and the estimated exposure brightness information.
  • Optionally, the exposure parameter adjustment unit is configured to adjust the exposure parameters according to the brightness difference information between the target exposure brightness information and the estimated exposure brightness information.
  • the device further includes:
  • the captured image analysis module 840 is configured to determine the image quality and/or feature similarity of the target object included in the newly captured image
  • the captured image processing module 850 is configured to adjust the image parameters of the newly captured image according to the image quality and/or the feature similarity to obtain an adjusted new captured image.
  • The image capturing device provided in the embodiment of the present application can execute the image capturing method provided in any embodiment of the present application, and has the corresponding functions and effects for executing the image capturing method. For details of the process, refer to the related operations of the image capturing method in the foregoing embodiments.
  • FIG. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • The electronic device provided in the embodiment of the present application includes one or more processors 910 and a storage device 920; there may be one or more processors 910 in the electronic device, and in FIG. 9 one processor 910 is taken as an example. The storage device 920 is configured to store one or more programs; the one or more programs are executed by the one or more processors 910, so that the one or more processors 910 implement the image capturing method described in any one of the embodiments of the present application.
  • the electronic device may further include: an input device 930 and an output device 940.
  • the processor 910, the storage device 920, the input device 930, and the output device 940 in the electronic device may be connected through a bus or other methods.
  • In FIG. 9, connection through a bus is taken as an example.
  • the storage device 920 in the electronic device can be configured to store one or more programs.
  • These programs can be software programs, computer-executable programs, and modules, such as the program instructions or modules corresponding to the image capturing method provided in the embodiments of the present application.
  • the processor 910 executes various functional applications and data processing of the electronic device by running the software programs, instructions, and modules stored in the storage device 920, that is, implements the image capturing method in the foregoing method embodiment.
  • the storage device 920 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the electronic device, and the like.
  • the storage device 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the storage device 920 may include memories remotely provided with respect to the processor 910, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 930 may be configured to receive inputted number or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the output device 940 may include a display device such as a display screen.
  • the programs can also perform related operations in the image capturing method provided in any embodiment of the present application.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs an image capturing method, the method including: predicting the predicted projection area position, on an image sensor, of a target object in a current captured image at a new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment; adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives; and collecting a new captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the new captured image and the current captured image include the target object.
  • the program when executed by the processor, it may also be used to execute the image capturing method provided in any embodiment of the present application.
  • the computer storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Examples of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to: electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the foregoing.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).

Abstract

Disclosed herein are an image capturing method, apparatus, device, and storage medium. The method includes: predicting the predicted projection area position, on an image sensor, of a target object in a current captured image at a new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment; adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives; and collecting a new captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the new captured image and the current captured image include the target object.

Description

图像拍摄方法、装置、设备及存储介质
本申请要求在2019年12月03日提交中国专利局、申请号为201911222644.4的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及监控技术领域,例如涉及一种图像拍摄方法、装置、设备及存储介质。
背景技术
随着监控技术的发展,在视频监控应用中,对人脸、人体和车牌等拍摄目标的获取和识别的诉求越来越高。
但是,在实际场景中,如果采集得到的图像中包括人脸、人体和车牌等多种目标对象,会导致无法保证不同的目标对象同时满足拍摄要求,即通常只能保证其中一种目标对象的拍摄效果达到要求,而不能兼顾多个目标对象的拍摄效果均达到要求。
Summary
Embodiments of the present application provide an image capturing method, apparatus, device, and storage medium, so that multiple types of target objects can all achieve the expected capture effect during shooting.
An embodiment of the present application provides an image capturing method, including:
predicting a predicted projection area position, on an image sensor, of a target object in a currently captured image at a new acquisition moment, and estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
adjusting, according to the type of the target object and the estimated exposure brightness information, exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
acquiring a newly captured image at the new acquisition moment according to the adjusted exposure parameters, where both the newly captured image and the currently captured image include the target object.
An embodiment of the present application further provides an image capturing apparatus, including:
an exposure brightness determination module, configured to predict a predicted projection area position, on an image sensor, of a target object in a currently captured image at a new acquisition moment, and estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
an exposure parameter adjustment module, configured to adjust, according to the type of the target object and the estimated exposure brightness information, exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
an image exposure capturing module, configured to acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters, where both the newly captured image and the currently captured image include the target object.
An embodiment of the present application further provides an electronic device, including:
one or more processors;
a storage apparatus configured to store one or more programs;
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image capturing method provided in any embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image capturing method provided in any embodiment of the present application.
Brief Description of the Drawings
FIG. 1 is a flowchart of an image capturing method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another image capturing method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of determining a predicted projection position provided in an embodiment of the present application;
FIG. 4 is a flowchart of another image capturing method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of projection areas on an image sensor provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of metering of a metering area provided in an embodiment of the present application;
FIG. 7 is a flowchart of another image capturing method provided in an embodiment of the present application;
FIG. 8 is a structural block diagram of an image capturing apparatus provided in an embodiment of the present application;
FIG. 9 is a structural schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. The specific embodiments described here are merely intended to explain the present application rather than to limit it. For ease of description, only the parts related to the present application, rather than the entire structure, are shown in the drawings.
Before the exemplary embodiments are discussed, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes multiple operations (or steps) as sequential processing, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
To understand the technical solution of the present application, the exposure-related aspects of practical scenes are analyzed here to identify the shortcomings of exposure capturing. In general, faces and human bodies are target objects prone to being too dark, while license plates are target objects prone to being too bright. If a face, a human body, or a license plate appears alone in the captured scene, face metering, body metering, or plate metering alone can ensure that the brightness of the face, body, or plate in the captured image meets the requirements. However, a face, a human body, or a license plate rarely appears alone in a captured image. If faces, human bodies, and license plates appear in the scene at the same time, whichever metering mode is used can only guarantee the brightness effect of one type of target object and cannot satisfy the brightness effects of multiple target objects simultaneously.
Based on the above analysis of exposure capturing, the image capturing method, apparatus, device, and storage medium are described below through the following embodiments and their optional technical solutions.
FIG. 1 is a flowchart of an image capturing method provided in an embodiment of the present application. This embodiment is applicable to capturing target objects in a shooting scene, for example, capturing target objects when multiple target objects appear in the scene. The method may be performed by an image capturing apparatus, which may be implemented in software and/or hardware and integrated into any electronic device with network communication capability. For example, the electronic device includes, but is not limited to, electronic capture devices and electronic police devices. As shown in FIG. 1, the image capturing method provided in this embodiment includes the following steps:
S110: Determine a predicted projection area position, on an image sensor, of a target object in the currently captured image at a new acquisition moment, and estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment.
In this embodiment, the currently captured image refers to the captured image obtained at the current acquisition moment, and the new acquisition moment refers to an acquisition time point after the current acquisition moment. The electronic device includes an image sensor; an external target object forms an optical image through the lens assembly, which is projected onto the image sensor. The image sensor converts the projected optical image into an electrical signal, which, after a series of signal processing steps, is converted into an image. Considering that the target object in the currently captured image is in most cases moving rather than stationary, with the electronic device fixed, the projection area position of the target object on the image sensor of the electronic device also changes. Therefore, the projection area position of the target object on the image sensor at the new acquisition moment needs to be determined as the predicted projection area position. Since the current acquisition moment has already occurred while the new acquisition moment is about to arrive but has not yet occurred, the predicted projection area position is a projection area position obtained by prediction.
In this embodiment, in the exposure adjustment mode, the currently captured image contains target objects, which may include, for example, faces, human bodies, vehicle bodies, and license plates. In practical scenes, for example, during the day, a license plate in a strongly front-lit scene is easily overexposed, while in a strongly backlit scene both the plate brightness and the face brightness are very low; at night, face brightness is very low and plates are easily overexposed. When the optical image of the target object is projected onto the image sensor, the sensor performs exposure. Considering that different types of target objects differ in how easily they expose under the same or different lighting conditions, the exposure brightness at the projection area positions of target objects on the image sensor varies with conditions. Therefore, the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment needs to be determined automatically. The estimated exposure brightness information includes the estimated exposure brightness of the target object in the currently captured image at the predicted projection area position at the new acquisition moment, where the exposure brightness refers to the brightness of the target object at the projection area position after the optical image of the target object is projected onto that position of the image sensor and exposed.
In an optional implementation of this embodiment, the image capturing method further includes:
during shooting, performing detection processing on the acquired currently captured image; and if the currently captured image is detected to include a target object, determining that the exposure adjustment mode is active, so as to perform the exposure parameter adjustment operation.
In this implementation, during shooting, if no target object appears in the scene, overexposure or excessively low brightness generally does not occur during exposure capturing. Therefore, the currently captured image obtained at the current acquisition moment needs to be acquired and detected in real time. If the currently captured image is detected to include a target object, indicating that a target object has appeared in the scene, the exposure adjustment mode continues to be used, or the mode is switched from the normal exposure mode to the exposure adjustment mode, so that image capturing proceeds in the exposure adjustment mode and exposure parameters are adjusted. If the currently captured image is detected not to include a target object, indicating that the target object has disappeared from the scene, the normal exposure mode continues to be used, or the mode is switched from the exposure adjustment mode to the normal exposure mode. For example, when a target object is detected to appear in the scene, the device switches to the exposure adjustment mode; when the target object is detected to have disappeared, it switches back to the normal exposure mode. Target object detection is performed in real time, and the exposure mode is switched in real time following the appearance and disappearance of target objects.
With the above scheme, exposure can adapt to whether a target object appears in the scene, and the exposure mode is switched automatically according to the presence of target objects, so that the acquired images achieve the best possible effect both with and without target objects. At the same time, since the exposure adjustment mode is somewhat more complex in data processing than the normal exposure mode, detecting target objects allows a timely switch from the exposure adjustment mode to the normal exposure mode when no target object is present, reducing the complexity of exposure capturing to a certain extent.
In this implementation, optionally, in the exposure adjustment mode, if the currently captured image is detected not to include a target object, detection processing continues on subsequently acquired captured images; if no target object is detected in the subsequently acquired images within a preset time, the mode is switched from the exposure adjustment mode to the normal exposure mode. The benefit of this option is that, since the target object in the currently captured image may only disappear temporarily and reappear shortly afterwards, frequent switching between the exposure adjustment mode and the normal exposure mode, which wastes processing resources, can be avoided.
For example, in the exposure adjustment mode, when the target object disappears from the scene, the electronic device continues to detect the scene for a preset period Tn. If no target object reappears in the scene within the preset period Tn, the exposure mode is switched from the exposure adjustment mode to the normal exposure mode, which guarantees the overall effect of the picture when no target object is present. If a target object reappears in the scene within the preset period Tn, the exposure adjustment mode is retained and the exposure parameter adjustment operation continues in that mode, and so on in a loop, thereby achieving adaptive exposure capturing with all-weather effect.
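For illustration only, the mode-switching behavior described above can be sketched as a small loop; this sketch is not part of the claimed method, and the frame source, the detector, and the period Tn below are assumptions made for the example.

```python
import time

EXPOSURE_ADJUST, NORMAL = "exposure_adjust", "normal"

def exposure_mode_loop(get_frame, detect_targets, tn_seconds=3.0):
    """Switch between the normal exposure mode and the exposure adjustment
    mode according to whether target objects are present (hypothetical sketch)."""
    mode = NORMAL
    last_seen = None
    while True:
        frame = get_frame()              # currently captured image
        targets = detect_targets(frame)  # e.g. faces / bodies / plates
        now = time.monotonic()
        if targets:
            last_seen = now
            mode = EXPOSURE_ADJUST       # enter or stay in the adjustment mode
        elif mode == EXPOSURE_ADJUST:
            # keep detecting for the preset period Tn before falling back,
            # so a briefly occluded target does not cause mode thrashing
            if last_seen is None or now - last_seen > tn_seconds:
                mode = NORMAL
        yield mode, targets
```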
S120: Adjust, according to the type of the target object in the currently captured image and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
In this embodiment, the currently captured image may include one or more target objects, each corresponding to one predicted projection area position at the new acquisition moment. If the currently captured image includes multiple target objects, the estimated exposure brightness information of each target object at its predicted projection area position at the new acquisition moment needs to be computed separately, and the exposure parameters used at the different predicted projection area positions of the image sensor need to be adjusted separately, so that different regions of the image sensor can be assigned different exposure parameters.
In this embodiment, different target objects belong to different types, and different types of target objects differ in how easily they expose under the same lighting conditions; for example, at night, face brightness is very low while plates are easily overexposed. Therefore, when adjusting the exposure parameters of the target object at the predicted projection area position at the new acquisition moment, the adjustment needs to combine the type of the target object in the currently captured image with the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment. Here, the exposure parameters of the target object at the predicted projection area position at the new acquisition moment refer to the exposure parameters that the image sensor will use at the predicted projection area position at the new acquisition moment before any adjustment is made. Optionally, the type of the target object may be license plate, vehicle body, face, human body, and the like.
With the above technical solution, different exposure parameters can be set for different regions of the image sensor according to the type of the target object and its estimated exposure brightness information, so that the adjusted exposure parameters can be used for exposure at the new acquisition moment and each target object is exposed with parameters adapted to it. It can be seen that this embodiment can dynamically and independently adjust the exposure parameters of the projection areas of different target objects on the image sensor to achieve independent exposure. It can be applied at any time of day, without distinguishing day from night: whenever a target object needs exposure adjustment, the projection areas corresponding to the different target objects are determined dynamically and their exposure parameters are adjusted independently, ensuring that all target objects in the captured image reach an appropriate capture brightness.
S130: Acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters.
In this embodiment, both the newly captured image and the currently captured image include the target object.
In this embodiment, when the new acquisition moment arrives, exposure can be performed according to the adjusted exposure parameters at the predicted projection area positions, and the newly captured image is acquired.
This embodiment provides an image capturing scheme that can automatically detect the projection areas that different target objects will occupy on the image sensor at subsequent acquisition moments, and assign suitable exposure parameters to each target object's projection area according to the exposure brightness of that area and the target exposure brightness to be reached, ensuring that multiple target objects all reach the desired capture brightness during shooting, whether they are target objects prone to being too dark or prone to being too bright.
FIG. 2 is a flowchart of another image capturing method provided in an embodiment of the present application. This embodiment elaborates on step S110 of the preceding embodiment and may be combined with the optional solutions of one or more of the embodiments above. As shown in FIG. 2, the image capturing method provided in this embodiment includes the following steps:
S210: Determine the current projection area position, on the image sensor, of the target object in the currently captured image at the current acquisition moment.
In this embodiment, determining the current projection area position includes: determining the projection area position of the optical image of the target object on the image sensor when the optical image of the target object in the currently captured image is projected onto the sensor at the current acquisition moment, denoted as the current projection area position. Since the current acquisition moment has already occurred, the current projection area position is actually computed from the currently captured image rather than predicted. The current acquisition moment and the new acquisition moment distinguish earlier from later times.
S220: Determine the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment.
In this embodiment, determining the predicted projection area position includes: predicting the projection area position of the optical image of the target object on the image sensor when that optical image is projected onto the sensor at the new acquisition moment, denoted as the predicted projection area position.
In an optional implementation of this embodiment, determining the predicted projection area position of the target object in the currently captured image on the image sensor at the new acquisition moment includes steps A10 to A30.
Step A10: Determine the current image position of the target object in the currently captured image at the current acquisition moment.
In this implementation, the currently captured image contains the target object, and a target recognition algorithm can determine the image position of the target object in the currently captured image, yielding the current image position of the target object at the current acquisition moment.
Step A20: Predict the motion of the target object based on the current image position, and determine the predicted image position of the target object in the currently captured image at the new acquisition moment.
In this implementation, the current image position is the position of the target object in the currently captured image at the current acquisition moment; however, the target object is generally moving, so motion prediction can be performed on the target object to determine which position in the currently captured image the target object will have moved to at the new acquisition moment, yielding the predicted image position of the target object at the new acquisition moment. Optionally, starting from the current image position of the target object, the estimate follows the motion direction and motion speed of the target object in the currently captured image to obtain which position the target object will move to at the new acquisition moment.
Step A30: Determine, based on the predicted image position, the predicted projection area position of the target object on the image sensor at the new acquisition moment.
In this implementation, the captured image and the imaging plane of the image sensor may use the same coordinate partitioning information. If they do, the predicted image position can be used directly as the predicted projection area position of the target object on the image sensor. If they use different coordinate partitioning information, the predicted projection area position on the image sensor needs to be determined from the predicted image position and preset position mapping information. Optionally, the predicted image position may be matched against the preset position mapping information to determine the predicted projection position of the optical image of the target object on the imaging plane of the image sensor at the new acquisition moment.
In this implementation, the captured image containing the target object is divided into M*N blocks, while the imaging plane of the image sensor onto which the target object is projected is divided into m*n blocks, so that a position mapping relationship exists between the image position of the same target object in the captured image and its projection area position on the imaging plane of the sensor, and this mapping relationship is stored in advance. On this basis, the position mapping information between the image position and the projection area position of the target object is (the original formulas are given only as images; they are reconstructed here from the block partitioning just described):

x = f1(x0) = floor(x0 · m / M + 0.5)

y = f2(y0) = floor(y0 · n / N + 0.5)

where (x0, y0) are the horizontal and vertical coordinates of the image position of the target object in the captured image, (x, y) are the horizontal and vertical coordinates of its projection area position on the image sensor, and the constant 0.5 is taken to ensure the accuracy of the coordinate conversion (rounding to the nearest block).
For example, FIG. 3 is a schematic diagram of determining a predicted projection position provided in an embodiment of the present application. Referring to FIG. 3, in the captured image, the coordinates of the current image position of the target object (a face) at the current acquisition moment are denoted X(x1, y1) and Y(x2, y2). After motion prediction of the target, the coordinates of the predicted image position of the target object in the currently captured image at the subsequent new acquisition moment are denoted X'(x3, y3) and Y'(x4, y4). After the predicted image positions X'(x3, y3) and Y'(x4, y4) of the target object are determined, the coordinates of the predicted projection area position of the target object on the image sensor at the new acquisition moment can be obtained from the position conversion relationship included in the position mapping information, denoted X'(f1(x3), f2(y3)) and Y'(f1(x4), f2(y4)).
In an optional implementation of this embodiment, determining the current projection area position of the target object in the currently captured image on the image sensor at the current acquisition moment includes steps B10 and B20.
Step B10: Determine the current image position of the target object in the currently captured image at the current acquisition moment.
Step B20: Determine, based on the current image position, the current projection area position of the target object on the image sensor at the current acquisition moment.
In this embodiment, optionally, the current image position may be matched against the preset position mapping information to determine the current projection position of the optical image of the target object on the imaging plane of the image sensor at the current acquisition moment.
For example, still taking the target object in FIG. 3 as an example, after the current image positions X(x1, y1) and Y(x2, y2) of the target object are determined, the coordinates of the current projection area position of the target object on the image sensor at the current acquisition moment can be obtained from the position conversion relationship included in the position mapping information, denoted X(f1(x1), f2(y1)) and Y(f1(x2), f2(y2)).
S230: Determine the current exposure brightness information of the target object at the current acquisition moment at both the current projection area position and the predicted projection area position.
In this embodiment, the optical image of the target object in the currently captured image is projected onto the image sensor at the current acquisition moment, and the sensor exposes the optical image of the target object according to the exposure parameters. After the target object is exposed, the brightness of the target object at its current projection area position on the image sensor is denoted as the current exposure brightness information of the target object at the current projection area position at the current acquisition moment. In the same way, the current exposure brightness information of the target object at the predicted projection area position at the current acquisition moment can be obtained.
For example, let the coordinates of the current projection area position of the target object at the current acquisition moment be X(f1(x1), f2(y1)) and Y(f1(x2), f2(y2)), and the coordinates of the predicted projection area position of the target object at the current acquisition moment be X'(f1(x3), f2(y3)) and Y'(f1(x4), f2(y4)). Then the current exposure brightness information of the target object at the current projection area position at the current acquisition moment is the average brightness over that region (the original formulas are given only as images; they are reconstructed here from context):

L_current1 = [ Σ over x = f1(x1)..f1(x2), y = f2(y1)..f2(y2) of l(x, y) ] / [ (f1(x2) − f1(x1) + 1) · (f2(y2) − f2(y1) + 1) ]

and the current exposure brightness information of the target object at the predicted projection area position at the current acquisition moment is:

L_current2 = [ Σ over x = f1(x3)..f1(x4), y = f2(y3)..f2(y4) of l(x, y) ] / [ (f1(x4) − f1(x3) + 1) · (f2(y4) − f2(y3) + 1) ]

where l(x, y) is the brightness at the projection area position with coordinates (x, y) on the image sensor at the current acquisition moment.
S240: Determine the estimated exposure brightness information based on the current exposure brightness information at the current projection area position and at the predicted projection area position.
In this embodiment, after the current exposure brightness information of the target object at the current projection area position and at the predicted projection area position at the current acquisition moment is obtained, a weighted average can be computed according to preset weights, and the resulting weighted average brightness value serves as the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment. For example, the weighted average brightness value is L = m1 · L_current1 + m2 · L_current2, where m1 + m2 = 1.
The benefit of this approach is that, since the predicted projection area position obtained by prediction is not necessarily very accurate and carries some error, estimating the exposure brightness of the target object at the predicted projection area position cannot rely on the brightness of the predicted projection area position alone; the brightness of the current projection area position of the target object on the image sensor must also be taken into account, and only by combining the brightness at both projection area positions can an accurate exposure brightness of the target object at the predicted projection area position at the new acquisition moment be estimated.
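As a sketch only, the brightness estimation of S230 and S240 can be written as a region average followed by a weighted combination; the array layout and the example weights m1 and m2 below are assumptions.

```python
import numpy as np

def region_mean_brightness(lum, x_min, y_min, x_max, y_max):
    """Average brightness l(x, y) over an inclusive rectangular
    projection region of the sensor brightness map `lum`."""
    return float(lum[y_min:y_max + 1, x_min:x_max + 1].mean())

def estimate_exposure_brightness(lum, cur_region, pred_region, m1=0.5, m2=0.5):
    """Weighted average of the current-region and predicted-region
    brightness, L = m1*L_current1 + m2*L_current2 with m1 + m2 = 1."""
    l_cur = region_mean_brightness(lum, *cur_region)
    l_pred = region_mean_brightness(lum, *pred_region)
    return m1 * l_cur + m2 * l_pred

# Example with a synthetic 54x96 sensor brightness map; regions are
# given as (x_min, y_min, x_max, y_max) tuples.
lum = np.random.default_rng(0).uniform(0, 255, size=(54, 96))
print(estimate_exposure_brightness(lum, (10, 10, 20, 20), (12, 12, 22, 22)))
```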
S250: Adjust, according to the type of the target object in the currently captured image and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
S260: Acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters.
This embodiment provides an image capturing scheme that can automatically detect the projection areas that different target objects will occupy on the image sensor at subsequent acquisition moments, and assign suitable exposure parameters to each target object's projection area according to the exposure brightness of that area and the target exposure brightness to be reached, ensuring that multiple target objects all reach the desired capture brightness during shooting, whether they are target objects prone to being too dark or prone to being too bright. In addition, when determining the estimated exposure brightness of the target object in the predicted projection area, the current exposure brightness at both the current projection area position and the predicted projection area position is fully taken into account, so that the resulting estimated exposure brightness is closer to the actual exposure brightness at the predicted projection area position in the real scene.
FIG. 4 is a flowchart of yet another image capturing method provided in an embodiment of the present application. This embodiment elaborates on steps S120 and S250 of the preceding embodiments and may be combined with the optional solutions of one or more of the embodiments above. As shown in FIG. 4, the image capturing method in this embodiment includes the following steps:
S410: Determine the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment, and the estimated exposure brightness information of the target object in the currently captured image at the predicted projection area position at the new acquisition moment.
S420: Determine the target exposure brightness information associated with the type of the target object in the currently captured image.
In this embodiment, the currently captured image may contain multiple target objects, each corresponding to one projection area on the image sensor, so that the sensor contains multiple projection areas. Different types of target objects are required to satisfy different target exposure brightness, so when exposure is adjusted separately for each target object, the target exposure brightness information associated with the type of each target object needs to be determined, so that whether to adjust the exposure parameters can be decided from the target exposure brightness information.
S430: Adjust, based on the target exposure brightness information and the estimated exposure brightness information, the exposure parameters of the target object in the currently captured image at the predicted projection area position when the new acquisition moment arrives.
In this embodiment, for each target object in the currently captured image, the brightness difference information between the target exposure brightness information associated with the type of the target object and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment is determined. If the brightness difference information is detected not to satisfy the preset difference information, the exposure parameters of the target object at the predicted projection area position are adjusted according to the brightness difference information when the new acquisition moment arrives. If the brightness difference information satisfies the preset difference information, the exposure parameters of the target object at the predicted projection area position are not adjusted and the original exposure parameters are kept.
For example, FIG. 5 is a schematic diagram of projection areas on an image sensor provided in an embodiment of the present application. Referring to FIG. 5, the projection areas of all target objects in the currently captured image on the image sensor include R1, R2, and R3, and the exposure brightness of the multiple target objects at their projection area positions is denoted Lr1, Lr2, and Lr3, respectively. Lr1, Lr2, and Lr3 are compared in turn with the target exposure brightness T1 associated with Lr1, the target exposure brightness T2 associated with Lr2, and the target exposure brightness T3 associated with Lr3, and automatic exposure (AE) then issues the corresponding exposure parameters to the multiple projection areas of the image sensor in real time, until the exposure brightness of each target object at its projection area position reaches the desired target exposure brightness value.
In this embodiment, optionally, adjusting the exposure parameters of the target object at the predicted projection area position according to the brightness difference information when the new acquisition moment arrives includes: determining, from the difference value of exposure brightness included in the brightness difference information, the exposure adjustment information of the target object at the predicted projection position; and adjusting, according to the exposure adjustment information, the exposure parameters of the target object at the predicted projection area position of the image sensor when the new acquisition moment arrives. For example, if the exposure brightness of the target object is lower than the target exposure brightness, then T1 = Lr1 + δ, and the exposure parameters at the predicted projection area position need to be adjusted by increasing the exposure; if the exposure brightness of the target object is higher than the target exposure brightness, then T1 = Lr1 − δ, and the exposure parameters need to be adjusted by decreasing the exposure, where δ is the computed exposure amount to be increased or decreased.
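A sketch of this per-region adjustment, assuming a simple proportional update of the exposure amount; the step size and the stopping tolerance are assumptions made for the example.

```python
def adjust_region_exposure(estimated_lr, target_t, exposure, tolerance=2.0, step=0.1):
    """Adjust one region's exposure amount toward its target brightness.
    If the region is darker than the target (T = Lr + delta), increase
    exposure; if brighter (T = Lr - delta), decrease it."""
    delta = target_t - estimated_lr
    if abs(delta) <= tolerance:
        return exposure                  # brightness difference acceptable
    return exposure * (1.0 + step) if delta > 0 else exposure * (1.0 - step)

# Example: three projection areas R1..R3, each with its own target value
regions = {"R1": (60.0, 90.0, 1.0), "R2": (140.0, 110.0, 1.0), "R3": (95.0, 96.0, 1.0)}
for name, (lr, t, exp) in regions.items():
    print(name, adjust_region_exposure(lr, t, exp))
```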
S440: Acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters.
In an optional implementation of this embodiment, before the newly captured image is acquired with the adjusted exposure parameters, the image capturing method of this embodiment further includes the following steps C10 and C20.
Step C10: Remove from the metering area the projection areas of target objects on the image sensor, and perform metering on the metering area after the removal.
Step C20: Determine the exposure amount information of the metering area from the metering result, and adjust, according to the exposure amount information of the metering area, the exposure parameters used at the new acquisition moment in the projection areas of the image sensor not occupied by target objects.
In this implementation, FIG. 6 is a schematic diagram of metering of a metering area provided in an embodiment of the present application. Referring to FIG. 6, for the projection areas on the image sensor not occupied by target objects, statistics are computed with the projection areas of target objects on the image sensor excluded, while the metering area and the weights are the same as when no target object is present; that is, the information of the target object projection areas on the image sensor is deducted from the metering area, and likewise deducted from the non-metering area, after which the brightness of the overall picture is computed according to the weights, and the exposure parameters of the non-target projection areas on the image sensor are then issued accordingly. The weighted average brightness of the non-target projection areas on the image sensor is (the original formula is given only as an image; it is reconstructed here from context):

Lr0 = Σ over p = 1..P, q = 1..Q of ω(p, q) · pix(p, q), with the pixels of the n target object projection areas excluded from the summation

where ω(p, q) is the weight of pixel (p, q) in the captured image or on the imaging plane of the image sensor. Given pix(p, q) = shutter(p, q) · gain(p, q), the exposure parameters of the non-target projection areas after exposure stabilizes can be obtained from ABS(Lr0 − L0) < δ1, denoted the shutter parameter shutter0 and the gain parameter gain0. Here, pix denotes a pixel, Lr0 the weighted average brightness of the non-target projection areas on the image sensor, L0 the target brightness value of the non-target projection areas (a constant), δ1 the error range, P and Q the width and height of the captured image, respectively, and n the number of target objects in the captured image.

In this implementation, considering the stability of exposure and how naturally the brightness of a target object's projection area on the image sensor transitions into the surrounding non-target projection areas, the adjustment range of the exposure parameters of the target object projection areas is limited here. The exposure parameter gain_i for the exposure adjustment of a target object's projection area on the image sensor is accordingly obtained from:

ABS(Lri − Li) < δ2, gain ∈ [gain0 − ε1, gain0 + ε1],

shutter ∈ [shutter0 − ε2, shutter0 + ε2]

where gain ∈ [gain0 − ε1, gain0 + ε1] is the range of the gain parameter, shutter ∈ [shutter0 − ε2, shutter0 + ε2] is the range of the shutter parameter, Lri is the average brightness of the i-th object in its projection area (for example, the current exposure brightness for a target object, or the weighted average brightness for the non-target areas), and Li is the target brightness value of the i-th object in its projection area, a constant.
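A sketch of the masked metering and the clamped per-region parameter ranges described above; the mask convention, the uniform weights, and the tolerance values below are assumptions.

```python
import numpy as np

def masked_weighted_brightness(pix, weights, target_mask):
    """Weighted average brightness Lr0 with the target-object projection
    areas excluded (target_mask is True inside target regions)."""
    keep = ~target_mask
    total_w = weights[keep].sum()
    return float((weights[keep] * pix[keep]).sum() / total_w)

def clamp_region_params(gain, shutter, gain0, shutter0, eps1=0.2, eps2=0.002):
    """Keep a target region's gain/shutter inside the allowed band around
    the stabilized background parameters gain0 and shutter0."""
    gain = min(max(gain, gain0 - eps1), gain0 + eps1)
    shutter = min(max(shutter, shutter0 - eps2), shutter0 + eps2)
    return gain, shutter

# Example: uniform weights, one rectangular target region masked out
pix = np.full((54, 96), 120.0)
weights = np.full_like(pix, 1.0 / pix.size)
mask = np.zeros_like(pix, dtype=bool)
mask[10:20, 30:50] = True
print(masked_weighted_brightness(pix, weights, mask))
print(clamp_region_params(2.0, 0.01, 1.9, 0.009))
```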
In this implementation, the exposure parameters may be adjusted as follows: adjust the shutter and the other exposure parameters of the target object at the predicted projection area position of the image sensor. For the non-target projection areas of the image sensor, which are mostly background and contain no target object, the exposure principle of preferring a slower shutter and as small a gain as possible is adopted for the exposure parameter adjustment.
In this implementation, the above dynamic independent exposure of the multiple target object projection areas and the non-target projection areas on the image sensor can be applied at any time of day, without distinguishing day from night: whenever a target object needs exposure adjustment, each area is determined dynamically and its exposure adjusted. In addition, the target object projection areas on the image sensor may also be several designated areas, within which the brightness statistics and exposure are the same as above; the areas may contain faces, human bodies, license plates, vehicle bodies, or other target objects.
This embodiment provides an image capturing scheme that can automatically detect the projection areas that different target objects will occupy on the image sensor at subsequent acquisition moments, and assign suitable exposure parameters to each target object's projection area according to the exposure brightness of that area and the target exposure brightness to be reached, ensuring that multiple target objects all reach the desired capture brightness during shooting, and in particular that target objects prone to being too dark or prone to being too bright all reach the desired brightness.
FIG. 7 is a flowchart of yet another image capturing method provided in an embodiment of the present application. This embodiment builds on the embodiments above and may be combined with the optional solutions of one or more of them. As shown in FIG. 7, the image capturing method provided in this embodiment includes the following steps:
S710: Determine the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment.
S720: Adjust, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
S730: Acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters.
S740: Determine the image quality and/or feature similarity of the target object included in the newly captured image.
In this embodiment, after exposure capturing with the adjusted exposure parameters yields the newly captured image, online image quality evaluation can be performed on it to determine the image quality of the target object in the newly captured image. When the target object is a face, the image quality of the face included in the newly captured image can be evaluated from at least one of: a face brightness score Yl, a face sharpness score Ts, a face noise score Nn, and a face skin color score C. If all four criteria are used for the image quality evaluation, with weights W1, W2, W3, and W4 respectively, the image quality of the face in the captured image is FaceValue = Yl·W1 + Ts·W2 + Nn·W3 + C·W4. When the target object is a license plate, the image quality of the plate included in the newly captured image can be evaluated from at least one of: a plate brightness score Ylc, a plate sharpness score Tsc, a plate noise score Nnc, and a plate color score Cc. If all four criteria are used for the image quality evaluation, with weights Wc1, Wc2, Wc3, and Wc4 respectively, the image quality of the plate in the captured image is Car = Ylc·Wc1 + Tsc·Wc2 + Nnc·Wc3 + Cc·Wc4.
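A minimal sketch of these weighted quality scores; the example score values and weights below are assumptions, and the per-criterion evaluation functions are described in detail later.

```python
def weighted_quality(scores, weights):
    """Combine per-criterion scores (e.g. brightness, sharpness, noise,
    color) into one quality value; the weights are expected to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

# Example: FaceValue = Yl*W1 + Ts*W2 + Nn*W3 + C*W4
face_value = weighted_quality([0.8, 0.7, 0.9, 0.6], [0.3, 0.3, 0.2, 0.2])
print(face_value)
```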
In this embodiment, after the newly captured image is acquired, feature similarity can also be determined for it, establishing the similarity of the target object in the newly captured image. Optionally, artificial intelligence (AI) feature similarity computation is performed on the newly captured image. Taking face image quality evaluation as an example, both the face image quality evaluation and the face feature similarity evaluation are performed. The process is as follows.
(1) Face brightness evaluation

The average brightness of the face image I is computed as (the original formula is given only as an image; it is reconstructed here from context):

Yl_avg = [ Σ over p = 1..P, q = 1..Q of Gray(p, q) ] / (P · Q)

where Gray(p, q) is the gray value of pixel (p, q), and P and Q are the width and height of the face image, respectively.

The face brightness evaluation function is a piecewise function of this average brightness (the original formula is given only as an image and is not reproduced here), with

t1 = 40, t2 = 110, t3 = 140, t4 = 180, t5 = 195

as the boundary thresholds, which can be tuned as actually needed.
(2) Face sharpness evaluation

A low-pass filter is used to obtain a blurred image of the face in the newly captured image, and the difference between the blurred face image and the original face image yields an edge image of the face, which characterizes the sharpness of the face. With the original face image denoted I and the low-pass filter denoted F, the face edge image is (the original formula is given only as an image; it is reconstructed here from context):

T = I − F(I)

From this, the average face sharpness is obtained as (reconstructed from context):

Ts_avg = [ Σ over p = 1..P, q = 1..Q of |T(p, q)| ] / (P · Q)

The face sharpness evaluation function is a piecewise function of this average (the original formula is given only as an image and is not reproduced here), with t1, t2, t3, and t4 as the boundary thresholds, which can be tuned as actually needed.
(3) Face noise evaluation

A smoothing filter is used to obtain a denoised face image, and the difference between the denoised face image and the original image yields a noise image. With the original image denoted I and the smoothing filter denoted G, the noise image is (the original formula is given only as an image; it is reconstructed here from context):

N = I − G(I)

From this, the average face noise is obtained as (reconstructed from context):

Nn_avg = [ Σ over p = 1..P, q = 1..Q of |N(p, q)| ] / (P · Q)

where p and q are the pixel indices in the image, and P and Q are the width and height of the face image, respectively.

The face noise evaluation function is a piecewise function of this average (the original formula is given only as an image and is not reproduced here), with t1 = 1, t2 = 3, t3 = 5 as the boundary thresholds, which can be tuned as actually needed.
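For illustration only, the sharpness and noise measures above share one filter-difference pattern; the box-filter choice and the example piecewise scores below are assumptions, since the original evaluation functions are available only as images.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filter_difference_mean(img, size=3):
    """Mean absolute difference between an image and its low-pass/smoothed
    version; used both as an edge-energy (sharpness) and a noise estimate."""
    return float(np.abs(img - uniform_filter(img, size=size)).mean())

def piecewise_score(value, thresholds, scores):
    """Score a measured value against ascending boundary thresholds,
    returning the score of the first band the value falls into."""
    for t, s in zip(thresholds, scores):
        if value < t:
            return s
    return scores[-1]

face = np.random.default_rng(1).uniform(0, 255, size=(64, 64))
nn_avg = filter_difference_mean(face)
print(piecewise_score(nn_avg, thresholds=[1, 3, 5], scores=[1.0, 0.8, 0.5, 0.2]))
```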
(4) Face skin color evaluation

The U and V channel information of the face region in the newly captured image is extracted, giving the average difference value of the color domain, denoted X (the original formula is given only as an image and is not reproduced here).

The face skin color evaluation function is likewise given in the original only as an image. Denoting the average sum value of the color domain by the expression in the original image, then A = m1 − ((X > 3) · (X − m1)) · 0.5, where m1 = 3, B = 5, and k = 0.7, which can be tuned as actually needed.

At this point, the evaluation value of the face image quality is FaceValue = Yl·W1 + Ts·W2 + Nn·W3 + Cf·W4, where W1 + W2 + W3 + W4 = 1.
The face feature similarity evaluation is as follows:

First, the Euclidean distance between every pair of feature coordinate points in the face feature point set H of the newly captured image is computed, yielding the distance matrix DS.

Let H = {(xi, yi)}, i = 1, 2, …, n, be a face feature point set of n points in two-dimensional space, where (xi, yi) are the coordinates of the i-th feature point.

The distance matrix of H is defined as (the original formula is given only as an image; it is reconstructed here from context):

DS = (d_{i,j}) of size n×n, where d_{i,j} = sqrt( (xi − xj)^2 + (yi − yj)^2 )

The following elements of DS form the feature vector Tf: Tf = {d_{i,j} | i ≥ j; i = 1, 2, …, n; j = 1, 2, …, n}, where d_{i,j} is the distance between the i-th feature point and the j-th feature point.

The mean absolute error of the two vectors is computed and then normalized, finally giving the face similarity.
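A sketch of this similarity computation; the min-max style normalization of the mean absolute error is an assumption, since the exact normalization is not specified in the text.

```python
import numpy as np

def feature_vector(points):
    """Lower-triangular pairwise Euclidean distances Tf = {d_ij | i >= j}
    of a 2-D feature point set H = {(xi, yi)}."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    ds = np.sqrt((diff ** 2).sum(axis=-1))   # distance matrix DS
    i, j = np.tril_indices(len(pts))
    return ds[i, j]

def face_similarity(points_a, points_b):
    """Normalized mean absolute error between two feature vectors,
    mapped so that identical point sets give similarity 1.0."""
    ta, tb = feature_vector(points_a), feature_vector(points_b)
    mae = np.abs(ta - tb).mean()
    scale = max(ta.max(), tb.max(), 1e-9)    # assumed normalization
    return float(1.0 - mae / scale)

print(face_similarity([(0, 0), (1, 0), (0, 1)], [(0, 0), (1.1, 0), (0, 0.9)]))
```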
It can be seen that, through the above process, an evaluation value of the face image quality and an evaluation value of the face feature similarity can each be obtained. The image quality evaluation value determines whether the capture effect of the target object in the newly captured image can meet the requirements of the human eye, and the face feature similarity evaluation value determines whether the capture effect of the target object in the newly captured image can meet the requirements of AI similarity recognition.
S750: Adjust the image parameters of the newly captured image according to the image quality and/or the feature similarity, obtaining an adjusted newly captured image.
In this embodiment, once the image quality and the feature similarity are obtained, whether the target requirements are met can be determined from them, so that the image parameters of the newly captured image can be adjusted in a targeted manner; after each adjustment, evaluation is performed again in real time, the image parameters are adjusted again, and the similarity evaluation is carried out in parallel. The image parameters are tuned in this loop until both the image quality evaluation value and the feature similarity evaluation value reach the target requirements, at which point the adjusted newly captured image is obtained.
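A sketch only of this evaluate-adjust loop; the evaluation callables, the goal values, and the iteration cap are assumptions made for the example.

```python
def tune_image(image, evaluate_quality, evaluate_similarity, adjust,
               quality_goal=0.8, similarity_goal=0.8, max_iters=20):
    """Repeatedly adjust image parameters until both the quality score
    and the feature-similarity score reach their target values."""
    for _ in range(max_iters):
        q = evaluate_quality(image)
        s = evaluate_similarity(image)
        if q >= quality_goal and s >= similarity_goal:
            break
        image = adjust(image, q, s)  # e.g. tweak brightness/denoise/sharpen
    return image
```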
This embodiment provides an image capturing scheme that can automatically detect the projection areas that different target objects will occupy on the image sensor at subsequent acquisition moments, and assign suitable exposure parameters to each target object's projection area according to the exposure brightness of that area and the target exposure brightness to be reached, ensuring that multiple target objects all reach the desired capture brightness during shooting, and in particular that target objects prone to being too dark or prone to being too bright all reach the desired brightness. Meanwhile, online image quality evaluation adjusts the exposure in real time while the related ISP processing is performed, ensuring that the final image quality of the target both meets the AI recognition requirements and the visual requirements of the human eye.
FIG. 8 is a structural block diagram of an image capturing apparatus provided in an embodiment of the present application. This embodiment is applicable to capturing target objects in a shooting scene, for example, capturing target objects when multiple target objects appear in the scene. The apparatus may be implemented in software and/or hardware and integrated into any electronic device with network communication capability. For example, the electronic device includes, but is not limited to, electronic capture devices and electronic police devices. As shown in FIG. 8, the image capturing apparatus provided in this embodiment includes an exposure brightness determination module 810, an exposure parameter adjustment module 820, and an image exposure capturing module 830, where:
the exposure brightness determination module 810 is configured to determine the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment; the exposure parameter adjustment module 820 is configured to adjust, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives; and the image exposure capturing module 830 is configured to acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters, where both the newly captured image and the currently captured image include the target object.
On the basis of the above embodiment, optionally, the exposure brightness determination module 810 is configured to:
determine the current projection area position of the target object on the image sensor at the current acquisition moment; determine the current exposure brightness information of the target object at the current acquisition moment at both the current projection area position and the predicted projection area position; and determine the estimated exposure brightness information from the current exposure brightness information at the current projection area position and at the predicted projection area position.
On the basis of the above embodiment, optionally, the exposure brightness determination module 810 is configured to:
determine the current image position of the target object in the currently captured image at the current acquisition moment; predict the motion of the target object based on the current image position, and determine the predicted image position of the target object in the currently captured image at the new acquisition moment; and determine, based on the predicted image position, the predicted projection area position of the target object on the image sensor;
the exposure brightness determination module 810 is configured to:
determine, based on the current image position, the current projection area position of the target object on the image sensor at the current acquisition moment.
On the basis of the above embodiment, optionally, the exposure parameter adjustment module 820 includes:
a target brightness determination unit, configured to determine the target exposure brightness information associated with the type of the target object;
an exposure parameter adjustment unit, configured to adjust, based on the target exposure brightness information and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
On the basis of the above embodiment, optionally, the exposure parameter adjustment unit is configured to:
determine the brightness difference information between the target exposure brightness information and the estimated exposure brightness information; and if the brightness difference information is detected not to satisfy the preset difference information, adjust, according to the brightness difference information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
On the basis of the above embodiment, optionally, the apparatus further includes:
a captured image analysis module 840, configured to determine the image quality and/or feature similarity of the target object included in the newly captured image;
a captured image processing module 850, configured to adjust the image parameters of the newly captured image according to the image quality and/or the feature similarity, obtaining an adjusted newly captured image.
The image capturing apparatus provided in this embodiment can perform the image capturing method provided in any embodiment of the present application, and has the corresponding functions and effects for performing that method; for the process, see the related operations of the image capturing method in the preceding embodiments.
FIG. 9 is a structural schematic diagram of an electronic device provided in an embodiment of the present application. As shown in FIG. 9, the electronic device provided in this embodiment includes one or more processors 910 and a storage apparatus 920; there may be one or more processors 910 in the electronic device, with one processor 910 taken as an example in FIG. 9; the storage apparatus 920 is configured to store one or more programs; the one or more programs are executed by the one or more processors 910, causing the one or more processors 910 to implement the image capturing method according to any one of the embodiments of the present application.
The electronic device may further include an input apparatus 930 and an output apparatus 940.
The processor 910, the storage apparatus 920, the input apparatus 930, and the output apparatus 940 in the electronic device may be connected by a bus or in other ways; connection by a bus is taken as the example in FIG. 9.
The storage apparatus 920 in the electronic device, as a computer-readable storage medium, may be configured to store one or more programs, which may be software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the image capturing method provided in the embodiments of the present application. The processor 910 runs the software programs, instructions, and modules stored in the storage apparatus 920, thereby executing the various functional applications and data processing of the electronic device, that is, implementing the image capturing method in the above method embodiments.
The storage apparatus 920 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the storage apparatus 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, the storage apparatus 920 may include memories remotely located relative to the processor 910, and these remote memories may be connected to the device through a network. Examples of the above network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input apparatus 930 may be configured to receive input numeric or character information and generate key signal input related to user settings and function control of the electronic device. The output apparatus 940 may include display devices such as a display screen.
Furthermore, when the one or more programs included in the above electronic device are executed by the one or more processors 910, the programs perform the following operations:
determining the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment; adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives; and acquiring a newly captured image at the new acquisition moment according to the adjusted exposure parameters, where both the newly captured image and the currently captured image include the target object.
When the one or more programs included in the above electronic device are executed by the one or more processors 910, the programs may also perform the related operations of the image capturing method provided in any embodiment of the present application.
An embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, is used to perform an image capturing method, the method including:
determining the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment, and the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment; adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives; and acquiring a newly captured image at the new acquisition moment according to the adjusted exposure parameters, where both the newly captured image and the currently captured image include the target object.
Optionally, when executed by a processor, the program may also be used to perform the image capturing method provided in any embodiment of the present application.
The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. A computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the description herein, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like mean that the feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples.

Claims (10)

  1. An image capturing method, comprising:
    predicting a predicted projection area position, on an image sensor, of a target object in a currently captured image at a new acquisition moment, and estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
    adjusting, according to a type of the target object and the estimated exposure brightness information, exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
    acquiring a newly captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the newly captured image and the currently captured image comprise the target object.
  2. The method according to claim 1, wherein predicting the estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment comprises:
    determining a current projection area position of the target object on the image sensor at a current acquisition moment;
    determining current exposure brightness information of the target object at the current acquisition moment at the current projection area position and at the predicted projection area position, respectively;
    determining the estimated exposure brightness information according to the current exposure brightness information at the current projection area position and at the predicted projection area position.
  3. The method according to claim 2, wherein predicting the predicted projection area position, on the image sensor, of the target object in the currently captured image at the new acquisition moment comprises:
    determining a current image position of the target object in the currently captured image at the current acquisition moment;
    predicting motion of the target object according to the current image position, and determining a predicted image position of the target object in the currently captured image at the new acquisition moment;
    determining, according to the predicted image position, the predicted projection area position of the target object on the image sensor;
    wherein determining the current projection area position of the target object on the image sensor at the current acquisition moment comprises:
    determining, according to the current image position, the current projection area position of the target object on the image sensor at the current acquisition moment.
  4. The method according to claim 1, wherein adjusting, according to the type of the target object and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives comprises:
    determining target exposure brightness information associated with the type of the target object;
    adjusting, according to the target exposure brightness information and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
  5. The method according to claim 4, wherein adjusting, according to the target exposure brightness information and the estimated exposure brightness information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives comprises:
    determining brightness difference information between the target exposure brightness information and the estimated exposure brightness information;
    in a case where the brightness difference information is detected not to satisfy preset difference information, adjusting, according to the brightness difference information, the exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives.
  6. The method according to claim 1, further comprising:
    determining at least one of image quality and feature similarity of the target object comprised in the newly captured image;
    adjusting image parameters of the newly captured image according to the determined at least one of the image quality and the feature similarity, to obtain an adjusted newly captured image.
  7. An image capturing apparatus, comprising:
    an exposure brightness determination module, configured to predict a predicted projection area position, on an image sensor, of a target object in a currently captured image at a new acquisition moment, and estimated exposure brightness information of the target object at the predicted projection area position at the new acquisition moment;
    an exposure parameter adjustment module, configured to adjust, according to a type of the target object and the estimated exposure brightness information, exposure parameters of the target object at the predicted projection area position when the new acquisition moment arrives;
    an image exposure capturing module, configured to acquire a newly captured image at the new acquisition moment according to the adjusted exposure parameters, wherein both the newly captured image and the currently captured image comprise the target object.
  8. The apparatus according to claim 7, wherein the exposure brightness determination module is configured to:
    determine a current projection area position of the target object on the image sensor at a current acquisition moment;
    determine current exposure brightness information of the target object at the current acquisition moment at the current projection area position and at the predicted projection area position, respectively;
    determine the estimated exposure brightness information according to the current exposure brightness information.
  9. An electronic device, comprising:
    at least one processor;
    a storage apparatus configured to store at least one program;
    wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the image capturing method according to any one of claims 1 to 6.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image capturing method according to any one of claims 1 to 6.
PCT/CN2020/084589 2019-12-03 2020-04-14 Image capturing method, apparatus, device, and storage medium WO2021109409A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/782,521 US20230005239A1 (en) 2019-12-03 2020-04-14 Image capturing method and device, apparatus, and storage medium
EP20896699.4A EP4072124A4 (en) 2019-12-03 2020-04-14 IMAGE RECORDING METHOD AND DEVICE, DEVICE AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911222644.4A 2019-12-03 2019-12-03 Image capturing method, apparatus, device, and storage medium
CN201911222644.4 2019-12-03

Publications (1)

Publication Number Publication Date
WO2021109409A1 true WO2021109409A1 (zh) 2021-06-10

Family

ID=76104673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/084589 WO2021109409A1 (zh) 2019-12-03 2020-04-14 Image capturing method, apparatus, device, and storage medium

Country Status (4)

Country Link
US (1) US20230005239A1 (zh)
EP (1) EP4072124A4 (zh)
CN (1) CN112911160B (zh)
WO (1) WO2021109409A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727022B (zh) * 2021-08-30 2023-06-20 杭州申昊科技股份有限公司 Method and apparatus for acquiring inspection images, electronic device, and storage medium
CN114125317A (zh) * 2021-12-01 2022-03-01 展讯通信(上海)有限公司 Exposure control method, apparatus, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141826A1 (en) * 2008-12-05 2010-06-10 Karl Ola Thorn Camera System with Touch Focus and Method
CN103369227A (zh) * 2012-03-26 2013-10-23 联想(北京)有限公司 Photographing method for a moving object and electronic device
CN103376616A (zh) * 2012-04-26 2013-10-30 华晶科技股份有限公司 Image acquisition device, auto-focusing method, and auto-focusing system
CN105407276A (zh) * 2015-11-03 2016-03-16 北京旷视科技有限公司 Photographing method and device
CN107071290A (zh) * 2016-12-28 2017-08-18 深圳天珑无线科技有限公司 Photographing method and photographing apparatus thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007074163A (ja) * 2005-09-05 2007-03-22 Sony Corp Imaging apparatus and imaging method
JP4433045B2 (ja) * 2007-12-26 2010-03-17 株式会社デンソー Exposure control device and exposure control program
CN101247479B (zh) * 2008-03-26 2010-07-07 北京中星微电子有限公司 Automatic exposure method based on a target area in an image
CN109547701B (zh) * 2019-01-04 2021-07-09 Oppo广东移动通信有限公司 Image capturing method and apparatus, storage medium, and electronic device
CN109922275B (zh) * 2019-03-28 2021-04-06 苏州科达科技股份有限公司 Adaptive exposure parameter adjustment method and apparatus, and a capture device
CN110099222B (zh) * 2019-05-17 2021-05-07 睿魔智能科技(深圳)有限公司 Exposure adjustment method and apparatus for a capture device, storage medium, and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141826A1 (en) * 2008-12-05 2010-06-10 Karl Ola Thorn Camera System with Touch Focus and Method
CN103369227A (zh) * 2012-03-26 2013-10-23 联想(北京)有限公司 Photographing method for a moving object and electronic device
CN103376616A (zh) * 2012-04-26 2013-10-30 华晶科技股份有限公司 Image acquisition device, auto-focusing method, and auto-focusing system
CN105407276A (zh) * 2015-11-03 2016-03-16 北京旷视科技有限公司 Photographing method and device
CN107071290A (zh) * 2016-12-28 2017-08-18 深圳天珑无线科技有限公司 Photographing method and photographing apparatus thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4072124A4

Also Published As

Publication number Publication date
CN112911160A (zh) 2021-06-04
US20230005239A1 (en) 2023-01-05
CN112911160B (zh) 2022-07-08
EP4072124A4 (en) 2023-12-06
EP4072124A1 (en) 2022-10-12

Similar Documents

Publication Publication Date Title
US11089207B2 (en) Imaging processing method and apparatus for camera module in night scene, electronic device and storage medium
CN108322646B (zh) Image processing method and apparatus, storage medium, and electronic device
CN110691193B (zh) Camera switching method and apparatus, storage medium, and electronic device
CN109218628B (zh) Image processing method and apparatus, electronic device, and storage medium
CN108419023B (zh) Method for generating a high dynamic range image and related device
KR101826897B1 (ko) Method and camera for determining an image adjustment parameter
KR101926490B1 (ko) Image processing apparatus and method
WO2018201809A1 (zh) Dual-camera-based image processing apparatus and method
CN110166692B (zh) Method and apparatus for improving the accuracy and speed of camera auto-focusing
WO2020037959A1 (en) Image processing method, image processing apparatus, electronic device and storage medium
CN110248048B (zh) Video jitter detection method and apparatus
US20230360254A1 (en) Pose estimation method and related apparatus
WO2020057353A1 (zh) Object tracking method based on a high-speed dome camera, monitoring server, and video surveillance system
WO2021037285A1 (zh) Metering adjustment method, apparatus, device, and storage medium
CN109618102B (zh) Focus processing method and apparatus, electronic device, and storage medium
WO2021109409A1 (zh) Image capturing method, apparatus, device, and storage medium
US8798369B2 (en) Apparatus and method for estimating the number of objects included in an image
CN113940057A (zh) Systems and methods for controlling exposure settings based on motion characteristics associated with an image sensor
US20230419505A1 (en) Automatic exposure metering for regions of interest that tracks moving subjects using artificial intelligence
WO2022174539A1 (zh) Camera exposure method and apparatus for a self-propelled device
CN112911132B (zh) Photographing control method, photographing control apparatus, electronic device, and storage medium
KR20230173667A (ko) Shutter value adjustment of a surveillance camera via AI-based object recognition
CN113572968A (zh) Image fusion method and apparatus, camera device, and storage medium
CN113691720B (zh) Focusing method and apparatus for a laser pan-tilt camera, storage medium, and device
WO2022227916A1 (zh) Image processing method, image processor, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896699

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020896699

Country of ref document: EP

Effective date: 20220704