WO2022174696A1 - Exposure processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Exposure processing method and apparatus, electronic device, and computer-readable storage medium Download PDF

Info

Publication number
WO2022174696A1
WO2022174696A1 PCT/CN2022/071410 CN2022071410W
Authority
WO
WIPO (PCT)
Prior art keywords
exposure
focal plane
frame image
current frame
estimated
Prior art date
Application number
PCT/CN2022/071410
Other languages
English (en)
French (fr)
Inventor
Zhu Wenbo (朱文波)
Original Assignee
Guangdong OPPO Mobile Telecommunications Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Co., Ltd.
Publication of WO2022174696A1 publication Critical patent/WO2022174696A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/617Upgrading or updating of programs or applications for camera control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the present application relates to the field of image processing, and in particular, to an exposure processing method, an apparatus, an electronic device, and a computer-readable storage medium.
  • Embodiments of the present application provide an exposure processing method, an apparatus, an electronic device, and a computer-readable storage medium, which can improve the efficiency of exposure processing.
  • An embodiment of the present application provides an exposure processing method, wherein the exposure processing method includes:
  • Exposure is performed using the estimated exposure parameters during shooting to obtain the current frame image.
  • the embodiment of the present application also provides an exposure processing device, wherein the exposure processing device includes:
  • the first acquisition module is used to acquire the estimated depth information of the current frame image
  • a second obtaining module configured to obtain an estimated exposure parameter corresponding to the estimated depth information according to a change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the exposure module is configured to use the estimated exposure parameters to perform exposure during shooting to obtain the current frame image.
  • An embodiment of the present application further provides an electronic device, wherein the electronic device includes a processor and a memory, a computer program is stored in the memory, and the processor, by calling the computer program stored in the memory, executes the steps in any one of the exposure processing methods provided by the embodiments of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer is made to execute the steps in any one of the exposure processing methods provided in the embodiments of the present application.
  • FIG. 1 is a schematic flowchart of a first type of exposure processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a second exposure processing method provided by an embodiment of the present application.
  • FIG. 3 is a third schematic flowchart of the exposure processing method provided by the embodiment of the present application.
  • FIG. 4 is a fourth schematic flowchart of the exposure processing method provided by the embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a fifth type of exposure processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a first structure of an exposure processing apparatus provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a second structure of an exposure processing apparatus provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a first structure of an electronic device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a second structure of an electronic device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an integrated circuit chip provided by an embodiment of the present application.
  • Embodiments of the present application provide an exposure processing method, which is applied to electronic equipment.
  • the execution subject of the exposure processing method may be the exposure processing device provided in the embodiment of the present application, or an electronic device integrated with the exposure processing device; the exposure processing device may be implemented in hardware or software, and the electronic device may be a smartphone, a tablet computer, a handheld computer, a notebook computer, a desktop computer, or any other device that is equipped with a processor and has processing capability.
  • the embodiments of the present application provide an exposure processing method, including:
  • Exposure is performed using the estimated exposure parameters during shooting to obtain the current frame image.
  • before obtaining the estimated exposure parameter corresponding to the estimated depth information according to the change trend between the exposure parameter and the depth information, the method further includes:
  • the estimated depth information is input into the exposure processing model, and the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information.
  • the acquisition of an exposure processing model trained from multiple historical frame images includes:
  • the exposure processing model is obtained by training the depth information, brightness information and exposure parameters of the focal plane regions of the multiple historical frame images.
  • the obtaining the estimated depth information of the current frame image includes:
  • the method further includes:
  • the exposure processing model further includes a change trend between the exposure index and the relative depth, where the relative depth is the relative depth of the non-focal plane area relative to the focal plane area in the same image, and the method further includes:
  • the method further includes:
  • Exposure correction is performed on the non-focal plane area of the current frame image by using the estimated exposure index.
  • the method further includes:
  • the non-focal plane area of the current frame image includes at least two non-focal plane areas, and the dividing the current frame image into the focal plane area and the non-focal plane area includes:
  • the other area is divided into at least two non-focal plane areas according to the depth interval in which it is located, wherein each non-focal plane area corresponds to a depth interval.
  • the segmenting the focal plane region from the current frame image includes:
  • FIG. 1 is a schematic flowchart of a first exposure processing method provided by an embodiment of the present application.
  • the execution body of the exposure processing method may be the exposure processing apparatus provided in the embodiment of the present application, or an integrated circuit chip or electronic device integrated with the exposure processing apparatus.
  • the exposure processing method provided in the embodiment of the present application may include the following steps:
  • the camera is activated to obtain a video stream of the scene to be photographed, and multiple frames of images of the scene to be photographed are obtained.
  • the multi-frame image may be in RAW (RAW Image Format) format, that is, an unprocessed original image obtained after the image sensor converts the captured light signal into a digital signal.
  • the specific number of multi-frame images can be set according to actual needs, which is not limited in this application.
  • the current frame image and the historical frame image can be distinguished.
  • the current frame image is the frame that is currently to be captured;
  • the historical frame images are the images captured before the current frame image;
  • since the historical frame images and the current frame image are of the same scene to be shot, they may have the same or similar image content.
  • the multiple historical frame images may have the same or different exposure parameters.
  • the exposure parameter is a parameter that affects the exposure of the image when the image is captured, such as aperture, exposure duration, and sensitivity.
  • exposure parameters may include exposure duration, and exposure duration affects exposure.
  • the historical frame image may also be an image that has been subjected to exposure processing using the exposure processing method provided in the embodiment of the present application.
  • the depth information of the current frame image may be estimated according to the depth information of the multiple historical frame images, so as to obtain the estimated depth information of the current frame image. Since the multiple historical frame images are successive images of the same scene to be shot, the content in the images generally does not change much, and the estimated depth information of the current frame image can be obtained from the depth information of the historical frame images. If there is a moving object in the scene to be shot, the movement trend of the moving object can be obtained from the multiple historical frame images, and the depth information of the moving object in the frame image to be captured (i.e., the current frame image that has not yet been captured) can be predicted according to that movement trend and the depth information of the moving object in the multiple historical frame images.
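The patent does not give an algorithm for this prediction; as a minimal Python sketch, one simple assumption is per-pixel linear extrapolation of the last two historical depth maps (the function name `estimate_current_depth` and the linear model are illustrative, not from the source):

```python
import numpy as np

def estimate_current_depth(history):
    """Predict the depth map of the not-yet-captured current frame.

    `history` is a list of per-frame depth maps (2-D numpy arrays) of the
    same scene, ordered oldest to newest. For a mostly static scene the
    newest map is already a good estimate; for moving content we
    extrapolate the per-pixel trend of the last two frames linearly.
    """
    if len(history) == 1:
        return history[-1].copy()
    # d_t ≈ d_{t-1} + (d_{t-1} - d_{t-2}): continue the last motion step
    return history[-1] + (history[-1] - history[-2])
```

A real implementation would track moving objects explicitly rather than extrapolating every pixel, but the idea of projecting the historical depth trend forward is the same.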
  • the depth information may exist in the form of at least one depth map, indicating the distance between the sensor (eg, image sensor, depth sensor) and the object (eg, the distance from the depth sensor to all objects in the scene to be photographed). Since different objects have corresponding depth information, the different depth information can reflect the positions of different objects in the image from the device.
  • the electronic device is provided with a light sensor; the phase difference between the near-infrared light emitted by the light sensor and its reflected light is calculated from their phase parameters, the depth of each object in the historical frame image can be calculated according to this phase difference, and the depth information of the historical frame image is thereby obtained.
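The phase-to-depth conversion above follows the standard continuous-wave time-of-flight relation; as a rough sketch (the modulation-frequency parameter is an assumption, since the patent does not specify the sensor's operating details):

```python
import math

def depth_from_phase(phase_diff_rad, mod_freq_hz, c=299_792_458.0):
    """Depth from the phase difference between emitted near-infrared
    light and its reflection (continuous-wave time of flight).

    The round trip of length 2d delays the signal by 2d/c, which at
    modulation frequency f appears as a phase shift of 2*pi*f*(2d/c),
    so d = c * phase_diff / (4 * pi * f).
    """
    return c * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, a phase shift of pi radians at a 20 MHz modulation frequency corresponds to a depth of roughly 3.75 m.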
  • depth information is acquired in different ways.
  • the linear perspective method can be used to obtain the depth information of the static area
  • the high-precision optical flow method based on the deformation theory can be used to obtain the motion vector and convert it into the depth information of the moving area.
  • an exposure processing model obtained by training on a plurality of historical frame images is acquired, and the estimated depth information is input into the exposure processing model to obtain the estimated exposure parameter corresponding to the estimated depth information according to the change trend between the exposure parameter and the depth information in the exposure processing model.
  • the brightness information during the imaging of the historical frame images will be affected by the depth information and exposure parameters.
  • the depth information, brightness information and exposure parameters of multiple historical frame images are acquired, and the exposure processing model is obtained by training on the depth information, brightness information and exposure parameters of the multiple historical frame images. For example, by learning the brightness information and depth information in multiple historical frame images, the trend of brightness information changing with depth information can be obtained; by learning the brightness information and exposure parameters in multiple historical frame images, the trend of brightness information changing with exposure parameters can be obtained; from these, the change trend between exposure parameters and depth information is learned in the exposure processing model.
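The patent leaves the form of the "change trend" open; a minimal Python sketch, assuming a simple linear least-squares trend between depth and one exposure parameter (function names and the linear model are illustrative assumptions, not the patented method):

```python
import numpy as np

def fit_exposure_trend(depths, exposures):
    """Fit a linear change trend between focal-plane depth and an
    exposure parameter (e.g. exposure duration) across several
    historical frames, via ordinary least squares."""
    slope, intercept = np.polyfit(np.asarray(depths, dtype=float),
                                  np.asarray(exposures, dtype=float), 1)
    return slope, intercept

def predict_exposure(trend, estimated_depth):
    """Apply the fitted trend to the estimated depth of the current frame."""
    slope, intercept = trend
    return slope * estimated_depth + intercept
```

A learned model could equally be a small neural network; the linear fit only illustrates the trend-then-predict flow the text describes.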
  • the estimated depth information of the current frame image and the depth information of the multiple historical frame images used to train the exposure processing model should contain the same or similar content. If the depth information input from the multiple historical frame images participating in the training is the depth information of the entire image, the estimated depth information of the current frame image obtained here is also the estimated depth information of the entire current frame image; if it is the depth information of a certain area in the image, the estimated depth information obtained here is also the estimated depth information of the same area in the current frame image.
  • the estimated depth information is input into the exposure processing model trained from multiple historical frame images. Since the exposure processing model includes a change trend between exposure parameters and depth information, an estimated exposure parameter corresponding to the estimated depth information can be obtained according to the change trend and the input estimated depth information of the current frame image.
  • the estimated exposure parameter is an exposure parameter suitable for the current frame image, inferred by the exposure processing model according to the rules summarized from the historical frame images.
  • the estimated exposure parameters may include estimated aperture, estimated exposure time, and/or estimated light sensitivity.
  • the estimated exposure parameters estimated by the model may be used to perform automatic exposure processing. For example, adjust the aperture to the estimated aperture size, adjust the exposure time to the estimated exposure time, and adjust the sensitivity value to the estimated sensitivity value to obtain the current frame image.
  • the exposure processing of the current frame image can be automatically performed by the electronic device, without requiring manual processing by the user.
  • multiple historical frame images and current frame images can be divided into focal plane areas and non-focal plane areas.
  • the brightness information, depth information and exposure parameters of the plurality of historical frame images used to train the exposure processing model may refer to the brightness information, depth information and exposure parameters of the focal plane regions of the plurality of historical frame images.
  • the estimated depth information of the current frame image input into the exposure processing model may be estimated depth information obtained by estimating the depth of the focal plane region of the current frame image.
  • the focal plane area is the key processing area after focusing processing, after focusing processing, the exposure of the focal plane area is more accurate than the exposure of the non-focal plane, and the exposure effect of the focal plane area is also better than that of the non-focal plane. Therefore, by using the brightness information, depth information and exposure parameters of the focal plane regions of multiple historical frame images to train the model, the obtained change trend between the exposure parameters and the depth information is more accurate.
  • the segmentation of an image (historical frame image and/or current frame image) into a focal plane area and a non-focal plane area can be achieved in the following manner:
  • the depth interval of the focal plane may be a depth interval in which a focus object is located when a certain focus object is photographed. It can be understood that since the object is three-dimensional, and the depth corresponding to the focal plane is a specific value, it is often impossible to include all the depth information of the object, so it needs to be summarized by the depth interval.
  • when dividing the depth interval of the focal plane, depths within a range before and after the depth of the focal plane are included, so that the area where the focused object is located can be correctly divided as the focal plane area.
  • the user may point to the focus object through a focus operation, that is, the focus parameter may be acquired based on the user's focus operation.
  • the non-focal plane area may include at least two non-focal plane areas. After the focal plane area is divided, the depth information of the other areas except the focal plane area is acquired, at least two depth intervals are divided according to the depth information of the other areas, and the other areas are divided into at least two non-focal plane areas according to the depth intervals in which they are located.
  • the number of non-focal plane areas depends on the number of depth bins. For example, each non-focal plane area corresponds to one depth interval, or the non-focal plane area is divided according to every two adjacent depth intervals, that is, each non-focal plane area corresponds to two depth intervals, and so on.
  • This embodiment of the present application does not limit the way of dividing the non-focal plane area, as long as it is divided according to the depth interval.
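The division above can be sketched in a few lines of Python. This is one plausible reading, assuming a per-pixel depth map, a tolerance band around the focus depth, and equal-width depth intervals for the rest (all illustrative choices; the patent only requires division by depth interval):

```python
import numpy as np

def split_regions(depth_map, focus_depth, tolerance, n_bins=2):
    """Divide an image into a focal plane area and non-focal plane areas.

    Pixels whose depth lies within `tolerance` of the focus depth form
    the focal plane mask; the remaining pixels are split into `n_bins`
    equal-width depth intervals, one non-focal plane region per interval.
    Returns (focal_mask, labels) where label 0 marks the focal plane
    area and labels 1..n_bins mark the non-focal plane regions.
    """
    focal_mask = np.abs(depth_map - focus_depth) <= tolerance
    labels = np.zeros(depth_map.shape, dtype=int)  # 0 = focal plane area
    other = depth_map[~focal_mask]
    if other.size == 0:
        return focal_mask, labels
    edges = np.linspace(other.min(), other.max(), n_bins + 1)
    for i in range(n_bins):
        in_bin = (~focal_mask) & (depth_map >= edges[i]) & (depth_map <= edges[i + 1])
        labels[in_bin] = i + 1
    return focal_mask, labels
```

Grouping every two adjacent intervals into one region, as the text also allows, would just merge consecutive labels.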
  • the current frame image is divided into a focal plane area and a non-focal plane area; the dividing method can refer to the method mentioned above. Then, the depth information and brightness information of the focal plane area of the current frame image are acquired, the depth information, brightness information and estimated exposure parameters of the focal plane area of the current frame image are input into the exposure processing model, and the exposure processing model is updated. The obtained current frame image is then used as a historical frame image for the images to be captured subsequently.
  • the exposure processing method provided by the embodiment of the present application is not only applied to a certain frame of image, but is continuously applied to each frame of image during the acquisition of multiple frames of original images. After each frame of image is shot using the estimated exposure parameters obtained by the exposure processing model, the depth information, brightness information and estimated exposure parameters of that frame are re-input into the exposure processing model, and the exposure processing model is constantly updated.
  • FIG. 2 is a schematic flowchart of a second exposure processing method provided by an embodiment of the present application. It shows the process of obtaining estimated exposure parameters by using the exposure processing model for multiple frames of images and updating the exposure processing model.
  • images 11 to 19 are taken in order from small to large (images 17 and 18 are omitted in the figure), and the exposure processing model is obtained by training images 11, 12 and 13.
  • image 14 is used as the current frame image and images 11 to 13 are used as historical frame images; the exposure processing model obtained by training on images 11 to 13 is used to obtain the estimated exposure parameters of image 14, and the estimated exposure parameters are used for exposure to obtain image 14.
  • when the exposure processing model receives the data of image 14, the data of image 14 is also included in the calculation; therefore, the exposure processing model is updated, and the updated exposure processing model is equivalent to one trained on images 11-14.
  • image 15 is used as the current frame image and images 11-14 are used as historical frame images; the estimated exposure parameters of image 15 are obtained using the updated exposure processing model trained on images 11-14, exposure is performed with these estimated exposure parameters to obtain image 15, and the depth information, brightness information and estimated exposure parameters of image 15 are input into the exposure processing model, so that the exposure processing model (11-14) is further updated to the exposure processing model (11-15); the updated exposure processing model (11-15) is equivalent to one trained on images 11-15.
  • similarly, the exposure processing model (11-15) is used to obtain the estimated exposure parameters of image 16 when image 16 is subsequently captured, and the captured image 16 causes the exposure processing model to be updated again; the exposure processing model (11-18) obtains the estimated exposure parameters for image 19, and so on.
  • each current frame image in this application can serve as a historical frame image for the next frame image when the next frame image is taken, and the exposure processing model used to obtain each current frame image is the latest exposure processing model updated by the previous historical frame images, so that the updated exposure processing model fits the latest shooting content.
  • in this case, the estimated exposure parameters of image 15 are no longer input; instead, the actual exposure parameters of image 15 are input, and these actual exposure parameters participate in updating the exposure processing model, so that the estimated exposure parameters output by the updated exposure processing model are more accurate and more in line with user expectations.
  • FIG. 3 is a schematic flowchart of a third type of exposure processing method provided by an embodiment of the present application.
  • the historical frame images participating in the training of the exposure processing model may not be all previous historical frame images, but only the several most recently captured historical frame images.
  • after the exposure processing model is updated by image 14, the change trend between the exposure parameters and the depth information in the updated exposure processing model is obtained from the relevant data of images 12, 13 and 14; the relevant data of image 11 does not participate in this update. That is, the exposure processing model (11-14) in FIG. 2 is changed to the exposure processing model (12-14) in FIG. 3; similarly, the exposure processing model (11-15) in FIG. 2 is changed to the exposure processing model (13-15) in FIG. 3, and the exposure processing model (11-18) in FIG. 2 is changed to the exposure processing model (16-18) in FIG. 3.
  • Only the latest several historical frame images are used to train and update the exposure processing model, which prevents overly old historical frame images from affecting the accuracy of the exposure processing model.
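The rolling-window update described above can be sketched as follows. The class name, the linear-trend fit, and the per-frame (focal depth, exposure) summary are all illustrative assumptions standing in for whatever model the implementation actually uses:

```python
from collections import deque

import numpy as np

class ExposureTrendModel:
    """Rolling-window stand-in for the exposure processing model: keeps
    only the newest `window` historical frames and refits the
    depth-to-exposure trend each time a captured frame is fed back."""

    def __init__(self, window=3):
        # Each entry is a (focal-plane depth, exposure parameter) pair;
        # deque(maxlen=...) silently drops the oldest frame on overflow.
        self.frames = deque(maxlen=window)

    def update(self, focal_depth, exposure):
        """Feed back the data of a newly captured frame."""
        self.frames.append((focal_depth, exposure))

    def estimate(self, estimated_depth):
        """Estimate the exposure parameter for the current frame."""
        depths, exposures = zip(*self.frames)
        if len(self.frames) < 2:
            return exposures[-1]  # not enough data for a trend yet
        slope, intercept = np.polyfit(depths, exposures, 1)
        return slope * estimated_depth + intercept
```

Feeding in frames 11-14 with `window=3` leaves only 12-14 in the model, mirroring the (12-14) notation of FIG. 3.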
  • The selection manner and quantity of the historical frame images shown in FIG. 2 and FIG. 3 are exemplary. Those skilled in the art should understand that the selection manner and quantity of historical frame images can be changed according to requirements.
  • the number of historical frame images used to train the exposure processing model may be determined according to the current power consumption of the electronic device. A correspondence between power consumption levels and numbers of historical frame images is preset, and the current power consumption value of the electronic device is obtained (for example, the power consumption value can be the power consumed by the electronic device, the number of processes running in the background, etc.); according to the preset correspondence, the power consumption level to which the current power consumption value belongs is determined, and the number of historical frame images corresponding to that power consumption level is obtained.
  • the power consumption levels range from level one to level five, from low power consumption to high power consumption.
  • for example, when it is determined that the power consumption level to which the current power consumption value belongs is low (e.g., level 1), the 7 frames captured before the current frame image are obtained to train the exposure processing model; when it is determined that the power consumption level to which the current power consumption value belongs is level 5, only the 3 frames captured before the current frame image are used to train the exposure processing model.
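The level-to-frame-count lookup can be sketched as below. The threshold values and the exact counts per level are illustrative placeholders; the patent only fixes the endpoints (more frames at low power consumption, fewer at high):

```python
def frames_for_power(power_value, thresholds=(20, 40, 60, 80),
                     counts=(7, 6, 5, 4, 3)):
    """Map a current power-consumption value to a level (1..5) and
    return how many recent historical frames to train on.

    `thresholds` are the preset level boundaries (hypothetical units);
    `counts[level - 1]` is the frame count for that level.
    """
    level = sum(power_value > t for t in thresholds) + 1  # 1..5
    return counts[level - 1]
```

So a lightly loaded device trains on 7 recent frames, while a heavily loaded one falls back to 3.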
  • FIG. 4 is a schematic flowchart of a fourth type of exposure processing method provided by an embodiment of the present application.
  • the exposure processing method can be applied to the electronic equipment provided by the embodiments of the present application, and the exposure processing methods provided by the embodiments of the present application can include the following steps:
  • the camera is activated to obtain a video stream of the scene to be photographed, and multiple frames of images of the scene to be photographed are obtained.
  • the current frame image and the historical frame image can be distinguished.
  • the current frame image is the frame that is currently to be captured;
  • the historical frame images are the images captured before the current frame image;
  • since the historical frame images and the current frame image are of the same scene to be shot, they may have the same or similar image content.
  • the multiple historical frame images may have the same or different exposure parameters.
  • the historical frame image may also be an image that has been subjected to exposure processing using the exposure processing method provided in the embodiment of the present application.
  • the same historical frame image may include one non-focal plane area, or may include at least two non-focal plane areas.
  • the depth information of the other areas except the focal plane area is acquired, at least two depth intervals are divided according to the depth information of the other areas, and the other areas are divided into at least two non-focal plane regions according to the depth interval in which they are located, where each non-focal plane region corresponds to a depth interval.
  • the number of non-focal plane areas depends on the number of depth bins. For example, each non-focal plane area corresponds to one depth interval, or the non-focal plane area is divided according to every two adjacent depth intervals, that is, each non-focal plane area corresponds to two depth intervals, and so on.
  • This embodiment of the present application does not limit the way of dividing the non-focal plane area, as long as it is divided according to the depth interval.
  • the focal plane area is first divided, and after the focal plane area is divided, other areas except the focal plane area are non-focal plane areas.
  • the segmentation of the focal plane area can be achieved in the following ways:
  • the area in the image where the depth information is in the depth interval of the focal plane is divided into focal plane areas.
  • the exposure processing model includes a change trend between exposure parameters and depth information.
  • the focal plane regions of the multiple historical frame images may have the same or different exposure parameters. For example, by continuously adjusting the exposure parameters during shooting, multiple historical frame images with different exposure parameters can be obtained.
  • the brightness information of the focal plane area will be affected by depth information and exposure parameters.
  • the depth information, brightness information and exposure parameters of the focal plane regions of multiple historical frame images are acquired, and the exposure processing model is obtained by training on the depth information, brightness information and exposure parameters of the focal plane regions of the multiple historical frame images.
  • the trend of the brightness information of the focal plane region changing with the depth information can be obtained.
  • by learning the brightness information and exposure parameters, the trend of the brightness information of the focal plane area changing with the exposure parameters can be obtained, and the change trend between the exposure parameters and the depth information is then learned in the exposure processing model.
  • the focal plane area is the key processing area after focusing processing, after focusing processing, the exposure of the focal plane area is more accurate than the exposure of the non-focal plane, and the exposure effect of the focal plane area is also better than that of the non-focal plane. Therefore, by using the brightness information, depth information and exposure parameters of the focal plane regions of multiple historical frame images to train the model, the obtained change trend between the exposure parameters and the depth information is more accurate.
  • the exposure index can be an index such as white balance that can adjust the exposure of the imaged image
  • the white balance is an index that describes the accuracy of the white generated after the three primary colors of red, green and blue are mixed, and is used for color restoration and tint processing.
  • the white balance setting can calibrate the deviation of color temperature. For both the shooting and the finished image, the white balance can be adjusted to achieve the desired picture effect.
  • Relative depth refers to the relative depth of the non-focal plane area relative to the focal plane area in the same image.
  • At least two non-focal plane regions may be included in a historical frame image. After segmenting the focal plane area in the historical frame images, the depth information of the other areas except the focal plane area is obtained, at least two depth intervals are divided according to the depth information of the other areas, and the other areas are divided into at least two non-focal plane regions, wherein each non-focal plane region corresponds to a depth interval.
  • the number of non-focal plane areas depends on the number of depth bins.
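The depth-interval partition described above can be sketched as follows; this is an illustrative implementation that assumes the depth map is a 2D array and uses hypothetical equal-width bins (the patent does not prescribe how the intervals are chosen):

```python
import numpy as np

def split_non_focal_regions(depth_map, focal_mask, n_bins=3):
    """Partition the non-focal area of a frame into depth intervals.

    Returns an integer label map: 0 marks the focal plane area (left as-is),
    and 1..n_bins mark the non-focal plane regions, one per depth interval.
    Bin edges are equal-width intervals over the observed non-focal depths,
    which is purely an illustrative choice.
    """
    labels = np.zeros(depth_map.shape, dtype=int)
    non_focal = ~focal_mask
    d = depth_map[non_focal]
    # n_bins + 1 edges span the non-focal depth range; interior edges feed digitize
    edges = np.linspace(d.min(), d.max(), n_bins + 1)
    bins = np.clip(np.digitize(d, edges[1:-1]) + 1, 1, n_bins)
    labels[non_focal] = bins
    return labels
```

Each resulting label then corresponds to one depth interval, matching the one-region-per-interval scheme described in the text.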
  • the exposure index and relative depth of each non-focal plane area in each historical frame image are input into the exposure processing model for training, so that the exposure processing model learns both the change trend between exposure parameters and depth information in the focal plane area and the change trend between exposure indexes and relative depth in the non-focal plane regions.
  • the depth information of the current frame image may be estimated according to the depth information of the multiple historical frame images to obtain the estimated depth information of the current frame image. Since the multiple historical frame images are successively shot images of the same scene, the image content generally does not change much, so the estimated depth information of the current frame image can be derived from the depth information of the historical frame images. If there is a moving object in the scene, its movement trend can be derived from the multiple historical frame images, and its depth information in the next frame (i.e., the not-yet-captured current frame image) can be predicted from that movement trend and the object's depth information in the multiple historical frame images.
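As a rough illustration of this estimation step, a per-pixel linear extrapolation over the last two historical depth maps could look like the sketch below; the linear-motion assumption and the `estimate_depth` helper are illustrative, not prescribed by the patent:

```python
import numpy as np

def estimate_depth(history):
    """Estimate the current (unshot) frame's depth map from historical depth maps.

    `history` is a list of 2D arrays (e.g. metres), oldest first. Assuming the
    scene changes slowly and objects move roughly linearly between frames,
    the next depth is extrapolated per pixel from the last two maps.
    """
    if len(history) == 1:
        # With a single historical frame, assume the scene is unchanged.
        return history[-1].copy()
    velocity = history[-1] - history[-2]   # per-pixel depth change per frame
    return history[-1] + velocity
```

A static scene simply returns the last depth map; a uniformly approaching object gets its trend continued one frame forward.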
  • the estimated depth information is input into the exposure processing model trained from multiple historical frame images. Since the exposure processing model includes a change trend between exposure parameters and depth information, an estimated exposure parameter corresponding to the estimated depth information can be obtained according to the change trend and the input estimated depth information of the current frame image.
  • the estimated exposure parameter is the exposure parameter suitable for the current frame image, inferred by the exposure processing model from the patterns summarized over the historical frame images.
  • the estimated exposure parameters may include estimated aperture, estimated exposure time, and/or estimated light sensitivity.
  • the estimated exposure parameters estimated by the model may be used to perform automatic exposure processing.
  • the aperture is adjusted to the estimated aperture size
  • the exposure time is adjusted to the estimated exposure time
  • the light sensitivity value is adjusted to the estimated light sensitivity value, thereby obtaining the current frame image.
  • the exposure processing of the current frame image can be automatically performed by the electronic device, and does not require manual processing by the user.
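A minimal sketch of this automatic application of the estimated parameters might look as follows; the `Camera` control interface (`set_aperture`, `set_exposure_time`, `set_iso`, `capture`) is a hypothetical stand-in for whatever the device's driver exposes:

```python
from dataclasses import dataclass

@dataclass
class ExposureParams:
    """The three estimated exposure parameters named in the text."""
    aperture: float        # estimated aperture (f-number)
    exposure_time: float   # estimated exposure time in seconds
    iso: int               # estimated light sensitivity value

def apply_exposure(camera, est: ExposureParams):
    """Push the model's estimated parameters to the (hypothetical) camera
    control interface, then capture the current frame automatically."""
    camera.set_aperture(est.aperture)
    camera.set_exposure_time(est.exposure_time)
    camera.set_iso(est.iso)
    return camera.capture()
```

The point is that all three adjustments happen without user interaction, which is the automation the text describes.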
  • the method of performing image segmentation on the current frame image can follow the aforementioned method for segmenting the historical frame images (including segmenting one focal plane area and at least two non-focal plane areas), and is not repeated here.
  • the depth of each non-focal plane area relative to the focal plane can be obtained, that is, the relative depth of each non-focal plane area.
  • since the exposure processing model contains the change trend between the exposure index and relative depth, the estimated exposure index of each non-focal plane area can be obtained from the input relative depths of the non-focal plane areas of the current frame image.
  • each non-focal plane area corresponds to one estimated exposure index, which adjusts that non-focal plane area of the current frame image in one-to-one correspondence.
  • the estimated exposure index may include an estimated white balance index, which is used to adjust the white balance of each non-focal plane area.
  • exposure correction is performed on the non-focal plane areas of the current frame image one by one using the estimated exposure indexes.
  • the estimated exposure parameters ensure automatic and accurate exposure of the focal plane area, and the exposure indexes correct the exposure of each non-focal plane area relative to the focal plane area. Thus, while the imaging effect of the focal plane area is guaranteed, the imaging effect of the non-focal plane areas is improved, avoiding problems such as "focusing on a bright place makes bright areas clear but dark areas unclear" or "focusing on a dark place makes dark areas clear but bright areas too dazzling", and ensuring the overall harmony of the image.
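The per-region correction could be sketched like this, assuming the estimated exposure index takes the form of per-channel white-balance gains keyed by region label (a simplification for illustration; the patent does not fix the concrete form of the index):

```python
import numpy as np

def correct_white_balance(image, labels, wb_gains):
    """Apply each non-focal plane region's estimated white-balance gains.

    `image`  : H x W x 3 uint8 RGB image.
    `labels` : H x W integer region map (0 = focal plane area, left untouched).
    `wb_gains`: dict mapping region label -> per-channel (R, G, B) gain,
                standing in for the model's estimated exposure indexes.
    """
    out = image.astype(float).copy()
    for region, gains in wb_gains.items():
        mask = labels == region
        out[mask] *= np.asarray(gains)   # scale each channel of that region
    return np.clip(out, 0, 255).astype(np.uint8)
```

Each non-focal region gets its own correction while the focal plane pixels, already exposed via the estimated exposure parameters, are left untouched.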
  • the depth information, brightness information, and estimated exposure parameters of the focal plane area of the current frame image are input into the exposure processing model to update the change trend between exposure parameters and depth information in the model; the updated exposure processing model is then used to obtain the estimated exposure parameters of the next frame image.
  • the exposure index and relative depth of the non-focal plane areas of the current frame image are input into the exposure processing model to update the change trend between the exposure index and relative depth in the model; the updated exposure processing model is then used to obtain the estimated exposure index of the next frame image.
  • FIG. 5 is a schematic flowchart of a fifth type of exposure processing method provided by an embodiment of the present application.
  • the camera is first turned on and the original image data of the historical frame images is obtained. Depth information is calculated from the original image data, and the focal plane in each historical frame image is determined from the calculated depth information and the focusing parameters used when shooting it. The image is then segmented based on the focal plane into a focal plane area and a non-focal plane area; brightness detection is performed on the focal plane area to obtain its brightness information; the brightness information, depth information, and exposure parameters of the focal plane area are input into the model for training; and the estimated exposure parameters of the next frame image are obtained from the trained change trend between exposure parameters and depth information. When the original image data of the next frame is acquired, exposure is performed with these estimated exposure parameters, and at the same time the estimated exposure parameters are fed back into the model to update it.
  • exposure correction is also performed on the acquired original image data. For the exposure correction, refer to the aforementioned description of obtaining the relative depths and exposure indexes of the non-focal plane areas for training and obtaining the estimated exposure indexes of the non-focal plane areas, which is not repeated here.
  • the obtained image can then undergo back-end image processing, for example cropping, doodling, watermarking, and adding text.
  • the exposure processing method provided by the embodiments of the present application is not applied to only a certain frame image; it is applied continuously to every frame during the acquisition of multiple frames of original images. After each frame is shot using the estimated exposure parameters obtained from the exposure processing model, the depth information, brightness information, and estimated exposure parameters of that frame are fed back into the exposure processing model, and the same applies to the estimated exposure index.
  • the exposure processing model is thus constantly being updated.
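The continuous estimate-capture-update cycle can be illustrated with a toy model that fits a linear trend between focal-plane depth and an exposure parameter; the linear form and the class below are assumptions for illustration only, not the patent's actual model:

```python
import numpy as np

class ExposureModel:
    """Toy stand-in for the exposure processing model: it learns a linear
    trend between focal-plane depth and an exposure parameter (e.g. exposure
    time) from the historical frames, and is refit after every captured frame,
    mirroring the continuous update described in the text."""

    def __init__(self):
        self.depths, self.exposures = [], []

    def update(self, depth, exposure):
        """Feed one captured frame's focal-plane depth and exposure back in."""
        self.depths.append(depth)
        self.exposures.append(exposure)

    def estimate(self, depth):
        """Estimate the exposure parameter for a predicted depth."""
        if len(self.depths) < 2:
            # Not enough history to fit a trend; fall back to the last value
            # (0.01 s is an arbitrary illustrative default).
            return self.exposures[-1] if self.exposures else 0.01
        slope, intercept = np.polyfit(self.depths, self.exposures, 1)
        return slope * depth + intercept
```

In use, each loop iteration would call `estimate()` with the predicted depth of the next frame, capture with the result, and then call `update()` with the measured values, so the model always reflects the latest shooting content.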
  • the exposure processing method provided by the embodiments of the present application first obtains the estimated depth information of the current frame image; then obtains the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, where the change trend is determined based on multiple historical frame images; and then performs exposure with the estimated exposure parameters during shooting to obtain the current frame image.
  • the change trend between exposure parameters and depth information is determined from the historical frame images, and the estimated exposure parameters are obtained directly from this change trend when the current frame image is captured; the exposure of the current frame image is thus processed automatically according to the exposure of the historical frame images, which improves the efficiency of exposure processing.
  • the embodiment of the present application also provides an exposure processing device.
  • FIG. 6 is a schematic diagram of a first structure of an exposure processing apparatus provided by an embodiment of the present application.
  • the exposure processing apparatus 300 can be applied to electronic equipment or integrated circuit chips, and the exposure processing apparatus 300 includes a first acquisition module 301, a second acquisition module 302 and an exposure module 303, as follows:
  • the first obtaining module 301 is used to obtain the estimated depth information of the current frame image
  • the second obtaining module 302 is configured to obtain the estimated exposure parameter corresponding to the estimated depth information according to the change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the exposure module 303 is configured to use the estimated exposure parameters to perform exposure during shooting to obtain the current frame image.
  • FIG. 7 is a schematic diagram of a second structure of an exposure processing apparatus 300 provided by an embodiment of the present application.
  • the exposure processing apparatus 300 further includes a third acquisition module 304 for:
  • the second obtaining module 302 can be used for:
  • the estimated depth information is input into the exposure processing model, and the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information.
  • when acquiring an exposure processing model trained from multiple historical frame images, the first acquisition module 301 may be used to:
  • the exposure processing model is obtained by training the depth information, brightness information and exposure parameters of the focal plane area of multiple historical frame images.
  • when acquiring the estimated depth information of the current frame image, the first acquiring module 301 may be used to:
  • the exposure processing apparatus 300 further includes a segmentation module 305, a fourth acquisition module 306, an update module 307, and a fifth acquisition module 308:
  • a segmentation module 305 configured to segment the current frame image into a focal plane area and a non-focal plane area
  • a fourth acquisition module 306 configured to acquire depth information and brightness information of the focal plane region of the current frame image
  • the update module 307 is used to input the depth information, brightness information and estimated exposure parameters of the focal plane area of the current frame image into the exposure processing model, so as to update the exposure processing model;
  • the fifth obtaining module 308 is configured to obtain the estimated exposure parameter of the next frame of image by using the updated exposure processing model.
  • the exposure processing model further includes a change trend between the exposure index and the relative depth, where the relative depth is the relative depth of the non-focal plane area relative to the focal plane area in the same image, and the exposure processing device 300 further includes:
  • the sixth acquisition module 309 is used to acquire the exposure index and relative depth of the non-focal plane area of the multiple historical frame images
  • the training module 310 is configured to input the exposure indexes and relative depths of the non-focal plane regions of the multiple historical frame images into the exposure processing model for training, and obtain the changing trend between the exposure indexes and the relative depths in the exposure processing model.
  • the current frame image is divided into a focal plane area and a non-focal plane area; the relative depth of the non-focal plane area of the current frame image is obtained; the relative depth is input into the exposure processing model; the estimated exposure index of the non-focal plane area is obtained according to the change trend between the exposure index and relative depth; and the estimated exposure index is used to correct the exposure of the non-focal plane area of the current frame image.
  • the exposure index and relative depth of the non-focal plane area of the current frame image are input into the exposure processing model to update it; the updated exposure processing model is used to obtain the estimated exposure index for the next frame image.
  • the non-focal plane area of the current frame image includes at least two non-focal plane areas.
  • the segmentation module 305 can be used for:
  • the other areas are divided into at least two non-focal plane areas according to the depth interval in which they are located, wherein each non-focal plane area corresponds to a depth interval.
  • the focus parameters used when the current frame image is captured are obtained; the focal plane at capture time is determined according to the focus parameters; the depth interval of the focal plane is obtained; and the area whose depth information falls within the depth interval of the focal plane is segmented as the focal plane area.
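This focal-plane segmentation step can be sketched as a simple depth-interval test; `focal_depth` (which would come from the focus parameters at capture time) and the `margin` tolerance covering the focused object's extent are illustrative stand-ins:

```python
import numpy as np

def segment_focal_plane(depth_map, focal_depth, margin=0.2):
    """Return a boolean mask marking the focal plane area: pixels whose depth
    falls inside the focal plane's depth interval
    [focal_depth - margin, focal_depth + margin].

    Using an interval rather than a single depth value reflects the text's
    point that a 3D object spans a range of depths around the focal plane.
    """
    return (depth_map >= focal_depth - margin) & (depth_map <= focal_depth + margin)
```

The complement of the returned mask is then the non-focal plane area, which can be further split into depth intervals as described earlier.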
  • the first acquisition module 301 acquires the estimated depth information of the current frame image; the second acquisition module 302 then obtains the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, where the change trend is determined based on multiple historical frame images; and the exposure module 303 uses the estimated exposure parameter to perform exposure during shooting to obtain the current frame image.
  • the change trend between exposure parameters and depth information is determined from the historical frame images, and the estimated exposure parameters are obtained directly from this change trend when the current frame image is captured; the exposure of the current frame image is thus processed automatically according to the exposure of the historical frame images, which improves the efficiency of exposure processing.
  • the embodiments of the present application also provide an electronic device.
  • the electronic device can be a smartphone, tablet computer, gaming device, AR (Augmented Reality) device, car, vehicle-periphery obstacle detection device, audio playback device, video playback device, notebook, desktop computing device, or a wearable device such as a watch, glasses, helmet, electronic bracelet, electronic necklace, or electronic clothing.
  • FIG. 8 is a schematic diagram of a first structure of an electronic device 400 according to an embodiment of the present application.
  • the electronic device 400 includes a processor 401 and a memory 402 .
  • a computer program is stored in the memory, and the processor invokes the computer program stored in the memory to execute the steps in any of the exposure processing methods provided in the embodiments of the present application.
  • the processor 401 is electrically connected to the memory 402 .
  • the processor 401 is the control center of the electronic device 400. It connects the various parts of the entire electronic device through various interfaces and lines, and executes the various functions of the electronic device and processes data by running or calling the computer program and the data stored in the memory 402, so as to monitor the electronic device as a whole.
  • the processor 401 in the electronic device 400 can load the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the steps in the above exposure processing method, and the processor 401 executes the instructions.
  • the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the estimated exposure parameters are used for exposure, and the current frame image is obtained.
  • FIG. 9 is a schematic diagram of a second structure of an electronic device 400 according to an embodiment of the present application.
  • the electronic device 400 further includes: a display screen 403 , a control circuit 404 , an input unit 405 , a sensor 406 and a power supply 407 .
  • the processor 401 is electrically connected to the display screen 403 , the control circuit 404 , the input unit 405 , the sensor 406 and the power source 407 , respectively.
  • the display screen 403 may be used to display information input by or provided to the user and various graphical user interfaces of the electronic device, which may be composed of images, text, icons, videos, and any combination thereof.
  • the control circuit 404 is electrically connected to the display screen 403 for controlling the display screen 403 to display information.
  • the input unit 405 may be used to receive input numbers, character information or user characteristic information (eg fingerprints), and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the input unit 405 may include a touch sensing module.
  • the sensor 406 is used to collect the information of the electronic device itself or the user's information or the external environment information.
  • sensors 406 may include distance sensors, magnetic field sensors, light sensors, acceleration sensors, fingerprint sensors, Hall sensors, position sensors, gyroscopes, inertial sensors, attitude sensors, barometers, heart rate sensors, and the like.
  • Power supply 407 is used to power various components of electronic device 400 .
  • the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • the electronic device 400 may further include a camera, a Bluetooth module, and the like, which will not be repeated here.
  • the processor 401 in the electronic device 400 can load the instructions corresponding to the processes of one or more computer programs into the memory 402 according to the steps in the above exposure processing method, and the processor 401 executes the instructions.
  • the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the estimated exposure parameters are used for exposure, and the current frame image is obtained.
  • before obtaining the estimated exposure parameter corresponding to the estimated depth information according to the change trend between the exposure parameter and the depth information, the processor 401 further performs the following steps:
  • the estimated depth information is input into the exposure processing model, and the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information.
  • when acquiring an exposure processing model trained from multiple historical frame images, the processor 401 performs the following steps:
  • the exposure processing model is obtained by training the depth information, brightness information and exposure parameters of the focal plane area of multiple historical frame images.
  • when acquiring the estimated depth information of the current frame image, the processor 401 further performs the following steps:
  • after obtaining the current frame image, the processor 401 performs the following steps:
  • the exposure processing model also includes a change trend between the exposure index and the relative depth, where the relative depth is the relative depth of the non-focal plane area relative to the focal plane area in the same image, and the processor 401 further performs the following steps:
  • the processor 401 further performs the following steps:
  • after using the estimated exposure index to perform exposure correction on the non-focal plane area of the current frame image, the processor 401 further performs the following steps:
  • the non-focal plane area of the current frame image includes at least two non-focal plane areas.
  • the processor 401 performs the following steps:
  • the other areas are divided into at least two non-focal plane areas according to the depth interval in which they are located, wherein each non-focal plane area corresponds to a depth interval.
  • when segmenting the focal plane region from the current frame image, the processor 401 performs the following steps:
  • the embodiment of the present application provides an electronic device, and the processor in the electronic device performs the following steps: first obtaining the estimated depth information of the current frame image; then obtaining the estimated exposure parameters corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, where the change trend is determined based on multiple historical frame images; and then using the estimated exposure parameters for exposure during shooting to obtain the current frame image.
  • the change trend between exposure parameters and depth information is determined from the historical frame images, and the estimated exposure parameters are obtained directly from this change trend when the current frame image is captured; the exposure of the current frame image is thus processed automatically according to the exposure of the historical frame images, which improves the efficiency of exposure processing.
  • Embodiments of the present application also provide an integrated circuit chip.
  • the integrated circuit chip can be used in smartphones, tablet computers, gaming equipment, AR (Augmented Reality) equipment, automobiles, vehicle-periphery obstacle detection devices, audio playback devices, video playback devices, notebooks, desktop computing equipment, and wearable devices such as watches, glasses, helmets, electronic bracelets, and electronic necklaces.
  • the integrated circuit chip provided by the embodiment of the present application is independent of the central processing unit and adopts hardware acceleration technology, allocating highly computation-intensive work to dedicated hardware so as to reduce the workload of the central processing unit, so that the central processing unit does not need to process each pixel of the image layer by layer through software.
  • FIG. 10 is a schematic structural diagram of an integrated circuit chip 500 provided by an embodiment of the present application.
  • the integrated circuit chip 500 includes a processor 501 , a memory 502 and an exposure processing device 300 .
  • the processor 501 is electrically connected to the memory 502 .
  • the processor 501 is the control center of the integrated circuit chip 500; it connects the various parts of the entire integrated circuit chip through various interfaces and lines, and implements the front-end depth information collection and the back-end multi-frame synthesis in the exposure processing method provided by the embodiments of the present application.
  • the exposure processing device 300 is responsible for implementing the steps in the above-mentioned exposure processing method, for example:
  • the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the estimated exposure parameters are used for exposure, and the current frame image is obtained.
  • Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on the computer, the computer executes the exposure processing method of any of the foregoing embodiments.
  • the computer when a computer program is run on a computer, the computer performs the following steps:
  • the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between the exposure parameter and the depth information, wherein the change trend is determined based on a plurality of historical frame images;
  • the estimated exposure parameters are used for exposure, and the current frame image is obtained.
  • the storage medium may include, but is not limited to, a read only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose an exposure processing method and apparatus, an electronic device, and a computer-readable storage medium. The exposure processing method includes: obtaining estimated depth information of a current frame image; obtaining an estimated exposure parameter corresponding to the estimated depth information according to a change trend between exposure parameters and depth information, wherein the change trend is determined based on multiple historical frame images; and performing exposure with the estimated exposure parameter during shooting to obtain the current frame image.

Description

Exposure processing method and apparatus, electronic device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202110194282.3, filed with the Chinese Patent Office on February 20, 2021 and entitled "Exposure Processing Method and Apparatus, Electronic Device, and Computer-Readable Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular to an exposure processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of smart terminal technology, the use of electronic devices has become increasingly widespread. The vast majority of electronic devices have built-in cameras, and as the processing power of mobile terminals and camera technology develop, users place ever higher demands on the quality of captured images. To obtain high-quality images, exposure processing is often required.
Summary
Embodiments of the present application provide an exposure processing method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the efficiency of exposure processing.
An embodiment of the present application provides an exposure processing method, the exposure processing method including:
obtaining estimated depth information of a current frame image;
obtaining an estimated exposure parameter corresponding to the estimated depth information according to a change trend between exposure parameters and depth information, wherein the change trend is determined based on multiple historical frame images; and
performing exposure with the estimated exposure parameter during shooting to obtain the current frame image.
An embodiment of the present application further provides an exposure processing apparatus, the exposure processing apparatus including:
a first acquisition module configured to obtain estimated depth information of a current frame image;
a second acquisition module configured to obtain an estimated exposure parameter corresponding to the estimated depth information according to a change trend between exposure parameters and depth information, wherein the change trend is determined based on multiple historical frame images; and
an exposure module configured to perform exposure with the estimated exposure parameter during shooting to obtain the current frame image.
An embodiment of the present application further provides an electronic device, the electronic device including a processor and a memory, wherein a computer program is stored in the memory, and the processor invokes the computer program stored in the memory to execute the steps in any exposure processing method provided by the embodiments of the present application.
An embodiment of the present application further provides a computer-readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the steps in any exposure processing method provided by the embodiments of the present application.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a first schematic flowchart of an exposure processing method provided by an embodiment of the present application.
FIG. 2 is a second schematic flowchart of the exposure processing method provided by an embodiment of the present application.
FIG. 3 is a third schematic flowchart of the exposure processing method provided by an embodiment of the present application.
FIG. 4 is a fourth schematic flowchart of the exposure processing method provided by an embodiment of the present application.
FIG. 5 is a fifth schematic flowchart of the exposure processing method provided by an embodiment of the present application.
FIG. 6 is a first schematic structural diagram of an exposure processing apparatus provided by an embodiment of the present application.
FIG. 7 is a second schematic structural diagram of the exposure processing apparatus provided by an embodiment of the present application.
FIG. 8 is a first schematic structural diagram of an electronic device provided by an embodiment of the present application.
FIG. 9 is a second schematic structural diagram of the electronic device provided by an embodiment of the present application.
FIG. 10 is a schematic structural diagram of an integrated circuit chip provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application.
The terms "first", "second", "third", etc. (if any) in the specification, claims, and the above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that objects so described are interchangeable where appropriate. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process or method comprising a series of steps, or an apparatus, electronic device, or system comprising a series of modules or units, is not necessarily limited to those steps, modules, or units expressly listed, and may include steps, modules, or units not expressly listed, or other steps, modules, or units inherent to the process, method, apparatus, electronic device, or system.
An embodiment of the present application provides an exposure processing method applied to an electronic device. The exposure processing method may be executed by the exposure processing apparatus provided by the embodiments of the present application, or by an electronic device integrating the exposure processing apparatus. The exposure processing apparatus may be implemented in hardware or software, and the electronic device may be a device equipped with a processor and having processing capability, such as a smartphone, tablet computer, palmtop computer, notebook computer, or desktop computer.
An embodiment of the present application provides an exposure processing method, including:
obtaining estimated depth information of a current frame image;
obtaining an estimated exposure parameter corresponding to the estimated depth information according to a change trend between exposure parameters and depth information, wherein the change trend is determined based on multiple historical frame images; and
performing exposure with the estimated exposure parameter during shooting to obtain the current frame image.
In an embodiment, before obtaining the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, the method further includes:
obtaining an exposure processing model trained from multiple historical frame images, the exposure processing model containing the change trend between exposure parameters and depth information; and
inputting the estimated depth information into the exposure processing model, and obtaining the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information.
In an embodiment, obtaining the exposure processing model trained from multiple historical frame images includes:
obtaining multiple historical frame images;
segmenting each of the multiple historical frame images into a focal plane area and a non-focal plane area;
obtaining depth information, brightness information, and exposure parameters of the focal plane areas of the multiple historical frame images; and
training the exposure processing model with the depth information, brightness information, and exposure parameters of the focal plane areas of the multiple historical frame images.
In an embodiment, obtaining the estimated depth information of the current frame image includes:
estimating the depth information of the focal plane area of the current frame image according to the depth information of the focal plane areas of the multiple historical frame images, to obtain the estimated depth information of the current frame image.
In an embodiment, after obtaining the current frame image, the method further includes:
segmenting the current frame image into a focal plane area and a non-focal plane area;
obtaining depth information and brightness information of the focal plane area of the current frame image;
inputting the depth information, brightness information, and the estimated exposure parameter of the focal plane area of the current frame image into the exposure processing model to update the exposure processing model; and
using the updated exposure processing model to obtain an estimated exposure parameter for the next frame image.
In an embodiment, the exposure processing model further includes a change trend between an exposure index and relative depth, where the relative depth is the depth of a non-focal plane area relative to the focal plane area in the same image, and the method further includes:
obtaining exposure indexes and relative depths of the non-focal plane areas of the multiple historical frame images; and
inputting the exposure indexes and relative depths of the non-focal plane areas of the multiple historical frame images into the exposure processing model for training, to obtain the change trend between the exposure index and relative depth in the exposure processing model.
After obtaining the current frame image, the method further includes:
segmenting the current frame image into a focal plane area and a non-focal plane area;
obtaining the relative depth of the non-focal plane area of the current frame image;
inputting the relative depth of the non-focal plane area of the current frame image into the exposure processing model, and obtaining an estimated exposure index of the non-focal plane area according to the change trend between the exposure index and relative depth; and
using the estimated exposure index to perform exposure correction on the non-focal plane area of the current frame image.
In an embodiment, after using the estimated exposure index to perform exposure correction on the non-focal plane area of the current frame image, the method further includes:
inputting the exposure index and relative depth of the non-focal plane area of the current frame image into the exposure processing model to update the exposure processing model; and
using the updated exposure processing model to obtain an estimated exposure index for the next frame image.
In an embodiment, the non-focal plane area of the current frame image includes at least two non-focal plane areas, and segmenting the current frame image into a focal plane area and a non-focal plane area includes:
segmenting the focal plane area from the current frame image;
obtaining depth information of the areas other than the focal plane area;
dividing at least two depth intervals according to the depth information of the other areas; and
segmenting the other areas into at least two non-focal plane areas according to the depth intervals in which they fall, where each non-focal plane area corresponds to one depth interval.
In an embodiment, segmenting the focal plane area from the current frame image includes:
obtaining focus parameters used when shooting the current frame image;
determining the focal plane used when shooting the current frame image according to the focus parameters;
obtaining a depth interval of the focal plane; and
segmenting the area of the current frame image whose depth information falls within the depth interval of the focal plane as the focal plane area.
Referring to FIG. 1, FIG. 1 is a first schematic flowchart of the exposure processing method provided by an embodiment of the present application. The exposure processing method may be executed by the exposure processing apparatus provided by the embodiments of the present application, or by an integrated circuit chip or electronic device integrating the exposure processing apparatus. The exposure processing method provided by the embodiments of the present application may include the following steps:
110: Obtain estimated depth information of the current frame image.
In an embodiment, the camera is started to obtain a video stream of the scene to be shot, yielding multiple frame images of the scene. These frames may be in RAW (RAW Image Format) format, i.e., unprocessed raw images in which the image sensor has converted the captured light signal into a digital signal. The specific number of frames may be set as needed; the present application places no limit on this.
Among the multiple frames of the video stream, a current frame image and historical frame images can be distinguished. The current frame image is the frame about to be shot; the historical frame images are images shot before the current frame image. The historical frame images and the current frame image may target the same scene to be shot and have identical or similar image content.
The multiple historical frame images may have identical or different exposure parameters. For example, by continuously adjusting the exposure parameters during shooting, multiple historical frame images with different exposure parameters can be obtained. The exposure parameters are parameters that affect the exposure of the image at capture time, such as aperture, exposure time, and light sensitivity. For example, the exposure parameters may include the exposure time, which affects the exposure amount; shooting with different exposure times yields multiple historical frame images of the same scene with different exposure amounts. Besides being obtained directly, the historical frame images may also be images that have undergone exposure processing using the exposure processing method provided by the embodiments of the present application.
In an embodiment, the depth information of the current frame image may be estimated according to the depth information of the multiple historical frame images to obtain the estimated depth information of the current frame image. Since the multiple historical frame images are successively shot images of the same scene, the image content generally does not change much, and the estimated depth information of the current frame image can be derived from the depth information of the historical frame images. If there is a moving object in the scene, its movement trend can be derived from the multiple historical frame images, and its depth information in the next frame (i.e., the not-yet-captured current frame image) can be predicted from that movement trend and the object's depth information in the multiple historical frame images.
The depth information may exist in the form of at least one depth map, indicating the distance between a sensor (e.g., an image sensor or depth sensor) and objects (e.g., the distance from the depth sensor to all objects in the scene to be shot). Since different objects have corresponding depth information, different depth information can reflect the positions of different objects in the image relative to the device. In an embodiment, the electronic device is provided with a light sensor; by computing the phase difference between the near-infrared light emitted by the light sensor and its reflected light from their phase parameters, the depth of each object in a historical frame image can be calculated from the phase difference, thereby obtaining the depth information of the historical frame image.
In an embodiment, depth information is obtained in different ways for static areas and moving areas of the historical frame images. For example, a linear perspective method may be used to obtain the depth information of static areas, and a high-precision optical flow method based on deformation theory may be used to obtain motion vectors and convert them into the depth information of moving areas.
120: Obtain the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, where the change trend is determined based on multiple historical frame images.
In an embodiment, before obtaining the estimated exposure parameter corresponding to the estimated depth information according to the change trend between exposure parameters and depth information, an exposure processing model trained from multiple historical frame images is obtained; the estimated depth information is input into the exposure processing model, and the estimated exposure parameter corresponding to the estimated depth information is obtained according to the change trend between exposure parameters and depth information in the exposure processing model.
The brightness information of a historical frame image at imaging time is affected by the depth information and the exposure parameters. In an embodiment, the depth information, brightness information, and exposure parameters of multiple historical frame images are obtained and used to train the exposure processing model. For example, by learning how brightness information and depth information vary across the multiple historical frame images, the trend of brightness information changing with depth information can be obtained; by learning the brightness information and exposure parameters of the multiple historical frame images, the trend of brightness information changing with the exposure parameters can be obtained; from these, the exposure processing model learns the change trend between exposure parameters and depth information.
It should be noted that the estimated depth information of the current frame image and the depth information of the multiple historical frame images input to train the exposure processing model should cover the same or similar content. If the depth information input from the historical frame images is the depth information of the whole image, then the estimated depth information of the current frame image obtained here is also that of the whole current frame image; if the input depth information is that of a certain region of the image, then the estimated depth information obtained here is also that of the same region of the current frame image.
After the estimated depth information of the current frame image is obtained, it is input into the exposure processing model trained from the multiple historical frame images. Since the exposure processing model contains the change trend between exposure parameters and depth information, the estimated exposure parameter corresponding to the estimated depth information can be obtained from this change trend and the input estimated depth information of the current frame image.
The estimated exposure parameter is the exposure parameter suitable for the current frame image, inferred by the exposure processing model from the patterns summarized over the historical frame images. The estimated exposure parameters may include an estimated aperture, an estimated exposure time, and/or an estimated light sensitivity value.
130: Perform exposure with the estimated exposure parameter during shooting to obtain the current frame image.
When shooting, exposure parameters such as aperture, exposure time, and light sensitivity need to be adjusted. In the embodiments of the present application, when the current frame image is shot, automatic exposure processing can be performed using the estimated exposure parameters inferred by the model, for example adjusting the aperture to the estimated aperture size, the exposure time to the estimated exposure time, and the light sensitivity value to the estimated light sensitivity value, thereby obtaining the current frame image. The exposure processing of the current frame image can be performed automatically by the electronic device without manual processing by the user.
In an embodiment, both the multiple historical frame images and the current frame image can be segmented into a focal plane area and a non-focal plane area. The brightness information, depth information, and exposure parameters of the multiple historical frame images used to train the exposure processing model may refer to those of the focal plane areas of the historical frame images. The estimated depth information of the current frame image input into the exposure processing model may be the estimated depth information obtained by estimating the depth of the focal plane area of the current frame image. Correspondingly, when exposure processing is performed with the estimated exposure parameters, it mainly ensures that the imaging brightness of the focal plane area meets expectations.
Since the focal plane area is the key area handled by focusing, after focusing the exposure of the focal plane area is more accurate than that of the non-focal plane areas, and its exposure effect is also better. Therefore, training the model with the brightness information, depth information, and exposure parameters of the focal plane areas of multiple historical frame images yields a more accurate change trend between exposure parameters and depth information.
在一实施例中,将图像(历史帧图像和/或当前帧图像)分割为焦平面区域和非焦平面区域可以通过以下方式实现:
首先,获取拍摄图像时的对焦参数,根据对焦参数确定拍摄图像时的焦平面,获取焦平面的深度区间,将图像中深度信息处于焦平面的深度区间的区域分割为焦平面区域。在分割出焦平面区域后,除焦平面区域以外的其他区域即为非焦平面区域。
其中,焦平面的深度区间可以是对某一对焦对象进行拍摄时,该对焦对象所处的深度区间。可以理解的是,由于物体是立体的,而焦平面对应的深度为一个具体数值,往往无法囊括对象的全部深度信息,因而需要用深度区间来概括。在划分焦平面的深度区间时,将焦平面深度的前后一段范围内的深度都归入进来,从而能够正确分割出聚焦对象所在的区域作为焦平面区域。用户可以通过对焦操作指示设备,对对焦对象进行对焦,即,对焦参数可以基于用户的对焦操作获取。
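上述按焦平面的深度区间分割焦平面区域的过程可以用如下假设性的 Python 片段示意(深度区间取焦平面深度前后 margin 范围仅为一种示例性的划分方式):

```python
import numpy as np

def segment_focal_plane(depth_map, focal_depth, margin):
    """将深度处于焦平面深度区间 [focal_depth - margin, focal_depth + margin]
    内的像素分割为焦平面区域,其余像素为非焦平面区域。返回两个布尔掩码。"""
    focal_mask = np.abs(depth_map - focal_depth) <= margin
    return focal_mask, ~focal_mask
```

例如,对焦平面深度 2.0 米、区间半宽 0.2 米,深度图中深度为 2.0 和 2.1 的像素被分割为焦平面区域,深度为 1.0 和 5.0 的像素被分割为非焦平面区域。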
在一实施例中,非焦平面区域可以包括至少两个非焦平面区域,在分割出焦平面区域后,获取除焦平面区域以外的其他区域的深度信息,根据其他区域的深度信息划分出至少两个深度区间,将其他区域按照所处的深度区间分割为至少两个非焦平面区域。
非焦平面区域的数量取决于深度区间的数量。例如,其中每一个非焦平面区域对应一个深度区间,或者根据每相邻的两个深度区间划分非焦平面区域,即每一个非焦平面区域对应两个深度区间,等等。本申请实施例对非焦平面区域的划分方式不做限制,只要是根据深度区间划分均可。
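按深度区间将其他区域划分为多个非焦平面区域的一种示意实现如下(Python;等宽深度区间和区间数量参数 n_bins 均为示例性假设,本申请实施例对划分方式不做限制):

```python
import numpy as np

def split_non_focal_regions(depth_map, focal_mask, n_bins):
    """将非焦平面像素按深度划分为 n_bins 个等宽深度区间,
    返回区域标签图:焦平面像素标记为 -1,其余为 0..n_bins-1。"""
    labels = np.full(depth_map.shape, -1)
    other = ~focal_mask
    d = depth_map[other]
    edges = np.linspace(d.min(), d.max(), n_bins + 1)     # 等宽深度区间边界
    labels[other] = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
    return labels
```

每个标签值对应一个非焦平面区域,后续可对各区域分别进行曝光修正。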
在一实施例中,在得到当前帧图像后,将当前帧图像分割为焦平面区域和非焦平面区域,分割方法可参见上述提到的方法。然后,获取当前帧图像的焦平面区域的深度信息和亮度信息,将当前帧图像的焦平面区域的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,对曝光处理模型进行更新。并且,得到的当前帧图像作为后续要拍摄的图像对应的历史帧图像。
也就是说,本申请实施例提供的曝光处理方法并非是只应用于某一帧图像,而是在多帧原始图像获取期间,持续应用于每一帧图像中。并且每一帧图像利用曝光处理模型获取的预估曝光参数拍摄出来后,每一帧图像的深度信息、亮度信息和预估曝光参数又重新输入到曝光处理模型中,曝光处理模型也在不断更新。下面结合附图进行说明:
请参阅图2,图2为本申请实施例提供的曝光处理方法的第二种流程示意图。其中示出了多帧图像利用曝光处理模型得到预估曝光参数以及对曝光处理模型进行更新的流程。其中,图像11~19按照数字从小到大的顺序依次拍摄(图中略去了图像17和18),由图像11、12、13训练得到曝光处理模型,在拍摄图像14时,图像14作为当前帧图像,图像11~13作为历史帧图像,利用图像11~13训练得到的曝光处理模型获取图像14的预估曝光参数,用预估曝光参数进行曝光,得到图像14。并且,拍摄得到图像14后,将图像14的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,曝光处理模型接收到图像14的这些数据后,将图像14的这些数据也纳入计算体系,从而曝光处理模型进行了更新,更新后的曝光处理模型相当于是由图像11~14训练得到的。
在拍摄图像15时,图像15作为当前帧图像,图像11~14作为历史帧图像,利用更新后的由图像11~14训练得到的曝光处理模型获取图像15的预估曝光参数,用预估曝光参数进行曝光,得到图像15,并将图像15的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,使得曝光处理模型(11~14)又进一步更新为曝光处理模型(11~15),更新后的曝光处理模型(11~15)相当于是由图像11~15训练出来的。曝光处理模型(11~15)又用于在后续拍摄图像16时获取图像16的预估曝光参数,拍摄出来的图像16又会使得曝光处理模型进行更新,……,如此往复,乃至之后用曝光处理模型(11~18)获取图像19的预估曝光参数,等等。
可见,本申请中的每一个当前帧图像在拍摄下一帧图像时都可作为下一帧图像的历史帧图像,并且本申请用于获取每一个当前帧图像的曝光处理模型都是由前面历史帧图像更新过的最新的曝光处理模型,使得更新后的曝光处理模型与最新的拍摄内容相契合。
其中,若用户在拍摄图像15时,手动调整了曝光参数,则在对曝光处理模型更新时,不再输入图像15的预估曝光参数,而是输入图像15的实际曝光参数,图像15的实际曝光参数参与更新曝光处理模型,使得更新后的曝光处理模型输出的预估曝光参数也更加精准,更符合用户的预期。
请继续参阅图3,图3为本申请实施例提供的曝光处理方法的第三种流程示意图。在一实施例中,在拍摄当前帧图像时,参与训练曝光处理模型的历史帧图像可以不是前面所有的历史帧图像,而是只选用最新拍摄的几帧历史帧图像。例如,由图像14对曝光处理模型进行更新时,更新后的曝光处理模型中的曝光参数与深度信息之间的变化趋势可以是根据图像12、13、14的相关数据得出的,图像11的相关数据不参与此次更新,即,图2中的曝光处理模型(11~14)更改为图3中的曝光处理模型(12~14),同理,图2中的曝光处理模型(11~15)更改为图3中的曝光处理模型(13~15),图2中的曝光处理模型(11~18)更改为图3中的曝光处理模型(16~18)。只选用最新的几帧历史帧图像训练和更新曝光处理模型,可以避免太久远的历史帧图像影响到曝光处理模型的准确度。
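只选用最新几帧历史帧图像训练和更新模型的做法,可以用一个滑动窗口来示意(以下 Python 片段中的窗口大小、一阶线性趋势均为示例性假设,仅用于说明“较久远的历史帧被淘汰”这一机制):

```python
from collections import deque

import numpy as np

class SlidingExposureModel:
    """只保留最近 window 帧的 (深度, 曝光参数) 样本;
    每拍摄一帧调用一次 update(),最久远的历史帧样本自动被淘汰。"""

    def __init__(self, window=3):
        self.samples = deque(maxlen=window)

    def update(self, depth, exposure):
        self.samples.append((depth, exposure))

    def predict(self, est_depth):
        d, e = zip(*self.samples)
        k, b = np.polyfit(d, e, 1)  # 仅用窗口内的样本拟合变化趋势
        return k * est_depth + b
```

例如窗口为 3 时,依次输入 4 帧样本后,模型只根据最新 3 帧拟合趋势并预测。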
需要说明的是,图2及图3中所列出的历史帧图像的选择方式和选择数量等都是示例性的。本领域技术人员应当理解,根据需求可以对历史帧图像的选择方式和选择数量进行改变。
在一实施例中,可以根据电子设备当前的功耗情况确定用来训练曝光处理模型的历史帧图像的数量。预先设定功耗等级与历史帧图像数量的对应关系,获取电子设备当前的功耗值(该功耗值例如可以为电子设备已消耗的电量、后台处理的进程数量等),根据预先设定的功耗等级,确定当前的功耗值所属的功耗等级,并获取与该功耗等级对应数量的历史帧图像。例如,功耗等级从低功耗到高功耗分别为一到五级,当确定出当前的功耗值所属的功耗等级为一级时,获取当前帧图像拍摄前的前7帧图像训练曝光处理模型,当确定出当前的功耗值所属的功耗等级为五级时,只获取当前帧图像拍摄前的前3帧图像训练曝光处理模型。
功耗等级越低,参与训练和更新模型的历史帧图像的数量越多,需要处理的数据越多,训练出的曝光处理模型越精确。功耗等级越高,参与训练和更新模型的历史帧图像的数量越少,需要处理的数据越少,越节约功耗。
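功耗等级与历史帧图像数量的对应关系可以用一个简单的查表示意(以下映射沿用上文示例:一级对应 7 帧、五级对应 3 帧,中间等级的数值为假设):

```python
# 功耗等级 -> 参与训练/更新曝光处理模型的历史帧数量(示例映射)
POWER_LEVEL_TO_FRAME_COUNT = {1: 7, 2: 6, 3: 5, 4: 4, 5: 3}

def frames_for_power_level(level):
    """功耗等级越低,选用的历史帧越多;功耗等级越高,选用的越少。"""
    return POWER_LEVEL_TO_FRAME_COUNT[level]
```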
请参照图4,图4为本申请实施例提供的曝光处理方法的第四种流程示意图。该曝光处理方法可应用于本申请实施例提供的电子设备,本申请实施例提供的曝光处理方法可以包括以下步骤:
201、获取多个历史帧图像。
在一实施例中,启动相机,获取待拍摄场景的视频流,得到待拍摄场景的多帧图像。在视频流的多帧图像中可以区分出当前帧图像和历史帧图像,当前帧图像即为当前正要拍摄的一帧图像,历史帧图像为在当前帧图像之前拍摄的图像,历史帧图像和当前帧图像可以针对同一待拍摄场景,具有相同或相似的图像内容。
其中,多个历史帧图像可以具有相同或不同的曝光参数。除直接拍摄获取外,历史帧图像也可以是使用过本申请实施例提供的曝光处理方法进行曝光处理的图像。
202、将多个历史帧图像分别分割为焦平面区域和非焦平面区域。
首先,获取拍摄历史帧图像时的对焦参数,根据对焦参数确定拍摄历史帧图像时的焦平面,获取焦平面的深度区间,将历史帧图像中深度信息处于焦平面的深度区间的区域分割为焦平面区域。在分割出焦平面区域后,除焦平面区域以外的其他区域即为非焦平面区域。
同一历史帧图像中可以包括一个非焦平面区域,也可以包括至少两个非焦平面区域。在一实施例中,获取除焦平面区域以外的其他区域的深度信息,根据其他区域的深度信息划分出至少两个深度区间,将其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。
非焦平面区域的数量取决于深度区间的数量。例如,其中每一个非焦平面区域对应一个深度区间,或者根据每相邻的两个深度区间划分非焦平面区域,即每一个非焦平面区域对应两个深度区间,等等。本申请实施例对非焦平面区域的划分方式不做限制,只要是根据深度区间划分均可。
在一实施例中,首先分割出焦平面区域,在分割出焦平面区域后,除焦平面区域以外的其他区域即为非焦平面区域。其中分割出焦平面区域可以通过以下方式实现:
获取拍摄图像时的对焦参数;
根据对焦参数确定拍摄图像时的焦平面,获取焦平面的深度区间;
将图像中深度信息处于焦平面的深度区间的区域分割为焦平面区域。
203、获取多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数。
204、用多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到曝光处理模型。
其中,曝光处理模型中包含曝光参数与深度信息之间的变化趋势。
多个历史帧图像的焦平面区域可以具有相同或不同的曝光参数。例如,在拍摄时,不断调整曝光参数,可以得到不同曝光参数的多个历史帧图像。历史帧图像成像时,焦平面区域的亮度信息会受到深度信息和曝光参数的影响。在一实施例中,获取多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数,并用多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到曝光处理模型。例如,通过学习多个历史帧图像中焦平面区域的亮度信息和深度信息变化的趋势,可以得到焦平面区域的亮度信息随深度信息变化的趋势,通过学习多个历史帧图像中焦平面区域的亮度信息和曝光参数,可以得到焦平面区域的亮度信息随曝光参数变化的趋势,进而在曝光处理模型中学习得出曝光参数与深度信息之间的变化趋势。
由于焦平面区域是经过对焦处理的重点处理区域,通过对焦处理后,焦平面区域的曝光相对非焦平面的曝光更加准确,焦平面区域的曝光效果也比非焦平面的曝光效果更好。因而,采用多个历史帧图像的焦平面区域的亮度信息、深度信息和曝光参数训练模型,得到的曝光参数与深度信息之间的变化趋势更加准确。
205、获取多个历史帧图像的非焦平面区域的曝光指标及相对深度。
206、将多个历史帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中进行训练,在曝光处理模型中得到曝光指标与相对深度之间的变化趋势。
其中,曝光指标可以是诸如白平衡之类能够对成像图像的曝光情况进行调整的指标,白平衡是描述红、绿、蓝三基色混合后生成白色精准度的一项指标,用于进行色彩还原和色调处理。白平衡设定可以校准色温的偏差,对于拍摄时和拍摄完的图像,都可以调整白平衡来达到想要的画面效果。
相对深度指的是同一张图像中非焦平面区域相对焦平面区域的相对深度。
在一实施例中,历史帧图像中可以包含至少两个非焦平面区域。在历史帧图像中分割出焦平面区域后,获取除焦平面区域以外的其他区域的深度信息,根据其他区域的深度信息划分出至少两个深度区间,将其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。非焦平面区域的数量取决于深度区间的数量。
当历史帧图像中包含一个非焦平面区域时,获取该非焦平面区域的曝光指标及相对深度;当历史帧图像中包含至少两个非焦平面区域时,对应地获取每个非焦平面的曝光指标及相对深度,即对于每一个非焦平面区域,对应获取其曝光指标及相对深度。
将每个历史帧图像中每个非焦平面的曝光指标及相对深度一起输入到曝光处理模型中进行训练,使得曝光处理模型在训练出焦平面区域曝光参数与深度信息之间的变化趋势之余,还训练出非焦平面区域的曝光指标与相对深度之间的变化趋势。
207、根据多个历史帧图像的焦平面区域的深度信息对当前帧图像的焦平面区域的深度信息进行预估,得到当前帧图像的预估深度信息。
为了通过曝光处理模型得到拍摄当前帧图像的合适曝光参数,需要对当前帧图像的深度信息进行预估,得到当前帧图像的预估深度信息。
在一实施例中,可以根据多个历史帧图像的深度信息对当前帧图像的深度信息进行预估,得到当前帧图像的预估深度信息。由于多个历史帧图像是对同一待拍摄场景先后拍摄的图像,图像中的内容一般变化不大,根据历史帧图像的深度信息,可以得到当前帧图像的预估深度信息。若待拍摄场景中有运动对象,可以根据多个历史帧图像得出运动对象的运动趋势,根据运动对象的运动趋势和运动对象在多个历史帧图像中的深度信息预估出运动对象在下一帧图像(即未拍摄的当前帧图像)中的深度信息。
208、根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数。
获取到当前帧图像的预估深度信息后,将预估深度信息输入到由多个历史帧图像训练得到的曝光处理模型中。由于曝光处理模型中包含有曝光参数与深度信息之间的变化趋势,因而,可以根据该变化趋势以及输入的当前帧图像的预估深度信息,得到预估深度信息对应的预估曝光参数。
该预估曝光参数是由曝光处理模型根据历史帧图像总结的规律推测出的适合当前帧图像的曝光参数。预估曝光参数中可以包括预估光圈、预估曝光时间和/或预估感光值。
209、在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
在拍摄时,需要调整光圈、曝光时间、感光值等曝光参数。本申请实施例在拍摄当前帧图像时,可以使用模型推测出的预估曝光参数进行自动曝光处理。例如将光圈调整到预估光圈的大小,将曝光时间调整到预估曝光时间,将感光值调整到预估感光值,从而得到当前帧图像。当前帧图像的曝光处理可以由电子设备自动进行,不需用户手动处理。
210、将当前帧图像分割为焦平面区域和非焦平面区域。
得到当前帧图像后,对当前帧图像进行图像分割的方式可以参见前述对历史帧图像进行图像分割的方法(包括分割出一个焦平面区域和至少两个非焦平面区域的方法),在此不再赘述。
211、获取当前帧图像的非焦平面区域的相对深度。
分割出当前帧图像的焦平面区域和非焦平面区域后,可以获取每个非焦平面区域相对于焦平面区域的深度,即为每个非焦平面区域的相对深度。
212、将当前帧图像的非焦平面区域的相对深度输入到曝光处理模型中,根据曝光指标与相对深度之间的变化趋势,得到非焦平面区域的预估曝光指标。
将当前帧图像的非焦平面区域的相对深度输入到曝光处理模型中,由于曝光处理模型中包含有曝光指标和相对深度之间的变化趋势,根据输入的当前帧图像的非焦平面区域的相对深度,可以得到非焦平面区域的预估曝光指标。其中每个非焦平面区域对应有一个预估曝光指标,用于一一对应地调整当前帧图像的每个非焦平面区域。预估曝光指标可以包括预估白平衡指标,用于调整各非焦平面区域的白平衡。
213、使用预估曝光指标对当前帧图像的非焦平面区域进行曝光修正。
通过预估曝光指标对当前帧图像的非焦平面区域一一进行曝光修正:通过预估曝光参数保证焦平面区域的自动精准曝光,通过预估曝光指标以焦平面区域为基准对各非焦平面区域进行曝光修正。从而在保证图像焦平面区域成像效果的同时,提升了非焦平面区域的成像效果。避免出现诸如“对亮处聚焦、亮处很清楚、暗处看不清”或者“对暗处聚焦、暗处很清楚、亮处太刺眼”等的成像问题,保证图像整体的平滑度。
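对非焦平面区域按预估曝光指标进行曝光修正的过程,可以用如下假设性的 Python 片段示意(按 R、G、B 三通道增益修正仅为白平衡调整的一种简化形式,函数名和参数均为示例):

```python
import numpy as np

def correct_region_white_balance(image, region_mask, gains):
    """对 region_mask 指示的非焦平面区域按三通道增益 gains=(gr, gg, gb)
    进行白平衡修正,其余区域保持不变。image 为 HxWx3 的 uint8 图像。"""
    out = image.astype(np.float64)
    out[region_mask] *= np.asarray(gains)      # 仅修正掩码内的像素
    return np.clip(out, 0, 255).astype(np.uint8)
```

每个非焦平面区域使用其对应的预估曝光指标分别调用一次即可实现逐区域修正。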
在一实施例中,将当前帧图像的焦平面区域的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,以更新曝光处理模型中曝光参数与深度信息之间的变化趋势,使用更新后的曝光处理模型获取下一帧图像的预估曝光参数。
在一实施例中,将当前帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中,以更新曝光处理模型中曝光指标与相对深度之间的变化趋势,使用更新后的曝光处理模型获取下一帧图像的预估曝光指标。
请参阅图5,图5为本申请实施例提供的曝光处理方法的第五种流程示意图。
本申请实施例中,首先开启相机,获取(历史帧图像的)原始图像数据,根据历史帧图像的原始图像数据计算深度信息,根据计算的深度信息以及拍摄历史帧图像时的对焦参数确定出历史帧图像中的焦平面。基于焦平面对图像进行区域分割,将历史帧图像分割为焦平面区域和非焦平面区域,对焦平面区域进行亮度检测,获取焦平面区域的亮度信息,然后将焦平面区域的亮度信息和深度信息以及曝光参数一起输入到模型中进行训练,根据训练得到的曝光参数与深度信息之间的变化趋势获取下一帧图像的预估曝光参数,在获取下一帧图像的原始图像数据时,采用该预估曝光参数进行曝光,同时,预估曝光参数重新输入到模型中,对模型进行更新。
除此以外,对于获取到的原始图像数据,还进行曝光修正,曝光修正的具体可实施方式可参见前述获取非焦平面区域的相对深度及曝光指标进行训练、以及获取非焦平面区域的预估曝光指标的相关描述,在此不再赘述。
在利用预估曝光参数获取原始图像数据,并对原始图像数据进行曝光修正后,得到的图像可进行后端图像处理。例如对图像进行裁剪、涂鸦、加水印、加文字等操作。
本申请实施例提供的曝光处理方法并非是只应用于某一帧图像,而是在多帧原始图像获取期间,持续应用于每一帧图像中。并且每一帧图像利用曝光处理模型获取的预估曝光参数拍摄出来后,每一帧图像的深度信息、亮度信息和预估曝光参数又重新输入到曝光处理模型中,预估曝光指标也是同样,曝光处理模型也在不断更新。
由上可知,本申请实施例提供的曝光处理方法,首先获取当前帧图像的预估深度信息;然后根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;进而在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。本申请实施例根据历史帧图像确定曝光参数与深度信息之间的变化趋势, 在拍摄当前帧图像时直接根据曝光参数与深度信息之间的变化趋势获取预估曝光参数,通过学习历史帧图像的曝光情况自动处理当前帧图像的曝光,提高了曝光处理的效率。
本申请实施例还提供一种曝光处理装置。请参照图6,图6为本申请实施例提供的曝光处理装置的第一种结构示意图。其中该曝光处理装置300可应用于电子设备或集成电路芯片,该曝光处理装置300包括第一获取模块301、第二获取模块302和曝光模块303,如下:
第一获取模块301,用于获取当前帧图像的预估深度信息;
第二获取模块302,用于根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;
曝光模块303,用于在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
请一并参阅图7,图7为本申请实施例提供的曝光处理装置300的第二种结构示意图。在一实施例中,曝光处理装置300还包括第三获取模块304,用于:
获取由多个历史帧图像训练得到的曝光处理模型,曝光处理模型中包含曝光参数与深度信息之间的变化趋势。
在根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数时,第二获取模块302可以用于:
将预估深度信息输入到曝光处理模型中,根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数。
在一实施例中,在获取由多个历史帧图像训练得到的曝光处理模型时,第一获取模块301可以用于:
获取多个历史帧图像;
将多个历史帧图像分别分割为焦平面区域和非焦平面区域;
获取多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数;
用多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到曝光处理模型。
在一实施例中,在获取当前帧图像的预估深度信息时,第一获取模块301可以用于:
根据多个历史帧图像的焦平面区域的深度信息对当前帧图像的焦平面区域的深度信息进行预估,得到当前帧图像的预估深度信息。
请继续参阅图7,在一实施例中,曝光处理装置300还包括分割模块305,第四获取模块306、更新模块307和第五获取模块308:
分割模块305,用于将当前帧图像分割为焦平面区域和非焦平面区域;
第四获取模块306,用于获取当前帧图像的焦平面区域的深度信息和亮度信息;
更新模块307,用于将当前帧图像的焦平面区域的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,以对曝光处理模型进行更新;
第五获取模块308,用于使用更新后的曝光处理模型获取下一帧图像的预估曝光参数。
在一实施例中,曝光处理模型中还包括曝光指标与相对深度之间的变化趋势,相对深度为同一张图像中非焦平面区域相对焦平面区域的相对深度,曝光处理装置300还包括:
第六获取模块309,用于获取多个历史帧图像的非焦平面区域的曝光指标及相对深度;
训练模块310,用于将多个历史帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中进行训练,在曝光处理模型中得到曝光指标与相对深度之间的变化趋势。
在得到当前帧图像之后,将当前帧图像分割为焦平面区域和非焦平面区域;获取当前帧图像的非焦平面区域的相对深度;将当前帧图像的非焦平面区域的相对深度输入到曝光处理模型中,根据曝光指标与相对深度之间的变化趋势,得到非焦平面区域的预估曝光指标;使用预估曝光指标对当前帧图像的非焦平面区域进行曝光修正。
其中,使用预估曝光指标对当前帧图像的非焦平面区域进行曝光修正之后,将当前帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中,以对曝光处理模型进行更新;使用更新后的曝光处理模型获取下一帧图像的预估曝光指标。
在一实施例中,当前帧图像的非焦平面区域包括至少两个非焦平面区域,在将当前帧图像分割为焦平面区域和非焦平面区域时,分割模块305可以用于:
从当前帧图像中分割出焦平面区域;
获取除焦平面区域以外的其他区域的深度信息;
根据其他区域的深度信息划分出至少两个深度区间;
将其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。
其中,在从当前帧图像中分割出焦平面区域时,获取拍摄当前帧图像时的对焦参数;根据对焦参数确定拍摄当前帧图像时的焦平面;获取焦平面的深度区间;将当前帧图像中深度信息处于焦平面的深度区间的区域分割为焦平面区域。
以上各个模块的具体实施可参见前面的实施例,在此不再赘述。
由上可知,本申请实施例提供的曝光处理方法,首先第一获取模块301获取当前帧图像的预估深度信息;然后第二获取模块302根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;进而曝光模块303在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。本申请实施例根据历史帧图像确定曝光参数与深度信息之间的变化趋势,在拍摄当前帧图像时直接根据曝光参数与深度信息之间的变化趋势获取预估曝光参数,通过学习历史帧图像的曝光情况自动处理当前帧图像的曝光,提高了曝光处理的效率。
本申请实施例还提供一种电子设备。电子设备可以是智能手机、平板电脑、游戏设备、AR(Augmented Reality,增强现实)设备、汽车、车辆周边障碍检测装置、音频播放装置、视频播放装置、笔记本、桌面计算设备、可穿戴设备诸如手表、眼镜、头盔、电子手链、电子项链、电子衣物等设备。
参考图8,图8为本申请实施例提供的电子设备400的第一种结构示意图。其中,电子设备400包括处理器401和存储器402。存储器中存储有计算机程序,处理器通过调用存储器中存储的计算机程序,以执行本申请实施例提供的任一种曝光处理方法中的步骤。处理器401与存储器402电性连接。
处理器401是电子设备400的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或调用存储在存储器402内的计算机程序,以及调用存储在存储器402内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。
在本实施例中,电子设备400中的处理器401可以按照上述曝光处理方法中的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401来运行存储在存储器402中的计算机程序,从而实现上述曝光处理方法中的步骤,例如:
获取当前帧图像的预估深度信息;
根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;
在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
请继续参考图9,图9为本申请实施例提供的电子设备400的第二种结构示意图。其中,电子设备400还包括:显示屏403、控制电路404、输入单元405、传感器406以及电源407。其中,处理器401分别与显示屏403、控制电路404、输入单元405、传感器406以及电源407电性连接。
显示屏403可用于显示由用户输入的信息或提供给用户的信息以及电子设备的各种图形用户接口,这些图形用户接口可以由图像、文本、图标、视频和其任意组合来构成。
控制电路404与显示屏403电性连接,用于控制显示屏403显示信息。
输入单元405可用于接收输入的数字、字符信息或用户特征信息(例如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。例如,输入单元405可以包括触控感应模组。
传感器406用于采集电子设备自身的信息或者用户的信息或者外部环境信息。例如,传感器406可以包括距离传感器、磁场传感器、光线传感器、加速度传感器、指纹传感器、霍尔传感器、位置传感器、陀螺仪、惯性传感器、姿态感应器、气压计、心率传感器等多个传感器。
电源407用于给电子设备400的各个部件供电。在一些实施例中,电源407可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图8及图9中未示出,电子设备400还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本实施例中,电子设备400中的处理器401可以按照上述曝光处理方法中的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401来运行存储在存储器402中的计算机程序,从而实现上述曝光处理方法中的步骤,例如:
获取当前帧图像的预估深度信息;
根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;
在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
在一些情况下,在根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数之前,处理器401还执行以下步骤:
获取由多个历史帧图像训练得到的曝光处理模型,曝光处理模型中包含曝光参数与深度信息之间的变化趋势;
将预估深度信息输入到曝光处理模型中,根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数。
在一些情况下,在获取由多个历史帧图像训练得到的曝光处理模型时,处理器401执行以下步骤:
获取多个历史帧图像;
将多个历史帧图像分别分割为焦平面区域和非焦平面区域;
获取多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数;
用多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到曝光处理模型。
在一些情况下,在获取当前帧图像的预估深度信息时,处理器401还执行以下步骤:
根据多个历史帧图像的焦平面区域的深度信息对当前帧图像的焦平面区域的深度信息进行预估,得到当前帧图像的预估深度信息。
在一些情况下,在得到当前帧图像之后,处理器401执行以下步骤:
将当前帧图像分割为焦平面区域和非焦平面区域;
获取当前帧图像的焦平面区域的深度信息和亮度信息;
将当前帧图像的焦平面区域的深度信息、亮度信息和预估曝光参数输入到曝光处理模型中,以对曝光处理模型进行更新;
使用更新后的曝光处理模型获取下一帧图像的预估曝光参数。
在一些情况下,曝光处理模型中还包括曝光指标与相对深度之间的变化趋势,相对深度为同一张图像中非焦平面区域相对焦平面区域的相对深度,处理器401还执行以下步骤:
获取多个历史帧图像的非焦平面区域的曝光指标及相对深度;
将多个历史帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中进行训练,在曝光处理模型中得到曝光指标与相对深度之间的变化趋势;
在一些情况下,在得到当前帧图像之后,处理器401还执行以下步骤:
将当前帧图像分割为焦平面区域和非焦平面区域;
获取当前帧图像的非焦平面区域的相对深度;
将当前帧图像的非焦平面区域的相对深度输入到曝光处理模型中,根据曝光指标与相对深度之间的变化趋势,得到非焦平面区域的预估曝光指标;
使用预估曝光指标对当前帧图像的非焦平面区域进行曝光修正。
在一些情况下,在使用预估曝光指标对当前帧图像的非焦平面区域进行曝光修正之后,处理器401还执行以下步骤:
将当前帧图像的非焦平面区域的曝光指标及相对深度输入到曝光处理模型中,以对曝光处理模型进行更新;
使用更新后的曝光处理模型获取下一帧图像的预估曝光指标。
在一些情况下,当前帧图像的非焦平面区域包括至少两个非焦平面区域,在将当前帧图像分割为焦平面区域和非焦平面区域时,处理器401执行以下步骤:
从当前帧图像中分割出焦平面区域;
获取除焦平面区域以外的其他区域的深度信息;
根据其他区域的深度信息划分出至少两个深度区间;
将其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。
在一些情况下,在从当前帧图像中分割出焦平面区域时,处理器401执行以下步骤:
获取拍摄当前帧图像时的对焦参数;
根据对焦参数确定拍摄当前帧图像时的焦平面;
获取焦平面的深度区间;
将当前帧图像中深度信息处于焦平面的深度区间的区域分割为焦平面区域。
由上可知,本申请实施例提供了一种电子设备,电子设备中的处理器执行以下步骤:首先获取当前帧图像的预估深度信息;然后根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;进而在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。本申请实施例根据历史帧图像确定曝光参数与深度信息之间的变化趋势,在拍摄当前帧图像时直接根据曝光参数与深度信息之间的变化趋势获取预估曝光参数,通过学习历史帧图像的曝光情况自动处理当前帧图像的曝光,提高了曝光处理的效率。
本申请实施例还提供一种集成电路芯片。该集成电路芯片可用于智能手机、平板电脑、游戏设备、AR(Augmented Reality,增强现实)设备、汽车、车辆周边障碍检测装置、音频播放装置、视频播放装置、笔记本、桌面计算设备、可穿戴设备诸如手表、眼镜、头盔、电子手链、电子项链等设备。
本申请实施例提供的集成电路芯片独立于中央处理器,采取了硬件加速技术,把计算量非常大的工作分配给专门的硬件来处理以减轻中央处理器的工作量,从而不需要中央处理器通过软件逐一处理图像中的每一个像素。以硬件加速的方式实施本申请实施例提供的曝光处理方法,可以实现高速的曝光处理。
参考图10,图10为本申请实施例提供的集成电路芯片500的结构示意图。其中,集成电路芯片500包括处理器501、存储器502和曝光处理装置300。处理器501与存储器502电性连接。
处理器501是集成电路芯片500的控制中心,利用各种接口和线路连接整个集成电路芯片的各个部分,在本申请实施例提供的曝光处理方法中实现前端的深度信息采集和后端的多帧合成。
在本实施例中,曝光处理装置300,负责实现上述曝光处理方法中的步骤,例如:
获取当前帧图像的预估深度信息;
根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;
在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
本申请实施例还提供一种计算机可读存储介质,存储介质中存储有计算机程序,当计算机程序在计算机上运行时,计算机执行上述任一实施例的曝光处理方法。
例如,在一些实施例中,当计算机程序在计算机上运行时,计算机执行以下步骤:
获取当前帧图像的预估深度信息;
根据曝光参数与深度信息之间的变化趋势得到预估深度信息对应的预估曝光参数,其中,变化趋势基于多个历史帧图像而确定;
在拍摄时采用预估曝光参数进行曝光,得到当前帧图像。
需要说明的是,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过计算机程序来指令相关的硬件来完成,计算机程序可以存储于计算机可读存储介质中,存储介质可以包括但不限于:只读存储器(ROM,Read Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘或光盘等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对曝光处理方法的详细描述,此处不再赘述。
以上对本申请实施例所提供的曝光处理方法、装置、电子设备及存储介质进行了详细介绍。本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种曝光处理方法,其中,包括:
    获取当前帧图像的预估深度信息;
    根据曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数,其中,所述变化趋势基于多个历史帧图像而确定;
    在拍摄时采用所述预估曝光参数进行曝光,得到所述当前帧图像。
  2. 根据权利要求1所述的曝光处理方法,其中,所述根据所述曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数之前,还包括:
    获取由多个历史帧图像训练得到的曝光处理模型,所述曝光处理模型中包含曝光参数与深度信息之间的变化趋势;
    将所述预估深度信息输入到所述曝光处理模型中,根据曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数。
  3. 根据权利要求2所述的曝光处理方法,其中,所述获取由多个历史帧图像训练得到的曝光处理模型,包括:
    获取多个历史帧图像;
    将所述多个历史帧图像分别分割为焦平面区域和非焦平面区域;
    获取所述多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数;
    用所述多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到所述曝光处理模型。
  4. 根据权利要求3所述的曝光处理方法,其中,所述获取当前帧图像的预估深度信息包括:
    根据所述多个历史帧图像的焦平面区域的深度信息对所述当前帧图像的焦平面区域的深度信息进行预估,得到所述当前帧图像的预估深度信息。
  5. 根据权利要求4所述的曝光处理方法,其中,所述得到所述当前帧图像之后,还包括:
    将所述当前帧图像分割为焦平面区域和非焦平面区域;
    获取所述当前帧图像的焦平面区域的深度信息和亮度信息;
    将所述当前帧图像的焦平面区域的深度信息、亮度信息和所述预估曝光参数输入到所述曝光处理模型中,以对所述曝光处理模型进行更新;
    使用更新后的曝光处理模型获取下一帧图像的预估曝光参数。
  6. 根据权利要求3所述的曝光处理方法,其中,所述曝光处理模型中还包括曝光指标与相对深度之间的变化趋势,所述相对深度为同一张图像中非焦平面区域相对焦平面区域的相对深度,所述方法还包括:
    获取所述多个历史帧图像的非焦平面区域的曝光指标及相对深度;
    将所述多个历史帧图像的非焦平面区域的曝光指标及相对深度输入到所述曝光处理模型中进行训练,在所述曝光处理模型中得到所述曝光指标与相对深度之间的变化趋势;
    所述得到所述当前帧图像之后,还包括:
    将所述当前帧图像分割为焦平面区域和非焦平面区域;
    获取所述当前帧图像的非焦平面区域的相对深度;
    将所述当前帧图像的非焦平面区域的相对深度输入到所述曝光处理模型中,根据所述曝光指标与相对深度之间的变化趋势,得到所述非焦平面区域的预估曝光指标;
    使用所述预估曝光指标对所述当前帧图像的非焦平面区域进行曝光修正。
  7. 根据权利要求6所述的曝光处理方法,其中,所述使用所述预估曝光指标对所述当前帧图像的非焦平面区域进行曝光修正之后,还包括:
    将所述当前帧图像的非焦平面区域的曝光指标及相对深度输入到所述曝光处理模型中,以对所述曝光处理模型进行更新;
    使用更新后的曝光处理模型获取下一帧图像的预估曝光指标。
  8. 根据权利要求5或7所述的曝光处理方法,其中,所述当前帧图像的非焦平面区域包括至少两个非焦平面区域,所述将所述当前帧图像分割为焦平面区域和非焦平面区域包括:
    从所述当前帧图像中分割出焦平面区域;
    获取除焦平面区域以外的其他区域的深度信息;
    根据所述其他区域的深度信息划分出至少两个深度区间;
    将所述其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。
  9. 根据权利要求8所述的曝光处理方法,其中,所述从所述当前帧图像中分割出焦平面区域包括:
    获取拍摄所述当前帧图像时的对焦参数;
    根据所述对焦参数确定拍摄所述当前帧图像时的焦平面;
    获取所述焦平面的深度区间;
    将所述当前帧图像中深度信息处于所述焦平面的深度区间的区域分割为所述焦平面区域。
  10. 一种曝光处理装置,其中,包括:
    第一获取模块,用于获取当前帧图像的预估深度信息;
    第二获取模块,用于根据曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数,其中,所述变化趋势基于多个历史帧图像而确定;
    曝光模块,用于在拍摄时采用所述预估曝光参数进行曝光,得到所述当前帧图像。
  11. 一种电子设备,其中,所述电子设备包括处理器和存储器,所述存储器中存储有计算机程序,所述处理器通过调用所述存储器中存储的所述计算机程序,执行:
    获取当前帧图像的预估深度信息;
    根据曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数,其中,所述变化趋势基于多个历史帧图像而确定;
    在拍摄时采用所述预估曝光参数进行曝光,得到所述当前帧图像。
  12. 根据权利要求11所述的电子设备,其中,在根据所述曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数之前,所述处理器还用于执行:
    获取由多个历史帧图像训练得到的曝光处理模型,所述曝光处理模型中包含曝光参数与深度信息之间的变化趋势;
    将所述预估深度信息输入到所述曝光处理模型中,根据曝光参数与深度信息之间的变化趋势得到所述预估深度信息对应的预估曝光参数。
  13. 根据权利要求12所述的电子设备,其中,在获取由多个历史帧图像训练得到的曝光处理模型时,所述处理器用于执行:
    获取多个历史帧图像;
    将所述多个历史帧图像分别分割为焦平面区域和非焦平面区域;
    获取所述多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数;
    用所述多个历史帧图像的焦平面区域的深度信息、亮度信息和曝光参数训练得到所述曝光处理模型。
  14. 根据权利要求13所述的电子设备,其中,在获取当前帧图像的预估深度信息时,所述处理器用于执行:
    根据所述多个历史帧图像的焦平面区域的深度信息对所述当前帧图像的焦平面区域的深度信息进行预估,得到所述当前帧图像的预估深度信息。
  15. 根据权利要求14所述的电子设备,其中,在得到所述当前帧图像之后,所述处理器还用于执行:
    将所述当前帧图像分割为焦平面区域和非焦平面区域;
    获取所述当前帧图像的焦平面区域的深度信息和亮度信息;
    将所述当前帧图像的焦平面区域的深度信息、亮度信息和所述预估曝光参数输入到所述曝光处理模型中,以对所述曝光处理模型进行更新;
    使用更新后的曝光处理模型获取下一帧图像的预估曝光参数。
  16. 根据权利要求13所述的电子设备,其中,所述曝光处理模型中还包括曝光指标与相对深度之间的变化趋势,所述相对深度为同一张图像中非焦平面区域相对焦平面区域的相对深度,所述处理器还用于执行:
    获取所述多个历史帧图像的非焦平面区域的曝光指标及相对深度;
    将所述多个历史帧图像的非焦平面区域的曝光指标及相对深度输入到所述曝光处理模型中进行训练,在所述曝光处理模型中得到所述曝光指标与相对深度之间的变化趋势;
    在得到所述当前帧图像之后,所述处理器还用于执行:
    将所述当前帧图像分割为焦平面区域和非焦平面区域;
    获取所述当前帧图像的非焦平面区域的相对深度;
    将所述当前帧图像的非焦平面区域的相对深度输入到所述曝光处理模型中,根据所述曝光指标与相对深度之间的变化趋势,得到所述非焦平面区域的预估曝光指标;
    使用所述预估曝光指标对所述当前帧图像的非焦平面区域进行曝光修正。
  17. 根据权利要求16所述的电子设备,其中,在使用所述预估曝光指标对所述当前帧图像的非焦平面区域进行曝光修正之后,所述处理器还用于执行:
    将所述当前帧图像的非焦平面区域的曝光指标及相对深度输入到所述曝光处理模型中,以对所述曝光处理模型进行更新;
    使用更新后的曝光处理模型获取下一帧图像的预估曝光指标。
  18. 根据权利要求15或17所述的电子设备,其中,所述当前帧图像的非焦平面区域包括至少两个非焦平面区域,在将所述当前帧图像分割为焦平面区域和非焦平面区域时,所述处理器用于执行:
    从所述当前帧图像中分割出焦平面区域;
    获取除焦平面区域以外的其他区域的深度信息;
    根据所述其他区域的深度信息划分出至少两个深度区间;
    将所述其他区域按照所处的深度区间分割为至少两个非焦平面区域,其中每一个非焦平面区域对应一个深度区间。
  19. 根据权利要求18所述的电子设备,其中,在从所述当前帧图像中分割出焦平面区域时,所述处理器用于执行:
    获取拍摄所述当前帧图像时的对焦参数;
    根据所述对焦参数确定拍摄所述当前帧图像时的焦平面;
    获取所述焦平面的深度区间;
    将所述当前帧图像中深度信息处于所述焦平面的深度区间的区域分割为所述焦平面区域。
  20. 一种计算机可读存储介质,其中,所述存储介质中存储有计算机程序,当所述计算机程序在计算机上运行时,使得计算机执行如权利要求1至9任一项所述的曝光处理方法中的步骤。
PCT/CN2022/071410 2021-02-20 2022-01-11 曝光处理方法、装置、电子设备及计算机可读存储介质 WO2022174696A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110194282.3A CN114979498B (zh) 2021-02-20 2021-02-20 曝光处理方法、装置、电子设备及计算机可读存储介质
CN202110194282.3 2021-02-20

Publications (1)

Publication Number Publication Date
WO2022174696A1 true WO2022174696A1 (zh) 2022-08-25

Family

ID=82931182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071410 WO2022174696A1 (zh) 2021-02-20 2022-01-11 曝光处理方法、装置、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN114979498B (zh)
WO (1) WO2022174696A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160191776A1 (en) * 2014-12-30 2016-06-30 The Lightco Inc. Exposure control methods and apparatus
US20160261783A1 (en) * 2014-03-11 2016-09-08 Sony Corporation Exposure control using depth information
CN106851123A (zh) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 曝光控制方法、曝光控制装置及电子装置
CN108401457A (zh) * 2017-08-25 2018-08-14 深圳市大疆创新科技有限公司 一种曝光的控制方法、装置以及无人机
CN108848320A (zh) * 2018-07-06 2018-11-20 京东方科技集团股份有限公司 深度检测系统及其曝光时间调整方法
CN109474790A (zh) * 2018-11-05 2019-03-15 浙江大华技术股份有限公司 曝光调整方法、装置和摄像机及计算机存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004088590A1 (ja) * 2003-03-28 2004-10-14 Fujitsu Limited 撮影装置および個人識別システム
JP2008070611A (ja) * 2006-09-14 2008-03-27 Casio Comput Co Ltd 撮像装置、露出条件調整方法及びプログラム
US9191578B2 (en) * 2012-06-29 2015-11-17 Broadcom Corporation Enhanced image processing with lens motion
CN104363381B (zh) * 2014-10-15 2018-03-02 北京智谷睿拓技术服务有限公司 图像采集控制方法和装置
US9704250B1 (en) * 2014-10-30 2017-07-11 Amazon Technologies, Inc. Image optimization techniques using depth planes
CN107147823A (zh) * 2017-05-31 2017-09-08 广东欧珀移动通信有限公司 曝光方法、装置、计算机可读存储介质和移动终端
CN108876739B (zh) * 2018-06-15 2020-11-24 Oppo广东移动通信有限公司 一种图像补偿方法、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN114979498B (zh) 2023-06-30
CN114979498A (zh) 2022-08-30

Similar Documents

Publication Publication Date Title
JP7003238B2 (ja) 画像処理方法、装置、及び、デバイス
CN107810505B (zh) 实时图像捕获参数的机器学习
CN108322646B (zh) 图像处理方法、装置、存储介质及电子设备
WO2020034737A1 (zh) 成像控制方法、装置、电子设备以及计算机可读存储介质
WO2021047345A1 (zh) 图像降噪方法、装置、存储介质及电子设备
WO2020037959A1 (en) Image processing method, image processing apparatus, electronic device and storage medium
WO2019105297A1 (zh) 图像虚化处理方法、装置、移动设备及存储介质
US8400532B2 (en) Digital image capturing device providing photographing composition and method thereof
US20220329729A1 (en) Photographing method, storage medium and electronic device
WO2021179830A1 (zh) 构图指导方法、装置及电子设备
EP4376433A1 (en) Camera switching method and electronic device
CN111246093A (zh) 图像处理方法、装置、存储介质及电子设备
US10769416B2 (en) Image processing method, electronic device and storage medium
CN101470248B (zh) 对焦装置及方法
WO2021147650A1 (zh) 拍照方法、装置、存储介质及电子设备
US20230033956A1 (en) Estimating depth based on iris size
JP2022120681A (ja) 画像処理装置および画像処理方法
CN110047126B (zh) 渲染图像的方法、装置、电子设备和计算机可读存储介质
WO2022174696A1 (zh) 曝光处理方法、装置、电子设备及计算机可读存储介质
CN117132515A (zh) 一种图像处理方法及电子设备
JP2023078061A (ja) イメージングにおける露出制御方法、装置、デバイス及び記憶媒体
WO2021147648A1 (zh) 提示方法、装置、存储介质及电子设备
WO2022011621A1 (zh) 一种人脸光照图像生成装置及方法
WO2022183876A1 (zh) 拍摄方法、装置、计算机可读存储介质及电子设备
CN111479074A (zh) 图像采集方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22755469; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22755469; Country of ref document: EP; Kind code of ref document: A1)