WO2019148978A1 - Image processing method, apparatus, storage medium, and electronic device - Google Patents

Image processing method, apparatus, storage medium, and electronic device

Info

Publication number
WO2019148978A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
composite image
exposure parameters
composite
Prior art date
Application number
PCT/CN2018/120683
Other languages
English (en)
French (fr)
Inventor
Jiang Xiaogang (姜小刚)
Tan Guohui (谭国辉)
Yang Tao (杨涛)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2019148978A1 publication Critical patent/WO2019148978A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 - Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G06T5/70
    • G06T5/77
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/62 - Control of parameters via user interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10141 - Special mode during image acquisition
    • G06T2207/10144 - Varying exposure
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20004 - Adaptive image processing
    • G06T2207/20012 - Locally adaptive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, storage medium, and electronic device.
  • users usually take images with an electronic device having a photographing function, and through these electronic devices can record what is happening around them, scenes they wish to keep, and the like.
  • in the captured image, focus is placed on the shooting target, and the background area behind the target in the captured image can be blurred.
  • the embodiment of the present application provides an image processing method, device, storage medium, and electronic device, which can improve the image blurring effect.
  • an embodiment of the present application provides an image processing method, including:
  • the target area in the first composite image is subjected to blurring processing to obtain a first composite image after the blurring process.
  • an image processing apparatus provided by the embodiment of the present application includes:
  • An image acquisition module configured to acquire multiple images with different exposure parameters, wherein the image content of the plurality of images is the same;
  • An image synthesis module configured to perform image synthesis on the plurality of images with different exposure parameters to obtain a first composite image
  • An information acquiring module configured to acquire depth information of the first composite image
  • a region determining module configured to determine, according to the depth information, a target region in the first composite image that needs to be blurred;
  • the blurring processing module is configured to perform a blurring process on the target area in the first composite image to obtain a first composite image after the blurring process.
  • a storage medium provided by an embodiment of the present application has a computer program stored thereon, and when the computer program runs on a computer, causes the computer to perform the following steps:
  • the target area in the first composite image is subjected to blurring processing to obtain a first composite image after the blurring process.
  • an embodiment of the present application provides an electronic device, including a central processing unit and a memory, where the memory has a computer program, and the central processing unit is configured to perform the following steps by calling the computer program:
  • the target area in the first composite image is subjected to blurring processing to obtain a first composite image after the blurring process.
  • an embodiment of the present application further provides an electronic device, including a central processing unit, a graphics processor, and a memory, where the memory stores a computer program, and the central processing unit is configured, by calling the computer program, to acquire multiple images with different exposure parameters;
  • the graphics processor is configured to perform image synthesis on the plurality of images with different exposure parameters by calling the computer program to obtain a first composite image
  • the central processor is further configured to acquire depth information of the first composite image while the graphics processor synthesizes the first composite image
  • the method further includes performing a blurring process on the target area in the first composite image to obtain a first composite image after the blurring process.
  • FIG. 1 is a schematic diagram of an application scenario of an image processing method according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an operation of triggering an image capturing request in the embodiment of the present application.
  • FIG. 4 is a diagram showing an example of obtaining images with three different exposure parameters by bracketing exposure in the embodiment of the present application.
  • FIG. 5 is a schematic diagram of an electronic device acquiring an initial image set and a second image set in the embodiment of the present application.
  • FIG. 6 is a diagram showing an example of performing in-collection image synthesis in the embodiment of the present application.
  • FIG. 7 is a schematic diagram showing the installation positions of the first camera and the second camera in the embodiment of the present application.
  • FIG. 8 is a schematic diagram of imaging by a first camera and a second camera in the embodiment of the present application.
  • FIG. 9 is a schematic diagram of performing a blurring process in the embodiment of the present application.
  • FIG. 10 is a diagram showing an example of performing a blurring process on a first composite image in the embodiment of the present application.
  • FIG. 11 is another schematic flowchart of an image processing method provided in an embodiment of the present application.
  • FIG. 12 is a diagram showing an example of synthesizing a first composite image and performing a blurring process on the first composite image in the embodiment of the present application.
  • FIG. 13 is still another schematic flowchart of an image processing method provided in an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 16 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 17 is a detailed structural diagram of an image processing circuit in an embodiment of the present application.
  • FIG. 18 is another detailed structural diagram of an image processing circuit in the embodiment of the present application.
  • the embodiment of the present application provides an image processing method, and the execution body of the image processing method may be an image processing device provided by an embodiment of the present application, or an electronic device integrated with the image processing device, where the image processing device may be implemented in hardware or software.
  • the electronic device may be a device such as a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
  • FIG. 1 is a schematic diagram of an application scenario of an image processing method according to an embodiment of the present disclosure.
  • the image processing device is integrated into an electronic device as an example.
  • the electronic device may first acquire multiple images with different exposure parameters, where the image content of the multiple images is the same; then perform image synthesis on the acquired images with different exposure parameters to obtain a first composite image; acquire depth information of the first composite image; then determine, according to the acquired depth information, the target area in the first composite image that needs to be blurred; and finally perform blurring processing on the target area in the first composite image to obtain the first composite image after the blurring process.
  • An embodiment of the present application provides an image processing method, including:
  • the target area in the first composite image is subjected to blurring processing to obtain a first composite image after the blurring process.
  • the obtaining a plurality of images with different exposure parameters includes:
  • the acquiring depth information of the first composite image includes:
  • the obtaining a plurality of images with different exposure parameters includes:
  • each image set includes at least two images, and exposure parameters of the images in the set are the same;
  • the plurality of second composite images are used as the plurality of images with different exposure parameters.
  • the performing in-collection image synthesis on each image set to obtain a plurality of second composite images includes:
  • the step of acquiring multiple images with different exposure parameters includes:
  • the subject to be photographed is subjected to backlighting environment recognition
  • the object to be photographed is subjected to backlighting environment recognition, including:
  • the object to be photographed is subjected to backlighting environment recognition, including:
  • backlight environment recognition is performed on the subject according to the acquired histogram information.
  • the depth information is a depth value
  • the determining, by using the depth information, a target area that needs to be blurred in the first composite image including:
  • the area where the depth value reaches the preset depth threshold is determined as the target area where the blurring process is required.
  • the performing the blurring process on the target area in the first composite image comprises:
  • Each sub-target area is blurred according to the blurring intensity corresponding to that sub-target area.
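As a concrete illustration of the two steps above (thresholding the depth map to find the target area, then giving each sub-target area its own blurring intensity), here is a minimal sketch. The function name, the linear depth-to-radius mapping, and all numeric thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def blur_mask_and_strength(depth, depth_threshold, max_depth, max_radius=15):
    """Pixels whose depth value reaches the threshold form the target
    (background) area; the blur radius grows with depth so that farther
    sub-areas are blurred more strongly."""
    mask = depth >= depth_threshold  # target area that needs blurring
    # Normalize depth beyond the threshold into [0, 1] and map it to a radius.
    norm = np.clip((depth - depth_threshold) / (max_depth - depth_threshold), 0, 1)
    radius = np.where(mask, np.ceil(norm * max_radius), 0).astype(int)
    return mask, radius
```

A real pipeline would then apply, e.g., a Gaussian blur with the per-pixel radius; the sketch only derives the mask and intensities.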
  • FIG. 2 is a schematic flowchart diagram of an image processing method according to an embodiment of the present application.
  • the specific process of the image processing method provided by the embodiment of the present application may be as follows:
  • the electronic device may acquire multiple images with different exposure parameters when receiving the triggered image capturing request.
  • Image capture requests can be triggered in a variety of ways, such as by way of virtual buttons, by physical buttons, by voice commands, and so on.
  • the user moves the electronic device to aim its camera at the subject to be photographed, and can then trigger the image capturing request by tapping the "photograph" button (a virtual button) provided by the application interface, by speaking the voice command "photograph", or by directly pressing a physical camera button provided on the electronic device.
  • After receiving the triggered image capturing request, the electronic device immediately responds to it, that is, photographs the subject according to different exposure parameters and acquires images with the different exposure parameters corresponding to the subject to be photographed.
  • Images taken with different exposure parameters differ only in brightness information, caused by the different exposure parameters; the image content of the images is the same, namely the image content of the subject to be photographed.
  • the exposure parameters include, but are not limited to, sensitivity, shutter speed, and aperture size.
  • the electronic device may sequentially acquire N preset different exposure parameters locally, and each time an exposure parameter is acquired, according to the acquired exposure parameter, combined with other shooting parameters. Shooting the subject, and so on, will capture multiple images corresponding to N different exposure parameters.
  • by sequentially acquiring the pre-stored different exposure parameters, the electronic device may obtain images whose brightness information ranges from low to high according to the exposure parameters.
  • the plurality of images obtained by the shooting are identical except for the exposure parameters.
  • for example, the electronic device pre-stores two sets of exposure parameters, a first exposure parameter and a second exposure parameter, where the brightness of the image obtained with the first exposure parameter is lower than that obtained with the second exposure parameter. In response to the image capturing request, the electronic device first acquires the first exposure parameter and photographs the subject according to it, combined with other shooting parameters, and then acquires the second exposure parameter and photographs the subject according to it, combined with other shooting parameters.
  • the subject to be photographed may be photographed by means of bracketing exposure. Specifically, the subject is first metered to obtain a photometric value, an exposure parameter corresponding to the photometric value is determined according to a preset mapping relationship between photometric values and exposure parameters, and the subject is photographed according to the determined exposure parameter; then, based on the determined exposure parameter, the parameter is raised and attenuated by a preset step value, and the subject is photographed according to the raised and attenuated exposure parameters, thereby obtaining a plurality of images corresponding to different exposure parameters.
  • for example, the electronic device meters the subject and determines that the exposure parameter corresponding to the photometric value is Z. The subject is photographed according to the exposure parameter Z to obtain a first image; the exposure parameter Z is then attenuated by one step value 1ev to obtain Z-1ev, and the subject is photographed according to Z-1ev to obtain a second image; finally, Z is raised by one step value 1ev to obtain Z+1ev, and the subject is photographed according to Z+1ev to obtain a third image.
  • in this way, three images with different exposure parameters are obtained, and the image content of the three images is the same, that is, the image content of the subject to be photographed.
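The Z / Z-1ev / Z+1ev bracketing sequence above can be sketched as a small helper. The function and its defaults are hypothetical; only the idea of centering on the metered exposure value and stepping by a fixed amount comes from the text.

```python
def bracketed_exposures(metered_ev, step=1.0, frames=3):
    """Return a bracketing sequence centered on the metered exposure value
    (the 'Z' from metering): Z - step, Z, Z + step, extended symmetrically
    for larger odd frame counts."""
    half = frames // 2
    return [metered_ev + k * step for k in range(-half, half + 1)]
```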
  • acquiring multiple images with different exposure parameters includes:
  • the subject to be photographed is subjected to backlighting environment recognition
  • an image of a plurality of exposure parameters corresponding to the object to be photographed is acquired.
  • the backlighting environment recognition of the object to be photographed can be implemented in various manners.
  • the object to be photographed is subjected to backlighting environment recognition, including:
  • backlight environment recognition is performed on the subject according to the obtained environmental parameters.
  • the environmental parameter of the electronic device can be acquired, and the environmental parameter of the electronic device is taken as the environmental parameter of the object to be photographed.
  • the environmental parameters include, but are not limited to, time information, time zone information of the location where the electronic device is located, location information, weather information, and orientation information of the electronic device.
  • the acquired environment parameters may be input into a pre-trained support vector machine classifier, and the support vector machine classifier classifies according to the input environment parameters to determine whether the object to be photographed is in a backlit environment.
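At prediction time, a linear-kernel support vector machine reduces to a signed decision function over the input features. As a minimal stand-in for the classifier described above (a real system would load a trained model; the weights, bias, and feature encoding here are invented for illustration):

```python
import numpy as np

def is_backlit(env_features, weights, bias):
    """Linear decision function over encoded environment features
    (time of day, weather code, device orientation, ...).
    Positive score -> classified as 'backlit environment'."""
    score = float(np.dot(env_features, weights) + bias)
    return score > 0.0
```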
  • the object to be photographed is subjected to backlighting environment recognition, including:
  • backlight environment recognition is performed on the subject according to the acquired histogram information.
  • for example, the preset channels include the three channels R, G, and B. When acquiring the histogram information of the subject, a preview image of the subject can be acquired, histogram information of the preview image is obtained in the three channels R, G, and B, and this histogram information of the three channels is used as the histogram information of the subject in the preset channels.
  • the histogram information of the subject is counted to obtain a statistical result.
  • the number of pixels under different brightness is specifically counted.
  • the preset condition may be set as follows: the number of pixels in both the first brightness interval and the second brightness interval reaches a preset number threshold, and the lowest brightness is less than a first preset brightness threshold and/or the highest brightness is greater than a second preset brightness threshold, where the preset number threshold, the first preset brightness threshold, and the second preset brightness threshold are empirical parameters that can be set by a person skilled in the art according to actual needs.
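The histogram condition above (many pixels in both a dark interval and a bright interval) can be sketched as follows; the interval boundaries and count threshold are illustrative stand-ins for the patent's unspecified empirical parameters.

```python
import numpy as np

def looks_backlit(gray, low_cut=50, high_cut=205, min_count=1000):
    """Flag a scene as backlit when both the low-brightness interval and the
    high-brightness interval of an 8-bit grayscale histogram contain many
    pixels (a dark foreground against a bright background)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    dark = int(hist[:low_cut].sum())     # pixels in the low-brightness interval
    bright = int(hist[high_cut:].sum())  # pixels in the high-brightness interval
    return dark >= min_count and bright >= min_count
```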
  • in this case, the exposure parameter is additionally adjusted: the subject is metered to obtain a photometric value, the exposure parameter corresponding to the photometric value is obtained according to the pre-stored mapping relationship between photometric values and exposure parameters, and the exposure parameter is then raised by a preset adjustment amount. When an image is taken with the raised exposure parameter, the resulting image has a higher overall brightness, so the brightness of the foreground area is moderate and most of the image details of the subject are preserved; at the same time, however, the brightness of the background area is too high, so most of the details of the background area are lost.
  • a plurality of different exposure parameters are set according to the current degree of backlighting.
  • the degree of backlighting may be output by the support vector machine classifier together with the result of whether the object to be photographed is in a backlight environment; when the output result is "backlighting environment", the corresponding degree of backlighting is output synchronously.
  • the electronic device When the electronic device obtains the result that the object to be photographed outputted by the support vector machine classifier is in a backlight environment, the degree of backlighting of the output of the support vector machine classifier is simultaneously acquired as the current degree of backlighting. Thereafter, a plurality of different exposure parameters corresponding to the current degree of backlighting are set according to the mapping relationship between the pre-stored degree of backlight and the exposure parameter. In this way, when acquiring an image with different exposure parameters, the electronic device may respectively perform shooting on the object according to the plurality of exposure parameters that are set, and obtain multiple images corresponding to different exposure parameters, so that the plurality of acquired exposure parameters are different.
  • the brightness information of the images is different, from dark to bright, but the image content of the images with different exposure parameters is the same, that is, the image content of the object to be photographed.
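The pre-stored mapping from degree of backlighting to a set of exposure parameters could look like the toy table below. The degree labels, base exposure value, and spreads are all assumptions; the patent only states that such a mapping exists.

```python
def exposures_for_backlight(degree, base_ev=8.0):
    """Map a reported degree of backlighting to a set of exposure values:
    stronger backlight -> wider bracketing around the base exposure.
    The table entries are hypothetical."""
    spread = {"mild": 1.0, "moderate": 2.0, "severe": 3.0}[degree]
    return [base_ev - spread, base_ev, base_ev + spread]
```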
  • the electronic device may further acquire multiple images with different exposure parameters when the triggered blurring request is detected.
  • for example, the user can trigger a continuous shooting request in advance, controlling the electronic device to continuously photograph the subject and obtain multiple images with the same image content but different exposure parameters; the user can then select one of these images and trigger a blurring processing request. Correspondingly, when detecting the triggered blurring request, the electronic device acquires the selected image and the other images that differ from it in exposure parameters but have the same image content.
  • acquiring multiple images with different exposure parameters includes:
  • each image set includes at least two images, and exposure parameters of the images in the set are the same;
  • a plurality of second composite images are taken as a plurality of images corresponding to different exposure parameters.
  • After receiving the triggered image capturing request, the electronic device immediately responds to it, that is, photographs the subject according to different exposure parameters, where multiple images are captured for each exposure parameter, thereby obtaining a plurality of image sets corresponding to different exposure parameters.
  • the number of images included in the image set is not specifically limited herein, and the number of images of different image sets may be the same or different.
  • for example, after receiving the triggered image capturing request, the electronic device first photographs the subject according to the exposure parameter Z-1ev and obtains four images with the exposure parameter Z-1ev and the same image content (namely, the image content of the subject to be photographed); these four images are combined into a first image set. It then photographs the subject according to the exposure parameter Z+1ev, obtains four images with the exposure parameter Z+1ev, and combines them into a second image set. Two image sets are thus obtained: the first image set corresponding to the exposure parameter Z-1ev and the second image set corresponding to the exposure parameter Z+1ev, where the image content of all the images in both sets is the same, namely the image content of the subject to be photographed.
  • performing in-collection image synthesis on each image set includes:
  • a second composite image of the selected image set is obtained based on each average pixel value, and the step of selecting a set of images is returned until a second composite image of each image set is obtained.
  • each image set can be synthesized one by one.
  • for example, the image with the largest degree of eye opening can be selected as the reference image; alternatively, the degree of eye opening and the size of the human eye in each image can be combined for a comprehensive selection: the degree of eye opening and the eye size are normalized, the weight of the degree of eye opening is set to α and the weight of the eye size to 1-α, each image in the set is scored with this weighting, and the image with the largest score is selected as the reference image; as a further example, the image with the highest definition can be selected as the reference image.
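The weighted selection above (normalize the eye-openness and eye-size scores, combine them with weights α and 1-α, pick the maximum) can be sketched as follows. The input representation, with per-image "openness" and "size" scores, is a stand-in for real face measurements.

```python
def pick_reference(images, alpha=0.6):
    """Return the index of the reference image: normalize both criteria to
    [0, 1], score each image as alpha*openness + (1-alpha)*size, take the max."""
    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in vals]
    opens = norm([im["openness"] for im in images])
    sizes = norm([im["size"] for im in images])
    scores = [alpha * o + (1 - alpha) * s for o, s in zip(opens, sizes)]
    return scores.index(max(scores))
```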
  • the average pixel value of each pixel is calculated. For example, if there are four images in the selected image set, the pixel values of the pixel at a certain position in the four images are: “0.8, 0.9 , 1.1, 1.2", then the average pixel value of the pixel at the position can be calculated as "1".
  • the second composite image of the selected image set is obtained according to each average pixel value.
  • the pixel values of the pixels of the reference image may be adjusted to the calculated average pixel values, thereby obtaining the second composite image of the selected image set; alternatively, a new image may be generated from the calculated average pixel values and used as the second composite image of the selected image set.
  • the selected image set includes four images, which are a first image, a second image, a third image, and a fourth image, respectively, and the exposure parameters of the four images are the same, both are Z, and the image content is The same is true, but there are some noises in these images; after aligning and denoising these images, a second composite image with an exposure parameter of Z is obtained, but the second composite image has no noise.
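The in-set synthesis described above (a per-pixel average across aligned same-exposure frames, as in the 0.8/0.9/1.1/1.2 → 1.0 example) is, at its simplest, a mean over a stacked array. The sketch assumes the frames are already aligned, as the text states.

```python
import numpy as np

def fuse_image_set(images):
    """Per-pixel average over a set of aligned, same-exposure frames:
    random noise averages out while the shared image content is preserved."""
    stack = np.stack(images, axis=0).astype(np.float64)
    return stack.mean(axis=0)
```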
  • the obtained first composite image is a high dynamic range image
  • the high dynamic range image can provide a larger dynamic range and image detail than an ordinary image, and can utilize each of the plurality of images with different exposure parameters. Details are synthesized to obtain a high dynamic range image.
  • the foreground region of the first image carries a large amount of image details due to different exposure parameters
  • the background region of the second image carries a large amount of image details, such that
  • the image details of the foreground region of the first image and the image details of the background region of the second image may be utilized to synthesize a high dynamic range image; the obtained high dynamic range image will include the image details of the foreground area of the first image and the image details of the background area of the second image, and the image content of the synthesized high dynamic range image is identical to the image content of the first image and the second image.
  • the high dynamic range image synthesis may be performed using a compensation-weighted average over the k images, that is, a formula of the form HDR(i) = [Σ_{j=1..k} w(Zij)·Zij] / [Σ_{j=1..k} w(Zij)], where HDR represents the synthesized high dynamic range image; HDR(i) represents the gray value of the i-th pixel of the synthesized high dynamic range image; k represents the number of images with different exposure parameters; w(Zij) represents the compensation weight of the i-th pixel in the j-th image, the compensation weight being a value of a compensation weight function, which can be obtained from a trigonometric function or a normal distribution function; and Zij represents the gray value of the i-th pixel in the j-th image.
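A weight-normalized fusion consistent with the variable definitions above can be sketched as follows. The normal-distribution weight is one of the two options the text mentions; the sigma value and the assumption that pixel values are normalized to [0, 1] are illustrative.

```python
import numpy as np

def fuse_hdr(frames, sigma=0.2):
    """Weighted HDR synthesis over k frames:
    HDR(i) = sum_j w(Zij) * Zij / sum_j w(Zij),
    with a normal-distribution weight that favors mid-range
    (well-exposed) pixels over under- or over-exposed ones."""
    def w(z):
        return np.exp(-((z - 0.5) ** 2) / (2.0 * sigma ** 2))
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for z in frames:
        z = np.asarray(z, dtype=np.float64)
        wz = w(z)
        num += wz * z
        den += wz
    return num / den
```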
  • when synthesizing from an under-exposed image and an over-exposed image, the high dynamic range image synthesis may be performed by using a formula of the form HDR(i) = m·LE(i) + n·HE(i), where HDR represents the synthesized high dynamic range image; HDR(i) represents the i-th pixel of the synthesized high dynamic range image; LE represents the under-exposed image; LE(i) represents the i-th pixel of the under-exposed image; m represents the compensation weight corresponding to the under-exposed image; HE represents the over-exposed image; HE(i) represents the i-th pixel of the over-exposed image; and n represents the compensation weight corresponding to the over-exposed image.
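Read directly from the variable definitions above, the two-image case is a per-pixel weighted sum. The weight values below are illustrative; the patent does not fix m and n.

```python
import numpy as np

def fuse_two(le, he, m=0.5, n=0.5):
    """Two-image HDR combination: HDR(i) = m * LE(i) + n * HE(i),
    where LE is the under-exposed frame, HE the over-exposed frame,
    and m, n their compensation weights."""
    le = np.asarray(le, dtype=np.float64)
    he = np.asarray(he, dtype=np.float64)
    return m * le + n * he
```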
  • the depth information of the first composite image is also the depth information of the object to be photographed corresponding to the first composite image.
  • the depth information may describe the distance from any pixel of the "object to be photographed" in the first composite image to the electronic device.
  • the method before acquiring the depth information of the first composite image, the method further includes:
  • Obtaining depth information of the first composite image including:
  • the acquired depth information is used as depth information of the first composite image.
• the electronic device receives, through the configured depth sensor, the light energy emitted or reflected by the object to be photographed, forms a light energy distribution function related to the object to be photographed, that is, a grayscale image, and then restores the depth information of the object to be photographed based on the grayscale image;
• or the electronic device emits energy toward the object to be photographed through the depth sensor, then receives the energy reflected by the object to be photographed, forms a light energy distribution function related to the object to be photographed, that is, a grayscale image, and then restores the depth information of the shooting scene based on the grayscale image.
• the depth information of the object to be photographed can be acquired by the depth sensor while the object to be photographed is photographed and the images with different exposure parameters are acquired.
  • the electronic device includes a first camera and a second camera, and acquiring multiple images with different exposure parameters, including:
  • Obtaining depth information of the first composite image including:
  • the depth information of the first composite image is acquired according to the two images with the same exposure parameters acquired by the first camera and the second camera.
• the electronic device photographs the object to be photographed according to different exposure parameters through the first camera to acquire multiple images corresponding to the different exposure parameters, and simultaneously photographs the object to be photographed through the second camera to acquire at least one image whose exposure parameters are the same as those of an image acquired by the first camera.
• the depth information of the object to be photographed is obtained by a triangulation algorithm, and the obtained depth information is used as the depth information of the first composite image.
  • the object to be photographed includes multiple objects, and the depth information of an object is calculated as an example:
  • the two cameras have parallax.
  • the depth information of the same object in the two images with the same exposure parameters synchronously captured by the first camera and the second camera can be calculated, that is, the distance of the object from the plane of the first camera and the second camera.
  • OR indicates the position of the first camera
  • OT indicates the position where the second camera is located
  • the distance between the first camera and the second camera is B
  • the distance between the focal plane and the plane of the first camera and the second camera is f.
• when the electronic device captures synchronously through the first camera and the second camera according to the same exposure parameters, the first camera images the first image in the focal plane, and the second camera images the second image in the focal plane.
  • P represents the position of the object in the first image
  • P' represents the position of the same object in the second image
• the distance of point P from the left border of the first image is XR, and the distance of point P' from the left border of the second image is XT.
• from the similar triangles formed by each camera, its image, and the object, Equation 1 and Equation 2 are obtained, where:
• B1 represents the distance from the first camera to the projection point of the object
• B2 represents the distance from the second camera to the projection point of the object
• XR' represents the distance from point P to the right edge of the first image
• X1 represents the distance from the right edge of the first image to the projection point of the object
• X2 represents the distance from the left edge of the second image to the projection point of the object
• adding Equation 1 and Equation 2 gives Equation 3; since the focal plane widths of the first camera and the second camera are both 2K (half focal plane width K), Equation 4 and Equation 5 are obtained, substituting them into Equation 3 gives Equation 6, and simplifying yields Equation 7:
• depth = (B × f) / d
• where d = XR − XT is the position difference (disparity) of the object between the first image and the second image, and B and f are both fixed values, so the depth of the object is determined by the disparity.
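Assuming the final relation takes the standard stereo-triangulation form depth = B·f/d (consistent with the statement that B and f are fixed and d is the position difference XR − XT), the depth computation can be sketched as:

```python
def depth_from_disparity(xr, xt, baseline_b, focal_f):
    """Distance of an object from the plane of the two cameras, given
    its position XR in the first image and XT in the second image
    (both measured from the left border), the camera spacing B, and
    the focal-plane distance f. Sketch of the standard stereo formula,
    depth = B * f / d with disparity d = XR - XT."""
    d = xr - xt
    if d <= 0:
        raise ValueError("disparity must be positive")
    return baseline_b * focal_f / d
```

Because B and f are fixed for a given dual-camera module, the depth of each object depends only on its measured disparity between the two synchronized images.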
  • step 203 may be performed concurrently with step 202.
  • the background area of the first composite image may be determined according to the acquired depth information, and the determined background area is used as the target area that needs to be blurred.
  • the depth information is a depth value
  • the target area that needs to be subjected to the blurring process is determined in the first composite image according to the obtained depth information, including:
  • the area where the depth value reaches the preset depth threshold is determined as the target area where the blurring process is required.
• the preset depth threshold is used to determine whether a pixel is located in the foreground area or the background area, and the area where the depth value reaches the preset depth threshold is precisely the background area. After the background area of the first composite image is determined, the background area of the first composite image is determined as the target area that needs to be subjected to blurring processing.
  • the target area in the first composite image may be blurred, and the blurring process of the target area may be implemented by using a Gaussian blur.
  • the target area of the first composite image is blurred, including:
• each sub-target area is blurred according to the blurring intensity corresponding to that sub-target area.
• the target area in the first composite image is first divided into a plurality of sub-target areas corresponding to different depth values. For example, a depth value is chosen, and the same change value is added to and subtracted from it to obtain a depth value interval corresponding to that depth value; the pixel points whose depth values fall within that interval are aggregated into one sub-target area. Another depth value is then chosen and treated in the same way to obtain another depth value interval, and the pixel points whose depth values fall within it are aggregated into another sub-target area, and so on, yielding multiple sub-target areas corresponding to different depth values.
  • the blurring strength corresponding to each sub-target region is determined according to the depth value corresponding to each sub-target region and the mapping relationship between the preset depth value and the blurring intensity.
  • the setting of the foregoing mapping relationship is not specifically limited, and may be set by a person skilled in the art according to actual needs.
• the blurring intensity may be set to be proportional to the depth value, that is, the larger the depth value, the greater the degree of blurring.
  • each sub-target region can be blurred according to the degree of blur of each sub-target region.
• the first composite image before the blurring process is shown on the left side: the portrait is located in the foreground area and requires no blurring, while the three groups of plants are located in the background area with depth values that increase from bottom to top; the first composite image after the blurring process is shown on the right side.
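The thresholding and per-band blurring steps above can be sketched as follows. The repeated box-filter passes stand in for the Gaussian blur mentioned in the text, and the band boundaries and pass counts are illustrative assumptions:

```python
import numpy as np

def box_blur_pass(img):
    """One 5-point box-filter pass (a simple stand-in for Gaussian blur)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

def blur_target_area(image, depth, depth_threshold, bands, strengths):
    """Blur the background (depth >= depth_threshold) band by band.
    `bands` is a list of (low, high) depth intervals, i.e. the
    sub-target areas, and `strengths[i]` is the number of blur passes
    applied to band i, so deeper bands can be blurred more strongly."""
    out = image.astype(np.float64).copy()
    for (lo, hi), passes in zip(bands, strengths):
        mask = (depth >= max(lo, depth_threshold)) & (depth < hi)
        blurred = out.copy()
        for _ in range(passes):
            blurred = box_blur_pass(blurred)
        out[mask] = blurred[mask]  # foreground pixels stay untouched
    return out
```

Mapping greater depth to more blur passes realizes the proportionality between depth value and blurring intensity described above.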
• the embodiment of the present application first acquires multiple images with different exposure parameters, wherein the image content of the multiple images is the same; then performs image synthesis on the acquired images with different exposure parameters to obtain a first composite image; acquires the depth information of the first composite image; determines, in the first composite image according to the acquired depth information, the target area that needs to be blurred; and finally performs blurring processing on the target area in the first composite image to obtain the first composite image after blurring.
• because the first composite image obtained by synthesis carries the image details of the bright and/or dark portions of the different images, the first composite image will still carry more image detail after the blurring process, which enhances the image blurring effect.
  • the image processing method may include:
  • the electronic device includes a first camera and a second camera, and the electronic device can synchronously acquire images through the first camera and the second camera when receiving the triggered image capturing request.
  • the image capturing request can be triggered by multiple ways, such as triggering by a virtual button, triggering by a physical button, triggering by a voice command, and the like.
• after receiving the triggered image capturing request, the electronic device immediately responds to it: the subject is photographed by the first camera according to different exposure parameters, multiple images corresponding to the different exposure parameters are acquired, and the subject is synchronously photographed by the second camera to acquire at least one image whose exposure parameters are the same as those of an image acquired by the first camera.
  • the obtained first composite image is a high dynamic range image
• the high dynamic range image can provide a larger dynamic range and more image detail than an ordinary image, and the image details of each of the multiple images with different exposure parameters can be utilized to synthesize the high dynamic range image.
• the depth information of the object to be photographed is obtained by a triangulation algorithm, and the obtained depth information is used as the depth information of the first composite image.
  • the background area of the first composite image may be determined according to the acquired depth information, and the determined background area is used as the target area that needs to be blurred.
• the preset depth threshold is used to determine whether a pixel is located in the foreground area or the background area, and the area where the depth value reaches the preset depth threshold is precisely the background area. After the background area of the first composite image is determined, the background area of the first composite image is determined as the target area that needs to be subjected to blurring processing.
• the target area in the first composite image is first divided into a plurality of sub-target areas corresponding to different depth values. For example, a depth value is chosen, and the same change value is added to and subtracted from it to obtain a depth value interval corresponding to that depth value; the pixel points whose depth values fall within that interval are aggregated into one sub-target area. Another depth value is then chosen and treated in the same way to obtain another depth value interval, and the pixel points whose depth values fall within it are aggregated into another sub-target area, and so on, yielding multiple sub-target areas corresponding to different depth values.
  • the blurring strength corresponding to each sub-target region is determined according to the depth value corresponding to each sub-target region and the mapping relationship between the preset depth value and the blurring intensity.
  • the setting of the foregoing mapping relationship is not specifically limited, and may be set by a person skilled in the art according to actual needs.
• the blurring intensity may be set to be proportional to the depth value, that is, the larger the depth value, the greater the degree of blurring.
  • each sub-target region can be blurred according to the degree of blur of each sub-target region.
• the first composite image before the blurring process is shown on the left side: the portrait is located in the foreground area and requires no blurring, while the three groups of plants are located in the background area with depth values that increase from bottom to top; the first composite image after the blurring process is shown on the right side.
• the electronic device acquires, through the first camera, a first image with an exposure parameter of Z−1 EV and a second image with an exposure parameter of Z+1 EV, and acquires, through the second camera, a third image whose exposure parameter is the same as that of the second image. After that, the first image and the second image are combined to obtain a first composite image, which preserves the dark-portion details of the first image and the bright-portion details of the second image.
• while the first composite image is synthesized, the depth information of the first composite image is acquired according to the second image and the third image, which were captured synchronously with the same exposure parameters; then, according to the acquired depth information, the target area that needs to be subjected to blurring processing is determined in the first composite image, and the target area is blurred.
• the three groups of plants in the first composite image after blurring are all blurred, but the degree of blurring increases from bottom to top:
• the lower plants are blurred least, the upper plants are blurred most, and the middle plants are blurred to an intermediate degree.
  • the image processing method may include:
  • the subject to be photographed is subjected to backlighting environment recognition.
  • the image capturing request can be triggered by multiple ways, such as triggering by a virtual button, triggering by a physical button, triggering by a voice command, and the like.
• after receiving the triggered image capturing request, the electronic device first performs backlight environment recognition on the subject to determine whether the object to be photographed is in a backlight environment.
  • the backlighting environment recognition of the object to be photographed can be implemented in various manners.
  • the object to be photographed is subjected to backlighting environment recognition, including:
• backlight environment recognition is performed on the object to be photographed according to the acquired environmental parameters.
  • the environmental parameter of the electronic device can be acquired, and the environmental parameter of the electronic device is taken as the environmental parameter of the object to be photographed.
  • the environmental parameters include, but are not limited to, time information, time zone information of the location where the electronic device is located, location information, weather information, and orientation information of the electronic device.
  • the acquired environment parameters may be input into a pre-trained support vector machine classifier, and the support vector machine classifier classifies according to the input environment parameters to determine that the object to be photographed is to be photographed. Whether the object is in a backlit environment.
  • the object to be photographed is subjected to backlighting environment recognition, including:
• backlight environment recognition is performed on the subject according to the acquired histogram information.
• the preset channels include three channels: R, G, and B.
• when the histogram information of the object to be photographed is acquired, a preview image of the object to be photographed can be acquired, and the histogram information of the preview image in the three channels R, G, and B is then obtained;
• the histogram information of the three channels R, G, and B is used as the histogram information of the object to be photographed in the preset channels.
  • the histogram information of the subject is counted to obtain a statistical result.
  • the number of pixels under different brightness is specifically counted.
• the preset condition may be set as: the numbers of pixels in a first brightness interval and in a second brightness interval both reach a preset number threshold, and the lowest brightness is less than a first preset brightness threshold and/or the highest brightness is greater than a second preset brightness threshold, wherein the preset number threshold, the first preset brightness threshold, and the second preset brightness threshold are empirical parameters that can be set by a person skilled in the art according to actual needs.
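A minimal sketch of checking this preset condition on one brightness histogram follows; the interval boundaries and thresholds passed in are illustrative placeholders, not values from the application:

```python
def is_backlit(histogram, low_band, high_band, count_threshold,
               min_bright_threshold, max_bright_threshold):
    """Preset-condition check on a 256-bin brightness histogram:
    enough pixels in both a dark interval and a bright interval, and
    an extreme lowest and/or highest brightness. `low_band` and
    `high_band` are inclusive (start, end) bin ranges."""
    dark = sum(histogram[low_band[0]:low_band[1] + 1])
    bright = sum(histogram[high_band[0]:high_band[1] + 1])
    lowest = next(i for i, c in enumerate(histogram) if c > 0)
    highest = max(i for i, c in enumerate(histogram) if c > 0)
    both_peaks = dark >= count_threshold and bright >= count_threshold
    extremes = (lowest < min_bright_threshold
                or highest > max_bright_threshold)
    return both_peaks and extremes
```

In the method described above this check would be applied to the statistics of the R, G, and B channel histograms of the preview image.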
• each image set includes at least two images, and the exposure parameters of the images in a set are the same;
  • the electronic device includes a first camera and a second camera, and the electronic device can acquire an image of the object to be photographed through the first camera and the second camera synchronously when the object to be photographed is in a backlight environment.
• the electronic device photographs the object to be photographed according to different exposure parameters through the first camera, capturing multiple images for each exposure parameter so as to obtain multiple image sets corresponding to the different exposure parameters; while the first camera is photographing, the object to be photographed is synchronously photographed through the second camera, and at least one image whose exposure parameters are the same as those of an image acquired by the first camera is acquired.
  • the number of images included in the image set is not specifically limited herein, and the number of images between different image sets may be the same or different.
  • each image set can be synthesized one by one.
• the image with the largest eye-opening degree can be selected as the reference image; alternatively, the eye-opening degree and the size of each image can be combined for a comprehensive selection:
• the eye-opening degree and the image size are normalized, the weight of the eye-opening degree is set to α, and the weight of the size is set to 1−α; each image in the set is scored by the weighted sum, and the image with the largest score is selected as the reference image; alternatively, the image with the highest definition can be selected as the reference image.
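The weighted reference-image selection can be sketched as follows; the inputs (an eye-opening degree and a size measurement per image) and the value of α are illustrative assumptions:

```python
def choose_reference(candidates, alpha=0.6):
    """Pick the index of the reference image by a weighted score of
    normalized eye-opening degree (weight alpha) and normalized size
    (weight 1 - alpha). `candidates` is a list of
    (eye_opening, size) pairs, one per image in the set."""
    eyes = [e for e, _ in candidates]
    sizes = [s for _, s in candidates]

    def norm(v, vs):
        span = max(vs) - min(vs)
        return (v - min(vs)) / span if span else 1.0

    scores = [alpha * norm(e, eyes) + (1 - alpha) * norm(s, sizes)
              for e, s in candidates]
    return scores.index(max(scores))
```

Normalizing both measurements first keeps the two terms comparable, so α directly controls how much the eye-opening degree dominates the choice.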
• the average pixel value at each pixel position is calculated. For example, if there are four images in the selected image set and the pixel values at a certain position in the four images are 0.8, 0.9, 1.1, and 1.2, then the average pixel value at that position is calculated as 1.
  • the second composite image of the selected image set is obtained according to each average pixel value.
• the pixel values of the pixels of the reference image may be adjusted to the corresponding calculated average pixel values, thereby obtaining the second composite image of the selected image set;
• alternatively, a new image may be generated from the calculated average pixel values, and the generated image is used as the second composite image of the selected image set.
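The in-set averaging that produces the second composite image can be sketched as:

```python
import numpy as np

def synthesize_in_set(image_set):
    """In-set synthesis for one image set (all frames share the same
    exposure parameters): average the pixel values at each position to
    obtain the second composite image. Averaging same-exposure frames
    typically also reduces sensor noise."""
    stack = np.stack([img.astype(np.float64) for img in image_set])
    return stack.mean(axis=0)
```

Applying this to each image set in turn yields one second composite image per exposure parameter, and those composites are then fused into the first composite image.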
  • the depth information of the first composite image is acquired.
  • the obtained first composite image is a high dynamic range image
• the high dynamic range image can provide a larger dynamic range and more image detail than an ordinary image, and the image details of each of the multiple images with different exposure parameters can be utilized to synthesize the high dynamic range image.
• while the first composite image is synthesized, the depth information of the first composite image is acquired according to the two images with the same exposure parameters acquired synchronously by the first camera and the second camera.
• the depth information of the object to be photographed is obtained by a triangulation algorithm, and the obtained depth information is used as the depth information of the first composite image.
  • the background area of the first composite image may be determined according to the acquired depth information, and the determined background area is used as the target area that needs to be blurred.
• the preset depth threshold is used to determine whether a pixel is located in the foreground area or the background area, and the area where the depth value reaches the preset depth threshold is precisely the background area. After the background area of the first composite image is determined, the background area of the first composite image is determined as the target area that needs to be subjected to blurring processing.
• the target area in the first composite image is first divided into a plurality of sub-target areas corresponding to different depth values. For example, a depth value is chosen, and the same change value is added to and subtracted from it to obtain a depth value interval corresponding to that depth value; the pixel points whose depth values fall within that interval are aggregated into one sub-target area. Another depth value is then chosen and treated in the same way to obtain another depth value interval, and the pixel points whose depth values fall within it are aggregated into another sub-target area, and so on, yielding multiple sub-target areas corresponding to different depth values.
  • the blurring strength corresponding to each sub-target region is determined according to the depth value corresponding to each sub-target region and the mapping relationship between the preset depth value and the blurring intensity.
  • the setting of the foregoing mapping relationship is not specifically limited, and may be set by a person skilled in the art according to actual needs.
• the blurring intensity may be set to be proportional to the depth value, that is, the larger the depth value, the greater the degree of blurring.
  • each sub-target region can be blurred according to the degree of blur of each sub-target region.
• the first composite image before the blurring process is shown on the left side: the portrait is located in the foreground area and requires no blurring, while the three groups of plants are located in the background area with depth values that increase from bottom to top; the first composite image after the blurring process is shown on the right side.
  • FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing device is applied to an electronic device, and the image processing device includes an image acquisition module 401, an image synthesis module 402, an information acquisition module 403, an area determination module 404, and a blurring processing module 405, as follows:
• the image obtaining module 401 is configured to acquire multiple images with different exposure parameters, wherein the image content of the multiple images is the same;
  • the image synthesis module 402 is configured to perform image synthesis on multiple images with different exposure parameters to obtain a first composite image.
  • the information acquiring module 403 is configured to acquire depth information of the first composite image.
  • the area determining module 404 is configured to determine, in the first composite image, a target area that needs to be subjected to blurring processing according to the acquired depth information;
  • the blurring processing module 405 is configured to perform a blurring process on the target area in the first composite image to obtain a first composite image after the blurring process.
  • the image acquisition module 401 when acquiring multiple images with different exposure parameters, can be used to:
  • the information obtaining module 403 can be used to:
  • the depth information of the first composite image is acquired according to the two images with the same exposure parameters acquired by the first camera and the second camera.
  • the image acquisition module 401 when acquiring multiple images with different exposure parameters, can be used to:
  • each image set includes at least two images, and exposure parameters of the images in the set are the same;
  • the obtained plurality of composite images are taken as a plurality of images corresponding to different exposure parameters.
• when performing in-set image synthesis on each image set to obtain multiple second composite images, the image obtaining module 401 may be configured to:
• a second composite image of the selected image set is obtained based on each average pixel value, and another image set is then selected, until the second composite image of each image set is obtained.
  • the image acquisition module 401 when acquiring multiple images with different exposure parameters, can be used to:
  • the subject to be photographed is subjected to backlighting environment recognition
  • an image of a plurality of exposure parameters corresponding to the object to be photographed is acquired.
  • the image acquisition module 401 can be configured to:
• when performing backlight environment recognition on the subject to be photographed, the image obtaining module 401 may be configured to:
• perform backlight environment recognition on the subject according to the acquired histogram information.
  • the depth information is a depth value.
  • the area determining module 404 may be configured to:
  • the area where the depth value reaches the preset depth threshold is determined as the target area where the blurring process is required.
  • the blur processing module 405 can be used to:
• blur each sub-target area according to the blurring intensity corresponding to that sub-target area.
  • each of the above modules may be implemented as a separate entity, or may be implemented in any combination, as the same or several entities.
• the image processing apparatus belongs to the same concept as the image processing method in the above embodiments, and any of the methods provided in the image processing method embodiments can be run on the apparatus. The specific implementation process is described in the image processing method embodiments and is not repeated here.
  • the electronic device 500 includes a central processing unit 501 and a memory 502.
  • the central processing unit 501 is electrically connected to the memory 502.
• the central processing unit 501 is the control center of the electronic device 500; it connects the various parts of the entire electronic device using various interfaces and lines, and, by running or loading the computer program stored in the memory 502 and recalling the data stored in the memory 502,
• performs the various functions of the electronic device 500 and processes data, thereby monitoring the electronic device as a whole.
  • the memory 502 can be used to store software programs and modules, and the central processor 501 executes various functional applications and data processing by running computer programs and modules stored in the memory 502.
  • the memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a computer program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of electronic devices, etc.
• the memory 502 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 can also include a memory controller to provide the central processing unit 501 with access to the memory 502.
  • the central processing unit 501 in the electronic device 500 executes the image processing method in any of the above embodiments by running a computer program stored in the memory 502, for example, acquiring multiple images with different exposure parameters.
  • the image content of the plurality of images is the same; the image of the plurality of different exposure parameters is image-combined to obtain a first composite image; the depth information of the first composite image is acquired; and the depth information is Determining, in a composite image, a target area that needs to be blurred; and performing a blurring process on the target area in the first composite image to obtain a first composite image after the blurring process.
• the electronic device belongs to the same concept as the image processing method in the above embodiments, and any method provided in the image processing method embodiments can be run on the electronic device; the specific implementation process is described in the image processing method embodiments and is not repeated here.
  • the electronic device 500 may further include: a display 503, a radio frequency circuit 504, an audio circuit 505, a power source 506, an image processing circuit 507, and a graphics processor 508.
  • the display 503, the radio frequency circuit 504, the audio circuit 505, and the power source 506 are electrically connected to the central processing unit 501, respectively.
  • Display 503 can be used to display information entered by a user or information provided to a user, as well as various graphical user interfaces, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display 503 can include a display panel.
  • the display panel can be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED).
  • the radio frequency circuit 504 can be used to transmit and receive radio frequency signals to establish wireless communication with network devices or other electronic devices through wireless communication, and to transmit and receive signals with network devices or other electronic devices.
  • the audio circuit 505 can be used to provide an audio interface between the user and the electronic device through the speaker, the microphone.
  • Power source 506 can be used to power various components of electronic device 500.
  • the power supply 506 can be logically coupled to the central processing unit 501 via a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
  • the image processing circuit 507 can be implemented by hardware and/or software components, and can include various processing units defining an ISP (Image Signal Processing) pipeline. Referring to FIG. 17, in an embodiment, the image processing circuit 507 includes ISP processor 5071 and control logic 5072.
• the image data captured by the camera 5073 is first processed by the ISP processor 5071, which analyzes the image data to capture image statistics that can be used to determine and/or control one or more parameters of the camera 5073.
  • Camera 5073 can include a camera having one or more lenses 50731 and image sensors 50732.
• the image sensor 50732 can include a color filter array (such as a Bayer filter), which can capture the light intensity and wavelength information captured by each imaging pixel of the image sensor 50732 and provide a set of raw image data that can be processed by the ISP processor 5071.
  • a sensor 5074 such as a gyroscope, can provide acquired image processing parameters (such as anti-shake parameters) to the ISP processor 5071 based on the sensor 5074 interface type.
  • the sensor 5074 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • image sensor 50732 can also transmit raw image data to sensor 5074, which can provide raw image data to ISP processor 5071 based on sensor 5074 interface type, or sensor 5074 stores raw image data into image memory 5075.
  • the ISP processor 5071 processes the original image data pixel by pixel in a plurality of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 5071 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Among them, image processing operations can be performed with the same or different bit depth precision.
  • the ISP processor 5071 can also receive image data from the image memory 5075.
  • the sensor 5074 interface transmits raw image data to the image memory 5075, and the raw image data in the image memory 5075 is then provided to the ISP processor 5071 for processing.
  • Image memory 5075 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • the ISP processor 5071 may perform one or more image processing operations, such as time domain filtering, upon receiving raw image data from the image sensor 50732 interface or from the sensor 5074 interface or from the image memory 5075.
  • the processed image data can be sent to image memory 5075 for additional processing prior to being displayed.
  • the ISP processor 5071 receives the data to be processed from the image memory 5075 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 5071 can be output to the display 503 for viewing by the user and/or further processed by the graphics engine or graphics processor 508. Additionally, the output of ISP processor 5071 can also be sent to image memory 5075, and display 503 can read image data from image memory 5075.
  • image memory 5075 can be configured to implement one or more frame buffers. Additionally, the output of ISP processor 5071 can be sent to encoder/decoder 5076 to encode/decode image data. The encoded image data can be saved and decompressed before being displayed on the display 503 device. Encoder/decoder 5076 can be implemented by a CPU or GPU or coprocessor.
  • the statistics determined by the ISP processor 5071 can be sent to the control logic 5072 unit.
  • the statistical data may include image sensor 50732 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 50731 shading correction, and the like.
  • Control logic 5072 can include a processor and/or a microcontroller that executes one or more routines, such as firmware; based on the received statistical data, the routines can determine control parameters of camera 5073 and of ISP processor 5071.
  • Control parameters of camera 5073 may include sensor 5074 control parameters (eg, gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 50731 control parameters (eg, focal length for focusing or zooming), or combinations of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 50731 shading correction parameters and the like.
  • the graphics processor 508 converts the display data that the electronic device needs to display, and provides a line scan signal to the display 503 to control the correct display of the display 503.
  • in a further embodiment, the image processing circuit 507 includes a first camera 507301 and a second camera 507302; the first camera 507301 includes a first lens 507311 and a first image sensor 507321, and the second camera 507302 includes a second lens 507312 and a second image sensor 507322.
  • the first camera 507301 and the second camera 507302 may be disposed in the same plane of the electronic device, for example, at the same time on the back or front side of the electronic device.
  • the installation distance of the dual cameras in the electronic device can be determined according to the size of the electronic device and/or the desired shooting effect.
  • for a high overlap between the image contents captured by the two cameras, the first camera 507301 should be installed as close to the second camera 507302 as possible, for example, within 10 mm.
  • The functions of the ISP processor 5071, the control logic 5072, and other parts not shown (such as sensors, image memories, etc.) are the same as in the single-camera case, and are not described herein again.
  • in embodiments in which the depth of field information is acquired using a depth sensor, operation can proceed in a mode in which one camera works. In embodiments where depth of field information must be acquired from images captured by the first camera 507301 and the second camera 507302, the two cameras are required to operate simultaneously.
  • the central processing unit 501 in the electronic device 500 runs a computer program stored in the memory 502 for acquiring images with different exposure parameters;
  • the graphics processor 508 runs a computer program stored in the memory 502 for performing image synthesis on a plurality of images having different exposure parameters to obtain a first composite image;
  • the central processing unit 501 is further configured to acquire depth information of the first composite image while the graphics processor 508 synthesizes the first composite image;
  • the central processing unit 501 is also used to: determine, according to the acquired depth information, a target area in the first composite image that requires blurring; and
  • blur the target area in the first composite image to obtain the blurred first composite image.
  • the electronic device in this embodiment further includes an additional graphics processor 508. After the central processing unit 501 acquires the images with different exposure parameters, the graphics processor 508, in place of the central processing unit 501, performs image synthesis on the images with different exposure parameters to obtain a first composite image, so that the central processing unit 501 can acquire the depth information of the first composite image while the graphics processor 508 synthesizes the first composite image. Thereby, the efficiency of image processing is improved.
  • the embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to execute the image processing method in any of the above embodiments, for example: first acquiring a plurality of images with different exposure parameters but identical image content; synthesizing the acquired images into a first composite image; acquiring depth information of the first composite image; determining, according to the
  depth information, a target area in the first composite image that requires blurring; and finally blurring the target area in the first composite image to obtain the blurred first composite image.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM).
  • the computer program can be stored in a computer readable storage medium, such as in a memory of the electronic device, and executed by at least one central processing unit within the electronic device, and can include, for example, an implementation of an image processing method during execution.
  • the storage medium may be a magnetic disk, an optical disk, a read only memory, a random access memory, or the like.
  • each functional module may be integrated into one processing chip, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • An integrated module, if implemented in the form of a software functional module and sold or used as a standalone product, may also be stored in a computer readable storage medium such as a read only memory, a magnetic disk or an optical disk.

Abstract

Provided are an image processing method and apparatus, a storage medium, and an electronic device. The image processing method includes: acquiring a plurality of images that have different exposure parameters but identical image content, and synthesizing the acquired images to obtain a first composite image; acquiring depth information of the first composite image, determining, according to the acquired depth information, a target area in the first composite image that requires blurring, and performing blurring on the target area.

Description

Image processing method and apparatus, storage medium, and electronic device
This application claims priority to Chinese Patent Application No. 201810097898.7, filed with the Chinese Patent Office on January 31, 2018 and entitled "Image processing method and apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technologies, and in particular to an image processing method and apparatus, a storage medium, and an electronic device.
Background
At present, users usually capture images with electronic devices that have a shooting function, and can use these devices to record, anytime and anywhere, the events happening around them and the scenes they see. Generally, in order to highlight the shooting target and keep the focus of the captured image on it, the background area of the target in the captured image may be blurred.
Summary
Embodiments of this application provide an image processing method and apparatus, a storage medium, and an electronic device, which can improve the blurring effect of an image.
In a first aspect, an embodiment of this application provides an image processing method, including:
acquiring a plurality of images with different exposure parameters, where the plurality of images have identical image content;
synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
acquiring depth information of the first composite image;
determining, according to the depth information, a target area in the first composite image that requires blurring; and
blurring the target area in the first composite image to obtain a blurred first composite image.
In a second aspect, an embodiment of this application provides an image processing apparatus, including:
an image acquisition module, configured to acquire a plurality of images with different exposure parameters, where the plurality of images have identical image content;
an image synthesis module, configured to synthesize the plurality of images with different exposure parameters to obtain a first composite image;
an information acquisition module, configured to acquire depth information of the first composite image;
an area determination module, configured to determine, according to the depth information, a target area in the first composite image that requires blurring; and
a blurring module, configured to blur the target area in the first composite image to obtain a blurred first composite image.
In a third aspect, an embodiment of this application provides a storage medium having a computer program stored thereon which, when run on a computer, causes the computer to perform the following steps:
acquiring a plurality of images with different exposure parameters, where the plurality of images have identical image content;
synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
acquiring depth information of the first composite image;
determining, according to the depth information, a target area in the first composite image that requires blurring; and
blurring the target area in the first composite image to obtain a blurred first composite image.
In a fourth aspect, an embodiment of this application provides an electronic device including a central processing unit and a memory, the memory storing a computer program, and the central processing unit being configured to invoke the computer program to perform the following steps:
acquiring a plurality of images with different exposure parameters, where the plurality of images have identical image content;
synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
acquiring depth information of the first composite image;
determining, according to the depth information, a target area in the first composite image that requires blurring; and
blurring the target area in the first composite image to obtain a blurred first composite image.
In a fifth aspect, an embodiment of this application further provides an electronic device including a central processing unit, a graphics processor, and a memory, the memory storing a computer program; the central processing unit invokes the computer program to acquire a plurality of images with different exposure parameters;
the graphics processor invokes the computer program to synthesize the plurality of images with different exposure parameters to obtain a first composite image;
the central processing unit is further configured to acquire depth information of the first composite image while the graphics processor synthesizes the first composite image;
and is further configured to determine, according to the depth information, a target area in the first composite image that requires blurring;
and is further configured to blur the target area in the first composite image to obtain a blurred first composite image.
Brief Description of the Drawings
The technical solutions of the present invention and their beneficial effects will become apparent from the following detailed description of specific embodiments taken in conjunction with the accompanying drawings.
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings described below show merely some embodiments of this application, and a person skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of the image processing method provided by an embodiment of this application.
FIG. 2 is a schematic flowchart of the image processing method provided by an embodiment of this application.
FIG. 3 is a schematic diagram of an operation that triggers an image shooting request in an embodiment of this application.
FIG. 4 is an example diagram of obtaining three images with different exposure parameters by exposure bracketing in an embodiment of this application.
FIG. 5 is a schematic diagram of the electronic device acquiring a first image set and a second image set in an embodiment of this application.
FIG. 6 is an example diagram of intra-set image synthesis in an embodiment of this application.
FIG. 7 is a schematic diagram of the arrangement positions of the first camera and the second camera in an embodiment of this application.
FIG. 8 is a schematic diagram of imaging through the first camera and the second camera in an embodiment of this application.
FIG. 9 is a schematic diagram of blurring in an embodiment of this application.
FIG. 10 is an example diagram of blurring the first composite image in an embodiment of this application.
FIG. 11 is another schematic flowchart of the image processing method provided in an embodiment of this application.
FIG. 12 is an example diagram of synthesizing the first composite image and blurring the first composite image in an embodiment of this application.
FIG. 13 is yet another schematic flowchart of the image processing method provided in an embodiment of this application.
FIG. 14 is a schematic structural diagram of the image processing apparatus provided by an embodiment of this application.
FIG. 15 is a schematic structural diagram of the electronic device provided by an embodiment of this application.
FIG. 16 is another schematic structural diagram of the electronic device provided by an embodiment of this application.
FIG. 17 is a detailed schematic structural diagram of the image processing circuit in an embodiment of this application.
FIG. 18 is another detailed schematic structural diagram of the image processing circuit in an embodiment of this application.
Detailed Description
Reference is made to the drawings, in which like reference numerals denote like components. The principles of this application are illustrated as being implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of this application and should not be construed as limiting other specific embodiments not detailed herein.
An embodiment of this application provides an image processing method. The execution subject of the image processing method may be the image processing apparatus provided by the embodiments of this application, or an electronic device integrating the image processing apparatus, where the image processing apparatus may be implemented in hardware or software. The electronic device may be a smartphone, a tablet computer, a palmtop computer, a notebook computer, a desktop computer, or the like.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an application scenario of the image processing method provided by an embodiment of this application. Taking the case where the image processing apparatus is integrated in an electronic device as an example, the electronic device may first acquire a plurality of images with different exposure parameters, where the plurality of images have identical image content; then synthesize the acquired images with different exposure parameters to obtain a first composite image; then acquire depth information of the first composite image; then determine, according to the acquired depth information, a target area in the first composite image that requires blurring; and finally blur the target area in the first composite image to obtain a blurred first composite image.
An embodiment of this application provides an image processing method, including:
acquiring a plurality of images with different exposure parameters, where the plurality of images have identical image content;
synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
acquiring depth information of the first composite image;
determining, according to the depth information, a target area in the first composite image that requires blurring; and
blurring the target area in the first composite image to obtain a blurred first composite image.
In some embodiments, acquiring a plurality of images with different exposure parameters includes:
acquiring the plurality of images with different exposure parameters through a first camera, and synchronously acquiring, through a second camera, at least one image whose exposure parameter is the same as one acquired by the first camera;
and acquiring the depth information of the first composite image includes:
acquiring the depth information of the first composite image according to two images with the same exposure parameter synchronously acquired by the first camera and the second camera.
In some embodiments, acquiring a plurality of images with different exposure parameters includes:
acquiring a plurality of image sets corresponding to different exposure parameters, where each image set includes at least two images, and the images within a set have the same exposure parameter;
performing intra-set image synthesis on each image set to obtain a plurality of second composite images; and
taking the plurality of second composite images as the plurality of images with different exposure parameters.
In some embodiments, performing intra-set image synthesis on each image set to obtain a plurality of second composite images includes:
selecting an image set;
aligning the images within the selected set, and obtaining the average pixel value of each pixel of the aligned images; and
obtaining the second composite image of the selected set from the average pixel values, and returning to the step of selecting an image set until the composite image of every image set is obtained.
In some embodiments, the step of acquiring a plurality of images with different exposure parameters includes:
performing backlight-environment recognition on the object to be shot upon receiving an image shooting request; and
acquiring, when the object to be shot is recognized as being in a backlight environment, a plurality of images of the object with different exposure parameters.
In some embodiments, performing backlight-environment recognition on the object to be shot includes:
acquiring environment parameters of the object to be shot; and
performing backlight-environment recognition on the object according to the acquired environment parameters.
In some embodiments, performing backlight-environment recognition on the object to be shot includes:
acquiring histogram information of the object to be shot in preset channels; and
performing backlight-environment recognition on the object according to the acquired histogram information.
In an embodiment, the depth information is a depth value, and determining, according to the depth information, a target area in the first composite image that requires blurring includes:
determining an area in the first composite image whose depth value reaches a preset depth threshold; and
determining the area whose depth value reaches the preset depth threshold as the target area that requires blurring.
In an embodiment, blurring the target area in the first composite image includes:
dividing the target area in the first composite image into a plurality of sub-target areas corresponding to different depth values;
determining the blur strength of each sub-target area according to its depth value and a preset mapping between depth values and blur strengths; and
blurring each sub-target area according to its blur strength.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the image processing method provided by an embodiment of this application. The specific flow of the method may be as follows:
101. Acquire a plurality of images with different exposure parameters, where the plurality of images have identical image content.
In this embodiment of this application, the electronic device may acquire the plurality of images with different exposure parameters upon receiving a triggered image shooting request. The image shooting request may be triggered in various ways, for example, by a virtual key, by a physical key, or by a voice instruction.
For example, referring to FIG. 3, after operating the electronic device to start a camera application (such as the system application "Camera") and moving the device so that its camera is aimed at the object to be shot (which includes the person shown in FIG. 3 and the scene the person is in), the user may trigger an image shooting request by tapping the "Shoot" key (a virtual key) provided on the application interface.
For another example, after starting a camera application and aiming the camera at the object to be shot, the user may speak the voice instruction "shoot", or directly press a physical shoot key provided on the electronic device, to trigger an image shooting request.
Upon receiving the triggered image shooting request, the electronic device responds to it immediately, that is, shoots the object to be shot with different exposure parameters, thereby acquiring a plurality of images of the object with different exposure parameters. These images differ only in brightness information owing to the different exposure parameters; their image content is identical, namely the image content of the object to be shot. The exposure parameters include, but are not limited to, sensitivity, shutter speed, and aperture size.
In an optional implementation, when shooting, the electronic device may sequentially acquire N locally prestored groups of different exposure parameters and, each time an exposure parameter is acquired, shoot the object with that exposure parameter in combination with the other shooting parameters; in this way it captures a plurality of images corresponding to the N groups of different exposure parameters. The prestored exposure parameters may be acquired in order of the image brightness they produce, from low to high. In addition, apart from the exposure parameters, the other shooting parameters of the captured images are identical.
For example, the electronic device locally prestores two groups of exposure parameters, a first exposure parameter and a second exposure parameter, where the image shot with the first exposure parameter is darker than the image shot with the second. In response to the received image shooting request, the device first acquires the first exposure parameter and shoots the object with it in combination with the other shooting parameters, and then acquires the second exposure parameter and shoots the object with it in combination with the other shooting parameters.
In another optional implementation, the electronic device may shoot the object by exposure bracketing. Specifically, it first meters the object to obtain a metering value, determines the exposure parameter corresponding to the metering value according to a preset mapping between metering values and exposure parameters, and shoots the object with the determined exposure parameter; then, starting from the determined exposure parameter, it boosts and attenuates the parameter by a preset step value and shoots the object with the boosted and attenuated parameters respectively, thereby obtaining a plurality of images with different exposure parameters. The number of boosts and attenuations is not limited: for example, one attenuation and one boost yield three images with different exposure parameters; two attenuations and two boosts yield five.
For example, referring to FIG. 4, the electronic device meters the object and determines that the exposure parameter corresponding to the metering value is Z. It first shoots the object with exposure parameter Z to obtain a first image; then attenuates Z by one step value of 1 ev to obtain Z-1ev and shoots the object with it to obtain a second image; then boosts Z by one step value of 1 ev to obtain Z+1ev and shoots the object with it to obtain a third image. Three images with different exposure parameters are thus obtained, all with identical image content, namely that of the object to be shot.
Optionally, in an embodiment, acquiring a plurality of images with different exposure parameters includes:
performing backlight-environment recognition on the object to be shot upon receiving an image shooting request; and
acquiring, when the object is recognized as being in a backlight environment, a plurality of images of the object with different exposure parameters.
Backlight-environment recognition on the object can be implemented in various ways. For example, in an optional implementation, performing backlight-environment recognition on the object includes:
acquiring environment parameters of the object to be shot; and
performing backlight-environment recognition on the object according to the acquired environment parameters.
In specific implementation, since the electronic device and the object to be shot are in the same environment, the environment parameters of the electronic device may be acquired and used as those of the object. The environment parameters include, but are not limited to, time information, the time zone of the device's location, position information, weather information, and the orientation information of the device.
After the environment parameters of the object are acquired, they may be input into a pre-trained support vector machine classifier, which classifies the input parameters to determine whether the object is in a backlight environment.
For another example, in another optional implementation, performing backlight-environment recognition on the object includes:
acquiring histogram information of the object to be shot in preset channels; and
performing backlight-environment recognition on the object according to the acquired histogram information.
The preset channels include the R, G, and B channels. When acquiring the histogram information of the object, a preview image of the object may be acquired, the histogram information of the preview image in the R, G, and B channels obtained, and this information taken as the histogram information of the object in the preset channels.
The histogram information of the object is then tallied, specifically the number of pixels at different brightness levels, to obtain a statistical result.
After the statistical result is obtained, it is judged whether the result satisfies a preset condition; if so, the object is determined to be in a backlight environment.
Specifically, the preset condition may be set as: the pixel counts in both a first brightness interval and a second brightness interval reach a preset count threshold, and the lowest brightness is below a first preset brightness threshold and/or the highest brightness is above a second preset brightness threshold, where the preset count threshold and the first and second preset brightness thresholds are empirical parameters that can be set by those skilled in the art as needed.
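As a rough sketch of the histogram condition just described, assuming a 256-bin luminance histogram and illustrative interval boundaries (the patent leaves the brightness intervals and all three thresholds as empirical parameters to be set as needed):

```python
def is_backlit(hist, low_thresh, high_thresh, count_thresh):
    """Decide backlighting from a 256-bin brightness histogram.

    hist: list of 256 pixel counts. The scene is treated as backlit when
    both a dark interval and a bright interval hold at least count_thresh
    pixels, and the darkest populated bin is below low_thresh and/or the
    brightest populated bin is above high_thresh.
    """
    dark = sum(hist[:64])      # first brightness interval (assumed bounds)
    bright = sum(hist[192:])   # second brightness interval (assumed bounds)
    populated = [i for i, c in enumerate(hist) if c > 0]
    if not populated:
        return False
    lowest, highest = populated[0], populated[-1]
    return (dark >= count_thresh and bright >= count_thresh
            and (lowest < low_thresh or highest > high_thresh))
```

A scene with both a strong dark peak and a strong bright peak satisfies the condition, while a flat midtone scene does not.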
In other words, in this embodiment, the plurality of images with different exposure parameters are acquired when the object to be shot is in a backlight environment.
It should be noted that the number of images with different exposure parameters acquired in the embodiments of this application is not limited and can be set by those skilled in the art as needed.
For example, take an electronic device of a certain model. When shooting an object in a backlight environment, this device additionally adjusts the exposure parameter: it meters the object to obtain a metering value, obtains the corresponding exposure parameter from a prestored mapping between metering values and exposure parameters, and then boosts the exposure parameter by a preset adjustment amount. The image shot with the boosted parameter is in fact overexposed: its overall brightness is higher, so the brightness of the foreground area where the object lies is moderate and most of the object's image details are preserved, but the brightness of the background area is too high and most of its details are lost.
In view of this, when shooting in a backlight environment, one overexposed image may actually be shot to preserve most of the image details of the foreground area where the object lies, and one underexposed image shot to preserve most of the image details of the background area.
Optionally, in an embodiment, before acquiring the plurality of images with different exposure parameters, the method further includes the following step:
setting, when the object is recognized as being in a backlight environment, a plurality of different exposure parameters according to the current degree of backlighting.
The degree of backlighting may be output by the support vector machine classifier together with the recognition result whenever the output result is "backlight environment".
When the electronic device obtains from the support vector machine classifier the result that the object is in a backlight environment, it simultaneously obtains the output degree of backlighting as the current degree. It then sets a plurality of different exposure parameters corresponding to the current degree of backlighting according to a prestored mapping between backlighting degrees and exposure parameters. In this way, when acquiring the plurality of images with different exposure parameters, the electronic device shoots the object with each of the set exposure parameters, obtaining images whose brightness information differs, ranging from dark to bright, but whose image content is identical, namely that of the object to be shot.
Optionally, in an embodiment, the electronic device may also acquire the plurality of images with different exposure parameters upon detecting a triggered blurring request.
The user may trigger a burst-shooting request in advance to control the electronic device to shoot the object continuously, obtaining a plurality of images with identical image content but different exposure parameters; the user may then select one of the images to trigger a blurring request. Accordingly, upon detecting the triggered blurring request, the electronic device acquires the selected image and the other images whose exposure parameters differ from it but whose image content is the same.
Specifically, in an embodiment, acquiring a plurality of images with different exposure parameters includes:
acquiring a plurality of image sets corresponding to different exposure parameters, where each image set includes at least two images, and the images within a set have the same exposure parameter;
performing intra-set image synthesis on each image set to obtain a plurality of second composite images; and
taking the plurality of second composite images as the plurality of images corresponding to different exposure parameters.
Upon receiving the triggered image shooting request, the electronic device responds immediately, that is, shoots the object with different exposure parameters, shooting a plurality of images for each exposure parameter and thereby obtaining a plurality of image sets corresponding to the different exposure parameters. It should be noted that the number of images in an image set is not specifically limited here; different image sets may contain the same or different numbers of images.
For example, referring to FIG. 5, upon receiving the triggered image shooting request, the electronic device first shoots the object with exposure parameter Z-1ev, obtaining four images with exposure parameter Z-1ev and identical image content (namely that of the object); these four images form a first image set. It then shoots the object with exposure parameter Z+1ev, obtaining four images with exposure parameter Z+1ev and identical image content; these four images form a second image set. Two image sets are thus obtained: the first corresponding to Z-1ev and the second to Z+1ev, and the image content of all images in both sets is the same, namely that of the object to be shot.
Intra-set image synthesis, that is, multi-frame noise-reduction synthesis, is then performed on each image set to obtain a plurality of second composite images, which are taken as the plurality of images corresponding to different exposure parameters; in this way, each of the obtained images with different exposure parameters has high clarity.
Specifically, in an embodiment, performing intra-set image synthesis on each image set includes:
selecting an image set;
aligning the images within the selected set, and obtaining the average pixel value of each pixel of the aligned images; and
obtaining the second composite image of the selected set from the average pixel values, and returning to the step of selecting an image set until the second composite image of every image set is obtained.
When performing intra-set image synthesis, the image sets may be synthesized one by one.
First, an image set is selected, and one image in it is chosen as the reference image. For example, if the content of each image in the set is the same portrait, the image in which the eyes are most open may be chosen as the reference; or the choice may combine eye openness and sharpness: normalize both, weight eye openness by α and sharpness by 1-α, compute the weighted score of each image in the set, and choose the image with the highest score; or simply the sharpest image may be chosen as the reference.
The other images in the set are then aligned to the chosen reference image.
Based on the aligned images, the average pixel value of each pixel is computed. For example, if the selected set contains four images and the pixel at a certain position has pixel values "0.8, 0.9, 1.1, 1.2" in the four images, the average pixel value of that pixel is computed as "1".
The second composite image of the selected set is then obtained from the average pixel values: for example, the pixel values of the reference image may be adjusted to the computed averages, yielding the second composite image of the set; or a new image may be generated from the computed averages and taken as the second composite image of the selected set.
For example, referring to FIG. 6, a selected image set includes four images, namely a first, second, third, and fourth image, which have the same exposure parameter Z and the same image content but all contain some noise; after these images are aligned and synthesized with noise reduction, a second composite image with exposure parameter Z and no noise is obtained.
Based on the synthesis scheme described above, the other image sets are selected in turn and intra-set synthesis is completed, yielding the second composite image of each image set.
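After alignment, the intra-set multi-frame noise reduction above reduces to a per-pixel average. A minimal sketch, with plain nested lists standing in for aligned grayscale frames (the data representation is an assumption for illustration):

```python
def merge_noise_reduced(frames):
    """Average aligned same-exposure frames pixel by pixel.

    frames: list of equally sized 2-D lists (grayscale, already aligned).
    Returns the per-pixel mean image, which suppresses random noise,
    e.g. values 0.8, 0.9, 1.1, 1.2 at one position average to 1.
    """
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```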
102. Synthesize the acquired plurality of images with different exposure parameters to obtain a first composite image.
The first composite image obtained is a high dynamic range (HDR) image. Compared with an ordinary image, an HDR image can provide a larger dynamic range and more image detail; the best image details of each of the plurality of images with different exposure parameters can be used to synthesize the HDR image.
For example, two images with different exposure parameters are acquired: a first image and a second image. Owing to the different exposure parameters, the foreground area of the first image carries abundant image detail while the background area of the second image carries abundant image detail. When performing HDR synthesis on the first and second images, the image detail of the first image's foreground area and of the second image's background area can be used, so that the resulting HDR image includes both, and its image content is the same as that of the first and second images.
It should be noted that the embodiments of this application place no specific restriction on the HDR synthesis technique used, which can be chosen by those skilled in the art as needed. For example, in the embodiments of this application, HDR synthesis may be performed using the following formula:
HDR(i) = [ Σ_{j=1}^{k} w(Z_ij) · Z_ij ] / [ Σ_{j=1}^{k} w(Z_ij) ]
where HDR denotes the synthesized high dynamic range image, HDR(i) denotes the grayscale value of the i-th pixel of the synthesized image, k denotes how many images with different exposure parameters there are, w(Z_ij) denotes the compensation weight of the i-th pixel in the j-th image, the compensation weight being a value of a compensation-weight function that may be obtained from a triangular function or a normal distribution function, and Z_ij denotes the grayscale value of the i-th pixel in the j-th image.
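Read as a weighted average, the formula above can be sketched as follows. The triangular default weight peaking at mid-gray is only one of the options the text mentions, and the 0..255 value range is an assumption:

```python
def hdr_merge(images, weight=lambda z: 1 - abs(z / 127.5 - 1)):
    """Merge k same-content images of different exposure into one image.

    images: list of k grayscale images (2-D lists, values 0..255).
    weight: compensation-weight function w(z); the default is a triangular
    function peaking at mid-gray. Each output pixel is the w-weighted
    average of that pixel across the k images.
    """
    k = len(images)
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            zs = [img[y][x] for img in images]
            ws = [max(weight(z), 1e-6) for z in zs]  # avoid zero division
            out[y][x] = sum(wz * z for wz, z in zip(ws, zs)) / sum(ws)
    return out
```

Well-exposed (mid-gray) pixels thus dominate the merge, which is how the best-exposed details of each input survive in the result.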
Optionally, in an embodiment, when two images with different exposure parameters are acquired, one overexposed and the other underexposed, HDR synthesis may be performed using the following formula:
HDR(i) = m*LE(i) + n*HE(i);
where HDR denotes the synthesized high dynamic range image, HDR(i) denotes the i-th pixel of the synthesized image, LE denotes the underexposed image, LE(i) denotes the i-th pixel of the underexposed image, m denotes the compensation weight of the underexposed image, HE denotes the overexposed image, HE(i) denotes the i-th pixel of the overexposed image, and n denotes the compensation weight of the overexposed image.
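A direct transcription of HDR(i) = m*LE(i) + n*HE(i) for an under-/over-exposed pair; the weights m and n are left as parameters, since the text does not fix them:

```python
def hdr_merge_two(under, over, m=0.5, n=0.5):
    """Per-pixel weighted sum of an underexposed (LE) and an
    overexposed (HE) image, both given as 2-D grayscale lists.

    m and n are the compensation weights of the underexposed and
    overexposed images respectively.
    """
    h, w = len(under), len(under[0])
    return [[m * under[y][x] + n * over[y][x] for x in range(w)]
            for y in range(h)]
```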
103. Acquire depth information of the first composite image.
The depth information of the first composite image is the depth information of the object to be shot to which the first composite image corresponds. The depth information can describe the distance from any pixel of the "object to be shot" in the first composite image to the electronic device.
Specifically, in an embodiment, before acquiring the depth information of the first composite image, the method further includes:
acquiring the depth information of the object to be shot through a depth sensor and caching the acquired depth information;
and acquiring the depth information of the first composite image includes:
acquiring the cached depth information of the object to be shot; and
taking the acquired depth information as the depth information of the first composite image.
The electronic device receives, through the depth sensor provided on it, light energy emitted or reflected by the object to be shot, forms a light-energy distribution function of the object, that is, a grayscale image, and then recovers the depth information of the object on the basis of the grayscale image; alternatively, the electronic device emits energy toward the object through the depth sensor, receives the energy reflected by the object, forms a light-energy distribution function of the object, that is, a grayscale image, and then recovers the depth information of the shooting scene on the basis of the grayscale image.
In other words, the depth information of the object can be acquired through the depth sensor at the same time as the object is shot to acquire the plurality of images with different exposure parameters.
Specifically, referring to FIG. 7, in an embodiment the electronic device includes a first camera and a second camera, and acquiring a plurality of images with different exposure parameters includes:
acquiring the plurality of images with different exposure parameters through the first camera, and synchronously acquiring, through the second camera, at least one image whose exposure parameter is the same as one acquired by the first camera;
and acquiring the depth information of the first composite image includes:
acquiring the depth information of the first composite image according to two images with the same exposure parameter synchronously acquired by the first camera and the second camera.
First, the electronic device shoots the object through the first camera with different exposure parameters, acquiring a plurality of images corresponding to the different exposure parameters, and synchronously shoots the object through the second camera, acquiring at least one image whose exposure parameter is the same as one acquired by the first camera.
Then, from the two images with the same exposure parameter shot synchronously by the first and second cameras, together with the distance between the first and second cameras, the depth information of the object is acquired by a triangulation algorithm and taken as the depth information of the first composite image.
Specifically, the object to be shot comprises multiple objects; the computation of the depth information of one object is described as an example:
Since the first and second cameras are arranged side by side in the same plane of the electronic device with a certain distance between them, the two cameras have parallax. According to the triangulation algorithm, the depth information of the same object in the two synchronously shot images with the same exposure parameter can be computed, that is, the distance from the object to the plane in which the first and second cameras lie.
Referring to FIG. 8, OR denotes the position of the first camera, OT denotes the position of the second camera, the distance between the first and second cameras is B, and the distance from the focal plane to the plane of the two cameras is f.
The electronic device shoots synchronously through the first and second cameras with the same exposure parameter; the first camera images a first image on the focal plane and the second camera images a second image on the focal plane.
P denotes the position of the object in the first image and P' denotes the position of the same object in the second image, where the distance from point P to the left boundary of the first image is XR and the distance from P' to the left boundary of the second image is XT.
Now suppose the distance from the object to the plane of the two cameras is Z; the similar-triangle relations between the object at distance Z and the focal plane at distance f then hold (see formulas 1 and 2 below, which follow from the geometry of FIG. 8).
Using the similarity of the two pairs of triangles, formula 1 and formula 2 are obtained:
Formula 1: B1/Z = (XR' + X1)/(Z - f)
Formula 2: B2/Z = (XT + X2)/(Z - f)
where B1 denotes the distance from the first camera to the projection point of the object, B2 denotes the distance from the second camera to the projection point, XR' denotes the distance from point P to the right boundary of the first image, X1 denotes the distance from the right boundary of the first image to the projection point, and X2 denotes the distance from the left boundary of the second image to the projection point.
Adding formula 1 and formula 2 gives formula 3:
Formula 3: (B1 + B2)/Z = (XR' + X1 + XT + X2)/(Z - f),
that is, B/Z = (XR' + X1 + XT + X2)/(Z - f).
Since the focal-plane width of each of the two cameras is 2K, half of the focal-plane width is K, giving formulas 4 and 5:
Formula 4: (K + X1) + (X2 + K) = B,
that is, B - X1 - X2 = 2K.
Formula 5: XR' + XR = 2K.
From formulas 4 and 5, formula 6 is obtained:
Formula 6: B - X1 - X2 = XR' + XR,
that is, XR' = B - X1 - X2 - XR.
Substituting formula 6 into formula 3 gives formula 7:
Formula 7: B/Z = [(B - X1 - X2 - XR) + X1 + XT + X2]/(Z - f),
that is, B/Z = (B - XR + XT)/(Z - f), which gives Z = Bf/(XR - XT).
Letting (XR - XT) = d and substituting into formula 7 gives formula 8:
Formula 8: Z = Bf/d
where d is the positional difference of the object between the first and second images, namely "XR - XT", and B and f are both fixed values.
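Formula 8 reduces depth recovery to a single division. A minimal sketch of the final step, with units (pixels, millimeters, etc.) left consistent on the caller's side:

```python
def depth_from_disparity(x_r, x_t, baseline, focal):
    """Formula 8: Z = B*f/d with disparity d = XR - XT.

    x_r, x_t: horizontal positions of the same object point in the first
    and second images; baseline: camera distance B; focal: f.
    """
    d = x_r - x_t
    if d == 0:
        raise ValueError("zero disparity: object effectively at infinity")
    return baseline * focal / d
```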
It should be noted that, in an embodiment, step 103 may be performed simultaneously with step 102.
104. Determine, according to the acquired depth information, a target area in the first composite image that requires blurring.
Generally, the background area of the first composite image can be determined from the acquired depth information, and the determined background area taken as the target area that requires blurring.
Specifically, in an embodiment, the depth information is a depth value, and determining, according to the acquired depth information, a target area in the first composite image that requires blurring includes:
determining an area in the first composite image whose depth value reaches a preset depth threshold; and
determining the area whose depth value reaches the preset depth threshold as the target area that requires blurring.
The preset depth threshold is used to delimit whether a pixel lies in the foreground area or the background area; the area whose depth value reaches the preset depth threshold is the background area. Once the background area of the first composite image is determined, it can be determined as the target area that requires blurring.
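The threshold test that separates background from foreground can be sketched as a boolean mask; the threshold value itself is a preset the patent does not fix:

```python
def blur_target_mask(depth_map, depth_threshold):
    """Mark pixels whose depth value reaches the preset threshold.

    depth_map: 2-D list of per-pixel depth values. Returns a same-size
    mask of booleans: True marks the background (target) area to blur,
    False the foreground to keep sharp.
    """
    return [[d >= depth_threshold for d in row] for row in depth_map]
```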
105. Blur the target area in the first composite image to obtain a blurred first composite image.
In this embodiment of this application, once the target area that requires blurring is determined, the target area in the first composite image can be blurred, specifically by means of Gaussian blur.
For example, referring to FIG. 9, when blurring the target area, take a pixel in the target area: suppose its pixel value is 2 and the pixel values of its 8 surrounding pixels are all 1. The average pixel value of the surrounding pixels is computed, and the pixel's value is adjusted to the computed average, namely 1. Numerically this is a "smoothing"; in terms of image effect it produces blurring, the detail of the pixel being lost.
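The FIG. 9 example (a pixel replaced by the mean of its 8 neighbours) can be reproduced directly. This is the single-pixel smoothing step only, used here to illustrate the effect, not a full Gaussian blur:

```python
def mean_filter_pixel(img, y, x):
    """Return the mean of the 8 neighbours of pixel (y, x), as in the
    FIG. 9 example: a pixel of value 2 surrounded by 1s becomes 1.

    img is a 2-D list; (y, x) must not lie on the border.
    """
    neigh = [img[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if not (dy == 0 and dx == 0)]
    return sum(neigh) / len(neigh)
```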
Specifically, in an embodiment, blurring the target area of the first composite image includes:
dividing the target area in the first composite image into a plurality of sub-target areas corresponding to different depth values;
determining the blur strength of each sub-target area according to its depth value and a preset mapping between depth values and blur strengths; and
blurring each sub-target area according to its blur strength.
When blurring, the target area in the first composite image is first divided into a plurality of sub-target areas corresponding to different depth values. For example, set a depth value and add and subtract the same variation to it to obtain a depth-value interval corresponding to that depth value, and aggregate the pixels whose depth values lie in the interval into one sub-target area; set another depth value, obtain its interval in the same way, and aggregate the pixels in that interval into another sub-target area; and so on, obtaining a plurality of sub-target areas corresponding to different depth values.
After the sub-target areas are obtained, the blur strength of each is determined according to its depth value and the preset mapping between depth values and blur strengths. The setting of this mapping is not specifically restricted and can be set by those skilled in the art as needed; for example, the blur strength may be set proportional to the depth value, that is, the larger the depth value, the stronger the blurring.
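One way to realize the preset depth-to-strength mapping is a table of ascending depth bounds. The bounds and strength values below are illustrative assumptions; the text only requires that strength grow with depth:

```python
def blur_strength(depth, mapping=((5.0, 1), (10.0, 2), (float("inf"), 3))):
    """Map a sub-target area's depth value to a blur strength.

    mapping: (upper_depth_bound, strength) pairs in ascending order of
    bound; strength grows with depth, so farther sub-areas blur more.
    """
    for bound, strength in mapping:
        if depth <= bound:
            return strength
    return mapping[-1][1]
```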
Once the blur strength of each sub-target area is determined, each sub-target area can be blurred according to its strength. For example, referring to FIG. 10, the left side shows the first composite image before blurring: the portrait lies in the foreground area and needs no blurring, while three clumps of plants lie in the background area with different depth values that increase from bottom to top. The right side shows the first composite image after blurring: all three clumps are blurred, with the degree of blurring likewise increasing from bottom to top, the lower plants blurred least, the upper plants most, and the middle plants in between.
As can be seen from the above, the embodiments of this application first acquire a plurality of images with different exposure parameters, where the plurality of images have identical image content; then synthesize the acquired images with different exposure parameters to obtain a first composite image; then acquire the depth information of the first composite image; then determine, according to the acquired depth information, a target area in the first composite image that requires blurring; and finally blur the target area in the first composite image to obtain a blurred first composite image. Since the synthesized first composite image carries the image details of the bright and/or dark parts of the different images, the blurred first composite image still carries abundant image detail, so the blurring effect of the image is improved.
On the basis of the method described in the above embodiments, the image processing method of this application is further introduced below. Referring to FIG. 11, the image processing method may include:
201. Acquire a plurality of images with different exposure parameters through a first camera, and synchronously acquire, through a second camera, at least one image whose exposure parameter is the same as one acquired by the first camera.
In this embodiment, referring to FIG. 7, the electronic device includes a first camera and a second camera, and may acquire images synchronously through both cameras upon receiving a triggered image shooting request. The image shooting request may be triggered in various ways, for example, by a virtual key, by a physical key, or by a voice instruction.
Upon receiving the triggered image shooting request, the electronic device responds immediately: it shoots the object through the first camera with different exposure parameters, acquiring a plurality of images corresponding to the different exposure parameters, and synchronously shoots the object through the second camera, acquiring at least one image whose exposure parameter is the same as one acquired by the first camera.
202. Synthesize the plurality of images with different exposure parameters to obtain a first composite image.
The first composite image obtained is a high dynamic range image, which, compared with an ordinary image, can provide a larger dynamic range and more image detail; the best image details of each of the images with different exposure parameters can be used for the synthesis. For details, refer to the related description of the above embodiments, which is not repeated here.
203. Acquire the depth information of the first composite image according to two images with the same exposure parameter synchronously acquired by the first camera and the second camera.
From the two synchronously shot images with the same exposure parameter and the distance between the first and second cameras, the depth information of the object is acquired by a triangulation algorithm and taken as the depth information of the first composite image. For details, refer to the related description of the above embodiments, which is not repeated here.
204. Determine the area in the first composite image whose depth value reaches a preset depth threshold as the target area that requires blurring.
Generally, the background area of the first composite image can be determined from the acquired depth information and taken as the target area that requires blurring. The preset depth threshold delimits whether a pixel lies in the foreground or background area; the area whose depth value reaches the threshold is the background area, which, once determined, can be taken as the target area that requires blurring.
205. Divide the target area in the first composite image into a plurality of sub-target areas corresponding to different depth values.
When blurring, the target area in the first composite image is first divided into sub-target areas corresponding to different depth values: set a depth value, add and subtract the same variation to obtain the depth-value interval corresponding to it, and aggregate the pixels whose depth values lie in the interval into one sub-target area; obtain another sub-target area from another depth value in the same way; and so on, obtaining a plurality of sub-target areas corresponding to different depth values.
206. Determine the blur strength of each sub-target area according to its depth value and a preset mapping between depth values and blur strengths.
After the sub-target areas are obtained, the blur strength of each is determined from its depth value and the preset mapping between depth values and blur strengths. The mapping is not specifically restricted and can be set by those skilled in the art as needed; for example, the blur strength may be set proportional to the depth value, that is, the larger the depth value, the stronger the blurring.
207. Blur each sub-target area according to its blur strength.
Once the blur strengths of the sub-target areas are determined, each sub-target area is blurred accordingly. For example, referring to FIG. 10, the left side shows the first composite image before blurring: the portrait lies in the foreground and needs no blurring, while three clumps of plants lie in the background with depth values increasing from bottom to top; the right side shows the image after blurring, the three clumps all blurred with degrees likewise increasing from bottom to top, the lower plants least, the upper most, and the middle in between.
For example, referring to FIG. 12, the electronic device acquires through the first camera a first image with exposure parameter Z-1ev and a second image with exposure parameter Z+1ev, and synchronously acquires through the second camera a third image with the same exposure parameter as the second image. The first and second images are then synthesized into a first composite image, which preserves the dark-area detail of the first image and the bright-area detail of the second; while the first composite image is being synthesized, its depth information is acquired from the synchronously shot second and third images, which share the same exposure parameter. The target area of the first composite image requiring blurring is then determined from the acquired depth information and blurred: as shown in FIG. 12, the three clumps of plants in the blurred first composite image are all blurred, with degrees increasing from bottom to top, the lower plants least, the upper most, and the middle in between.
Referring to FIG. 13, in yet another embodiment of the image processing method of this application, the method may include:
301. Perform backlight-environment recognition on the object to be shot upon receiving an image shooting request.
The image shooting request may be triggered in various ways, for example, by a virtual key, by a physical key, or by a voice instruction.
Upon receiving the triggered image shooting request, the electronic device first performs backlight-environment recognition on the object to be shot, to determine whether the object is in a backlight environment.
Backlight-environment recognition on the object can be implemented in various ways. For example, in an optional implementation, performing backlight-environment recognition on the object includes:
acquiring environment parameters of the object to be shot; and
performing backlight-environment recognition on the object according to the acquired environment parameters.
In specific implementation, since the electronic device and the object are in the same environment, the environment parameters of the electronic device may be acquired and used as those of the object. The environment parameters include, but are not limited to, time information, the time zone of the device's location, position information, weather information, and the device's orientation information.
After the environment parameters of the object are acquired, they may be input into a pre-trained support vector machine classifier, which classifies the input parameters to determine whether the object is in a backlight environment.
For another example, in another optional implementation, performing backlight-environment recognition on the object includes:
acquiring histogram information of the object in preset channels; and
performing backlight-environment recognition on the object according to the acquired histogram information.
The preset channels include the R, G, and B channels. When acquiring the histogram information of the object, a preview image of the object may be acquired and its histogram information in the R, G, and B channels taken as the histogram information of the object in the preset channels.
The histogram information of the object is then tallied, specifically the number of pixels at different brightness levels, to obtain a statistical result.
After the statistical result is obtained, it is judged whether the result satisfies a preset condition; if so, the object is determined to be in a backlight environment.
Specifically, the preset condition may be set as: the pixel counts in both a first brightness interval and a second brightness interval reach a preset count threshold, and the lowest brightness is below a first preset brightness threshold and/or the highest brightness is above a second preset brightness threshold, where the preset count threshold and the first and second preset brightness thresholds are empirical parameters that can be set by those skilled in the art as needed.
302. When the object is recognized as being in a backlight environment, acquire, through the first camera, a plurality of image sets of the object with different exposure parameters, where each image set includes at least two images, and the images within a set have the same exposure parameter;
and synchronously acquire, through the second camera, at least one image of the object whose exposure parameter is the same as one acquired by the first camera.
In this embodiment, referring to FIG. 7, the electronic device includes a first camera and a second camera, and when the object is in a backlight environment it may acquire images of the object synchronously through both cameras.
Specifically, the electronic device shoots the object through the first camera with different exposure parameters, shooting a plurality of images for each exposure parameter and thereby obtaining a plurality of image sets corresponding to the different exposure parameters; while shooting with the first camera, it synchronously shoots the object through the second camera, acquiring at least one image whose exposure parameter is the same as one acquired by the first camera. It should be noted that the number of images in an image set is not specifically limited here; different image sets may contain the same or different numbers of images.
303. Perform intra-set image synthesis on each image set to obtain a plurality of second composite images.
When performing intra-set image synthesis, the image sets may be synthesized one by one.
First, an image set is selected, and one image in it is chosen as the reference image. For example, if the content of each image in the set is the same portrait, the image in which the eyes are most open may be chosen as the reference; or the choice may combine eye openness and sharpness: normalize both, weight eye openness by α and sharpness by 1-α, compute the weighted score of each image in the set, and choose the image with the highest score; or simply the sharpest image may be chosen as the reference.
The other images in the set are then aligned to the chosen reference image.
Based on the aligned images, the average pixel value of each pixel is computed. For example, if the selected set contains four images and the pixel at a certain position has pixel values "0.8, 0.9, 1.1, 1.2" in the four images, the average pixel value of that pixel is computed as "1".
The second composite image of the selected set is then obtained from the average pixel values: for example, the pixel values of the reference image may be adjusted to the computed averages, yielding the second composite image of the set; or a new image may be generated from the computed averages and taken as the second composite image of the selected set.
Based on the synthesis scheme described above, the other image sets are selected in turn and intra-set synthesis is completed, yielding the second composite image of each image set.
304. Synthesize the plurality of second composite images into a first composite image;
and, at the same time, acquire the depth information of the first composite image according to two images with the same exposure parameter synchronously acquired by the first camera and the second camera.
The first composite image obtained is a high dynamic range image, which, compared with an ordinary image, can provide a larger dynamic range and more image detail; the best image details of each of the images with different exposure parameters can be used for the synthesis. For details, refer to the related description of the above embodiments, which is not repeated here.
It should be noted that, since the image content of the first composite image and of the plurality of second composite images is the same, in this embodiment the depth information of the first composite image is acquired, while the first composite image is being synthesized, from the two images with the same exposure parameter synchronously acquired by the first and second cameras.
From those two synchronously shot images and the distance between the first and second cameras, the depth information of the object is acquired by a triangulation algorithm and taken as the depth information of the first composite image; for details, refer to the related description of the above embodiments, which is not repeated here.
305. Determine the area in the first composite image whose depth value reaches a preset depth threshold as the target area that requires blurring.
Generally, the background area of the first composite image can be determined from the acquired depth information and taken as the target area that requires blurring. The preset depth threshold delimits whether a pixel lies in the foreground or background area; the area whose depth value reaches the threshold is the background area, which, once determined, can be taken as the target area that requires blurring.
306. Divide the target area in the first composite image into a plurality of sub-target areas corresponding to different depth values.
When blurring, the target area in the first composite image is first divided into sub-target areas corresponding to different depth values: set a depth value, add and subtract the same variation to obtain the depth-value interval corresponding to it, and aggregate the pixels whose depth values lie in the interval into one sub-target area; repeat for other depth values, obtaining a plurality of sub-target areas corresponding to different depth values.
307. Determine the blur strength of each sub-target area according to its depth value and a preset mapping between depth values and blur strengths.
After the sub-target areas are obtained, the blur strength of each is determined from its depth value and the preset mapping between depth values and blur strengths. The mapping is not specifically restricted and can be set by those skilled in the art as needed; for example, the blur strength may be set proportional to the depth value, that is, the larger the depth value, the stronger the blurring.
308. Blur each sub-target area according to its blur strength.
Once the blur strengths of the sub-target areas are determined, each sub-target area is blurred accordingly. For example, referring to FIG. 10, the left side shows the first composite image before blurring: the portrait lies in the foreground and needs no blurring, while three clumps of plants lie in the background with depth values increasing from bottom to top; the right side shows the image after blurring, the three clumps all blurred with degrees likewise increasing from bottom to top, the lower plants least, the upper most, and the middle in between.
An embodiment further provides an image processing apparatus. Referring to FIG. 14, FIG. 14 is a schematic structural diagram of the image processing apparatus provided by an embodiment of this application. The image processing apparatus is applied to an electronic device and includes an image acquisition module 401, an image synthesis module 402, an information acquisition module 403, an area determination module 404, and a blurring module 405, as follows:
the image acquisition module 401 is configured to acquire a plurality of images with different exposure parameters, where the plurality of images have identical image content;
the image synthesis module 402 is configured to synthesize the plurality of images with different exposure parameters to obtain a first composite image;
the information acquisition module 403 is configured to acquire depth information of the first composite image;
the area determination module 404 is configured to determine, according to the acquired depth information, a target area in the first composite image that requires blurring; and
the blurring module 405 is configured to blur the target area in the first composite image to obtain a blurred first composite image.
In an embodiment, when acquiring the plurality of images with different exposure parameters, the image acquisition module 401 may be configured to:
acquire the plurality of images with different exposure parameters through a first camera, and synchronously acquire, through a second camera, at least one image whose exposure parameter is the same as one acquired by the first camera;
and when acquiring the depth information of the first composite image, the information acquisition module 403 may be configured to:
acquire the depth information of the first composite image according to two images with the same exposure parameter synchronously acquired by the first camera and the second camera.
In an embodiment, when acquiring the plurality of images with different exposure parameters, the image acquisition module 401 may be configured to:
acquire a plurality of image sets corresponding to different exposure parameters, where each image set includes at least two images, and the images within a set have the same exposure parameter;
perform intra-set image synthesis on each image set to obtain a plurality of second composite images; and
take the obtained plurality of composite images as the plurality of images corresponding to different exposure parameters.
In an embodiment, when performing intra-set image synthesis on each image set to obtain the plurality of second composite images, the image acquisition module 401 may be configured to:
select an image set;
align the images within the selected set, and obtain the average pixel value of each pixel of the aligned images; and
obtain the second composite image of the selected set from the average pixel values, and continue to select an image set until the composite image of every image set is obtained.
In an embodiment, when acquiring the plurality of images with different exposure parameters, the image acquisition module 401 may be configured to:
perform backlight-environment recognition on the object to be shot upon receiving an image shooting request; and
acquire, when the object is recognized as being in a backlight environment, a plurality of images of the object with different exposure parameters.
In an embodiment, when performing backlight-environment recognition on the object, the image acquisition module 401 may be configured to:
acquire environment parameters of the object to be shot; and
perform backlight-environment recognition on the object according to the acquired environment parameters.
In an implementation, when performing backlight-environment recognition on the object, the image acquisition module 401 may be configured to:
acquire histogram information of the object in preset channels; and
perform backlight-environment recognition on the object according to the acquired histogram information.
In an embodiment, the depth information is a depth value, and when determining, according to the depth information, the target area in the first composite image that requires blurring, the area determination module 404 may be configured to:
determine an area in the first composite image whose depth value reaches a preset depth threshold; and
determine the area whose depth value reaches the preset depth threshold as the target area that requires blurring.
In an embodiment, when blurring the target area in the first composite image, the blurring module 405 may be configured to:
divide the target area in the first composite image into a plurality of sub-target areas corresponding to different depth values;
determine the blur strength of each sub-target area according to its depth value and a preset mapping between depth values and blur strengths; and
blur each sub-target area according to its blur strength.
In specific implementation, the above modules may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities. The image processing apparatus belongs to the same concept as the image processing method in the above embodiments, and any method provided in the image processing method embodiments can be run on the apparatus; for the specific implementation process, refer to the image processing method embodiments, which are not repeated here.
An embodiment of this application further provides an electronic device. Referring to FIG. 15, the electronic device 500 includes a central processing unit 501 and a memory 502, which are electrically connected.
The central processing unit 501 is the control center of the electronic device 500. It connects all parts of the electronic device through various interfaces and lines and, by running or loading the computer program stored in the memory 502 and invoking the data stored in the memory 502, performs the various functions of the electronic device 500 and processes data.
The memory 502 can be used to store software programs and modules; the central processing unit 501 performs various functional applications and data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the computer programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the electronic device. In addition, the memory 502 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device. Accordingly, the memory 502 may further include a memory controller to provide the central processing unit 501 with access to the memory 502.
In this embodiment of this application, the central processing unit 501 in the electronic device 500 performs the image processing method in any of the above embodiments by running the computer program stored in the memory 502, for example: acquiring a plurality of images with different exposure parameters, where the plurality of images have identical image content; synthesizing the plurality of images with different exposure parameters to obtain a first composite image; acquiring depth information of the first composite image; determining, according to the depth information, a target area in the first composite image that requires blurring; and blurring the target area in the first composite image to obtain a blurred first composite image.
It should be noted that the electronic device belongs to the same concept as the image processing method in the above embodiments, and any method provided in the image processing method embodiments can be run on the electronic device; for the specific implementation process, refer to the image processing method embodiments, which are not repeated here.
Referring also to FIG. 16, in some implementations the electronic device 500 may further include a display 503, a radio frequency circuit 504, an audio circuit 505, a power supply 506, an image processing circuit 507, and a graphics processor 508, where the display 503, the radio frequency circuit 504, the audio circuit 505, and the power supply 506 are each electrically connected to the central processing unit 501.
The display 503 can be used to display information entered by the user or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. The display 503 may include a display panel, which in some implementations may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
The radio frequency circuit 504 can be used to transmit and receive radio frequency signals so as to establish wireless communication with network devices or other electronic devices, and to exchange signals with them.
The audio circuit 505 can be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power supply 506 can be used to supply power to the components of the electronic device 500. In some embodiments, the power supply 506 may be logically connected to the central processing unit 501 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
The image processing circuit 507 may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Referring to FIG. 17, in an embodiment the image processing circuit 507 includes an ISP processor 5071 and a control logic 5072. The image data captured by the camera 5073 is first processed by the ISP processor 5071, which analyzes the image data to capture image statistics usable to determine one or more control parameters of the camera 5073. The camera 5073 may include a camera with one or more lenses 50731 and an image sensor 50732. The image sensor 50732 may include a color filter array (such as a Bayer filter), may capture the light intensity and wavelength information gathered by each of its imaging pixels, and may provide a set of raw image data that can be processed by the ISP processor 5071. A sensor 5074 (such as a gyroscope) may provide acquired image-processing parameters (such as anti-shake parameters) to the ISP processor 5071 based on the sensor 5074 interface type. The sensor 5074 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 50732 may also send raw image data to the sensor 5074, which may provide it to the ISP processor 5071 based on the sensor 5074 interface type, or the sensor 5074 may store the raw image data in the image memory 5075.
The ISP processor 5071 processes the raw image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 5071 may perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 5071 may also receive image data from the image memory 5075. For example, the sensor 5074 interface sends raw image data to the image memory 5075, from which it is then provided to the ISP processor 5071 for processing. The image memory 5075 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
Upon receiving raw image data from the image sensor 50732 interface, from the sensor 5074 interface, or from the image memory 5075, the ISP processor 5071 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 5075 for further processing before being displayed. The ISP processor 5071 receives the data to be processed from the image memory 5075 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 5071 may be output to the display 503 for viewing by the user and/or further processed by a graphics engine or the graphics processor 508. In addition, the output of the ISP processor 5071 may also be sent to the image memory 5075, and the display 503 may read image data from the image memory 5075. In one embodiment, the image memory 5075 may be configured to implement one or more frame buffers. The output of the ISP processor 5071 may further be sent to an encoder/decoder 5076 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 503. The encoder/decoder 5076 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 5071 may be sent to the control logic 5072. For example, the statistics may include image sensor 50732 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black-level compensation, and lens 50731 shading correction. The control logic 5072 may include a processor and/or microcontroller executing one or more routines (such as firmware), which can determine, from the received statistics, the control parameters of the camera 5073 and of the ISP processor 5071. For example, the control parameters of the camera 5073 may include sensor 5074 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters), camera flash control parameters, lens 50731 control parameters (e.g., focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (e.g., during RGB processing), as well as lens 50731 shading correction parameters.
The graphics processor 508 converts and drives the display data that the electronic device needs to display, and provides line-scan signals to the display 503 to control the correct display of the display 503.
Further, on the basis of the image processing circuit 507 described in the above embodiment, the image processing circuit 507 is further introduced. Referring to FIG. 18, the difference from the above embodiment is that the camera 5073 includes a first camera 507301 and a second camera 507302: the first camera 507301 includes a first lens 507311 and a first image sensor 507321, and the second camera 507302 includes a second lens 507312 and a second image sensor 507322.
The performance parameters (for example, focal length, aperture size, resolving power, etc.) of the first camera 507301 and the second camera 507302 are not restricted in any way. The first and second cameras may be arranged in the same plane of the electronic device, for example, both on the back or both on the front. The installation distance between the dual cameras in the electronic device may be determined according to the size of the electronic device and/or the desired shooting effect; for example, to give the image contents captured by the first camera 507301 and the second camera 507302 a high overlap, the first camera 507301 may be installed as close to the second camera 507302 as possible, for example, within 10 mm.
The functions of the ISP processor 5071, the control logic 5072, and other parts not shown (such as sensors and image memories) are the same as described for the single-camera case and are not repeated here.
In the embodiments of this application, embodiments that acquire depth-of-field information with a depth sensor can operate in a mode in which one camera works; embodiments that must acquire depth-of-field information from images captured by the first camera 507301 and the second camera 507302 require the two cameras to operate simultaneously.
In one embodiment, the central processing unit 501 in the electronic device 500 runs a computer program stored in the memory 502 to acquire a plurality of images with different exposure parameters;
the graphics processor 508 runs a computer program stored in the memory 502 to synthesize the plurality of images with different exposure parameters into a first composite image;
the central processing unit 501 is further configured to acquire depth information of the first composite image while the graphics processor 508 is synthesizing the first composite image;
the central processing unit 501 is further configured to:
determine, according to the acquired depth information, a target region in the first composite image that needs blurring processing; and
perform blurring processing on the target region in the first composite image to obtain a blurred first composite image.
The difference from the electronic device shown in FIG. 15 is that the electronic device in this embodiment further includes an additional graphics processor 508. After the central processing unit 501 acquires the plurality of images with different exposure parameters, the graphics processor 508, instead of the central processing unit 501, synthesizes those images into the first composite image, so that the central processing unit 501 can acquire the depth information of the first composite image while the graphics processor 508 is synthesizing it. The efficiency of image processing is thereby improved.
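The division of labor described above, where the graphics processor synthesizes the composite image while the central processing unit concurrently acquires depth information, can be sketched as two overlapping tasks. The sketch below uses Python threads purely to illustrate the overlap; the synthesis and depth functions are simplified stand-ins, not the patent's actual GPU and CPU implementations.

```python
import threading
import numpy as np

def synthesize(images, out):
    # Stand-in for the GPU path: merge the bracketed frames (simple average here).
    out["composite"] = np.mean(np.stack(images), axis=0)

def depth_from_pair(left, right, out):
    # Stand-in for the CPU path: depth estimation from a synchronized pair.
    out["depth"] = np.abs(left.astype(np.int32) - right.astype(np.int32))

frames = [np.full((4, 4), v, dtype=np.uint8) for v in (60, 120, 180)]
result = {}
worker = threading.Thread(target=synthesize, args=(frames, result))
worker.start()                                 # "GPU" merges in the background
depth_from_pair(frames[0], frames[0], result)  # "CPU" computes depth meanwhile
worker.join()                                  # both results ready here
```

The point of the arrangement is that depth estimation does not have to wait for synthesis to finish, which is exactly the efficiency gain the embodiment claims.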
An embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to execute the image processing method of any of the above embodiments, for example: first acquiring a plurality of images with different exposure parameters, the plurality of images having the same image content; then synthesizing the acquired images with different exposure parameters to obtain a first composite image; then acquiring depth information of the first composite image; then determining, according to the acquired depth information, a target region in the first composite image that needs blurring processing; and finally performing blurring processing on the target region in the first composite image to obtain a blurred first composite image. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
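As a rough illustration of the method summarized above, the following sketch chains the steps together: averaging the bracketed frames into a composite, thresholding a given depth map to select the target region, and applying a box blur only inside that region. The averaging scheme, threshold, and kernel size are illustrative assumptions; the patent does not prescribe particular synthesis or blur algorithms.

```python
import numpy as np

def process(images, depth, depth_threshold=1.0, k=3):
    """Minimal end-to-end sketch of the claimed method (grayscale images)."""
    composite = np.mean(np.stack(images), axis=0)   # synthesize first composite image
    target = depth >= depth_threshold               # region that needs blurring
    pad = k // 2
    padded = np.pad(composite, pad, mode="edge")
    blurred = np.zeros_like(composite)
    h, w = composite.shape
    for i in range(h):                              # naive k x k box blur
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    # Keep foreground pixels sharp, replace background pixels with blurred ones.
    return np.where(target, blurred, composite)
```

A production implementation would use a proper HDR fusion and a depth-dependent blur kernel, but the control flow follows the five claimed steps.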
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
It should be noted that, for the image processing method of the embodiments of the present application, those of ordinary skill in the art will understand that all or part of the flow of implementing the image processing method of the embodiments of the present application can be completed by controlling the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, for example in the memory of an electronic device, and executed by at least one central processing unit in the electronic device; the execution may include the flow of the embodiments of the image processing method. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
For the image processing apparatus of the embodiments of the present application, its functional modules may be integrated into one processing chip, may each exist physically on their own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The image processing method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present application; the above descriptions of the embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

  1. An image processing method, comprising:
    acquiring a plurality of images with different exposure parameters, wherein the plurality of images have the same image content;
    synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
    acquiring depth information of the first composite image;
    determining, according to the depth information, a target region in the first composite image that needs blurring processing; and
    performing blurring processing on the target region in the first composite image to obtain a blurred first composite image.
  2. The image processing method according to claim 1, wherein the acquiring a plurality of images with different exposure parameters comprises:
    acquiring, through a first camera, the plurality of images with different exposure parameters, and synchronously acquiring, through a second camera, at least one image with the same exposure parameters as an image acquired by the first camera; and
    the acquiring depth information of the first composite image comprises:
    acquiring the depth information of the first composite image according to two images with the same exposure parameters acquired synchronously by the first camera and the second camera.
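One way to obtain depth from the synchronized, equally exposed pair of claim 2 is stereo triangulation on rectified images, where depth = focal length × baseline / disparity. A minimal sketch follows; the focal length (in pixels) and baseline are hypothetical values, though the description suggests mounting the two cameras within about 10 mm of each other:

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_mm=10.0):
    """Triangulate depth (in mm) for a rectified stereo pair.

    focal_px and baseline_mm are illustrative assumptions; matching
    exposure between the two frames (as claim 2 requires) is what makes
    the disparity search between them reliable."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_mm / disparity_px
```

For example, with these assumed parameters a 20-pixel disparity corresponds to a point 500 mm away; smaller disparities map to larger depths.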
  3. The image processing method according to claim 1, wherein the acquiring a plurality of images with different exposure parameters comprises:
    acquiring a plurality of image sets corresponding to different exposure parameters, wherein each image set includes at least two images, and the images within a set have the same exposure parameters;
    performing intra-set image synthesis on each image set to obtain a plurality of second composite images; and
    using the plurality of second composite images as the plurality of images with different exposure parameters.
  4. The image processing method according to claim 3, wherein the performing intra-set image synthesis on each image set to obtain a plurality of second composite images comprises:
    selecting an image set;
    aligning the images within the selected image set, and acquiring the average pixel value of each pixel of the aligned images within the set; and
    obtaining the second composite image of the selected image set according to the average pixel values, and returning to the step of selecting an image set until the composite image of each image set is obtained.
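The intra-set synthesis of claim 4 amounts to a per-pixel average over aligned, equally exposed frames, which suppresses random noise while keeping the exposure level of the set. A minimal sketch, assuming alignment has already been done upstream (for example by feature-based registration):

```python
import numpy as np

def merge_set(images):
    """Average a set of aligned frames taken with identical exposure
    parameters into one 'second composite image' (claim 4 sketch)."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)  # per-pixel average pixel value
```

Repeating this for each exposure bracket yields the plurality of second composite images that claim 3 then feeds into the HDR-style synthesis.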
  5. The image processing method according to claim 1, wherein the acquiring a plurality of images with different exposure parameters comprises:
    performing backlight environment recognition on an object to be photographed when an image shooting request is received; and
    acquiring a plurality of images with different exposure parameters corresponding to the object to be photographed when it is recognized that the object to be photographed is in a backlight environment.
  6. The image processing method according to claim 5, wherein the performing backlight environment recognition on the object to be photographed comprises:
    acquiring environment parameters of the object to be photographed; and
    performing backlight environment recognition on the object to be photographed according to the acquired environment parameters.
  7. The image processing method according to claim 5, wherein the performing backlight environment recognition on the object to be photographed comprises:
    acquiring histogram information of the object to be photographed in a preset channel; and
    performing backlight environment recognition on the object to be photographed according to the acquired histogram information.
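A simple heuristic consistent with claim 7 is to inspect the luminance histogram: a backlit scene tends to contain large populations of both very dark pixels (the subject) and very bright pixels (the light source behind it). The thresholds below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def is_backlit(gray, dark_thresh=50, bright_thresh=205, frac=0.25):
    """Sketch of histogram-based backlight recognition: flag the scene as
    backlit when at least `frac` of the pixels are very dark AND at least
    `frac` are very bright (hypothetical thresholds)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    dark = hist[:dark_thresh].sum() / total
    bright = hist[bright_thresh:].sum() / total
    return bool(dark >= frac and bright >= frac)
```

When this test fires, the capture pipeline would switch to bracketed exposures as in claim 5; a uniformly lit scene fails the test and can be shot with a single exposure.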
  8. The image processing method according to claim 1, wherein the depth information is a depth value, and determining, according to the depth information, a target region in the first composite image that needs blurring processing comprises:
    determining a region in the first composite image whose depth value reaches a preset depth threshold; and
    determining the region whose depth value reaches the preset depth threshold as the target region that needs blurring processing.
  9. The image processing method according to claim 8, wherein performing blurring processing on the target region in the first composite image comprises:
    dividing the target region in the first composite image into a plurality of sub-target regions corresponding to different depth values;
    determining the blur strength corresponding to each sub-target region according to the depth value corresponding to each sub-target region and a preset mapping between depth values and blur strengths; and
    performing blurring processing on each sub-target region separately according to the blur strength corresponding to that sub-target region.
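The preset mapping between depth values and blur strength in claim 9 can be as simple as a monotone lookup over depth bands, so that farther sub-regions receive stronger blur. The bands and strengths below are illustrative assumptions:

```python
def blur_strength(depth_mm, mapping=((1000, 1), (2000, 2), (4000, 3))):
    """Sketch of claim 9's preset depth-to-blur-strength mapping: walk the
    (ascending) depth bands and keep the strength of the deepest band
    reached. Band boundaries and strengths are hypothetical."""
    strength = 0
    for band_depth, band_strength in mapping:
        if depth_mm >= band_depth:
            strength = band_strength
    return strength
```

Each sub-target region would then be blurred with a kernel sized by its returned strength (for example, a Gaussian whose radius grows with the strength), producing the graduated bokeh effect.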
  10. An image processing apparatus, comprising:
    an image acquisition module configured to acquire a plurality of images with different exposure parameters, wherein the plurality of images have the same image content;
    an image synthesis module configured to synthesize the plurality of images with different exposure parameters to obtain a first composite image;
    an information acquisition module configured to acquire depth information of the first composite image;
    a region determination module configured to determine, according to the depth information, a target region in the first composite image that needs blurring processing; and
    a blurring processing module configured to perform blurring processing on the target region in the first composite image to obtain a blurred first composite image.
  11. A storage medium having a computer program stored thereon, wherein, when the computer program runs on a computer, the computer is caused to perform the following steps:
    acquiring a plurality of images with different exposure parameters, wherein the plurality of images have the same image content;
    synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
    acquiring depth information of the first composite image;
    determining, according to the depth information, a target region in the first composite image that needs blurring processing; and
    performing blurring processing on the target region in the first composite image to obtain a blurred first composite image.
  12. An electronic device comprising a central processing unit and a memory, the memory storing a computer program, wherein the central processing unit is configured to perform the following steps by invoking the computer program:
    acquiring a plurality of images with different exposure parameters, wherein the plurality of images have the same image content;
    synthesizing the plurality of images with different exposure parameters to obtain a first composite image;
    acquiring depth information of the first composite image;
    determining, according to the depth information, a target region in the first composite image that needs blurring processing; and
    performing blurring processing on the target region in the first composite image to obtain a blurred first composite image.
  13. The electronic device according to claim 12, wherein, when acquiring the plurality of images with different exposure parameters, the central processing unit is configured to perform the following step:
    acquiring, through a first camera, the plurality of images with different exposure parameters, and synchronously acquiring, through a second camera, at least one image with the same exposure parameters as an image acquired by the first camera; and
    when acquiring the depth information of the first composite image, the central processing unit is configured to perform the following step:
    acquiring the depth information of the first composite image according to two images with the same exposure parameters acquired synchronously by the first camera and the second camera.
  14. The electronic device according to claim 12, wherein, when acquiring the plurality of images with different exposure parameters, the central processing unit is configured to perform the following steps:
    acquiring a plurality of image sets corresponding to different exposure parameters, wherein each image set includes at least two images, and the images within a set have the same exposure parameters;
    performing intra-set image synthesis on each image set to obtain a plurality of second composite images; and
    using the plurality of second composite images as the plurality of images with different exposure parameters.
  15. The electronic device according to claim 14, wherein, when performing intra-set image synthesis on each image set to obtain a plurality of second composite images, the central processing unit is configured to perform the following steps:
    selecting an image set;
    aligning the images within the selected image set, and acquiring the average pixel value of each pixel of the aligned images within the set; and
    obtaining the second composite image of the selected image set according to the average pixel values, and returning to the step of selecting an image set until the composite image of each image set is obtained.
  16. The electronic device according to claim 12, wherein, when acquiring the plurality of images with different exposure parameters, the central processing unit is configured to perform the following steps:
    performing backlight environment recognition on an object to be photographed when an image shooting request is received; and
    acquiring a plurality of images with different exposure parameters corresponding to the object to be photographed when it is recognized that the object to be photographed is in a backlight environment.
  17. The electronic device according to claim 16, wherein, when performing backlight environment recognition on the object to be photographed, the central processing unit is configured to perform the following steps:
    acquiring environment parameters of the object to be photographed; and
    performing backlight environment recognition on the object to be photographed according to the acquired environment parameters.
  18. The electronic device according to claim 12, wherein the depth information is a depth value, and when performing blurring processing on the target region in the first composite image, the central processing unit is configured to perform the following steps:
    dividing the target region in the first composite image into a plurality of sub-target regions corresponding to different depth values;
    determining the blur strength corresponding to each sub-target region according to the depth value corresponding to each sub-target region and a preset mapping between depth values and blur strengths; and
    performing blurring processing on each sub-target region separately according to the blur strength corresponding to that sub-target region.
  19. The electronic device according to claim 18, wherein, when performing blurring processing on the target region in the first composite image, the central processing unit is configured to perform the following steps:
    dividing the target region in the first composite image into a plurality of sub-target regions corresponding to different depth values;
    determining the blur strength corresponding to each sub-target region according to the depth value corresponding to each sub-target region and a preset mapping between depth values and blur strengths; and
    performing blurring processing on each sub-target region separately according to the blur strength corresponding to that sub-target region.
  20. An electronic device comprising a central processing unit, a graphics processor, and a memory, the memory storing a computer program, wherein the central processing unit is configured to acquire a plurality of images with different exposure parameters by invoking the computer program;
    the graphics processor is configured to synthesize, by invoking the computer program, the plurality of images with different exposure parameters to obtain a first composite image;
    the central processing unit is further configured to acquire depth information of the first composite image while the graphics processor is synthesizing the first composite image;
    further configured to determine, according to the depth information, a target region in the first composite image that needs blurring processing; and
    further configured to perform blurring processing on the target region in the first composite image to obtain a blurred first composite image.
PCT/CN2018/120683 2018-01-31 2018-12-12 Image processing method and apparatus, storage medium, and electronic device WO2019148978A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810097898.7A CN108322646B (zh) 2018-01-31 2018-01-31 Image processing method and apparatus, storage medium, and electronic device
CN201810097898.7 2018-01-31

Publications (1)

Publication Number Publication Date
WO2019148978A1 true WO2019148978A1 (zh) 2019-08-08

Family

ID=62890387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/120683 WO2019148978A1 (zh) 2018-01-31 2018-12-12 图像处理方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN108322646B (zh)
WO (1) WO2019148978A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222075A (zh) * 2022-01-28 2022-03-22 Guangzhou Huaduo Network Technology Co., Ltd. Mobile-side image processing method and apparatus, device, medium, and product

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108322646B (zh) 2018-01-31 2020-04-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, storage medium, and electronic device
CN108718388B (zh) * 2018-08-29 2020-02-11 Vivo Mobile Communication Co., Ltd. Photographing method and mobile terminal
CN109413152B (zh) * 2018-09-25 2021-02-26 Shanghai Jinsheng Communication Technology Co., Ltd. Image processing method and apparatus, storage medium, and electronic device
CN110072051B (zh) * 2019-04-09 2021-09-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus based on multiple frames of images
CN110072052B (zh) * 2019-04-09 2021-08-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus based on multiple frames of images, and electronic device
CN110166709B (zh) * 2019-06-13 2022-03-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Night scene image processing method and apparatus, electronic device, and storage medium
CN110290300A (zh) * 2019-06-28 2019-09-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Device imaging method and apparatus, storage medium, and electronic device
CN110443766B (zh) * 2019-08-06 2022-05-31 Xiamen Meitu Technology Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
CN112995490A (zh) * 2019-12-12 2021-06-18 Huawei Technologies Co., Ltd. Image processing method, terminal photographing method, medium, and system
CN113129241B (zh) * 2019-12-31 2023-02-07 Realme Chongqing Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer-readable medium, and electronic device
CN111416936B (zh) * 2020-03-24 2021-09-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN112261307B (zh) * 2020-09-27 2022-08-19 Yealink (Xiamen) Network Technology Co., Ltd. Image exposure method and apparatus, and storage medium
CN113225606B (zh) * 2021-04-30 2022-09-23 Shanghai Bilibili Technology Co., Ltd. Video bullet-screen comment processing method and apparatus
CN113298735A (zh) * 2021-06-22 2021-08-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120120279A1 (en) * 2010-11-12 2012-05-17 Altek Corporation Image capturing device and image synthesis method thereof
CN103841324A (zh) * 2014-02-20 2014-06-04 Xiaomi Inc. Shooting processing method and apparatus, and terminal device
CN105791707A (zh) * 2015-12-31 2016-07-20 Beijing Kingsoft Internet Security Software Co., Ltd. Image processing method and apparatus, and electronic device
CN106993112A (zh) * 2017-03-09 2017-07-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Depth-of-field-based background blurring method and apparatus, and electronic apparatus
CN107493432A (zh) * 2017-08-31 2017-12-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, mobile terminal, and computer-readable storage medium
CN107563979A (zh) * 2017-08-31 2018-01-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, computer-readable storage medium, and computer device
CN107592453A (zh) * 2017-09-08 2018-01-16 Vivo Mobile Communication Co., Ltd. Shooting method and mobile terminal
CN108322646A (zh) * 2018-01-31 2018-07-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, storage medium, and electronic device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9124762B2 (en) * 2012-12-20 2015-09-01 Microsoft Technology Licensing, Llc Privacy camera
CN105959585B (zh) * 2016-05-12 2019-08-16 Nanchang Black Shark Technology Co., Ltd. Multi-level backlight detection method and apparatus
CN107241559B (zh) * 2017-06-16 2020-01-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Portrait photographing method and apparatus, and imaging device
CN107635093A (zh) * 2017-09-18 2018-01-26 Vivo Mobile Communication Co., Ltd. Image processing method, mobile terminal, and computer-readable storage medium
CN107610046A (zh) * 2017-10-24 2018-01-19 Shanghai Wingtech Electronic Technology Co., Ltd. Background blurring method, apparatus, and system
CN107592473A (zh) * 2017-10-31 2018-01-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Exposure parameter adjustment method and apparatus, electronic device, and readable storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222075A (zh) * 2022-01-28 2022-03-22 Guangzhou Huaduo Network Technology Co., Ltd. Mobile-side image processing method and apparatus, device, medium, and product
CN114222075B (zh) * 2022-01-28 2023-08-01 Guangzhou Huaduo Network Technology Co., Ltd. Mobile-side image processing method and apparatus, device, medium, and product

Also Published As

Publication number Publication date
CN108322646B (zh) 2020-04-10
CN108322646A (zh) 2018-07-24

Similar Documents

Publication Publication Date Title
WO2019148978A1 (zh) Image processing method and apparatus, storage medium, and electronic device
CN110445988B (zh) Image processing method and apparatus, storage medium, and electronic device
US11582400B2 (en) Method of image processing based on plurality of frames of images, electronic device, and storage medium
WO2020259179A1 (zh) Focusing method, electronic device, and computer-readable storage medium
CN109218628B (zh) Image processing method and apparatus, electronic device, and storage medium
US10630906B2 (en) Imaging control method, electronic device and computer readable storage medium
US20200068121A1 (en) Imaging Processing Method and Apparatus for Camera Module in Night Scene, Electronic Device and Storage Medium
WO2020034737A1 (zh) Imaging control method and apparatus, electronic device, and computer-readable storage medium
WO2019105154A1 (en) Image processing method, apparatus and device
CN110691193B (zh) Camera switching method and apparatus, storage medium, and electronic device
US20210014411A1 (en) Method for image processing, electronic device, and computer readable storage medium
CN110290289B (zh) Image noise reduction method and apparatus, electronic device, and storage medium
US20200045219A1 (en) Control method, control apparatus, imaging device, and electronic device
CN110072052B (zh) Image processing method and apparatus based on multiple frames of images, and electronic device
CN110191291B (zh) Image processing method and apparatus based on multiple frames of images
CN111028189A (zh) Image processing method and apparatus, storage medium, and electronic device
CN107948538B (zh) Imaging method and apparatus, mobile terminal, and storage medium
CN110445989B (zh) Image processing method and apparatus, storage medium, and electronic device
CN111028190A (zh) Image processing method and apparatus, storage medium, and electronic device
WO2020038087A1 (zh) Shooting control method and apparatus in super night scene mode, and electronic device
US20220166930A1 (en) Method and device for focusing on target subject, and electronic device
US10805508B2 (en) Image processing method, and device
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
WO2019105297A1 (zh) Image blurring method and apparatus, mobile device, and storage medium
CN110349163B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18904298

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18904298

Country of ref document: EP

Kind code of ref document: A1