CN113298735A - Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number: CN113298735A
Application number: CN202110690391.4A
Authority: CN (China)
Prior art keywords: image, depth, thread, blurring, reference image
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 王顺飞
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • G06T5/70
    • G06T5/90
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30201 Face

Abstract

The embodiment of the application discloses an image processing method, an image processing device, an electronic device, and a storage medium. The method comprises the following steps: performing image quality enhancement processing on multiple frames of images with different exposure values through a first thread to obtain an enhanced image that fuses the multiple frames; generating, through a second thread, a depth image of a reference image among the multiple frames, the depth image comprising at least depth information of a background area in the reference image; and generating a blurring parameter according to the depth image and blurring the enhanced image using the blurring parameter. By implementing the embodiment of the application, the computations for image quality enhancement and image blurring can be processed in parallel, improving image quality while reducing the time the algorithm takes.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, most electronic devices such as smartphones have a shooting function. However, limited by the hardware of the imaging device, a captured image may suffer from quality problems such as underexposure and overexposure. To improve image quality, some electronic devices optimize the captured image through software algorithms. In practice, however, the data volume of an image is large and image optimization algorithms are computationally heavy, so the algorithms take too long.
Disclosure of Invention
The embodiment of the application discloses an image processing method, an image processing device, an electronic device, and a storage medium, which allow the computations for image quality enhancement and image blurring to be processed in parallel, improving image quality while reducing the time the algorithm takes.
The embodiment of the application discloses an image processing method, which comprises the following steps: performing image quality enhancement processing on multi-frame images with different exposure values through a first thread to obtain an enhanced image fused with the multi-frame images; generating a depth image of a reference image in the multi-frame image through a second thread; the depth image at least comprises depth information of a background area in the reference image; and generating a blurring parameter according to the depth image, and blurring the enhanced image by using the blurring parameter.
An embodiment of the present application discloses an image processing apparatus, comprising: a first processing module for performing image quality enhancement processing on multiple frames of images with different exposure values through a first thread to obtain an enhanced image that fuses the multiple frames; a second processing module for generating, through a second thread, a depth image of a reference image among the multiple frames, the depth image comprising at least depth information of a background area in the reference image; and a blurring module for generating a blurring parameter according to the depth image and blurring the enhanced image using the blurring parameter.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize any image processing method disclosed by the embodiment of the application.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein the computer program realizes any image processing method disclosed by the embodiment of the application when being executed by a processor.
Compared with the related art, the embodiments of the application have the following beneficial effects:
Image quality enhancement processing is performed on multiple frames of images with different exposure values through a first thread to obtain an enhanced image that fuses the multiple frames, and a depth image of the reference image among the frames is generated through a second thread. Since the depth image comprises at least depth information of the background area in the reference image, a blurring parameter can be generated on its basis and used to blur the quality-enhanced image. The finally generated image therefore carries both the image quality enhancement effect and the image blurring effect, improving image quality. In addition, because image enhancement and image blurring are performed on two different parallel threads, they can be computed at the same time, reducing the time the algorithm takes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a schematic diagram of an image processing circuit according to an embodiment;
FIG. 2 is a flow diagram of an image processing method according to one embodiment;
FIG. 3 is a flowchart illustrating an embodiment of a process for performing image quality enhancement via a first thread;
FIG. 4 is a flow diagram illustrating a process for generating a depth image via a second thread according to one embodiment;
FIG. 5 is a flow diagram that illustrates another exemplary embodiment of a method for image processing;
FIG. 6 is a flow diagram that illustrates another embodiment of generating a depth image via a second thread;
FIG. 7 is a flow diagram that illustrates another exemplary embodiment of a disclosed image processing method;
FIG. 8 is an exemplary diagram of a blurring region in an enhanced image according to one embodiment disclosed herein;
FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
In the related art, the quality of an image captured by an electronic device is easily affected by scene brightness. For example, in a low-illuminance scene a captured image is likely to be underexposed, while in a highlight scene it is likely to be overexposed. To reduce image quality problems caused by scene brightness, image quality enhancement processing is often performed on the captured image. Algorithms for image quality enhancement may include, but are not limited to, High Dynamic Range (HDR) algorithms.
In addition to image quality enhancement, the electronic device may perform image blurring on a captured image so that, when a portrait is captured, the background is softened and the portrait subject stands out. If a captured image needs both image quality enhancement and image blurring, the electronic device generally performs image quality enhancement first and then blurs the enhanced result. In practice, it is found that image quality enhancement algorithms such as HDR often take multiple frames of images as their processing object, so the computation is heavy and slow, and the overall image processing takes a long time.
The embodiments of the application disclose an image processing method, an image processing device, an electronic device, and a storage medium, which allow the computations for image quality enhancement and image blurring to be processed in parallel, ensuring image quality while reducing the time the algorithm takes. Details follow.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image processing circuit according to an embodiment. The image processing circuit can be applied to electronic devices such as a smartphone, smart tablet, or smart watch, but is not limited to these. As shown in fig. 1, the image processing circuit may include an imaging device (camera) 110, an attitude sensor 120, an image memory 130, an Image Signal Processing (ISP) processor 140, control logic 150, and a display 160.
The image processing circuitry includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. Image sensor 114 may include an array of color filters (e.g., Bayer filters), and image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, hall sensor, accelerometer, etc.) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
In addition, the image sensor 114 may also transmit raw image data to the attitude sensor 120, the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of interface of the attitude sensor 120, or the attitude sensor 120 may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be a portion of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives the processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
In one embodiment, the ISP processor 140 may set the auto exposure parameter to a plurality of different exposure values and transmit the auto exposure parameter to the control logic 150. Control logic 150 may determine control parameters for imaging device 110 and control parameters for attitude sensor 120 based on the auto-exposure parameters. At least the integration time of the exposure control may be determined according to the auto exposure parameter, so that the imaging device 110 may capture a plurality of frames of images with different exposure values and transmit the plurality of frames of images to the ISP processor 140.
The ISP processor 140 may perform image quality enhancement processing on the multi-frame images with different exposure values through the first thread to obtain an enhanced image with a fused multi-frame image. Meanwhile, the ISP processor 140 may determine a reference image from the multi-frame image through the second thread and generate a depth image of the reference image, where the depth image may include at least depth information of a background area in the reference image.
After obtaining the enhanced image of the first thread and the depth image of the second thread, the ISP processor 140 may generate a blurring parameter according to the depth image, and perform blurring processing on the enhanced image by using the generated blurring parameter.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image processing method according to an embodiment, which is applicable to any one of the electronic devices, and is not limited specifically. As shown in fig. 2, the method may include the steps of:
210. and performing image quality enhancement processing on the multi-frame images with different exposure values through the first thread to obtain an enhanced image fused with the multi-frame images.
A thread is an actual unit of execution within a process; it is a single sequential control flow, and multiple threads in one process can run in parallel. The first thread and the second thread may be any two different parallel threads, for example two parallel threads in the same image processing process, but are not limited thereto.
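As a minimal sketch of this two-thread structure (assuming numpy image frames; enhance_hdr, estimate_depth, make_blur_params, and apply_blur are hypothetical stand-ins for the steps described in this embodiment):

```python
from concurrent.futures import ThreadPoolExecutor

def process_capture(frames, reference):
    # Run image quality enhancement and depth-image generation in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        enhanced_future = pool.submit(enhance_hdr, frames)     # first thread (step 210)
        depth_future = pool.submit(estimate_depth, reference)  # second thread (step 220)
        enhanced = enhanced_future.result()
        depth = depth_future.result()
    params = make_blur_params(depth)      # step 230: blurring parameters from depth
    return apply_blur(enhanced, params)   # blur the enhanced image
```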
The electronic device can shoot multiple frames of images at different exposure values. The image format of the frames may be RAW, YUV, or RGB, but is not limited thereto. The frames may include a low-exposure image with a low exposure value, a high-exposure image with a high exposure value, and a normal-exposure image with a moderate exposure value. The low-exposure image has a low exposure value, a short exposure time, and dark brightness; the high-exposure image has a high exposure value, a long exposure time, and bright brightness; the normal-exposure image has a moderate exposure value, exposure time, and brightness.
The electronic equipment can perform image quality enhancement processing on the multi-frame images with different exposure values in the first thread, and the image quality enhancement processing can fuse information in the multi-frame images to obtain an enhanced image with improved image quality. The picture quality enhancement processing algorithm may include, but is not limited to, HDR algorithm, multi-frame noise reduction algorithm.
220. And generating the depth image of the reference image in the multi-frame images through the second thread.
The electronic device may select the reference image after obtaining the multi-frame images having different exposure values. Alternatively, the reference image may be an image of the plurality of frames of images in which the exposure value is within a target range, and the target range may be smaller than the first exposure value and larger than the second exposure value. The first exposure value and the second exposure value may be set according to actual business requirements, and are not particularly limited. For example, the target range may be set to the exposure value range corresponding to the normal exposure image according to business experience, i.e., the reference image may be the normal exposure image. Further optionally, the first exposure value may also be a maximum exposure value in the multi-frame image, and the second exposure value may be a minimum exposure value in the multi-frame image. That is, the reference image may be an image of the multi-frame image in which the exposure value is in the middle range.
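As an illustration only, taking the first and second exposure values as the bracket's maximum and minimum EV, reference selection reduces to picking a frame whose exposure value lies strictly between them:

```python
def pick_reference(frames, exposure_values):
    """Pick the frame whose exposure value lies strictly between the
    bracket's minimum and maximum EV (i.e., the normal-exposure frame)."""
    lo, hi = min(exposure_values), max(exposure_values)
    for frame, ev in zip(frames, exposure_values):
        if lo < ev < hi:
            return frame
    return frames[len(frames) // 2]  # fallback: middle frame of the bracket
```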
It should be noted that the determination of the reference image may be performed after obtaining the multi-frame image and before the steps 210 and 220, and the steps 210 and 220 may be performed simultaneously.
After the electronic device determines the reference image, the electronic device may generate a depth image of the reference image through the second thread. The depth image may include at least depth information of a background region in the reference image; alternatively, the depth image may also include depth information of each pixel point in the reference image. Wherein the depth information may comprise at least a depth value indicating a physical distance between an object in the background area and the imaging device.
The electronic device may perform depth estimation on the reference image in the second thread to determine depth information such as a depth value of each pixel point in the reference image. The electronic device may perform depth estimation on the reference image by using methods such as structured light, Time of Flight (TOF), binocular stereo imaging, monocular phase detection, monocular depth estimation based on depth learning or machine learning, and the like, but is not limited thereto.
For example, in depth estimation through structured light, the electronic device may project a specific light signal to the surface of the object to be photographed, and the imaging device captures reflection information of the specific light signal while photographing an image, so as to calculate depth information of the object according to the change of the light signal brought by the surface of the object.
For example, in depth estimation by TOF, the electronic device may emit invisible light such as infrared light, capture a reflected light signal of the invisible light reflected by an object by an invisible light sensor such as an infrared sensor, and determine depth information of the object to be photographed according to a flight time required from emission to reception of the invisible light.
For example, when depth estimation is performed through binocular stereo imaging, the electronic device may photograph the same photographed object through two imaging devices, and perform binocular matching and triangulation on images photographed by the two imaging devices, thereby determining depth information of the photographed object based on a principle of triangulation.
For example, when monocular depth estimation is performed by a method based on depth learning or machine learning, algorithms such as monoDepth, SfMLearner, vid2depth, GeoNet, and the like may be used.
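None of the models named above is reproduced here; as a stand-in illustration of deep-learning monocular depth estimation, the publicly available MiDaS model can be loaded through torch.hub (the image path is a placeholder, and MiDaS outputs relative inverse depth rather than metric depth):

```python
import cv2
import torch

# Load the small MiDaS model and its matching input transform via torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("reference.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
with torch.no_grad():
    prediction = midas(transform(img))         # shape: (1, H', W')
depth = prediction.squeeze().cpu().numpy()     # relative inverse depth map
```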
After obtaining the depth of field information of each pixel point in the reference image, the electronic device may further determine, in the second thread, a background region in the reference image according to the depth of field information in the depth of field image. The depth of field value of the background area pixel point of the reference image in the depth of field image is generally larger than that of the foreground area pixel point in the depth of field image.
Optionally, a depth of field threshold may be set, and the electronic device may identify a pixel point having a depth of field value greater than the depth of field threshold as a background area pixel point of the reference image. The depth of field threshold may be set according to actual service requirements, or the depth of field threshold may also be determined according to depth of field information of each pixel in the depth of field image, for example, the depth of field threshold may be a median of the depth of field values of each pixel, and is not particularly limited.
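A minimal numpy sketch of this thresholding rule, using the median depth value as the configurable threshold:

```python
import numpy as np

def background_mask(depth_map):
    # Pixels whose depth value exceeds the threshold are treated as background.
    threshold = np.median(depth_map)  # one of the threshold choices mentioned above
    return depth_map > threshold      # boolean mask: True = background pixel
```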
230. And generating a blurring parameter according to the depth image, and blurring the enhanced image by using the generated blurring parameter.
The electronic device may generate the blurring parameter according to depth of field information of the background region in the depth of field image. The blurring parameter acts on the enhanced image and can be used for blurring the enhanced image. Blurring parameters may include, but are not limited to: blurring levels or blurring regions.
The blurring level may be a parameter indicating a degree of blurring the image region, and the higher the blurring level is, the more blurred the image region after blurring; conversely, the lower the blurring level, the clearer the image area after blurring processing.
For example, the electronic device may obtain the depth value of each pixel included in the background region of the depth image, compute the average depth value, and determine the blurring level from it. The average depth value may be positively correlated with the blurring level: the larger the average depth value, the higher the blurring level and the more blurred the image region after blurring.
For example, the electronic device may also preset a correspondence between depth values and blurring levels, and look up the blurring level corresponding to each pixel of the background region in the depth image according to that pixel's depth value.
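Both strategies can be sketched as follows; the 0-7 level scale and the lookup-table contents are assumptions for illustration, not values from this application:

```python
import numpy as np

def level_from_mean_depth(depth_map, bg_mask, max_level=7):
    # Average-depth strategy: a larger mean background depth gives a higher
    # blurring level (positive correlation).
    mean_depth = depth_map[bg_mask].mean()
    return min(max_level, int(mean_depth / depth_map.max() * max_level))

# Lookup-table strategy: a preset depth-value -> blurring-level mapping
# (hypothetical table values).
DEPTH_TO_LEVEL = {1.0: 1, 3.0: 3, 8.0: 6}

def level_from_table(depth_value):
    nearest = min(DEPTH_TO_LEVEL, key=lambda d: abs(d - depth_value))
    return DEPTH_TO_LEVEL[nearest]
```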
The blurring region may refer to a region of the image in the enhanced image that needs to be blurred. When the electronic device performs blurring processing on the enhanced image, blurring processing can be performed on a blurring area in the enhanced image, and other image areas can be kept unchanged.
For example, the electronic device may determine a background region of the enhanced image according to the background region of the depth image, and further determine the background region of the enhanced image as a blurring region, and determine a foreground region of the enhanced image other than the background region as a region that remains unchanged. The electronic device may directly determine the background region of the depth image as the background region of the enhanced image, or correct the background region of the depth image and determine the corrected background region as the background region of the enhanced image.
In this embodiment of the application, the enhanced image may include one or more blurring regions, and the blurring level corresponding to each blurring region may be the same or different, and is not limited specifically.
It should be noted that although the reference image and the enhanced image are different images, boundaries of a foreground region and a background region in the reference image and the enhanced image are similar, and depth information of each pixel in the reference image is similar to depth information of each pixel in the enhanced image. Therefore, the blurring parameter generated from the depth image of the reference image can be used as it is for blurring the enhanced image, and a good blurring effect can be obtained even in the enhanced image.
The electronic device may perform step 230 by the first thread; alternatively, step 230 may also be performed by the second thread; alternatively, the first thread may be ended after the step 210 is completed, the second thread may be ended after the step 220 is completed, and the step 230 may be additionally executed by a new third thread, which is not limited specifically.
For example, if step 230 is executed by the first thread, then once step 220 has produced the depth image, the depth image in the second thread may be transmitted to the first thread; the first thread generates a blurring parameter according to the depth image and blurs the enhanced image using the blurring parameter.
For example, if step 230 is executed by the second thread, after the step 210 is executed to obtain the enhanced image, the enhanced image in the first thread may be transmitted to the second thread, a blurring parameter is generated by the second thread according to the depth image obtained by the step 220, and the enhanced image is blurred by using the blurring parameter.
Since the first thread and the second thread may execute simultaneously, the enhanced image in the first thread and the depth image in the second thread may be generated at different times. If step 230 is executed in the thread whose image is generated later, the other image can be transmitted between threads while the depth image or the enhanced image is still being generated, which helps further reduce the overall time the algorithm takes. For example, when the image quality enhancement in the first thread uses a computation-heavy algorithm such as HDR, and the depth image generation in the second thread uses a lighter method such as TOF, the depth image in the second thread may be finished earlier than the enhanced image in the first thread. In that case, if step 230 is executed in the first thread, the depth image generated by the second thread can be transmitted to the first thread while the first thread is still generating the enhanced image, without waiting for the enhanced image to finish. The first thread can then generate the blurring parameter from the received depth image immediately after the enhanced image is ready and blur the enhanced image with it.
It can be seen that, in the foregoing embodiment, the computations of image quality enhancement and image blurring may be processed through two different parallel threads, and the blurring parameter may be computed on the basis of the result of image quality enhancement without performing image quality enhancement first, but may be computed by using the depth-of-field image, which is beneficial to reducing the time consumption of the algorithm. Meanwhile, the generated blurring parameters still act on the result of image quality enhancement, namely the blurring parameters can act on the enhanced image, so that the finally generated image not only has an image quality enhancement effect, but also has an image blurring effect, and the image quality can be ensured while the time consumption of the algorithm is reduced.
To better explain the image processing method disclosed in the embodiments of the present application, the steps performed in the first thread and the second thread are described below.
Referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of performing an image quality enhancement process by a first thread. It should be noted that the steps shown in fig. 3 may be executed by the first thread, and details are not described below. As shown in fig. 3, the following steps may be included:
310. and respectively carrying out brightness alignment, image registration and moving object detection on the multi-frame images with different exposure values.
Brightness alignment may refer to adjusting the multiple frames of images, which differ in brightness because of their different exposure values, to the same brightness.
Image registration refers to identifying pixel points that belong to the same object across the multiple frames and pairing them as matching point pairs. During capture, slight differences may exist between the frames due to handshake or the like, and these differences can be corrected through image registration. For example, the electronic device may extract feature descriptors from each frame and perform feature matching across frames according to the extracted descriptors to obtain candidate point pairs. The candidate pairs may then be screened, for example by removing mismatched pairs with the RANSAC algorithm, to obtain the final matching point pairs. The electronic device computes transformation parameters from the matching point pairs between each frame and can register the frames according to these parameters.
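A standard OpenCV recipe consistent with this description: descriptor extraction, matching, RANSAC screening, then warping. The detector choice and the parameters are illustrative, not mandated by this embodiment:

```python
import cv2
import numpy as np

def register_to_reference(frame, reference):
    orb = cv2.ORB_create(2000)                        # extract feature descriptors
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_frm)         # candidate matching point pairs
    src = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC removes mismatched pairs while estimating the transformation parameters.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))      # register frame to the reference
```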
The moving object detection may refer to detecting an object in a moving state.
320. And determining a weight map when the multi-frame images are fused according to the detection result of the moving target of each frame of image in the multi-frame images.
The moving object detection result may include an image region where the moving object is located in the single frame image, and the electronic device may assign a lower fusion weight to the image region where the moving object is located, thereby obtaining a weight map.
330. And carrying out image fusion on the multi-frame images with different exposure values according to the weight map to obtain a fused image.
By performing steps 310-330, the electronic device can synthesize the multiple frames with different exposure values into a high-dynamic-range HDR image.
340. The fused image is mapped from a high dynamic range to a low dynamic range to obtain an enhanced image.
The electronic device may map the fused image from the high dynamic range to a Low Dynamic Range (LDR) through a compression algorithm such as tone mapping. Optionally, if the frames processed by the first thread are in RAW format, after mapping the fused image to the low dynamic range, an image format conversion may be performed to convert the low-dynamic-range fused image from RAW to an RGB or YUV image, and the format-converted image is used as the enhanced image.
Therefore, the electronic device can fuse multiple frames with different exposure values in the first thread to generate an enhanced image with full light-and-shadow levels and little overexposure or underexposure. On this basis, blurring the enhanced image yields a final image that both highlights the shooting subject and has a full light-and-shadow effect.
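A rough OpenCV sketch of this fuse-then-compress pipeline; note that the application's own fusion uses the motion-aware weight maps of steps 310-330 rather than this stock Debevec merge, and the exposure times are placeholders:

```python
import cv2
import numpy as np

def enhance(images, exposure_times):
    # images: list of aligned uint8 frames; exposure_times: e.g. [1/30, 1/8, 1/2]
    times = np.asarray(exposure_times, dtype=np.float32)
    hdr = cv2.createMergeDebevec().process(images, times)    # fused HDR image
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)  # tone mapping to LDR
    return np.clip(ldr * 255, 0, 255).astype(np.uint8)
```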
Referring to fig. 4, fig. 4 is a flowchart illustrating a process of generating a depth image by a second thread according to an embodiment. It should be noted that the steps shown in fig. 4 may be executed by the second thread, and details are not described below. As shown in fig. 4, the following steps may be included:
410. and identifying a portrait area and a hair area in the reference image to obtain a portrait segmentation result and a hair matting result.
The electronic equipment can identify the portrait area in the reference image to obtain the portrait segmentation result. The portrait segmentation result may be a portrait mask that may be used to indicate a portrait region in the reference image.
Methods for identifying the portrait area may include, but are not limited to, the following image segmentation methods: graph-theory-based segmentation, clustering-based segmentation, semantic segmentation, instance segmentation, segmentation based on a DeepLab-series network model, segmentation based on a U-shaped network (U-Net), or segmentation based on a Fully Convolutional Network (FCN).
Alternatively, the method of identifying the portrait area may also include, but is not limited to, the following portrait matting methods: traditional matting methods without deep learning, such as Poisson matting, Bayesian matting based on Bayesian theory, data-driven machine learning matting, or closed-form matting; or matting based on deep learning using artificial neural networks, such as Convolutional Neural Networks (CNN).
The electronic device can also identify a hair region of the reference image to obtain a hair matting result. The hair matting result can be a hair mask indicating a hair region in the reference image.
Wherein the hair region of the reference image can be identified by a deep learning method. For example, a hair identification model comprising an encoder and a decoder can be constructed, and the hair identification model is trained by using hair sample data, so that the hair identification model has hair identification capability and can identify pixel points belonging to hair. The hair sample data may include the portrait data and a hair mask label carried by the portrait data, the hair mask label indicating a hair region in the portrait data. The electronic equipment inputs the reference image into the hair identification model, and the hair identification model can identify whether each pixel point included in the reference image belongs to hair or not, so that the hair area of the reference image can be determined according to the pixel point identified as belonging to the hair.
In step 410, the electronic device may identify the portrait area and the hair area of the reference image separately, i.e., the steps of identifying the portrait area and identifying the hair area may be processed in parallel.
Alternatively, the electronic device may recognize the portrait area of the reference image to obtain the portrait segmentation result, and then recognize the hair area of the reference image according to the portrait segmentation result. Wherein:
the electronic equipment can identify the portrait area of the reference image to obtain a portrait segmentation result. Optionally, the portrait segmentation result may be a three-valued portrait mask, which may be used to indicate a portrait area, a background area, and a boundary area of the portrait area or the background area of the reference image.
And the electronic equipment carries out channel splicing on the portrait segmentation result and the reference image to obtain a spliced image. The electronic device may further identify the hair region of the stitched image as the hair region of the reference image. The identification of the hair region is carried out based on the portrait segmentation result and the reference image, and the accuracy of hair identification can be improved.
For example, the reference image may be an RGB image, and the result of segmenting the human image may be a three-value mask of the reference image, where the three-value mask includes three different pixel values respectively corresponding to a pixel value of a pixel point in a human image region, a pixel value of a pixel point in a background region, and a pixel value of a pixel point in a boundary region between the human image region and the background region. After obtaining the ternary mask, the electronic device may stitch the ternary mask into the 4 th channel of the reference image, except for the R, G, B channel, to obtain a stitched image. The electronic device may input the stitched image to the hair recognition model described above, and determine the hair region of the reference image according to the hair region output by the hair recognition model.
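The channel stitch itself is a simple concatenation; a minimal sketch assuming an HxWx3 RGB reference and an HxW three-valued mask:

```python
import numpy as np

def stitch_channels(reference_rgb, trimap):
    # Append the three-valued portrait mask as the 4th channel of the RGB
    # reference image, producing the stitched 4-channel input.
    return np.concatenate([reference_rgb, trimap[..., None]], axis=-1)
```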
420. And carrying out depth estimation on the reference image to obtain a depth image of the reference image.
The electronic device may perform depth estimation on the reference image by methods such as structured light, Time of Flight (TOF), binocular stereo imaging, monocular phase detection, or monocular depth estimation based on deep learning or machine learning, without specific limitation. Monocular depth estimation methods may include, but are not limited to, the multi-scale local planar guidance (BTS) depth estimation method.
For example, a depth estimation model including an encoder and a decoder may be constructed and trained using depth sample data, which may include portrait data and depth information carried by the portrait data, such that the depth estimation model has depth estimation capabilities. The electronic equipment inputs the reference image into the depth estimation model, and the depth estimation model can calculate depth information corresponding to each pixel point included in the reference image, so that the depth image of the reference image is obtained.
430. And correcting the foreground region and the background region of the depth image according to the portrait segmentation result and the hair matting result to obtain the depth-of-field image of the reference image.
The depth image comprises depth information corresponding to each pixel point in the reference image, so that the electronic equipment can determine a foreground area and a background area of the depth image according to the depth information, and further generate the depth image of the reference image.
For example, a depth threshold may be set, and the depth information includes a depth value; the electronic equipment can identify pixel points with the depth of field value larger than the depth of field threshold value in the depth image as background area pixel points; and setting the pixel points with the depth of field value less than or equal to the depth of field threshold value as foreground region pixel points.
The foreground and background regions divided based on the depth information may deviate somewhat from the actual foreground and background. Therefore, the electronic device can further correct the foreground region and the background region of the depth image using the portrait segmentation result and the hair matting result, with the portrait segmentation result and the hair matting result serving as the reference during correction.
For example, the electronic device may compare a background region determined based on the depth information with a portrait region indicated by the portrait segmentation result or a hair region indicated by the hair matting result, and correct an overlapping region to belong to the foreground region if the background region and the portrait region or the hair region have the overlapping region. Or, the electronic device may compare the foreground region determined based on the depth information with the portrait region indicated by the portrait segmentation result or the hair region indicated by the hair matting result, and modify the unique region to belong to the background region if the foreground region includes the portrait region or the unique region that does not exist in the hair region.
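A sketch of these two correction rules with boolean masks; applying both rules together effectively lets the portrait and hair masks decide the split wherever they disagree with the depth-based division:

```python
import numpy as np

def correct_regions(fg_mask, bg_mask, portrait_mask, hair_mask):
    subject = portrait_mask | hair_mask
    # Rule 1: background overlapping the portrait/hair region becomes foreground.
    fg_mask = fg_mask | (bg_mask & subject)
    bg_mask = bg_mask & ~subject
    # Rule 2: foreground regions absent from portrait/hair become background.
    bg_mask = bg_mask | (fg_mask & ~subject)
    fg_mask = fg_mask & subject
    return fg_mask, bg_mask
```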
In this way, correcting the foreground and background regions of the depth image with the portrait segmentation result and the hair matting result makes the division of the corrected foreground and background more accurate, and the corrected depth image can be used as the depth-of-field image of the reference image. The electronic device thus generates a depth-of-field image that both contains the depth information of each pixel and accurately indicates the foreground and background areas of the reference image.
In the foregoing embodiment, the electronic device may perform portrait recognition, hair recognition, and depth estimation on the reference image in the second thread, and may fuse the portrait segmentation result, the hair matting result, and the depth image obtained by depth estimation to generate the depth-of-field image. The depth-of-field image not only contains the depth information of each pixel but also accurately indicates the foreground and background areas of the reference image, so accurate blurring parameters can be generated with it as the reference.
Exemplarily, please refer to fig. 5, and fig. 5 is a flowchart illustrating another image processing method according to an embodiment. As shown in fig. 5:
a plurality of images having different exposure values may be input to the first thread 510, and the first thread 510 may perform image quality enhancement processing such as HDR to obtain an enhanced image.
A reference image is determined from the multiple frames with different exposure values, and the reference image is input to the second thread 520. The portrait area and the hair area of the reference image are identified through the second thread 520, depth estimation is performed on the reference image, and then the portrait segmentation result, the hair matting result, and the depth image obtained through depth estimation are fused to obtain the depth-of-field image. As shown in fig. 5, the portrait area of the reference image may be recognized first to obtain the portrait segmentation result, and the hair area may then be recognized according to the portrait segmentation result.
The electronic equipment can generate a blurring parameter according to the depth-of-field image output by the second thread, and perform blurring processing on the enhanced image output by the first thread by using the blurring parameter, so that the finally obtained blurring image not only has an image quality enhancement effect, but also has an image blurring effect, and the time consumption of an algorithm can be reduced and the image quality can be ensured.
In one embodiment, to further reduce the algorithm time consumption, the foregoing steps 410 and 420 may be processed in parallel by two different sub-threads in the second thread. That is, the second thread may include: a first sub-thread and a second sub-thread. Referring to fig. 6, fig. 6 is a schematic flow chart illustrating another embodiment of generating a depth image by a second thread. As shown in fig. 6, the following steps may be included:
610. and preprocessing the reference image through the first sub-thread to obtain a preprocessed reference image.
The electronic device may perform pre-processing on the reference image through the first sub-thread, where the pre-processing may include one or more of rotation, scaling, normalization, and the like.
The rotation operation may be an operation of rotating the reference image by a certain angle about a certain point. The electronic device may determine the shooting direction of the reference image from its width and height: when the width is greater than the height, the reference image was shot horizontally (landscape); when the height is greater than the width, it was shot vertically (portrait). Alternatively, the shooting direction may be determined from the shooting direction value recorded by the imaging device that shot the reference image. The shooting direction may include: horizontal or vertical.
The zoom operation may be an operation of reducing or enlarging the image size of the reference image. For example, if the image size of the input image of the network model does not coincide with the image size of the reference image, the electronic device may perform a reduction or enlargement operation on the reference image so that the reduced or enlarged image size of the reference image coincides with the image size of the input image of the network model. For example, if the image size of the input image of the network model is 640 × 480, the image size of the reference image needs to be reduced or enlarged to 640 × 480. The aforementioned network models may include, but are not limited to: the human image segmentation model is used for identifying a human image area, the hair identification model is used for identifying a hair area, and the depth estimation model is used for carrying out depth estimation.
The normalization operation maps the image data of each pixel in the reference image into a fixed numerical range. It may consist of subtracting a mean from the RGB three-channel values of each pixel and then dividing by a scale. For example, with a mean of 127.5, the operation on the value X of any RGB channel of a pixel can be expressed as (X - 127.5)/127.5, which maps the value into [-1, 1]. Alternatively, the normalization operation may divide the RGB three-channel values of each pixel directly by 255, i.e., X/255, which maps the values into [0, 1].
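A sketch of this preprocessing chain; the 640 x 480 size and the 127.5 mean are the example values from the text, and rotating portrait-orientation shots to landscape is an assumption:

```python
import cv2
import numpy as np

def preprocess(reference):
    h, w = reference.shape[:2]
    if h > w:  # height > width: shot vertically; rotate to landscape (assumption)
        reference = cv2.rotate(reference, cv2.ROTATE_90_CLOCKWISE)
    reference = cv2.resize(reference, (640, 480))  # match the model input size
    x = reference.astype(np.float32)
    return (x - 127.5) / 127.5                     # normalize to [-1, 1]
```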
620. And identifying a portrait area of the reference image through the first sub thread to obtain a portrait segmentation result.
630. And normalizing the portrait segmentation result through the first sub thread, performing channel splicing on the normalized portrait segmentation result and the reference image to obtain a spliced image, and identifying a hair region of the spliced image as a hair matting result of the reference image.
After the electronic device obtains the portrait segmentation result and before the portrait segmentation result and the reference image are subjected to channel splicing, the portrait segmentation result can be normalized first, so that the normalized portrait segmentation result and the normalized reference image correspond to the same numerical range.
640. And performing depth estimation on the preprocessed reference image through a second sub-thread to obtain a depth image of the reference image, and transmitting the depth image in the second sub-thread to the first sub-thread.
In the embodiment of the application, the hair identification can be performed according to the portrait segmentation result obtained by the portrait identification and the reference image, and the portrait identification and the hair identification can be performed in the same sub-thread.
Therefore, after the electronic device preprocesses the reference image, the preprocessed reference image can be divided into two paths of data: executing the previous steps 620-630 by one path of data, and continuing to identify the portrait area and the hair area in the first sub-thread; another path of data is input to the second sub-thread, and the aforementioned step 640 is performed to perform depth estimation in the second sub-thread. Since the first and second sub-threads are two parallel threads, steps in the first and second sub-threads may be performed simultaneously.
It should be noted that, in some possible embodiments, if the electronic device performs portrait recognition and hair recognition on the reference image respectively, the second thread may also include three parallel sub-threads. The preprocessed reference image can be divided into three paths of data which are respectively input into three parallel sub-threads, and the electronic equipment can respectively carry out portrait recognition, hair recognition and depth estimation through the three parallel sub-threads.
650. And correcting, through the first sub-thread, the foreground region and the background region of the depth image according to the portrait segmentation result and the hair matting result to obtain the depth-of-field image of the reference image.
The electronic device can transmit the depth image from the second sub-thread to the first sub-thread. In the first sub-thread, the foreground and background regions of the depth image are determined according to the depth information and then corrected based on the portrait segmentation result and the hair matting result identified in the first sub-thread. The finally generated depth-of-field image thus both contains the depth information of each pixel and accurately indicates the foreground and background areas of the reference image, so that blurring parameters generated from it can achieve a good blurring effect on the enhanced image.
Therefore, in the foregoing embodiment, in the second thread, the identification of the portrait and the hair and the depth estimation can be performed through two parallel sub-threads, so that the time consumption for generating the depth-of-field image can be reduced, which is beneficial to further reducing the time consumption of the overall image processing algorithm.
Referring to fig. 7, fig. 7 is a flowchart illustrating another image processing method according to an embodiment. As shown in fig. 7, the following steps may be included:
710. Performing image quality enhancement processing on multi-frame images with different exposure values through the first thread to obtain an enhanced image into which the multi-frame images are fused.
720. Generating the depth-of-field image of the reference image in the multi-frame images through the second thread.
730. Dividing the background area of the enhanced image into one or more blurring areas according to the depth information of the background area in the depth-of-field image.
The electronic device determines the background area of the enhanced image according to the depth-of-field image, and divides that background area according to its depth information in the depth-of-field image. Optionally, the difference between the depth values of any two pixel points within the same blurring region may be smaller than a preset threshold, while the difference between the depth values of pixel points in different blurring regions may be greater than or equal to the preset threshold.
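Illustratively, the division of step 730 may be sketched by quantizing the background depth range into bands whose width equals the preset threshold, so each band becomes one blurring region; the labeling scheme and function name are illustrative assumptions.

```python
import numpy as np

def divide_blurring_regions(depth: np.ndarray, background: np.ndarray,
                            step: float) -> np.ndarray:
    """Return a label map: 0 marks the foreground, and k >= 1 marks the
    k-th blurring region; `step` is the preset depth threshold."""
    labels = np.zeros(depth.shape, dtype=np.int32)
    if background.any():
        near = depth[background].min()
        # Pixels whose depth values differ by less than `step` fall into
        # the same band and therefore the same blurring region.
        labels[background] = ((depth[background] - near) // step).astype(np.int32) + 1
    return labels
```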
Referring to fig. 8, fig. 8 is a diagram illustrating exemplary blurring regions in an enhanced image according to an embodiment of the disclosure. As shown in fig. 8, the background area of the enhanced image is divided into a blurring region 810 and a blurring region 820. The blurring region 810 may be the white region, and the blurring region 820 may be the region containing lines. The depth value of the blurring region 810 in the depth-of-field image may be greater than that of the blurring region 820.
740. Determining the blurring level of each blurring region according to the depth information of that blurring region in the depth-of-field image, and performing blurring processing on each blurring region according to its blurring level.
After the electronic device divides the background area of the enhanced image into blurring regions, the blurring level of each blurring region may be further determined according to its depth information; that is, each blurring region may correspond to a respective blurring level. The blurring level of each blurring region can be positively correlated with the depth information corresponding to that region or, alternatively, negatively correlated with it.
For example, referring to fig. 8, if the blurring level is positively correlated with the depth information corresponding to the blurring region, the blurring level of the blurring region 810 may be greater than that of the blurring region 820. After the blurring process, the blurring region 810 is more blurred than the blurring region 820.
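Illustratively, the per-region blurring of step 740 may be sketched with OpenCV as follows, assuming the blurring level is positively correlated with depth and mapping each level to a Gaussian kernel size; the level-to-kernel mapping is an illustrative choice, not the disclosed implementation.

```python
import cv2
import numpy as np

def blur_by_region(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Blur each labeled blurring region at its own blurring level."""
    out = image.copy()
    for level in range(1, int(labels.max()) + 1):
        k = 4 * level + 1                      # odd kernel size grows with level
        blurred = cv2.GaussianBlur(image, (k, k), 0)
        mask = labels == level
        out[mask] = blurred[mask]              # deeper regions end up more blurred
    return out
```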
750. Generating light spot rendering parameters according to the blurring parameters, and performing light spot rendering processing on the light spot image of the enhanced image by using the light spot rendering parameters.
The electronic device can also perform light spot rendering processing on the light spot image in the enhanced image, which can further improve the image quality. The light spot rendering parameters may include, but are not limited to, a light spot rendering radius.
Optionally, when the blurring parameter includes a blurring level, the spot rendering radius may have a positive correlation with the blurring level. Illustratively, the higher the blurring level, the larger the spot rendering radius.
When the blurring parameters include a blurring level and a blurring region, the spot rendering radius may be determined according to the blurring level of the blurring region where the light spot image is located. Illustratively, a blurring region with a higher blurring level uses a larger spot rendering radius.
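Illustratively, the derivation of the light spot rendering radius may be sketched as follows; the linear mapping, base radius and gain are illustrative assumptions rather than values from the disclosure.

```python
def spot_rendering_radius(blurring_level: int, base: float = 2.0,
                          gain: float = 1.5) -> float:
    # Positive correlation: the higher the blurring level of the region
    # where the light spot image is located, the larger the radius.
    return base + gain * blurring_level
```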
It should be noted that the aforementioned steps 730-740 can be executed by the same thread, and step 750 can be executed either by the same thread as steps 730-740 or by a different one. Illustratively, steps 730 and 740 may be performed by a first thread and step 750 may be performed by a second thread.
Therefore, in the foregoing embodiment, the electronic device may perform the computations of image enhancement and image blurring through two different parallel threads, which can improve the image quality while reducing the time consumption of the algorithm. In addition, when performing the image blurring processing, the electronic device may further divide the background area to be blurred into different blurring regions according to the depth information and blur each blurring region at its corresponding blurring level, which helps the blurring effect transition naturally. Furthermore, light spot rendering can be applied to the image to further improve the image quality.
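Illustratively, the two top-level parallel threads of this embodiment may be sketched as follows; enhance_fn, depth_fn, params_fn and blur_fn are hypothetical stand-ins for the processing described above, not functions from the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(frames, reference, enhance_fn, depth_fn, params_fn, blur_fn):
    with ThreadPoolExecutor(max_workers=2) as pool:
        enhanced_future = pool.submit(enhance_fn, frames)     # first thread
        dof_future = pool.submit(depth_fn, reference)         # second thread
        # Blurring parameters are computed from the depth-of-field image
        # without waiting for image quality enhancement to finish.
        blur_params = params_fn(dof_future.result())
        enhanced = enhanced_future.result()
    # The parameters still act on the enhanced image.
    return blur_fn(enhanced, blur_params)
```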
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment, where the image processing apparatus is applicable to the electronic device. As shown in fig. 9, the image processing apparatus 900 may include: a first processing module 910, a second processing module 920, and a blurring module 930.
The first processing module 910 is configured to perform image quality enhancement processing on a plurality of frames of images with different exposure values through a first thread to obtain an enhanced image in which the plurality of frames of images are fused;
the second processing module 920 may be configured to generate a depth image of the reference image in the multi-frame images through a second thread; the depth image includes at least depth information of a background area in the reference image. Optionally, the exposure value of the reference image may be within a target range that is less than a first exposure value and greater than a second exposure value.
The blurring module 930 may be configured to generate a blurring parameter according to the depth-of-field image, and perform blurring processing on the enhanced image by using the blurring parameter.
In one embodiment, the second thread may include: a first sub-thread and a second sub-thread.
The second processing module 920 may include: the device comprises a first processing unit, a second processing unit and a generating unit.
The first processing unit can be used for identifying a portrait area and a hair area in the reference image through a first sub-thread to obtain a portrait segmentation result and a hair matting result;
the second processing unit is used for carrying out depth estimation on the reference image through the second sub-thread to obtain a depth image of the reference image; the depth image comprises depth information corresponding to each pixel point in the reference image;
and the generating unit may be used for correcting the foreground area and the background area of the depth image according to the portrait segmentation result and the hair matting result through the second thread to obtain the depth-of-field image of the reference image.
In an embodiment, the first processing unit may be further configured to preprocess the reference image through the first sub-thread to obtain a preprocessed reference image, and to identify a portrait area and a hair area from the preprocessed reference image through the first sub-thread;
the second processing unit may be further configured to perform depth estimation on the preprocessed reference image through the second sub-thread to obtain a depth image of the reference image.
In one embodiment, the first processing unit is further configured to: identify a portrait area of the reference image through the first sub-thread to obtain a portrait segmentation result, the portrait segmentation result being used for indicating the portrait area, the background area, and the boundary area between the portrait area and the background area of the reference image; perform channel splicing on the portrait segmentation result and the reference image through the first sub-thread to obtain a spliced image; and identify a hair region of the spliced image through the first sub-thread as the hair matting result of the reference image.
In an embodiment, the second processing unit may be further configured to, after performing depth estimation on the reference image through the second sub-thread to obtain a depth image of the reference image, transmit the depth image from the second sub-thread to the first sub-thread;
the first processing unit can also be used for correcting the foreground area and the background area of the depth image according to the portrait segmentation result and the hair matting result through the first sub-thread to obtain the depth-of-field image of the reference image.
In one embodiment, the first processing module 910 is further operable to transmit the enhanced image from the first thread to the second thread;
the blurring module 930 may be further configured to generate a blurring parameter according to the depth image through the second thread, and perform blurring processing on the enhanced image by using the blurring parameter.
In one embodiment, the blurring parameters may include: a blurring level and a blurring region.
The blurring module 930 may be further configured to divide the background area of the enhanced image into one or more blurring areas according to the depth information of the background area in the depth-of-field image; and to determine the blurring level of each blurring area according to the depth information of that blurring area in the depth-of-field image and perform blurring processing on each blurring area according to its blurring level.
In one embodiment, the image processing apparatus 900 may further include a light spot rendering module.
The light spot rendering module can be used for generating light spot rendering parameters according to the blurring parameters and performing light spot rendering processing on the light spot image of the enhanced image by using the light spot rendering parameters.
It can be seen that, in the foregoing embodiment, the image processing apparatus can process the computations of image quality enhancement and image blurring through two different parallel threads. The blurring parameters are computed from the depth-of-field image rather than from the result of image quality enhancement, so the image quality enhancement does not have to be completed first, which helps reduce the time consumption of the algorithm. Meanwhile, the generated blurring parameters still act on the result of image quality enhancement, i.e., on the enhanced image, so the finally generated image has both an image quality enhancement effect and an image blurring effect, and the image quality can be ensured while the time consumption of the algorithm is reduced.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment.
As shown in fig. 10, the electronic device 1000 may include:
a memory 1010 storing executable program code;
a processor 1020 coupled with the memory 1010;
the processor 1020 calls the executable program code stored in the memory 1010 to execute any one of the image processing methods disclosed in the embodiments of the present application.
It should be noted that the electronic device shown in fig. 10 may further include components that are not shown, such as a power supply, input keys, a camera, a speaker, a screen, an RF circuit, a Wi-Fi module, a Bluetooth module, and sensors, which are not described in detail in this embodiment.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any image processing method disclosed by the embodiment of the application.
An embodiment of the present application discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute any one of the image processing methods disclosed in the embodiment of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In the various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply a necessary order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-described methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a tape memory, or any other computer-readable medium that can be used to carry or store data.
The image processing method, the image processing apparatus, the electronic device, and the storage medium disclosed in the embodiments of the present application have been described above in detail. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make variations to the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation of the present application.

Claims (12)

1. An image processing method, characterized in that the method comprises:
performing image quality enhancement processing on multi-frame images with different exposure values through a first thread to obtain an enhanced image fused with the multi-frame images;
generating a depth image of a reference image in the multi-frame image through a second thread; the depth image at least comprises depth information of a background area in the reference image;
and generating a blurring parameter according to the depth image, and blurring the enhanced image by using the blurring parameter.
2. The method of claim 1, wherein the second thread comprises: a first sub-thread and a second sub-thread; the generating, by the second thread, the depth image of the reference image in the multiple frames of images includes:
identifying a portrait region and a hair region in the reference image through the first sub-thread to obtain a portrait segmentation result and a hair matting result, and performing depth estimation on the reference image through the second sub-thread to obtain a depth image of the reference image; the depth image comprises depth information corresponding to each pixel point in the reference image;
and correcting the foreground region and the background region of the depth image according to the portrait segmentation result and the hair matting result through the second thread to obtain the depth-of-field image of the reference image.
3. The method according to claim 2, wherein the identifying a portrait region and a hair region in the reference image through the first sub-thread and performing depth estimation on the reference image through the second sub-thread to obtain a depth image of the reference image comprises:
preprocessing the reference image through the first sub-thread to obtain a preprocessed reference image;
identifying a portrait area and a hair area from the preprocessed reference image through the first sub-thread;
and performing depth estimation on the preprocessed reference image through the second sub thread to obtain a depth image of the reference image.
4. The method according to claim 2, wherein the identifying the portrait area and the hair area in the reference image through the first sub-thread to obtain a portrait segmentation result and a hair matting result comprises:
identifying a portrait area of the reference image through the first sub-thread to obtain a portrait segmentation result; the portrait segmentation result is used for indicating a portrait area, a background area and a boundary area of the portrait area and the background area of the reference image;
performing channel splicing on the portrait segmentation result and the reference image through the first sub-thread to obtain a spliced image;
and identifying a hair region of the spliced image through the first sub-thread as a hair matting result of the reference image.
5. The method of claim 2, wherein after the depth estimation of the reference image by the second sub-thread to obtain the depth image of the reference image, the method further comprises:
transmitting the depth image from the second sub-thread to the first sub-thread;
and the correcting the foreground region and the background region of the depth image according to the portrait segmentation result and the hair matting result through the second thread to obtain the depth-of-field image of the reference image comprises:
and correcting the foreground region and the background region of the depth image according to the portrait segmentation result and the hair matting result through the first sub-thread to obtain the depth-of-field image of the reference image.
6. The method of claim 1, wherein generating the blurring parameter from the depth image and blurring the enhanced image using the blurring parameter comprises:
transmitting the enhanced image from the first thread to the second thread;
and generating a blurring parameter according to the depth image through the second thread, and performing blurring processing on the enhanced image by using the blurring parameter.
7. The method of claim 1, wherein the blurring parameters comprise: a blurring level and a blurring region; and generating a blurring parameter according to the depth image, and performing blurring processing on the enhanced image by using the blurring parameter, wherein the blurring processing comprises:
dividing the background area of the enhanced image into one or more blurring areas according to the depth information of the background area in the depth image;
and determining the blurring level of each blurring area according to the depth information of each blurring area in the depth image, and performing blurring processing on each blurring area according to the blurring level.
8. The method of claim 1, further comprising:
and generating light spot rendering parameters according to the blurring parameters, and performing light spot rendering processing on the light spot image of the enhanced image by using the light spot rendering parameters.
9. The method of any of claims 1-8, wherein the exposure value of the reference image is within a target range, the target range being less than the first exposure value and greater than the second exposure value.
10. An image processing apparatus characterized by comprising:
the first processing module is used for carrying out image quality enhancement processing on multi-frame images with different exposure values through a first thread to obtain an enhanced image fused with the multi-frame images;
the second processing module is used for generating a depth image of a reference image in the multi-frame images through a second thread; the depth image at least comprises depth information of a background area in the reference image;
and the blurring module is used for generating blurring parameters according to the depth-of-field image and performing blurring processing on the enhanced image by using the blurring parameters.
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to implement the method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN202110690391.4A 2021-06-22 2021-06-22 Image processing method, image processing device, electronic equipment and storage medium Withdrawn CN113298735A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110690391.4A CN113298735A (en) 2021-06-22 2021-06-22 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113298735A true CN113298735A (en) 2021-08-24

Family

ID=77329116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110690391.4A Withdrawn CN113298735A (en) 2021-06-22 2021-06-22 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113298735A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146767A (en) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 Image weakening method and device based on depth map
CN110121882A (en) * 2017-10-13 2019-08-13 华为技术有限公司 A kind of image processing method and device
CN108024054A (en) * 2017-11-01 2018-05-11 广东欧珀移动通信有限公司 Image processing method, device and equipment
CN107948519A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device and equipment
CN108322646A (en) * 2018-01-31 2018-07-24 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109493283A (en) * 2018-08-23 2019-03-19 金陵科技学院 A kind of method that high dynamic range images ghost is eliminated
CN109903321A (en) * 2018-10-16 2019-06-18 迈格威科技有限公司 Image processing method, image processing apparatus and storage medium
CN110166706A (en) * 2019-06-13 2019-08-23 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963000A (en) * 2021-10-21 2022-01-21 北京字节跳动网络技术有限公司 Image segmentation method, device, electronic equipment and program product
CN113963000B (en) * 2021-10-21 2024-03-15 抖音视界有限公司 Image segmentation method, device, electronic equipment and program product
CN116757963A (en) * 2023-08-14 2023-09-15 荣耀终端有限公司 Image processing method, electronic device, chip system and readable storage medium
CN116757963B (en) * 2023-08-14 2023-11-07 荣耀终端有限公司 Image processing method, electronic device, chip system and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210824