CN113256482A - Photographing background blurring method, mobile terminal and storage medium - Google Patents


Info

Publication number
CN113256482A
CN113256482A (application CN202010085333.4A)
Authority
CN
China
Prior art keywords: background, blurring, foreground, depth, mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010085333.4A
Other languages
Chinese (zh)
Other versions
CN113256482B (en)
Inventor
李鹏 (Li Peng)
Current Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202010085333.4A (granted as CN113256482B)
Priority to PCT/CN2020/128657 (WO2021135676A1)
Publication of CN113256482A
Application granted
Publication of CN113256482B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T5/70 Denoising; Smoothing
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/55 Depth or shape recovery from multiple images
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a photographing background blurring method, a mobile terminal and a storage medium. The method comprises the following steps: synthesizing a first picture and a second picture into a depth map; preprocessing the depth map, and segmenting the preprocessed depth map to obtain a foreground mask and a background mask; determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively; and determining the size of the blur kernel used for blurring according to the foreground depth value parameter and the background depth value parameter, then blurring the background of the first picture according to that kernel size. In the method, two pictures acquired from different angles are synthesized into a depth map; the preprocessed depth map is rapidly segmented into a rough foreground and background; foreground and background depth value parameters are then computed separately; and the blur kernel size used for blurring is selected, according to the depth map and these parameter values, for pixel-by-pixel background blurring, improving the background blurring effect.

Description

Photographing background blurring method, mobile terminal and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a photographing background blurring method, a mobile terminal and a storage medium.
Background
In the photographing function of existing smartphones, background blurring based on dual cameras (background blurring makes the depth of field shallow so that focus falls on the subject) is becoming more and more popular.
At present, background blurring mainly comprises the following steps: generating a depth map with a binocular camera (a depth map is an image with distance information, synthesized from the main and auxiliary pictures taken by the binocular camera, in which each pixel represents the distance from the scene to the camera, hence the name) and segmenting the foreground and background based on the depth map; then grading the background by depth value, applying a different blur smoothing to each grade, and superimposing the blurred background onto the foreground to achieve the background blurring effect.
However, these methods (segmenting the foreground and background from the depth map, then layering the background, blurring each layer and superimposing the results) have certain defects. The depth map estimated by the two cameras may be inaccurate (it is estimated from the pictures taken by the main and auxiliary cameras of the binocular camera), which leads to poor or erroneous segmentation of foreground and background; layered multi-level blurring driven by depth distance information (i.e. depth values) can make the blur levels of adjacent areas inconsistent; and superimposing the blurred background onto the foreground may produce unnatural transitions or a halo effect at the edges. These defects ultimately degrade the visual quality of the blurred picture.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The invention mainly aims to provide a photographing background blurring method, a mobile terminal and a storage medium, and aims to solve the problems that in the prior art, segmentation of a foreground and a background is inaccurate, and blurring levels of adjacent background blocks are inconsistent, so that halos appear at the overlapped edges of the foreground and the background or transition is unnatural.
In order to achieve the above object, the present invention provides a method for blurring a photographing background, comprising the steps of:
synthesizing the first picture and the second picture into a depth map;
preprocessing the depth map, and segmenting the preprocessed depth map to obtain a foreground mask and a background mask;
determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively;
and determining the size of a fuzzy kernel for blurring according to the foreground depth value parameter and the background depth value parameter, and blurring the background of the first picture according to the size of the fuzzy kernel.
Optionally, in the photographing background blurring method, the first picture is a picture taken by a first camera group of the terminal device, where the first camera group includes one or more cameras;
the second picture is a picture shot by a second camera group of the terminal equipment, and the second camera group comprises one or more cameras;
at least one of the cameras of the first camera group is different from the cameras of the second camera group.
Optionally, in the photographing background blurring method, the preprocessing specifically includes:
performing edge-preserving filtering processing on the depth map;
and performing median filtering operation on the depth map subjected to edge-preserving filtering processing.
Optionally, in the photographing background blurring method, segmenting the preprocessed depth map to obtain a foreground mask and a background mask specifically includes:
performing segmentation processing on the depth map subjected to the median filtering operation;
and obtaining the foreground mask and the background mask according to a preset segmentation threshold.
Optionally, in the photographing background blurring method, obtaining the foreground mask and the background mask according to a preset segmentation threshold specifically includes:
classifying pixel points in the depth map with pixel values larger than the segmentation threshold value as the foreground mask;
and classifying pixel points of the depth image, the pixel values of which are less than or equal to the segmentation threshold value, as the background mask.
Optionally, in the photographing background blurring method, determining the foreground depth value parameter and the background depth value parameter according to the foreground mask and the background mask respectively includes:
and respectively counting the mean value of the foreground depth values and the mean variance of the foreground depth values, the maximum value of the background depth values and the minimum value of the background depth values according to the foreground mask and the background mask.
Optionally, in the photographing background blurring method, determining the blur kernel size for blurring according to the foreground depth value parameter and the background depth value parameter specifically includes:
determining the blurring kernel of the foreground to be a Gaussian kernel; the blurring radius Fg_r of the current pixel is then:
(Formula provided only as an image in the original: Figure BDA0002381839220000041, giving Fg_r in terms of Fg_level, depthVal, Fg_mean and Fg_std.)
wherein Fg_level is the number of blurring stages, depthVal is the depth value, Fg_mean is the foreground depth value mean, and Fg_std is the foreground depth value mean square deviation;
the size of the blur kernel is (2r +1) × (2r +1), wherein the calculation formula of each weight is as follows:
Fg_k=exp(-Fg_r*dist);
wherein dist is the Euclidean distance from the kernel center (r, r) to each neighborhood point;
determining the blurring kernel of the background to be a defocus blur kernel; the blurring radius Bg_r of the current pixel is then:
Bg_r=Bg_level*(depthVal-Bg_min)/(Bg_max-Bg_min+1);
wherein Bg_level is the number of blurring stages, Bg_max is the maximum value of the background depth values, and Bg_min is the minimum value of the background depth values;
the size of the blur kernel is (2r +1) × (2r +1), wherein the calculation formula of each weight is as follows:
Bg_k=a*dist+b;
wherein a and b represent weight values, 0< a <1, and b > 1.
Optionally, in the photographing background blurring method, Fg_level is set to 2, Bg_level is set to 11, a is set to 0.1, and b is set to 10.
Optionally, in the photographing background blurring method, the background blurring is performed pixel by pixel.
In addition, to achieve the above object, the present invention also provides a mobile terminal, wherein the mobile terminal includes: the photographing apparatus comprises a memory, a processor and a photographing background blurring program stored on the memory and capable of running on the processor, wherein the photographing background blurring program realizes the steps of the photographing background blurring method when being executed by the processor.
The mobile terminal comprises a first camera group and a second camera group;
the first camera group and the second camera group are used for acquiring a first picture and a second picture which are shot at different angles;
the first picture is a picture obtained by shooting by the first camera group, and the first camera group comprises one or more cameras;
the second picture is a picture obtained by shooting by the second camera group, and the second camera group comprises one or more cameras;
at least one of the cameras of the first camera group is different from the cameras of the second camera group.
In addition, to achieve the above object, the present invention further provides a storage medium, wherein the storage medium stores a photographing background blurring program, and the photographing background blurring program implements the steps of the photographing background blurring method when executed by a processor.
The method synthesizes a first picture and a second picture into a depth map; preprocesses the depth map and segments the preprocessed depth map to obtain a foreground mask and a background mask; determines a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively; and determines the blur kernel size for blurring according to those parameters, then blurs the background of the first picture according to that kernel size. Two pictures acquired from different angles are synthesized into a depth map, which is preprocessed to make it uniform; the preprocessed depth map is rapidly segmented into a rough foreground and background; foreground and background depth value parameters are then computed separately; and a blur kernel size is selected, according to the depth map and these parameter values, for pixel-by-pixel background blurring, improving the background blurring effect.
Drawings
FIG. 1 is a flowchart illustrating a method for blurring a photographing background according to a preferred embodiment of the present invention;
FIG. 2 is a depth map after preprocessing in the preferred embodiment of the method for blurring a photographing background according to the present invention;
FIG. 3 is a diagram illustrating the effect of background blurring according to the preferred embodiment of the present invention;
FIG. 4 is a diagram illustrating an operating environment of a mobile terminal according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the method for blurring a photographing background according to a preferred embodiment of the present invention includes the following steps:
and step S10, synthesizing the first picture and the second picture into a depth map.
Specifically, the first picture is a picture taken by a first camera group of the terminal device, where the first camera group includes one or more cameras; the second picture is a picture shot by a second camera group of the terminal equipment, and the second camera group comprises one or more cameras; at least one of the cameras of the first camera group is different from the cameras of the second camera group.
The photographing background blurring method is applied to a mobile terminal (most commonly a smartphone, or any other smart device with dual cameras). The mobile terminal includes a main camera (i.e. the first camera group) and an auxiliary camera (i.e. the second camera group); the most obvious benefit of the dual-camera setup is an excellent background blurring effect. First, the main camera and the auxiliary camera each take a picture, yielding a first picture and a second picture shot from different angles. (The two pictures shot from different angles can also be obtained by two devices at the same timestamp, for example a first picture and a second picture taken from different angles by two mobile phones at the same moment; in the present invention, however, it is preferred that the main camera takes the first picture and the auxiliary camera takes the second picture at the same time.) The first picture and the second picture are then synthesized into a depth map. The depth map is an image with distance information synthesized from the main and auxiliary pictures taken by the binocular camera; each pixel value represents the distance from the object to the xy plane of the camera. A depth map (also called a range image) takes the distance (depth) from the image collector to each point in the scene as its pixel values and directly reflects the geometry of the visible surfaces of the scene. The depth map is estimated from the pictures taken by the main and auxiliary cameras of the binocular camera.
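The patent does not specify how the two pictures are matched to produce the depth map. Purely as an illustration, a minimal sum-of-absolute-differences (SAD) block matcher in NumPy shows the basic idea of recovering per-pixel disparity (inverse depth) from two rectified views; production systems use regularized methods such as semi-global matching instead, so everything below is an assumption, not the patent's algorithm:

```python
import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 16, block: int = 5) -> np.ndarray:
    """Illustrative SAD block matcher (not the patent's method).

    For each pixel of the left (first) picture, search horizontal shifts
    d in [0, max_disp) and keep the shift whose block in the right
    (second) picture matches best. Larger disparity means closer object.
    """
    h, w = left.shape
    pad = block // 2
    L = np.pad(left.astype(np.int32), pad, mode="edge")
    R = np.pad(right.astype(np.int32), pad, mode="edge")
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + block, x:x + block]
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x + 1)):
                cand = R[y:y + block, x - d:x - d + block]
                cost = int(np.abs(patch - cand).sum())  # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With rectified inputs, the resulting disparity can be normalized to an 8-bit depth map for the thresholding steps described later.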
And step S20, preprocessing the depth map, and segmenting the preprocessed depth map to obtain a foreground mask and a background mask.
Specifically, after the depth map is synthesized it needs to be preprocessed. The purpose of preprocessing is to obtain a uniform depth map, i.e. the depth map is preprocessed so that it becomes uniform and consistent (a uniform depth map is one whose depth values change smoothly, with only small variations within local areas), as shown in fig. 2.
Wherein the preprocessing specifically comprises: performing edge-preserving filtering on the depth map. Edge-preserving filtering protects the edges of the image (i.e. the depth map), so that edge information is effectively preserved during filtering; it can be performed, for example, with an edge-preserving filter. A median filtering operation is then performed on the edge-preserved depth map to remove image burrs. Median filtering is a nonlinear smoothing technique that sets the gray value of each pixel to the median of the gray values of all pixels within a neighborhood window around that point. It is a nonlinear signal processing technique, based on order statistics, that suppresses noise effectively: replacing the value of a point in a digital image or sequence with the median of the values in its neighborhood pulls the surrounding pixel values toward the true values and eliminates isolated noise points.
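The median step can be sketched in pure NumPy as below (the edge-preserving step, e.g. a bilateral filter, is assumed to have run first; in practice a library routine such as OpenCV's medianBlur would be used instead of this explicit loop):

```python
import numpy as np

def median_filter(depth: np.ndarray, k: int = 3) -> np.ndarray:
    """Remove spike noise ("burrs") by replacing each pixel with the
    median of its k-by-k neighborhood, replicating edge values at the
    borders so the output keeps the input's shape."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```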
Further, the depth map after the median filtering operation is segmented. The depth map is roughly segmented using OTSU (the Otsu method, or maximum between-class variance method, an efficient algorithm for binarizing an image that divides it into foreground and background with a single threshold) to obtain a preset segmentation threshold (T), and the foreground mask and the background mask are obtained according to this preset segmentation threshold.
Wherein the segmentation threshold (T) is a pixel value. For example: the pixels of an image lie between 0 and 255; if T is 128, pixels less than or equal to T are classified as background and pixels greater than T as foreground.
Therefore, in the invention, the pixel points with the pixel values larger than the segmentation threshold value in the depth image are classified as the foreground mask; and classifying pixel points of the depth image, the pixel values of which are less than or equal to the segmentation threshold value, as the background mask.
Wherein the foreground mask (Fg_mask) is a flag image whose pixels are only 0 and 1: a pixel of 1 indicates that the point belongs to the foreground, and a pixel of 0 indicates that it belongs to the background. The background mask (Bg_mask) is the inverse of the foreground mask (Fg_mask).
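The Otsu segmentation and the masking rule above can be sketched in NumPy; this illustrative version makes the between-class-variance search explicit (a library call such as OpenCV's threshold with the Otsu flag would normally do this in one line):

```python
import numpy as np

def otsu_masks(depth: np.ndarray):
    """Binarize an 8-bit depth map with Otsu's maximum between-class
    variance criterion, then apply the rule stated above: pixels above
    the threshold T form the foreground mask, and the background mask
    is its logical inverse."""
    hist = np.bincount(depth.ravel(), minlength=256).astype(np.float64)
    total = depth.size
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = hist[:t + 1].sum()                      # weight of the "<= t" class
        w1 = total - w0                              # weight of the "> t" class
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t + 1] * np.arange(t + 1)).sum() / w0
        m1 = (hist[t + 1:] * np.arange(t + 1, 256)).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2               # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    fg_mask = (depth > best_t).astype(np.uint8)
    bg_mask = (1 - fg_mask).astype(np.uint8)
    return fg_mask, bg_mask, best_t
```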
And step S30, respectively determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask.
Specifically, the foreground depth value mean and mean square deviation, and the background depth value maximum and minimum, are counted according to the foreground mask and the background mask respectively; i.e. the foreground depth value parameters comprise the foreground depth value mean (Fg_mean) and the foreground depth value mean square deviation (Fg_std), and the background depth value parameters comprise the background depth value maximum (Bg_max) and the background depth value minimum (Bg_min).
Further, from the pixel positions flagged 1 in the foreground mask (Fg_mask), the corresponding depth map pixels are obtained; the values of all foreground pixels in the depth map are collected and their mean and variance computed. In the same way, the background mask (Bg_mask) yields the values of all background pixels in the depth map, from which the background depth value maximum and minimum are obtained.
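The four statistics can be gathered directly with NumPy boolean indexing; Fg_std is taken here as the standard deviation of the foreground depths, which is one reading of the "mean square deviation" above:

```python
import numpy as np

def depth_statistics(depth: np.ndarray, fg_mask: np.ndarray,
                     bg_mask: np.ndarray) -> dict:
    """Collect the parameters used later for blur-kernel sizing:
    foreground depth mean/standard deviation, background depth min/max."""
    fg_vals = depth[fg_mask == 1]    # depth values at foreground positions
    bg_vals = depth[bg_mask == 1]    # depth values at background positions
    return {
        "Fg_mean": float(fg_vals.mean()),
        "Fg_std": float(fg_vals.std()),
        "Bg_min": int(bg_vals.min()),
        "Bg_max": int(bg_vals.max()),
    }
```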
Step S40, determining a blur kernel size for blurring according to the foreground depth value parameter and the background depth value parameter, and performing background blurring on the first picture according to the blur kernel size.
Wherein the background blurring is a pixel-by-pixel background blurring.
Specifically, the number of blurring stages for the coarse foreground and coarse background is determined (the coarse foreground overlaps the vast majority of the foreground and may differ only at the edge portions; it is rough, the foreground being more accurate than the coarse foreground, and the relationship between the coarse background and the background is the same), and the types and sizes of the multi-level blur kernels are determined (kernels of differing sizes, such as square kernels of 3x3, 5x5, 7x7 and so on, are called multi-level blur kernels).
Further, the determining the size of the blur kernel for blurring according to the foreground depth value parameter and the background depth value parameter specifically includes:
(1) determining the blurring kernel of the foreground to be a Gaussian kernel (in a Gaussian kernel the weight values follow a two-dimensional discrete Gaussian distribution); the blurring radius Fg_r of the current pixel is then:
(Formula provided only as an image in the original: Figure BDA0002381839220000101, giving Fg_r in terms of Fg_level, depthVal, Fg_mean and Fg_std.)
wherein Fg_level is the number of blurring stages, depthVal is the depth value, Fg_mean is the foreground depth value mean, and Fg_std is the foreground depth value mean square deviation; preferably, Fg_level is 2;
the size of the blur kernel (a square array of weight values) is (2r+1) × (2r+1), where each weight (each weight is a distinct numerical value within the kernel, reflecting the importance of a particular pixel) is computed as follows:
Fg_k=exp(-Fg_r*dist);
wherein dist is the Euclidean distance from the kernel center (r, r) to each neighborhood point;
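A sketch of the foreground kernel construction, building the (2r+1)x(2r+1) grid of weights Fg_k = exp(-Fg_r * dist). Normalizing the kernel to a unit sum is an added assumption (the patent does not state it) but is needed to keep image brightness unchanged:

```python
import numpy as np

def foreground_kernel(fg_r: float) -> np.ndarray:
    """Foreground blur kernel: weights decay exponentially with the
    Euclidean distance from the kernel center (r, r)."""
    r = max(1, int(round(fg_r)))
    yy, xx = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    dist = np.sqrt((yy - r) ** 2 + (xx - r) ** 2)
    kernel = np.exp(-fg_r * dist)       # Fg_k = exp(-Fg_r * dist)
    return kernel / kernel.sum()        # unit-sum normalization (assumed)
```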
(2) determining the blurring kernel of the background to be a defocus blur kernel (a square kernel in which the weight values inside the inscribed circle are nonzero and the weight values outside the inscribed circle are all 0); the blurring radius Bg_r of the current pixel is then:
Bg_r=Bg_level*(depthVal-Bg_min)/(Bg_max-Bg_min+1);
wherein Bg_level is the number of blurring stages, Bg_max is the maximum value of the background depth values, and Bg_min is the minimum value of the background depth values; preferably, Bg_level is 11;
the size of the blur kernel (the blur kernel is a square with a weight value) is (2r +1) × (2r +1), wherein the calculation formula of each weight is as follows:
Bg_k=a*dist+b;
wherein a and b are weight parameters that control the influence of the dist value, with 0 < a < 1 and b > 1; preferably, a is 0.1 and b is 10.
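The defocus kernel can be sketched the same way: inside the inscribed circle the weight follows Bg_k = a*dist + b (a=0.1, b=10 per the preferred values above), and outside the circle the weight is zero. As with the foreground kernel, the unit-sum normalization is an added assumption:

```python
import numpy as np

def background_kernel(bg_r: float, a: float = 0.1, b: float = 10.0) -> np.ndarray:
    """Defocus blur kernel for the background: a square kernel whose
    weights inside the inscribed circle are a*dist + b and whose
    weights outside the circle are 0."""
    r = max(1, int(round(bg_r)))
    yy, xx = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    dist = np.sqrt((yy - r) ** 2 + (xx - r) ** 2)
    kernel = np.where(dist <= r, a * dist + b, 0.0)  # zero outside the circle
    return kernel / kernel.sum()                     # unit-sum normalization (assumed)
```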
Pixel-by-pixel blurring is then applied to the first picture taken by the main camera of the mobile terminal (the picture from the auxiliary camera is not processed; the auxiliary camera only cooperates with the main camera to generate the depth map) by traversing the corresponding depth map; the blur kernel of each pixel depends on its depth value depthVal, and the kernel is selected as described above.
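The per-pixel traversal can be sketched as below. For brevity, a plain box average stands in for the weighted kernels above, only the background radius formula Bg_r = Bg_level*(depthVal-Bg_min)/(Bg_max-Bg_min+1) is applied, and a single-channel (grayscale) input is assumed:

```python
import numpy as np

def blur_background(image: np.ndarray, depth: np.ndarray, fg_mask: np.ndarray,
                    bg_level: int, bg_min: int, bg_max: int) -> np.ndarray:
    """Traverse the first picture pixel by pixel: copy foreground pixels
    unchanged, and blur each background pixel over a window whose radius
    is derived from that pixel's depth value."""
    h, w = image.shape
    out = image.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if fg_mask[y, x]:
                continue  # foreground stays sharp
            r = int(bg_level * (int(depth[y, x]) - bg_min) / (bg_max - bg_min + 1))
            if r < 1:
                continue  # too close: effectively no blurring
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # box average stand-in
    return out.astype(image.dtype)
```

Because each output pixel depends only on the input image, the loop body is embarrassingly parallel, matching the multi-threaded acceleration mentioned below.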
Further, since the pixel-by-pixel traversal is time-consuming and the blurring operations of different pixels are independent, the background blurring effect map can be obtained by multi-thread parallel accelerated calculation, as shown in fig. 3. The invention can improve the visual effect of dual-camera background blurring on the mobile terminal and reduce the mobile terminal's memory consumption.
The method applies blurring with different blur kernel sizes directly to the original image (e.g. the first picture). The depth of the foreground area is small, so its blur kernel size is essentially 1, which is equivalent to applying no blurring at all; even if abnormal depth values exist, blurring with a small (3x3) kernel does not affect the final visual result. The background is not produced by layered multi-level blurring and superposition, so halos and unnatural transitions at the overlapping edge of foreground and background are avoided, and a real-time multi-level background blurring effect is achieved.
Further, as shown in fig. 4, based on the above photographing background blurring method, the present invention also provides a mobile terminal, which includes a processor 10, a memory 20 and a display 30. Fig. 4 shows only some of the components of the mobile terminal, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The mobile terminal also comprises a first camera group and a second camera group; the first camera group and the second camera group are used for acquiring a first picture and a second picture which are shot at different angles; the first picture is a picture obtained by shooting by the first camera group, and the first camera group comprises one or more cameras; the second picture is a picture obtained by shooting by the second camera group, and the second camera group comprises one or more cameras; at least one of the cameras of the first camera group is different from the cameras of the second camera group.
The memory 20 may be an internal storage unit of the mobile terminal in some embodiments, such as a hard disk or a memory of the mobile terminal. The memory 20 may also be an external storage device of the mobile terminal in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the mobile terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the mobile terminal. The memory 20 is used for storing application software installed in the mobile terminal and various data, such as program codes of the installed mobile terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In an embodiment, the memory 20 stores a photographing background blurring program 40, and the photographing background blurring program 40 can be executed by the processor 10, so as to implement the photographing background blurring method of the present application.
The processor 10 may be a Central Processing Unit (CPU), microprocessor or other data Processing chip in some embodiments, and is used for running program codes stored in the memory 20 or Processing data, such as executing the photographing background blurring method.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the mobile terminal and for displaying a visual user interface. The components 10-30 of the mobile terminal communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the photographing background blurring program 40 in the memory 20:
synthesizing the first picture and the second picture into a depth map;
preprocessing the depth map, and segmenting the preprocessed depth map to obtain a foreground mask and a background mask;
determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively;
and determining the size of a fuzzy kernel for blurring according to the foreground depth value parameter and the background depth value parameter, and blurring the background of the first picture according to the size of the fuzzy kernel.
The first picture is a picture shot by a first camera group of the terminal equipment, and the first camera group comprises one or more cameras;
the second picture is a picture shot by a second camera group of the terminal equipment, and the second camera group comprises one or more cameras;
at least one of the cameras of the first camera group is different from the cameras of the second camera group.
The preprocessing comprises the following steps:
performing edge-preserving filtering processing on the depth map;
and performing median filtering operation on the depth map subjected to edge-preserving filtering processing.
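The two preprocessing steps can be sketched as follows. This is a minimal NumPy sketch, not the original implementation: a naive bilateral filter stands in for the unspecified edge-preserving filter, and all window sizes and sigma values are illustrative assumptions.

```python
import numpy as np

def edge_preserving_filter(depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    # Naive bilateral filter: smooths the depth map while preserving depth edges.
    # radius, sigma_s (spatial), sigma_r (range) are illustrative choices.
    depth = depth.astype(np.float64)
    h, w = depth.shape
    pad = np.pad(depth, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((win - depth[i, j]) ** 2) / (2.0 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out

def median_filter(depth, radius=1):
    # Median filter: removes isolated speckle noise left after the first pass.
    h, w = depth.shape
    pad = np.pad(depth, radius, mode="edge")
    windows = [pad[dy:dy + h, dx:dx + w]
               for dy in range(2 * radius + 1)
               for dx in range(2 * radius + 1)]
    return np.median(np.stack(windows), axis=0)

def preprocess(depth):
    # Edge-preserving filtering followed by median filtering, as described above.
    return median_filter(edge_preserving_filter(depth))
```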
The segmenting the preprocessed depth map to obtain a foreground mask and a background mask specifically comprises:
performing segmentation processing on the depth map subjected to the median filtering operation;
and obtaining the foreground mask and the background mask according to a preset segmentation threshold.
The obtaining the foreground mask and the background mask according to a preset segmentation threshold specifically includes:
classifying pixel points in the depth map whose pixel values are greater than the segmentation threshold as the foreground mask;
and classifying pixel points in the depth map whose pixel values are less than or equal to the segmentation threshold as the background mask.
The determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively specifically includes:
and respectively computing, according to the foreground mask and the background mask, the mean and standard deviation of the foreground depth values, and the maximum and minimum of the background depth values.
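A minimal sketch of the threshold segmentation and the four statistics, assuming a single-channel depth map and a caller-supplied threshold (function and variable names are illustrative, not from the original disclosure):

```python
import numpy as np

def segment_and_stats(depth, threshold):
    # Pixels above the threshold form the foreground mask; the rest form the
    # background mask, per the segmentation rule described above.
    fg_mask = depth > threshold
    bg_mask = ~fg_mask
    fg_vals = depth[fg_mask]
    bg_vals = depth[bg_mask]
    stats = {
        "Fg_mean": float(fg_vals.mean()),  # foreground depth mean
        "Fg_std": float(fg_vals.std()),    # foreground depth standard deviation
        "Bg_max": float(bg_vals.max()),    # background depth maximum
        "Bg_min": float(bg_vals.min()),    # background depth minimum
    }
    return fg_mask, bg_mask, stats
```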
Determining the size of the blur kernel for blurring according to the foreground depth value parameter and the background depth value parameter specifically includes:
determining the blur kernel of the foreground to be a Gaussian kernel, wherein the blurring radius Fg_r of the current pixel is:
Fg_r=Fg_level*|depthVal-Fg_mean|/(Fg_std+1);
wherein Fg_level is the number of blurring levels, depthVal is the depth value of the current pixel, Fg_mean is the mean of the foreground depth values, and Fg_std is the standard deviation of the foreground depth values;
the size of the blur kernel is (2r+1)×(2r+1), where r = Fg_r, and each weight is calculated as:
Fg_k=exp(-Fg_r*dist);
wherein dist is the Euclidean distance from the kernel center (r, r) to the neighborhood point in question;
determining the blur kernel of the background to be a defocus blur kernel, wherein the blurring radius Bg_r of the current pixel is:
Bg_r=Bg_level*(depthVal-Bg_min)/(Bg_max-Bg_min+1);
wherein Bg_level is the number of blurring levels, Bg_max is the maximum of the background depth values, and Bg_min is the minimum of the background depth values;
the size of the blur kernel is (2r+1)×(2r+1), where r = Bg_r, and each weight is calculated as:
Bg_k=a*dist+b;
wherein a and b are weight coefficients with 0 < a < 1 and b > 1.
In one embodiment, Fg_level = 2, Bg_level = 11, a = 0.1, and b = 10.
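The kernel construction can be sketched as below, using the stated constants. The background radius follows the formula given above; the foreground radius formula here is an assumption (level-scaled absolute deviation from the foreground mean depth), since the original expression is reproduced only as a figure, and normalizing each kernel to unit sum is likewise an added assumption so the blur preserves brightness.

```python
import numpy as np

FG_LEVEL, BG_LEVEL = 2, 11   # blurring level counts from the embodiment
A, B = 0.1, 10.0             # linear weight coefficients of the defocus kernel

def foreground_kernel(depth_val, fg_mean, fg_std):
    # Gaussian-style foreground kernel; the radius formula is an assumption.
    r = int(round(FG_LEVEL * abs(depth_val - fg_mean) / (fg_std + 1.0)))
    if r == 0:
        return np.ones((1, 1))           # no blurring for this pixel
    ys, xs = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    dist = np.hypot(ys - r, xs - r)      # Euclidean distance to center (r, r)
    k = np.exp(-r * dist)                # Fg_k = exp(-Fg_r * dist)
    return k / k.sum()                   # normalization is an assumption

def background_kernel(depth_val, bg_min, bg_max):
    # Defocus background kernel; radius per Bg_r formula in the text.
    r = int(round(BG_LEVEL * (depth_val - bg_min) / (bg_max - bg_min + 1.0)))
    if r == 0:
        return np.ones((1, 1))
    ys, xs = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    dist = np.hypot(ys - r, xs - r)
    k = A * dist + B                     # Bg_k = a * dist + b
    return k / k.sum()                   # normalization is an assumption
```

Note that with a > 0 the defocus weights grow with distance from the center, so after normalization the center carries the smallest weight, unlike a Gaussian.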
Wherein the background blurring is a pixel-by-pixel background blurring.
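Pixel-by-pixel background blurring can then be sketched as below for a single-channel image. `kernel_fn` maps a pixel's depth value to a normalized blur kernel; the border handling (clipping the kernel at image edges and renormalizing) is an assumption, as the disclosure does not specify it.

```python
import numpy as np

def blur_background(img, depth, bg_mask, kernel_fn):
    # Each background pixel is replaced by a weighted average over its
    # neighborhood, with the kernel sized from that pixel's own depth value.
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for i, j in zip(*np.nonzero(bg_mask)):
        k = kernel_fn(depth[i, j])
        r = k.shape[0] // 2
        if r == 0:
            continue                      # 1x1 kernel: pixel unchanged
        y0, y1 = max(i - r, 0), min(i + r + 1, h)
        x0, x1 = max(j - r, 0), min(j + r + 1, w)
        win = img[y0:y1, x0:x1].astype(np.float64)
        # Clip the kernel to the in-image part of the window, then renormalize.
        kk = k[r - (i - y0):r + (y1 - i), r - (j - x0):r + (x1 - j)]
        out[i, j] = (win * kk).sum() / kk.sum()
    return out
```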
The present invention further provides a storage medium, wherein the storage medium stores a photographing background blurring program, and the photographing background blurring program implements the steps of the photographing background blurring method when executed by a processor.
In summary, the present invention provides a photographing background blurring method, a mobile terminal, and a storage medium. The method comprises: synthesizing a first picture and a second picture into a depth map; preprocessing the depth map and segmenting the preprocessed depth map to obtain a foreground mask and a background mask; determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask, respectively; and determining the size of a blur kernel according to the foreground depth value parameter and the background depth value parameter, and blurring the background of the first picture accordingly. The method synthesizes two pictures captured from different angles into a depth map, preprocesses the depth map to make it uniform, quickly segments the preprocessed depth map into a rough foreground and background, computes depth value statistics for the foreground and the background separately, and selects the blur kernel size for pixel-by-pixel background blurring according to the depth map and these statistics, thereby improving the background blurring effect.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (12)

1. A method for blurring a photographing background, the method comprising:
synthesizing the first picture and the second picture into a depth map;
preprocessing the depth map, and segmenting the preprocessed depth map to obtain a foreground mask and a background mask;
determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively;
and determining the size of a blur kernel for blurring according to the foreground depth value parameter and the background depth value parameter, and blurring the background of the first picture according to the size of the blur kernel.
2. The photographing background blurring method according to claim 1, wherein the first picture is a picture taken by a first camera group of the terminal device, the first camera group including one or more cameras;
the second picture is a picture shot by a second camera group of the terminal equipment, and the second camera group comprises one or more cameras;
at least one of the cameras of the first camera group is different from the cameras of the second camera group.
3. The method of blurring photographing background according to claim 1, wherein the preprocessing comprises:
performing edge-preserving filtering processing on the depth map;
and performing median filtering operation on the depth map subjected to edge-preserving filtering processing.
4. The photographing background blurring method according to claim 3, wherein the segmenting the preprocessed depth map to obtain a foreground mask and a background mask specifically comprises:
performing segmentation processing on the depth map subjected to the median filtering operation;
and obtaining the foreground mask and the background mask according to a preset segmentation threshold.
5. The photographing background blurring method according to claim 4, wherein the obtaining the foreground mask and the background mask according to a preset segmentation threshold specifically comprises:
classifying pixel points in the depth map whose pixel values are greater than the segmentation threshold as the foreground mask;
and classifying pixel points in the depth map whose pixel values are less than or equal to the segmentation threshold as the background mask.
6. The photographic background blurring method according to any one of claims 1 to 5, wherein the determining a foreground depth value parameter and a background depth value parameter according to the foreground mask and the background mask respectively specifically comprises:
and respectively computing, according to the foreground mask and the background mask, the mean and standard deviation of the foreground depth values, and the maximum and minimum of the background depth values.
7. The method of claim 6, wherein the determining a blur kernel size for blurring according to the foreground depth value parameter and the background depth value parameter comprises:
determining the blur kernel of the foreground to be a Gaussian kernel, wherein the blurring radius Fg_r of the current pixel is:
Fg_r=Fg_level*|depthVal-Fg_mean|/(Fg_std+1);
wherein Fg_level is the number of blurring levels, depthVal is the depth value of the current pixel, Fg_mean is the mean of the foreground depth values, and Fg_std is the standard deviation of the foreground depth values;
the size of the blur kernel is (2r+1)×(2r+1), where r = Fg_r, and each weight is calculated as:
Fg_k=exp(-Fg_r*dist);
wherein dist is the Euclidean distance from the kernel center (r, r) to the neighborhood point in question;
determining the blur kernel of the background to be a defocus blur kernel, wherein the blurring radius Bg_r of the current pixel is:
Bg_r=Bg_level*(depthVal-Bg_min)/(Bg_max-Bg_min+1);
wherein Bg_level is the number of blurring levels, Bg_max is the maximum of the background depth values, and Bg_min is the minimum of the background depth values;
the size of the blur kernel is (2r+1)×(2r+1), where r = Bg_r, and each weight is calculated as:
Bg_k=a*dist+b;
wherein a and b are weight coefficients with 0 < a < 1 and b > 1.
8. The method of claim 7, wherein Fg_level is 2, Bg_level is 11, a is 0.1, and b is 10.
9. The method of claim 1 or 7, wherein the background blurring is a pixel-by-pixel background blurring.
10. A mobile terminal, characterized in that the mobile terminal comprises: a memory, a processor and a photo context blurring program stored on the memory and executable on the processor, the photo context blurring program when executed by the processor implementing the steps of the photo context blurring method as claimed in any one of claims 1 to 9.
11. The mobile terminal of claim 10, wherein the mobile terminal comprises a first camera group and a second camera group;
the first camera group and the second camera group are used for acquiring a first picture and a second picture which are shot at different angles;
the first picture is a picture obtained by shooting by the first camera group, and the first camera group comprises one or more cameras;
the second picture is a picture obtained by shooting by the second camera group, and the second camera group comprises one or more cameras;
at least one of the cameras of the first camera group is different from the cameras of the second camera group.
12. A storage medium storing a photographing background blurring program which, when executed by a processor, implements the steps of the photographing background blurring method according to any one of claims 1 to 9.
CN202010085333.4A 2019-12-30 2020-02-10 Photographing background blurring method, mobile terminal and storage medium Active CN113256482B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010085333.4A CN113256482B (en) 2020-02-10 2020-02-10 Photographing background blurring method, mobile terminal and storage medium
PCT/CN2020/128657 WO2021135676A1 (en) 2019-12-30 2020-11-13 Photographing background blurring method, mobile terminal, and storage medium


Publications (2)

Publication Number Publication Date
CN113256482A true CN113256482A (en) 2021-08-13
CN113256482B CN113256482B (en) 2023-05-12

Family

ID=77219491



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103957397A (en) * 2014-04-02 2014-07-30 宁波大学 Method for achieving up-sampling of low-resolution depth image based on image features
CN104966266A (en) * 2015-06-04 2015-10-07 福建天晴数码有限公司 Method and system to automatically blur body part
CN106530241A (en) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus
CN106952222A (en) * 2017-03-17 2017-07-14 成都通甲优博科技有限责任公司 A kind of interactive image weakening method and device


Non-Patent Citations (1)

Huadong Sun et al., "Depth from Defocus and Blur for Single Image"



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant