CN111311482B - Background blurring method and device, terminal equipment and storage medium - Google Patents

Background blurring method and device, terminal equipment and storage medium

Info

Publication number
CN111311482B
Authority
CN
China
Prior art keywords
image
foreground
blurring
depth
background
Prior art date
Legal status
Active
Application number
CN201811516403.6A
Other languages
Chinese (zh)
Other versions
CN111311482A (en)
Inventor
樊顺利
Current Assignee
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd
Priority to CN201811516403.6A
Publication of CN111311482A
Application granted
Publication of CN111311482B
Legal status: Active



Classifications

    • G06T 3/04: Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/136: Image analysis; segmentation or edge detection involving thresholding
    • G06T 7/162: Image analysis; segmentation or edge detection involving graph-based methods
    • G06T 7/194: Image analysis; segmentation involving foreground-background segmentation
    • H04N 23/80: Camera processing pipelines; components thereof
    • G06T 2207/20072: Graph-based image processing (indexing scheme for image analysis or enhancement)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application relate to the technical field of image processing and disclose a background blurring method and device, a terminal device, and a storage medium. The method comprises the following steps: acquiring an original image captured by a first camera; calculating a depth of field image foreground segmentation threshold according to a focus point selected by a user, and performing image segmentation on a depth of field image obtained in advance by double-shot estimation according to the threshold to obtain a binarized first foreground image; taking the first foreground image as an initial mask image and performing foreground segmentation on the original image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image; determining a transition region between the foreground region and the background region of the original image according to the second foreground image; and blurring the background region and the transition region of the original image according to the second foreground image to obtain a background blurring image. The embodiments of the present application can improve the accuracy of foreground segmentation.

Description

Background blurring method and device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a background blurring method, apparatus, terminal device, and computer-readable storage medium.
Background
With the development and progress of science and technology, terminals with dual cameras are becoming increasingly popular.
At present, background blurring based on two cameras has gradually become a standard feature of dual-camera intelligent terminals, for example the background blurring function of a dual-camera phone. Dual-camera background blurring usually estimates a depth of field image from the two cameras, performs foreground segmentation on the depth of field image to determine the foreground region, and then applies blurring of different strengths to the pixels of the background region and the transition region to obtain a blurred image.
However, the dual-camera depth estimation algorithm has inherent limitations; for example, when the texture features of an object are not significant, the estimated depth map contains errors. Segmenting the foreground of a picture solely from the depth map therefore inevitably introduces deviations, such as incomplete segmentation of foreground objects or over-segmentation of the foreground.
Disclosure of Invention
In view of this, embodiments of the present application provide a background blurring method, an apparatus, a terminal device, and a computer-readable storage medium, so as to solve the problem in the prior art of low accuracy when performing foreground segmentation based on a depth map estimated from two cameras.
A first aspect of an embodiment of the present application provides a background blurring method, which is applied to an intelligent terminal that includes at least a first camera and a second camera, where the background blurring method includes:
acquiring an original image shot by the first camera;
calculating a depth of field image foreground segmentation threshold according to a focus selected by a user, and performing image segmentation on a depth of field image obtained in advance based on double-shot estimation according to the depth of field image foreground segmentation threshold to obtain a first binarized foreground image;
taking the first foreground image as an initial mask image, and performing foreground segmentation on the original image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image;
determining a transition area between a foreground area and a background area of the original image according to the second foreground image;
and blurring the background area and the transition area of the original image according to the second foreground image to obtain a background blurring image.
With reference to the first aspect, in a first possible implementation, the blurring, according to the second foreground image, the background area and the transition area of the original image to obtain a background blurring image includes:
performing background blurring on the original image once in a preset background blurring manner to obtain a blurred image;
performing distance transformation on the second foreground image to obtain a distance transform map;
calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth map and the distance transformation map, wherein the transition region fusion coefficient is not equal to the background region fusion coefficient;
and fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain background blurring images with different blurring strengths.
With reference to the first aspect, in a second possible implementation, the fusing the blur image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain background blurring images with different blurring strengths includes:
fusing the blurred image and the original image through the formula B(i, j) = G(i, j) × r(i, j) + S(i, j) × (1 - r(i, j)) to obtain the background blurring images with different blurring strengths;
wherein B(i, j) represents the background blurring map, G(i, j) represents the blurred image, S(i, j) represents the original image, r(i, j) represents the transition region fusion coefficient or the background region fusion coefficient, and i, j respectively represent the coordinate position of a pixel point.
With reference to the first aspect, in a third possible implementation, the calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth map and the distance transform map includes:
calculating the transition region fusion coefficient from the depth map and the distance transform map through the formula r_t(i, j) = r_depth(i, j) * 0.5 + r_dis(i, j) * 0.5;
calculating the background region fusion coefficient from the depth map and the focus point through the formula r_b(i, j) = r_depth(i, j) * 0.5 + r_focus(i, j) * 0.5;
wherein r_t(i, j) represents the transition region fusion coefficient, r_b(i, j) represents the background region fusion coefficient, r_depth(i, j) represents a coefficient calculated based on the depth map, r_dis(i, j) represents a coefficient calculated based on the distance transform map, r_focus(i, j) represents a coefficient calculated based on the focus position, and i, j respectively represent the coordinate position of a pixel point;
r_depth(i, j) = abs(p_depth(i, j) - m_focus_depth) / m_focus_depth, where p_depth(i, j) is the pixel value of the depth map and m_focus_depth is the focusing depth average determined based on the focus point selected by the user;
r_dis(i, j) = p_dis(i, j) / 255, where p_dis(i, j) represents the pixel value of the distance transform map; r_focus(i, j) = d(c_(i,j), c_focus) / max(w, h), where d(c_(i,j), c_focus) represents the Euclidean distance between the current pixel and the focus point, and w, h are the width and height of the input image.
With reference to the first aspect, in a fourth possible implementation, the performing background blurring on the original image once by using a preset background blurring manner to obtain a blurred image includes:
reducing the original image to an image of a preset size;
performing one Gaussian smoothing on the image of the preset size to obtain a blurred small image;
and restoring the blurred small image to the size of the original image to obtain the blurred image.
With reference to the first aspect, in a fifth possible implementation, the determining, according to the second foreground map, a transition region between a foreground region and a background region of the original image includes:
dilating and eroding the second foreground map respectively to obtain a dilation map and an erosion map;
calculating a first difference map between the second foreground map and the dilation map;
calculating a second difference map between the second foreground map and the erosion map;
and taking a banded region formed by the first difference map and the second difference map as the transition region between the foreground region and the background region.
With reference to the first aspect, in a sixth possible implementation, the calculating a depth map foreground segmentation threshold according to a focus point selected by a user includes:
forming a first rectangular area centered on the focus point selected by the user with a preset distance as the side length, and dividing the first rectangular area into a preset number of second rectangular areas;
respectively calculating the depth average value of the first rectangular area and each second rectangular area;
taking the maximum value in the depth average values as a focusing depth average value;
and multiplying the focusing depth average value by a preset multiple to be used as the depth-of-field map foreground segmentation threshold.
A second aspect of the embodiments of the present application provides a background blurring device, integrated in an intelligent terminal including at least a first camera and a second camera, the background blurring device including:
the acquisition module is used for acquiring an original image shot by the first camera;
the first segmentation module is used for calculating a depth of field image foreground segmentation threshold according to a focus point selected by a user, and carrying out image segmentation on a depth of field image obtained in advance based on double-shot estimation according to the depth of field image foreground segmentation threshold to obtain a first binarized foreground image;
the second segmentation module is used for carrying out foreground segmentation on the original image by taking the first foreground image as an initial mask image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image;
the determining module is used for determining a transition area between a foreground area and a background area of the original image according to the second foreground image;
and the blurring module is used for blurring the background area and the transition area of the original image according to the second foreground image to obtain a background blurring image.
A third aspect of embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
according to the method and the device, the first foreground image obtained by segmentation based on the depth of field image is used as the initial mask image, the original image is subjected to auxiliary segmentation by using the image segmentation algorithm and the color segmentation algorithm, the foreground image is obtained, and the accuracy of foreground segmentation is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic block diagram of a flow of a background blurring method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of step S104 according to an embodiment of the present disclosure;
fig. 3 is another schematic flow chart of a background blurring method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of step S305 according to an embodiment of the present application;
fig. 5 is a schematic specific flowchart of step S302 according to an embodiment of the present application;
fig. 6 is a schematic diagram of an original image provided in an embodiment of the present application;
FIG. 7 is a graphical illustration of depth of field provided by an embodiment of the present application;
FIG. 8 is a diagram illustrating foreground segmentation provided by an embodiment of the present application;
FIG. 9 is a graph of fusion coefficients provided by an embodiment of the present application;
FIG. 10 is a background blurring illustration provided by an embodiment of the present application;
fig. 11 is a schematic block diagram illustrating a structure of a background blurring apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical means described in the present application, the following description will be given by way of specific examples.
Example one
The background blurring method provided by the embodiments of the present application can be applied to an intelligent terminal that includes at least a first camera and a second camera, such as a smartphone or a tablet computer. The first camera is used to capture a color image, and the second camera cooperates with the first camera to perform dual-camera depth estimation. In a specific application, the first camera can serve as the main camera and the second camera as the auxiliary camera. In general, the first camera captures a color image, while the second camera may capture a color image, an infrared image, or a grayscale image; that is, the second camera may be a visible-light camera module, an infrared camera module, or another type of camera module, which is not limited here. Dual-camera depth estimation is then performed on the images captured by the first camera and the second camera.
This embodiment will describe a specific flow of the background blurring method with reference to fig. 1.
Referring to fig. 1, a schematic flow chart of a background blurring method according to an embodiment of the present disclosure is shown, where the background blurring method includes the following steps:
and step S101, acquiring an original image shot by a first camera.
Step S102, calculating a depth map foreground segmentation threshold according to a focus point selected by a user, and carrying out image segmentation on a depth map obtained in advance based on double-shot estimation according to the depth map foreground segmentation threshold to obtain a first binary foreground map.
It can be understood that the smart terminal often presents a picture taken through the first camera, that is, the main camera, and the user may select the focus of the picture when taking the picture, and the selecting manner may be, for example, touching the corresponding picture position on the display screen with a hand, or other manners, which are not limited herein.
After the user selects the focus, the intelligent terminal can calculate the depth value of the corresponding pixel point according to the focus, and then obtains the foreground segmentation threshold of the depth map. The depth map foreground segmentation threshold is used for performing foreground and background segmentation on the depth map, and the depth map is obtained based on double-shot estimation.
The dual-camera depth map estimation algorithm may be any existing depth estimation algorithm. For example, it may be an algorithm that estimates depth of field from the geometric relationship between the two cameras, whose principle is similar to binocular vision: when the main camera and the auxiliary camera of a dual-camera intelligent terminal photograph the same object, images of the object from different angles are obtained; after calibration based on the corresponding geometric relationship between these images, the intelligent terminal can obtain the corresponding depth of field map through the double-shot estimation algorithm. It should be noted that the scene in the depth map obtained by double-shot estimation is identical to the scene in the original image and may differ only in shooting angle.
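The embodiment leaves the choice of depth estimation algorithm open. Purely as an illustrative sketch, and not something the text prescribes, a disparity-based depth-of-field map could be produced from rectified grayscale images of the two cameras with OpenCV's semi-global matcher; the focal length and baseline values below are placeholder assumptions.

```python
# Illustrative only: one possible dual-camera depth estimation, not the
# embodiment's required algorithm. Assumes rectified 8-bit grayscale inputs.
import cv2
import numpy as np

def estimate_depth_map(main_gray, second_gray):
    # Semi-global block matching yields a disparity map (closer = larger disparity).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(main_gray, second_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1                 # avoid division by zero
    # Depth is inversely proportional to disparity: depth = f * B / disparity,
    # where f (focal length in pixels) and B (baseline) come from calibration.
    f, baseline = 700.0, 0.02                       # assumed calibration values
    depth = f * baseline / disparity
    # Normalize to an 8-bit depth-of-field map, as used later for thresholding.
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```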
After the foreground segmentation threshold is calculated, image segmentation can be performed on the depth map according to the threshold to obtain a binarized foreground image. In a specific application, the depth of field value of each pixel point may be compared with the foreground segmentation threshold, when the depth of field value is greater than (or less than) the threshold, the pixel value of the pixel point is set to 0 (or 255), and when the depth of field value is less than (or greater than) the threshold, the pixel value of the pixel point is set to 255 (or 0), so that a binarized image with a foreground area of white (or black) and a background area of black (or white) may be obtained.
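As a minimal sketch of this binarization, assuming OpenCV and the convention that larger depth-map values correspond to the foreground (the opposite convention described above simply swaps 0 and 255); the function name is illustrative.

```python
# Sketch of the step S102 binarization of the depth-of-field map.
import cv2

def binarize_depth_map(depth_map, fg_threshold):
    # depth_map: uint8 depth-of-field map estimated from the two cameras.
    # Pixels above the foreground segmentation threshold become white (255).
    _, first_foreground = cv2.threshold(depth_map, fg_threshold, 255, cv2.THRESH_BINARY)
    return first_foreground  # binarized first foreground map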
In some cases, after the depth-of-field map is subjected to binary segmentation according to the foreground segmentation threshold, the focus coordinate of the binarized foreground map can be updated and recorded as c_focus.
And S103, taking the first foreground image as an initial mask image, and performing foreground segmentation on the original image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image.
It should be noted that the preset image segmentation algorithm may be, but is not limited to, the GrabCut image segmentation algorithm, and the preset color segmentation algorithm may be, but is not limited to, the k-means color segmentation algorithm.
Using the first foreground map as the initial mask to finely segment the foreground of the original image makes the foreground segmentation more accurate, avoids situations such as incomplete segmentation or over-segmentation of the foreground, and thus optimizes the foreground segmentation.
In some cases, to improve segmentation efficiency, the foreground segmentation may be optimized on a downscaled image and then restored to the original size.
The process of obtaining the first foreground map by image segmentation may be regarded as a relatively coarse segmentation process, and the process of performing foreground segmentation on the original image by using the first foreground map as the initial mask image may be regarded as a relatively fine segmentation process. In the embodiment, the foreground segmentation is performed twice, so that the accuracy of the foreground segmentation is higher.
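A hedged sketch of this finer segmentation, assuming the GrabCut algorithm mentioned above is initialized with the first foreground map as the mask; exactly how the k-means color segmentation is combined is an implementation choice not fully specified here, and the function name is illustrative.

```python
# Sketch of step S103: refine the depth-based foreground with GrabCut.
import cv2
import numpy as np

def refine_foreground(original_bgr, first_foreground):
    # Seed GrabCut from the binarized first foreground map.
    mask = np.where(first_foreground > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GrabCut model buffers
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(original_bgr, mask, None, bgd_model, fgd_model,
                3, cv2.GC_INIT_WITH_MASK)
    # Pixels labelled (probable) foreground form the second foreground map.
    second_foreground = np.where(
        (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return second_foreground
```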
And step S104, determining a transition area between the foreground area and the background area of the original image according to the second foreground image.
It is understood that the transition region of an image refers to the region between its foreground region and its background region. There are many ways to determine the transition region. In a specific application, dilation and erosion can be performed on the second foreground map to obtain a dilation map and an erosion map, and the region formed by the difference map between the second foreground map and the dilation map and the difference map between the second foreground map and the erosion map is taken as the transition region. Of course, other methods can also determine which regions of the picture belong to the transition region. Optionally, referring to fig. 2, which is a specific flowchart of step S104 provided in this embodiment of the present application, in some embodiments of the present application, step S104, that is, determining the transition region between the foreground region and the background region of the original image according to the second foreground map, may include:
and step S201, performing expansion and corrosion on the second foreground image respectively to obtain an expansion image and a corrosion image.
And S202, calculating a first difference value map of the second foreground map and the expansion map.
And step S203, calculating a second difference value map of the second foreground map and the corrosion map.
And step S204, taking a banded region formed by the first difference map and the second difference map as a transition region between the foreground region and the background region.
The transition region determined here can be used as a label as a basis for determining which regions in the image belong to the transition region when the transition region fusion of the image is performed subsequently.
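A minimal sketch of steps S201 to S204, assuming OpenCV morphology; the structuring-element size is an assumption and controls the width of the transition band.

```python
# Sketch of steps S201-S204: transition band from dilation/erosion differences.
import cv2

def transition_region(second_foreground, kernel_size=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    dilated = cv2.dilate(second_foreground, kernel)
    eroded = cv2.erode(second_foreground, kernel)
    diff_dilate = cv2.absdiff(dilated, second_foreground)   # first difference map
    diff_erode = cv2.absdiff(second_foreground, eroded)     # second difference map
    # The union of the two difference maps forms the banded transition region.
    return cv2.bitwise_or(diff_dilate, diff_erode)
```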
And step S105, blurring the background area and the transition area of the original image according to the second foreground image to obtain a background blurring image.
After the second foreground image is obtained by using the optimized foreground segmentation method, background blurring may be performed according to an existing general background blurring method, for example, different blurring strength values may be respectively set for depth distances between each pixel point in the background area and the transition area and the focus point selected by the user according to the second foreground image and the foreground area and the background area in the determined image, so as to perform corresponding background blurring on different pixel points, and obtain background blurring images with different blurring strengths. Namely, different blurring strengths are designed according to different depths of field, and a background blurring process with different blurring strengths is obtained by performing blurring processing for multiple times.
However, this background blurring method requires designing different blurring strengths for different depths of field, which is relatively time-consuming and cannot blur the background quickly and effectively. To improve efficiency, the original image may instead be background-blurred only once to obtain a blurred image; a background region fusion coefficient and a transition region fusion coefficient are then calculated from the second foreground map and the depth map, and the original image and the blurred image are fused with these different coefficients to obtain background blurring maps with different blurring strengths. In this way, background blurring is performed only once, different blurring strengths do not need to be set for different depths of field, and the process is fast and efficient.
It is to be understood that the process of background blurring according to the second foreground image is not limited herein, as long as the purpose of background blurring can be achieved.
In the embodiment, the method uses the first foreground image obtained by segmentation based on the depth map as the initial mask image, and uses the image segmentation algorithm and the color segmentation algorithm to perform auxiliary segmentation on the original image to obtain the foreground image, so that the accuracy of foreground segmentation is improved.
Example two
There are many methods for performing background blurring according to the second foreground image, wherein a foreground region and a background region in the image can be determined according to the second foreground image, and then different blurring strength values are respectively set for depth distances between each pixel point in the background region and the transition region and the focus point selected by the user, so as to perform corresponding background blurring on different pixel points, and obtain background blurring images with different blurring strengths. Namely, different blurring strengths are designed according to different depths of field, and a background blurring process with different blurring strengths is obtained by performing blurring processing for multiple times. The process of this method is well known to those skilled in the art and will not be described further herein.
In addition, to improve background blurring efficiency, the original image may be background-blurred once to obtain a blurred image; a background region fusion coefficient and a transition region fusion coefficient are then calculated from the second foreground map and the depth map, and the original image and the blurred image are fused with these different coefficients to obtain background blurring maps with different blurring strengths. Background blurring is thus performed only once, different blurring strengths do not need to be set for different depths of field, and the process is fast and efficient. This embodiment describes this background blurring process.
Referring to fig. 3, another flow chart of a background blurring method according to an embodiment of the present application is shown, where the method includes the following steps:
and S301, acquiring an original image shot by the first camera.
Step S302, calculating a depth map foreground segmentation threshold according to the focus selected by the user, and performing image segmentation on a depth map obtained in advance based on double-shot estimation according to the depth map foreground segmentation threshold to obtain a first binarized foreground map.
And S303, taking the first foreground image as an initial mask image, and performing foreground segmentation on the original image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image.
And step S304, determining a transition area between the foreground area and the background area of the original image according to the second foreground image.
It should be understood that steps S301 to S304 are the same as steps S101 to S104 of the first embodiment, and for a detailed description, reference is made to corresponding contents of the first embodiment, which is not repeated herein.
Step S305, performing background blurring on the original image once in a preset background blurring manner to obtain a blurred image.
It should be noted that the preset background blurring manner may include, but is not limited to, frequency-domain filtering, mean filtering, Gaussian smoothing, and the like. In a specific application, because of the separability of the Gaussian function, performing one Gaussian smoothing on the original image can effectively accelerate the blurring process, that is, further reduce the time consumed by background blurring. Of course, other background blurring methods can also achieve the purpose of the embodiments of the present application.
In the background blurring process in the prior art, different blurring strengths are generally required to be designed according to different depths of field to obtain background blurring diagrams with different blurring strengths, and the process needs to perform blurring for multiple times, which is time-consuming. In the embodiment, the background blurring is performed only once on the original image, so that the time consumption is obviously reduced.
It should be noted that the execution order of this step only needs to be before step S308, that is, this step may be any step before step S308.
Step S306, performing distance transformation on the second foreground map to obtain a distance transform map.
It can be understood that the binarized foreground map includes a transition region, and when performing distance transformation, the transition region in the foreground map is also subjected to corresponding distance transformation.
And S307, calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth map and the distance transformation map, wherein the transition region fusion coefficient is not equal to the background region fusion coefficient.
It should be noted that the transition region fusion coefficient may be used to fuse the transition regions of the two images, and the background region fusion coefficient may be used to fuse the background regions of the two images. The fusion coefficient of the transition region is not equal to that of the background region, so that the blurring strengths of the background region and the transition region are not consistent, and the background region and the transition region of the background blurring image obtained by fusion have different blurring strengths.
In some embodiments of the present application, the specific process of calculating the transition region fusion coefficient and the background region fusion coefficient according to the depth map and the distance transform map may include: calculating the transition region fusion coefficient from the depth map and the distance transform map through the formula r_t(i, j) = r_depth(i, j) * 0.5 + r_dis(i, j) * 0.5; and calculating the background region fusion coefficient from the depth map and the focus point through the formula r_b(i, j) = r_depth(i, j) * 0.5 + r_focus(i, j) * 0.5; wherein r_t(i, j) represents the transition region fusion coefficient, r_b(i, j) represents the background region fusion coefficient, r_depth(i, j) represents a coefficient calculated based on the depth map, r_dis(i, j) represents a coefficient calculated based on the distance transform map, r_focus(i, j) represents a coefficient calculated based on the focus position, and i, j respectively represent the coordinate position of a pixel point.
In a specific application, r_depth(i, j) = abs(p_depth(i, j) - m_focus_depth) / m_focus_depth, where p_depth(i, j) is the pixel value of the depth map and m_focus_depth is the focusing depth average determined based on the focus point selected by the user; r_dis(i, j) = p_dis(i, j) / 255, where p_dis(i, j) represents the pixel value of the distance transform map; and r_focus(i, j) = d(c_(i,j), c_focus) / max(w, h), where d(c_(i,j), c_focus) represents the Euclidean distance between the current pixel and the focus point, and w, h are the width and height of the input image.
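A sketch of how the distance transform (step S306) and the two fusion coefficient maps above might be computed with OpenCV and NumPy; normalizing the distance map to 0-255 and clipping the coefficients to [0, 1] are added safeguards, not requirements stated in the text, and the function name is illustrative.

```python
# Sketch of steps S306-S307: distance transform and fusion coefficient maps.
import cv2
import numpy as np

def fusion_coefficients(depth_map, second_foreground, focus_xy, m_focus_depth):
    h, w = depth_map.shape
    depth = depth_map.astype(np.float32)

    # Distance transform of the binarized second foreground map (step S306).
    dist = cv2.distanceTransform(second_foreground, cv2.DIST_L2, 5)
    dist = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX)

    r_depth = np.abs(depth - m_focus_depth) / max(m_focus_depth, 1e-6)
    r_dis = dist / 255.0

    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    fx, fy = focus_xy                                  # focus point c_focus (x, y)
    r_focus = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2) / max(w, h)

    r_t = np.clip(0.5 * r_depth + 0.5 * r_dis, 0.0, 1.0)    # transition region coefficient
    r_b = np.clip(0.5 * r_depth + 0.5 * r_focus, 0.0, 1.0)  # background region coefficient
    return r_t, r_b
```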
And S308, fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain background blurring images with different blurring strengths.
Specifically, the transition region and the background region of the image are fused by using the transition region fusion coefficient and the background region fusion coefficient, respectively, and the foreground region of the background blurring image can be directly replaced by the corresponding foreground region of the original image.
In some embodiments of the present application, the specific process of fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain background blurring maps with different blurring strengths may include: fusing the blurred image and the original image through the formula B(i, j) = G(i, j) × r(i, j) + S(i, j) × (1 - r(i, j)) to obtain background blurring maps with different blurring strengths; wherein B(i, j) represents the background blurring map, G(i, j) represents the blurred image, S(i, j) represents the original image, r(i, j) represents the transition region fusion coefficient or the background region fusion coefficient, and i, j respectively represent the coordinate position of a pixel point.
It is understood that r(i, j) representing either the transition region fusion coefficient or the background region fusion coefficient means that r(i, j) may be the transition region fusion coefficient, in which case the transition region is fused through the formula B(i, j) = G(i, j) × r(i, j) + S(i, j) × (1 - r(i, j)); or r(i, j) may be the background region fusion coefficient, in which case the background region is fused through the same formula. That is, r(i, j) takes the corresponding value according to the fusion being performed.
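A per-pixel sketch of this fusion, selecting r_t in the transition band, r_b in the background, and copying the original foreground unchanged, as described above; the function and mask names are illustrative.

```python
# Sketch of step S308: B = G*r + S*(1-r) with region-dependent coefficients.
import numpy as np

def fuse(original, blurred, r_t, r_b, foreground_mask, transition_mask):
    r = np.where(transition_mask > 0, r_t, r_b)     # choose coefficient per pixel
    r = r[..., None]                                # broadcast over colour channels
    fused = blurred.astype(np.float32) * r + original.astype(np.float32) * (1.0 - r)
    fused = fused.astype(np.uint8)
    fused[foreground_mask > 0] = original[foreground_mask > 0]  # keep foreground sharp
    return fused
```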
In this embodiment, the first foreground image obtained by segmentation based on the depth map is used as an initial mask image, and the original image is subjected to auxiliary segmentation by using an image segmentation algorithm and a color segmentation algorithm to obtain the foreground image, so that the accuracy of foreground segmentation is improved. In addition, the original image is subjected to background blurring once by using a preset background blurring mode to obtain a blurred image, and then the blurred image and the original image are fused according to different background region fusion coefficients and transition region fusion coefficients to obtain background blurring images with different blurring strengths. Namely, the background blurring is only carried out once, and then the blurred image and the original image are fused to obtain the background blurring image, so that different blurring strengths are not required to be designed for multiple times according to different depths of field, time consumption is low, and efficiency is high.
EXAMPLE III
Referring to fig. 4, a specific flowchart of the step S305 provided in this embodiment of the present application is shown, in some embodiments of the present application, the step S305 is to perform background blurring on the original image once in a preset background blurring manner, and a specific process of obtaining the blur image includes:
step S401 reduces the original image to an image of a predetermined size.
And S402, performing Gaussian smoothing on the image with the preset size once to obtain a blurring graph.
In step S403, the blurred image is obtained by reducing the size of the blurred image to the size of the original image.
The predetermined size may be any size as long as it is smaller than the size of the original image, and in general, the predetermined size is half of the original image, and the original image may be reduced by half.
Performing one Gaussian smoothing on the reduced image yields a blurred small image. The two-dimensional Gaussian function may be specifically defined as
G(x, y) = (1 / (2πσ²)) · exp(-(x² + y²) / (2σ²)),
where σ is the standard deviation of the Gaussian kernel. The blurred small image is then restored to the size of the original image to obtain the Gaussian blurred image. The separability of the Gaussian function can effectively accelerate the blurring process.
It is understood that the blurring here is performed by Gaussian smoothing; other blurring methods, such as mean filtering, follow a similar process and are not described again here.
It can be seen that, the blurring processing is performed after the image is reduced, so that the amount of calculation can be reduced, and the blurring efficiency can be further improved. Of course, in other embodiments, the original image may be directly subjected to the gaussian smoothing blurring processing.
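A compact sketch of steps S401 to S403, assuming OpenCV; the half-size scale factor, kernel size, and sigma are illustrative assumptions.

```python
# Sketch of steps S401-S403: blur on a reduced image, then restore its size.
import cv2

def blur_once(original_bgr, scale=0.5, ksize=21, sigma=8.0):
    small = cv2.resize(original_bgr, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_LINEAR)
    small_blur = cv2.GaussianBlur(small, (ksize, ksize), sigma)
    h, w = original_bgr.shape[:2]
    return cv2.resize(small_blur, (w, h), interpolation=cv2.INTER_LINEAR)
```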
Optionally, referring to fig. 5, a specific flowchart of step S302 provided in this embodiment of the present application is shown, in some embodiments of the present application, the step S302, that is, the specific process of calculating the depth map foreground segmentation threshold according to the focus point selected by the user may include:
step S501, forming a first rectangular area with the focus point selected by the user as the center and the preset distance as the side length, and dividing the first rectangular area into a preset number of second rectangular areas.
The preset distance may be set according to actual needs, and may be set to 80 pixels, for example.
The preset number can be set according to actual needs, but the larger the value is, the larger the calculation amount is, and when the value is too large, the calculation speed may be affected. Typically, the predetermined number may be 4. When the first rectangular region is divided into a plurality of second rectangular regions, the first rectangular region may be divided uniformly or non-uniformly, and a uniform division method is generally adopted.
And step S502, respectively calculating the depth average value of the first rectangular area and each second rectangular area.
In step S503, the maximum value of the depth average values is set as the in-focus depth average value.
And step S504, multiplying the focusing depth average value by a preset multiple to serve as a depth map foreground segmentation threshold, and updating the focusing point coordinate.
It should be noted that the preset multiple may be set according to an actual application scenario. For example, it may be set to 0.75 times.
For example, when the preset number is 4 and the preset multiple is 0.75, a large rectangular area is determined with the focus point as the center and a certain distance as the side length; the rectangular area is then evenly divided into 4 small rectangles, giving 5 rectangular areas in total including the large rectangle. The depth averages of these 5 rectangular areas are calculated respectively, and the maximum depth average is taken as the focusing depth average, recorded as m_focus_depth. The focusing depth average is then multiplied by 0.75 to serve as the depth map foreground segmentation threshold.
After the foreground segmentation threshold is calculated, foreground and background segmentation may be performed on the depth map. Meanwhile, the focus point coordinate can be updated and recorded as c_focus.
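A sketch of steps S501 to S504 using the example values in the text (an 80-pixel side length, 4 evenly divided sub-rectangles, and a 0.75 multiple); treating the preset distance as the full side length of the first rectangle is an interpretation, and the function name is illustrative.

```python
# Sketch of steps S501-S504: focusing depth average and segmentation threshold.
import numpy as np

def foreground_threshold(depth_map, focus_xy, side=80, multiple=0.75):
    h, w = depth_map.shape
    fx, fy = focus_xy
    half = side // 2
    x0, x1 = max(fx - half, 0), min(fx + half, w)
    y0, y1 = max(fy - half, 0), min(fy + half, h)
    big = depth_map[y0:y1, x0:x1].astype(np.float32)       # first rectangular area

    bh, bw = big.shape[0] // 2, big.shape[1] // 2
    regions = [big,                                         # the large rectangle itself
               big[:bh, :bw], big[:bh, bw:],                # 4 evenly divided sub-rectangles
               big[bh:, :bw], big[bh:, bw:]]
    m_focus_depth = max(float(r.mean()) for r in regions)   # focusing depth average
    return multiple * m_focus_depth, m_focus_depth
```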
It can be seen that the foreground image segmentation threshold is determined according to the focus selected by the user, so that the foreground image segmentation threshold can better meet the actual situation, and the coordinate of the focus can be updated.
To better describe the implementation of the embodiments of the present application, the following description will be made with reference to fig. 6 to 10. Fig. 6 is a schematic view of an original image provided in the embodiment of the present application, fig. 7 is a view illustrating a depth of field provided in the embodiment of the present application, fig. 8 is a view illustrating foreground segmentation provided in the embodiment of the present application, fig. 9 is a view illustrating a fusion coefficient provided in the embodiment of the present application, and fig. 10 is a view illustrating background blurring provided in the embodiment of the present application.
It can be seen that fig. 7, 8, 9, and 10 are the depth map, the foreground segmentation map, the fusion coefficient map, and the background blurring map of fig. 6, respectively. The depth-of-field map shown in fig. 7 can be obtained by performing double-shot estimation on fig. 6, and then the depth-of-field map is subjected to binarization segmentation according to the foreground map segmentation threshold value, so that the foreground segmentation map shown in fig. 8 can be obtained, where in fig. 8, a white area is a foreground and a black area is a background. According to the distance transformation map and the depth map, a fusion coefficient matrix can be calculated, and an image as shown in fig. 9 can be obtained after the fusion coefficient matrix is imaged, in fig. 9, a black area is a foreground area, a white area is a background area, and a gray area is a transition area. After the original image and the blurred image are fused, an image as shown in fig. 10 can be obtained, and the background region and the transition region in fig. 10 have different blurring strengths. In this way, by fusing the original image and the blurred image subjected to the once gaussian smoothing blurring according to different fusion coefficients, even if the background blurring is performed only once, the background blurring images with different blurring strengths can be obtained, which is less time-consuming and efficient.
In this embodiment, when blurring the background, the image is first reduced and then blurring is performed, so that the amount of computation can be reduced, and blurring efficiency can be further improved. The blurring processing process can be effectively accelerated by utilizing the separability of the Gaussian function. The foreground image segmentation threshold is determined according to the focus selected by the user, so that the foreground image segmentation threshold can better accord with the actual situation, and the coordinate of the focus can be updated.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example four
Referring to fig. 11, a schematic block diagram of a structure of a background blurring device provided in an embodiment of the present application, where the device may be integrated in an intelligent terminal including at least a first camera and a second camera, and the background blurring device includes:
an obtaining module 111, configured to obtain an original image captured by a first camera;
the first segmentation module 112 is configured to calculate a depth map foreground segmentation threshold according to a focus point selected by a user, and perform image segmentation on a depth map obtained in advance based on double-shot estimation according to the depth map foreground segmentation threshold to obtain a binarized first foreground map;
the second segmentation module 113 is configured to perform foreground segmentation on the original image by using the first foreground image as an initial mask image and using a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image;
a determining module 114, configured to determine a transition area between a foreground area and a background area of the original image according to the second foreground image;
and a blurring module 115, configured to blur a background region and a transition region of the original image according to the second foreground image, so as to obtain a background blurring image.
In one possible implementation, the blurring module may include:
the blurring unit is used for performing background blurring on the original image once by using a preset background blurring mode to obtain a blurred image;
the distance transformation unit is used for performing distance transformation on the second foreground map to obtain a distance transform map;
the fusion coefficient calculation unit is used for calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth map and the distance transformation map, wherein the transition region fusion coefficient is not equal to the background region fusion coefficient;
and the fusion unit is used for fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain background blurring maps with different blurring strengths.
In a possible implementation, the fusion unit may include:
a fusion subunit, configured to fuse the blurred image and the original image through the formula B(i, j) = G(i, j) × r(i, j) + S(i, j) × (1 - r(i, j)) to obtain background blurring maps with different blurring strengths;
wherein B(i, j) represents the background blurring map, G(i, j) represents the blurred image, S(i, j) represents the original image, r(i, j) represents the transition region fusion coefficient or the background region fusion coefficient, and i, j respectively represent the coordinate position of a pixel point.
In a possible implementation, the fusion coefficient calculating unit may include:
a first calculation subunit, configured to calculate the transition region fusion coefficient from the depth map and the distance transform map through the formula r_t(i, j) = r_depth(i, j) * 0.5 + r_dis(i, j) * 0.5;
a second calculation subunit, configured to calculate the background region fusion coefficient from the depth map and the focus point through the formula r_b(i, j) = r_depth(i, j) * 0.5 + r_focus(i, j) * 0.5;
wherein r_t(i, j) represents the transition region fusion coefficient, r_b(i, j) represents the background region fusion coefficient, r_depth(i, j) represents a coefficient calculated based on the depth map, r_dis(i, j) represents a coefficient calculated based on the distance transform map, r_focus(i, j) represents a coefficient calculated based on the focus position, and i, j respectively represent the coordinate position of a pixel point;
r_depth(i, j) = abs(p_depth(i, j) - m_focus_depth) / m_focus_depth, where p_depth(i, j) is the pixel value of the depth map and m_focus_depth is the focusing depth average determined based on the focus point selected by the user;
r_dis(i, j) = p_dis(i, j) / 255, where p_dis(i, j) represents the pixel value of the distance transform map; r_focus(i, j) = d(c_(i,j), c_focus) / max(w, h), where d(c_(i,j), c_focus) represents the Euclidean distance between the current pixel and the focus point, and w, h are the width and height of the input image.
In a possible implementation, the blurring unit may include:
a reduction subunit, configured to reduce the original image to an image of a preset size;
the Gaussian smoothing subunit is used for performing one Gaussian smoothing on the image of the preset size to obtain a blurred small image;
and the restoring subunit is used for restoring the blurred small image to the size of the original image to obtain the blurred image.
In one possible implementation, the determining module may include:
the dilation and erosion unit is used for respectively dilating and eroding the second foreground map to obtain a dilation map and an erosion map;
a first calculation unit for calculating a first difference map between the second foreground map and the dilation map;
a second calculation unit for calculating a second difference map between the second foreground map and the erosion map;
and a unit for taking a banded region formed by the first difference map and the second difference map as the transition region between the foreground region and the background region.
In a possible implementation, the first segmentation module may include:
the dividing unit is used for forming a first rectangular area by taking the focus point selected by the user as the center and taking the preset distance as the side length, and dividing the first rectangular area into a preset number of second rectangular areas;
a depth average value calculation unit for calculating the depth average values of the first rectangular region and each of the second rectangular regions, respectively;
a screening unit for taking a maximum value of the depth average values as a focusing depth average value;
and the threshold value calculating unit is used for multiplying the focusing depth average value by a preset multiple to be used as a depth map foreground segmentation threshold value.
It should be noted that the background blurring apparatuses provided in the embodiments of the present application correspond to the background blurring methods of the foregoing embodiments one to one, and for specific introduction, please refer to the above corresponding contents, which are not described herein again.
In the embodiment, the device uses the first foreground image obtained by segmentation based on the depth map as the initial mask image, and uses the image segmentation algorithm and the color segmentation algorithm to perform auxiliary segmentation on the original image to obtain the foreground image, so that the accuracy of foreground segmentation is improved.
EXAMPLE five
Fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device 12 of this embodiment includes: a processor 120, a memory 121, and a computer program 122 stored in the memory 121 and executable on the processor 120. The processor 120 implements the steps in the above-mentioned various embodiments of the background blurring method, such as the steps S101 to S105 shown in fig. 1, when executing the computer program 122. Alternatively, the processor 120, when executing the computer program 122, implements the functions of each module or unit in the above-mentioned device embodiments, such as the functions of the modules 111 to 115 shown in fig. 11.
Illustratively, the computer program 122 may be partitioned into one or more modules or units, which are stored in the memory 121 and executed by the processor 120 to accomplish the present application. The one or more modules or units may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program 122 in the terminal device 12. For example, the computer program 122 may be divided into an acquisition module, a first division module, a second division module, a determination module, and a blurring module, each module having the following specific functions:
the acquisition module is used for acquiring an original image shot by the first camera; the first segmentation module is used for calculating a depth map foreground segmentation threshold according to a focus point selected by a user, and carrying out image segmentation on a depth map obtained in advance based on double-shot estimation according to the depth map foreground segmentation threshold to obtain a first binarized foreground map;
the second segmentation module is used for performing foreground segmentation on the original image by using the first foreground image as an initial mask image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground image; the determining module is used for determining a transition area between a foreground area and a background area of the original image according to the second foreground image; and the blurring module is used for blurring the background area and the transition area of the original image according to the second foreground image to obtain a background blurring image.
The terminal device 12 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 120, a memory 121. Those skilled in the art will appreciate that fig. 12 is merely an example of a terminal device 12 and does not constitute a limitation of terminal device 12 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input output devices, network access devices, buses, etc.
The processor 120 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 121 may be an internal storage unit of the terminal device 12, such as a hard disk or an internal memory of the terminal device 12. The memory 121 may also be an external storage device of the terminal device 12, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the terminal device 12. Further, the memory 121 may also include both an internal storage unit and an external storage device of the terminal device 12. The memory 121 is used to store the computer program and other programs and data required by the terminal device. The memory 121 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus and the terminal device are merely illustrative, and for example, the division of the module or the unit is only one logical function division, and there may be another division in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules or units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A background blurring method, applied to an intelligent terminal comprising at least a first camera and a second camera, the method comprising the following steps:
acquiring an original image shot by the first camera;
calculating a depth-of-field map foreground segmentation threshold according to a focus point selected by a user, and performing image segmentation on a depth-of-field map obtained in advance based on dual-camera estimation according to the depth-of-field map foreground segmentation threshold to obtain a binarized first foreground map, wherein this segmentation is a relatively coarse segmentation;
taking the first foreground map as an initial mask image, and performing foreground segmentation on the original image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground map, wherein this segmentation is a relatively fine segmentation;
determining a transition area between a foreground area and a background area of the original image according to the second foreground map; and
blurring the background area and the transition area of the original image according to the second foreground map to obtain a background blurring map;
wherein the blurring the background area and the transition area of the original image according to the second foreground map to obtain a background blurring map includes:
performing background blurring on the original image once by using a preset background blurring manner to obtain a blurred image;
performing a distance transform on the second foreground map to obtain a distance transform map;
calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth-of-field map and the distance transform map, wherein the transition region fusion coefficient is not equal to the background region fusion coefficient; and
fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain the background blurring map with different blurring strengths.
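A minimal sketch of the distance transform step recited in claim 1, in Python/OpenCV. The claim fixes neither the direction of the transform nor its normalization; transforming the inverted mask and scaling the result to 0-255 are assumptions chosen so the map can feed the p_dis / 255 term of claim 3.

import cv2
import numpy as np

def distance_transform_map(second_fg):
    """second_fg: binary second foreground map, 255 = foreground, 0 = background."""
    # Distance of every background pixel to the nearest foreground pixel;
    # using the inverted mask makes the distance grow away from the subject.
    dist = cv2.distanceTransform(cv2.bitwise_not(second_fg), cv2.DIST_L2, 3)
    # Scale to 0-255 so p_dis / 255 stays within [0, 1].
    return cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)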
2. The background blurring method according to claim 1, wherein the fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain the background blurring map with different blurring strengths comprises:
fusing the blurred image and the original image through the formula B(i,j) = G(i,j) × r(i,j) + S(i,j) × (1 - r(i,j)) to obtain the background blurring map with different blurring strengths;
wherein B(i,j) represents the background blurring map, G(i,j) represents the blurred image, S(i,j) represents the original image, r(i,j) represents the transition region fusion coefficient or the background region fusion coefficient, and i, j respectively represent the coordinate position of a pixel point.
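The formula of claim 2 translates directly into NumPy; broadcasting the per-pixel coefficient across the color channels and working in floating point are implementation details assumed here.

import numpy as np

def fuse(original, blurred, r):
    """B(i,j) = G(i,j) * r(i,j) + S(i,j) * (1 - r(i,j)).

    original (S), blurred (G): images of identical shape, treated as float
    r: per-pixel fusion coefficient in [0, 1]; larger r means stronger blurring
    """
    if original.ndim == 3 and r.ndim == 2:
        r = r[..., None]                      # broadcast over color channels
    return blurred * r + original * (1.0 - r)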
3. The background blurring method according to claim 1, wherein the calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth-of-field map and the distance transform map comprises:
calculating the transition region fusion coefficient according to the depth-of-field map and the distance transform map through the formula r_t(i,j) = r_depth(i,j) × 0.5 + r_dis(i,j) × 0.5; and
calculating the background region fusion coefficient according to the depth-of-field map and the focus point through the formula r_b(i,j) = r_depth(i,j) × 0.5 + r_focus(i,j) × 0.5;
wherein r_t(i,j) represents the transition region fusion coefficient, r_b(i,j) represents the background region fusion coefficient, r_depth(i,j) represents a coefficient calculated based on the depth-of-field map, r_dis(i,j) represents a fusion coefficient calculated based on the distance transform map, r_focus(i,j) represents a coefficient calculated based on the focus point position, and i, j respectively represent the coordinate position of a pixel point;
r_depth(i,j) = abs(p_depth(i,j) - m_focus_depth) / m_focus_depth, where p_depth(i,j) is the pixel value of the depth-of-field map and m_focus_depth is the focus depth average value determined based on the focus point selected by the user;
r_dis(i,j) = p_dis(i,j) / 255, where p_dis(i,j) represents the pixel value of the distance transform map; and
r_focus(i,j) = d(c_(i,j), c_focus) / max(w, h), where d(c_(i,j), c_focus) represents the Euclidean distance between the current pixel and the focus point, and w, h are the width and height of the input image.
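The coefficient definitions of claim 3 can be sketched in NumPy as follows. The focus-point coordinates, the precomputed focus depth average value, and the final clipping to [0, 1] are assumptions added for the sketch; the claim itself does not bound the coefficients.

import numpy as np

def fusion_coefficients(depth, dist_map, focus_xy, focus_depth_mean):
    """Per-pixel transition (r_t) and background (r_b) fusion coefficients."""
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    fx, fy = focus_xy

    # r_depth: relative deviation of each pixel's depth from the focus depth
    r_depth = np.abs(depth - focus_depth_mean) / focus_depth_mean
    # r_dis: distance transform map scaled to [0, 1]
    r_dis = dist_map / 255.0
    # r_focus: Euclidean distance to the focus point, scaled by max(w, h)
    r_focus = np.sqrt((xx - fx) ** 2 + (yy - fy) ** 2) / max(w, h)

    r_t = 0.5 * r_depth + 0.5 * r_dis      # transition region coefficient
    r_b = 0.5 * r_depth + 0.5 * r_focus    # background region coefficient
    # Clipping is an added safeguard so the coefficients can be used directly in claim 2.
    return np.clip(r_t, 0.0, 1.0), np.clip(r_b, 0.0, 1.0)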
4. The method of any one of claims 1 to 3, wherein the performing background blurring on the original image once by using a preset background blurring manner to obtain a blurred image comprises:
reducing the original image to an image of a preset size;
performing Gaussian smoothing once on the image of the preset size to obtain a blurred map; and
scaling the blurred map back to the size of the original image to obtain the blurred image.
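One plausible reading of claim 4 in Python/OpenCV: shrink the image, smooth it once, then scale it back up, which approximates a large-radius blur at low cost. The scale factor and kernel size below are placeholders, not values taken from the patent.

import cv2

def quick_blur(original, scale=0.25, ksize=21):
    """Downscale, Gaussian-smooth once, then resize back to the original size."""
    h, w = original.shape[:2]
    small = cv2.resize(original, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small = cv2.GaussianBlur(small, (ksize, ksize), 0)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)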
5. The method of claim 1, wherein the determining a transition area between a foreground area and a background area of the original image according to the second foreground map comprises:
dilating and eroding the second foreground map respectively to obtain a dilation map and an erosion map;
calculating a first difference map between the second foreground map and the dilation map;
calculating a second difference map between the second foreground map and the erosion map; and
taking the band-shaped region formed by the first difference map and the second difference map as the transition area between the foreground area and the background area.
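The transition band of claim 5 can be sketched with standard morphology; the elliptical structuring element and its size are assumptions that control the band width.

import cv2

def transition_region(second_fg, ksize=15):
    """Band around the foreground boundary built from dilation and erosion."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    dilated = cv2.dilate(second_fg, kernel)
    eroded = cv2.erode(second_fg, kernel)
    outer = cv2.absdiff(dilated, second_fg)   # first difference map (outside the mask)
    inner = cv2.absdiff(second_fg, eroded)    # second difference map (inside the mask)
    return cv2.bitwise_or(outer, inner)       # band-shaped transition region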
6. The method of claim 1, wherein the calculating a depth-of-field map foreground segmentation threshold according to a focus point selected by a user comprises:
forming a first rectangular area centered on the focus point selected by the user, with a preset distance as the side length, and dividing the first rectangular area into a preset number of second rectangular areas;
respectively calculating the depth average value of the first rectangular area and of each second rectangular area;
taking the maximum of the depth average values as the focus depth average value; and
multiplying the focus depth average value by a preset multiple to obtain the depth-of-field map foreground segmentation threshold.
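A NumPy sketch of the threshold computation in claim 6. The window half-size, the 3 x 3 grid of second rectangular areas, and the multiple of 1.2 are placeholder values, since the claim only says they are preset.

import numpy as np

def foreground_depth_threshold(depth, focus_xy, half_size=32, grid=3, multiple=1.2):
    """Depth-map foreground segmentation threshold around the user's focus point."""
    fx, fy = focus_xy
    h, w = depth.shape
    x0, x1 = max(fx - half_size, 0), min(fx + half_size, w)
    y0, y1 = max(fy - half_size, 0), min(fy + half_size, h)
    window = depth[y0:y1, x0:x1]                       # first rectangular area

    means = [window.mean()]
    for rows in np.array_split(window, grid, axis=0):  # second rectangular areas
        for cell in np.array_split(rows, grid, axis=1):
            if cell.size:
                means.append(cell.mean())

    focus_depth_mean = max(means)                      # maximum of the depth averages
    return focus_depth_mean * multiple                 # average multiplied by the preset multiple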
7. A background blurring apparatus, integrated in an intelligent terminal comprising at least a first camera and a second camera, the background blurring apparatus comprising:
the acquisition module is used for acquiring an original image shot by the first camera;
the first segmentation module is used for calculating a depth-of-field map foreground segmentation threshold according to a focus point selected by a user, and performing image segmentation on a depth-of-field map obtained in advance based on dual-camera estimation according to the depth-of-field map foreground segmentation threshold to obtain a binarized first foreground map, wherein this segmentation is a relatively coarse segmentation;
the second segmentation module is used for performing foreground segmentation on the original image by using the first foreground map as an initial mask image through a preset image segmentation algorithm and a preset color segmentation algorithm to obtain a second foreground map, wherein this segmentation is a relatively fine segmentation;
the determining module is used for determining a transition area between a foreground area and a background area of the original image according to the second foreground map; and
the blurring module is used for blurring the background area and the transition area of the original image according to the second foreground map to obtain a background blurring map;
the blurring module comprises:
the blurring unit is used for performing background blurring on the original image once by using a preset background blurring manner to obtain a blurred image;
the distance transform unit is used for performing a distance transform on the second foreground map to obtain a distance transform map;
the fusion coefficient calculation unit is used for calculating a transition region fusion coefficient and a background region fusion coefficient according to the depth-of-field map and the distance transform map, wherein the transition region fusion coefficient is not equal to the background region fusion coefficient; and
the fusion unit is used for fusing the blurred image and the original image according to the transition region fusion coefficient and the background region fusion coefficient to obtain the background blurring map with different blurring strengths.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201811516403.6A 2018-12-12 2018-12-12 Background blurring method and device, terminal equipment and storage medium Active CN111311482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811516403.6A CN111311482B (en) 2018-12-12 2018-12-12 Background blurring method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811516403.6A CN111311482B (en) 2018-12-12 2018-12-12 Background blurring method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111311482A CN111311482A (en) 2020-06-19
CN111311482B true CN111311482B (en) 2023-04-07

Family

ID=71146659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811516403.6A Active CN111311482B (en) 2018-12-12 2018-12-12 Background blurring method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111311482B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938578B (en) * 2020-07-13 2024-07-30 武汉Tcl集团工业研究院有限公司 Image blurring method, storage medium and terminal equipment
CN113965663B (en) * 2020-07-21 2024-09-20 深圳Tcl新技术有限公司 Image quality optimization method, intelligent terminal and storage medium
CN114143442B (en) * 2020-09-03 2023-08-01 武汉Tcl集团工业研究院有限公司 Image blurring method, computer device, and computer-readable storage medium
CN113077481B (en) * 2021-03-29 2022-12-09 上海闻泰信息技术有限公司 Image processing method and device, computer equipment and storage medium
CN113610884B (en) * 2021-07-08 2024-08-02 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN113538270A (en) * 2021-07-09 2021-10-22 厦门亿联网络技术股份有限公司 Portrait background blurring method and device
CN114245011B (en) * 2021-12-10 2022-11-08 荣耀终端有限公司 Image processing method, user interface and electronic equipment
CN115499577B (en) * 2022-06-27 2024-04-30 华为技术有限公司 Image processing method and terminal equipment
CN117795284A (en) * 2022-07-29 2024-03-29 宁德时代新能源科技股份有限公司 Measuring method and measuring device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787930A (en) * 2016-02-17 2016-07-20 上海文广科技(集团)有限公司 Sharpness-based significance detection method and system for virtual images
CN106530241A (en) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus
CN106657782A (en) * 2016-12-21 2017-05-10 努比亚技术有限公司 Picture processing method and terminal

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013206601A1 (en) * 2013-06-28 2015-01-22 Canon Kabushiki Kaisha Variable blend width compositing
CN104778673B (en) * 2015-04-23 2018-11-09 上海师范大学 A kind of improved gauss hybrid models depth image enhancement method
CN107948519B (en) * 2017-11-30 2020-03-27 Oppo广东移动通信有限公司 Image processing method, device and equipment
CN107977940B (en) * 2017-11-30 2020-03-17 Oppo广东移动通信有限公司 Background blurring processing method, device and equipment
CN107945105B (en) * 2017-11-30 2021-05-25 Oppo广东移动通信有限公司 Background blurring processing method, device and equipment
CN108154514B (en) * 2017-12-06 2021-08-13 Oppo广东移动通信有限公司 Image processing method, device and equipment
CN108156378B (en) * 2017-12-27 2020-12-18 努比亚技术有限公司 Photographing method, mobile terminal and computer-readable storage medium
CN108259770B (en) * 2018-03-30 2020-06-02 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN108776800B (en) * 2018-06-05 2021-03-12 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108848367B (en) * 2018-07-26 2020-08-07 宁波视睿迪光电有限公司 Image processing method and device and mobile terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787930A (en) * 2016-02-17 2016-07-20 上海文广科技(集团)有限公司 Sharpness-based significance detection method and system for virtual images
CN106530241A (en) * 2016-10-31 2017-03-22 努比亚技术有限公司 Image blurring processing method and apparatus
CN106657782A (en) * 2016-12-21 2017-05-10 努比亚技术有限公司 Picture processing method and terminal

Also Published As

Publication number Publication date
CN111311482A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN111311482B (en) Background blurring method and device, terminal equipment and storage medium
CN109840881B (en) 3D special effect image generation method, device and equipment
CN109474780B (en) Method and device for image processing
US8873835B2 (en) Methods and apparatus for correcting disparity maps using statistical analysis on local neighborhoods
WO2018082185A1 (en) Image processing method and device
CN111368717B (en) Line-of-sight determination method, line-of-sight determination device, electronic apparatus, and computer-readable storage medium
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
EP2863362B1 (en) Method and apparatus for scene segmentation from focal stack images
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
Kil et al. A dehazing algorithm using dark channel prior and contrast enhancement
CN109214996B (en) Image processing method and device
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN111131688B (en) Image processing method and device and mobile terminal
CN111368587B (en) Scene detection method, device, terminal equipment and computer readable storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN111161299A (en) Image segmentation method, computer program, storage medium, and electronic device
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN116485645B (en) Image stitching method, device, equipment and storage medium
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN104504667B (en) image processing method and device
CN110363723B (en) Image processing method and device for improving image boundary effect
CN110910439B (en) Image resolution estimation method and device and terminal
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

CB02 Change of applicant information
GR01 Patent grant