CN116188284A - Image blurring method and device, computer readable storage medium and terminal - Google Patents


Info

Publication number
CN116188284A
CN116188284A
Authority
CN
China
Prior art keywords
size
image
coc
pixel point
background
Prior art date
Legal status
Pending
Application number
CN202211102808.1A
Other languages
Chinese (zh)
Inventor
杨兴宇 (Yang Xingyu)
姬弘桢 (Ji Hongzhen)
Current Assignee
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN202211102808.1A priority Critical patent/CN116188284A/en
Publication of CN116188284A publication Critical patent/CN116188284A/en
Pending legal-status Critical Current

Classifications

    • G06T5/70
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

An image blurring method and device, a computer readable storage medium and a terminal. The method comprises the following steps: downsampling a first-size original image to obtain a second-size original image; filtering the main body edge area of the first-size original image to obtain a first-size edge blurring image, and filtering the main body background area of the second-size original image to obtain a second-size background blurring image; upsampling the second-size background blurring image to obtain a first-size background blurring image; and performing pyramid fusion processing on the first-size background blurring image and the first-size edge blurring image to obtain a first-size target blurring image. The scheme can reduce computation cost, improve blurring efficiency, solve the loss of main body edge detail in image blurring, and improve the quality of the blurred image.

Description

Image blurring method and device, computer readable storage medium and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image blurring method and apparatus, a computer readable storage medium, and a terminal.
Background
With the development of mobile phone photography, people's photographing demands in different environments keep growing, and the portrait mode was born against this background. The portrait mode on a mobile phone, also called background blurring or dual-camera bokeh mode, keeps the designated person or object in the picture sharp while blurring the rest of the background, thereby simulating on a mobile phone the shallow depth-of-field effect of a professional camera's large aperture. In this way, the main part of the picture appears prominent and subjectively more aesthetically pleasing. Therefore, how to make the effect closer to the bokeh of a professional camera has become a research hotspot for mobile phone manufacturers.
At present, the mask image used by mainstream mobile phone portrait-mode blurring is mainly a depth map computed by a dual-camera stereo matching algorithm, and the original picture is filtered according to the information of the depth map. Specifically, when blurring a picture there are mainly two routes: filtering at the original size, or filtering on a small-size picture obtained by downsampling the original. Blurring on the original image preserves the edge details of the person well, but because the original image is large, filtering directly on it is computationally expensive and time-consuming. Blurring on the small image requires a subsequent upsampling step that obtains new pixel information through interpolation, so part of the image detail is lost; this is especially obvious at the edges of figures, and the resulting loss of edge detail affects the final blurring effect.
Therefore, an image blurring method is needed that can reduce computation cost, improve blurring efficiency, solve the problem of lost edge detail around the image main body, and improve the quality of the blurred image.
Disclosure of Invention
The invention addresses the technical problems of existing image blurring techniques: blurring directly at the original image size incurs a large filtering computation load and low efficiency, while blurring a small-size image obtained by downsampling the original loses edge details of the image main body and degrades the blurring effect.
In order to solve the above technical problems, an embodiment of the present invention provides an image blurring method, including the following steps: downsampling a first-size original image to obtain a second-size original image; filtering the main body edge area of the first-size original image to obtain a first-size edge blurring image, and filtering the main body background area of the second-size original image to obtain a second-size background blurring image; upsampling the second-size background blurring image to obtain a first-size background blurring image; and performing pyramid fusion processing on the first-size background blurring image and the first-size edge blurring image to obtain a first-size target blurring image.
Optionally, the filtering the main body edge area of the first size original image includes: determining a filter kernel of each pixel point to be filtered according to the CoC value of each pixel point to be filtered in the main body edge area of the first-size original image; and filtering each pixel point to be filtered by adopting the determined filter kernel of each pixel point to be filtered so as to determine the first-size edge blurring image.
Optionally, the filter kernel of each pixel point to be filtered is determined from the CoC value of that pixel point in the main body edge area of the first-size original image using the following formula:
r = floor(C × exp(-CoC² ÷ sigma1²));
where r represents the radius of the filter kernel of the pixel point to be filtered, floor() is the round-down (floor) function, exp() is the exponential function with base e, CoC is the CoC value of the pixel point to be filtered, sigma1 is a first debugging parameter with a preset value, and C is a constant.
Optionally, filtering each pixel point to be filtered with its determined filter kernel includes: for each pixel point to be filtered, determining a filtering weight value for each in-kernel pixel point within its filter kernel; judging, for each in-kernel pixel point, whether the first main body area of the first-size original image contains a pixel point whose CoC value equals the CoC value of that in-kernel pixel point; if so, reducing the filtering weight value of that in-kernel pixel point by a preset multiple; and filtering the pixel point to be filtered based on its filter kernel and the determined filtering weight values of the in-kernel pixel points. For each in-kernel pixel point, the larger the difference between its CoC value and the CoC value of the pixel point to be filtered, the larger the preset multiple.
Optionally, after a first-size depth map of the first-size original image is determined, the first main body area of the first-size original image is the area formed by the pixel points whose depth values in the first-size depth map fall within a first preset depth value range.
Optionally, for each pixel to be filtered, determining the filtering weight value of each pixel in the filtering kernel of the pixel to be filtered includes: for each in-core pixel point in the filtering core of the pixel point to be filtered, determining a filtering weight value of the in-core pixel point according to the distance between the in-core pixel point and the pixel point to be filtered; the larger the distance between the pixel point in the kernel and the pixel point to be filtered is, the smaller the filtering weight value of the pixel point in the kernel is.
Optionally, for each in-kernel pixel point within the filter kernel of the pixel point to be filtered, the filtering weight value of the in-kernel pixel point is calculated using the following formula:
z = exp(-((x - x₀)² + (y - y₀)²) ÷ sigma2²);
where z represents the filtering weight value of the in-kernel pixel point, x₀ and y₀ represent the abscissa and ordinate of the pixel point to be filtered, x and y represent the abscissa and ordinate of the in-kernel pixel point within the filter kernel of the pixel point to be filtered, exp() is the exponential function with base e, and sigma2 is a second debugging parameter with a preset value.
Optionally, filtering the main body background area of the second-size original image includes: filtering the main body background area of the second-size original image a preset number of times to obtain a first preset number of second-size preliminary background blurring images, each pass using a filter kernel of a different preset size; and fusing the second-size preliminary background blurring images based on the CoC value of each pixel point in the main body background area of the second-size original image and a second preset number of fusion weight values, so as to obtain the second-size background blurring image.
Optionally, the first preset number is the same as the filtering times, and the second preset number is greater than the first preset number.
Optionally, fusing the second-size preliminary background blurring images based on the CoC value of each pixel point in the main body background area of the second-size original image and the second preset number of fusion weight values to obtain the second-size background blurring image includes: comparing the CoC value of each pixel point in the main body background area of the second-size original image with each fusion weight value, and determining, according to the comparison result, the pixel value of each pixel point in the region of the second-size background blurring image corresponding to the main body background area; and using the original pixel values of the pixel points outside the main body background area of the second-size original image as the pixel values of the pixel points at the same positions in the second-size background blurring image.
Optionally, the number of second-size preliminary background blurring images is 3 and the number of fusion weight values is 4; comparing the CoC value of each pixel point in the main body background area of the second-size original image with each fusion weight value and determining, according to the comparison result, the pixel value of each pixel point in the region of the second-size background blurring image corresponding to the main body background area includes:
if CoC(i, j) < Weight1, Fusion Image(i, j) = Image1(i, j);
if Weight1 ≤ CoC(i, j) ≤ Weight2, Fusion Image(i, j) = Image1(i, j) × (1 - w1) + Image2(i, j) × w1;
if Weight2 < CoC(i, j) < Weight3, Fusion Image(i, j) = Image2(i, j);
if Weight3 ≤ CoC(i, j) ≤ Weight4, Fusion Image(i, j) = Image2(i, j) × (1 - w2) + Image3(i, j) × w2;
if CoC(i, j) > Weight4, Fusion Image(i, j) = Image3(i, j);
where w1 = (CoC(i, j) - Weight1) ÷ 256 and w2 = (CoC(i, j) - Weight3) ÷ 256;
and Weight1 < Weight2 < Weight3 < Weight4;
where CoC(i, j) represents the CoC value of the pixel point with coordinates (i, j) in the main body background area; Fusion Image(i, j) represents the pixel value of the pixel point with coordinates (i, j) in the region of the second-size background blurring image corresponding to the main body background area; Image1(i, j), Image2(i, j) and Image3(i, j) represent the pixel values of the pixel point with coordinates (i, j) in the first, second and third second-size preliminary background blurring images respectively; Weight1 to Weight4 represent the first to fourth fusion weight values; and w1 and w2 are fusion coefficients.
Optionally, before filtering the main body edge area of the first-size original image to obtain a first-size edge blurred image, and filtering the main body background area of the second-size original image to obtain a second-size background blurred image, the method further includes: determining a first size CoC value graph of the first size original image and determining a second size CoC value graph of the second size original image; and determining a main body edge area of the first-size original image and a main body background area of the second-size original image based on the first-size CoC value diagram and the second-size CoC value diagram.
Optionally, the determining the first size CoC value graph of the first size original image includes: determining a first-size depth map of the first-size original image, wherein each pixel point of the first-size depth map has a respective depth value; taking a region formed by each pixel point with a depth value within a first preset depth value range in the first size depth map as a first main body region and taking other regions as a first background region; and respectively carrying out linear transformation processing on the depth values of all the pixel points in the first main body area and the first background area, determining the CoC value of all the pixel points in the first main body area, and determining the CoC value of all the pixel points in the first background area to obtain the first size CoC value graph.
Optionally, performing linear transformation processing on the depth value of each pixel point in the first main body area, and determining the CoC value of each pixel point in the first main body area includes: and assigning the CoC value of each pixel point in the first main body area as a preset CoC value.
Optionally, the determining the second size CoC value graph of the second size original image includes: determining a second-size depth map of the second-size original image, wherein each pixel point of the second-size depth map has a respective depth value; taking a region formed by each pixel point with the depth value within a second preset depth value range in the second size depth map as a second main body region and taking other regions as second background regions; and respectively carrying out linear transformation processing on the depth values of all the pixel points in the second main body area and the second background area, determining the CoC value of all the pixel points in the second main body area, and determining the CoC value of all the pixel points in the second background area to obtain a second size CoC value graph.
Optionally, determining the main body edge area of the first-size original image and the main body background area of the second-size original image based on the first-size CoC value graph and the second-size CoC value graph includes: binarizing the CoC values of the pixel points in the second main body area and the second background area to obtain a second-size CoC binary map; performing morphological dilation on the second main body area based on the second-size CoC binary map to determine a dilated main body area, and taking the area outside the dilated main body area as a dilated background area; and taking the region of the second-size original image corresponding to the dilated background area as the main body background area.
Optionally, determining the main body edge area of the first-size original image and the main body background area of the second-size original image based on the first-size CoC value graph and the second-size CoC value graph further includes: upsampling the second-size CoC binary map divided into the dilated main body area and the dilated background area, so as to determine a first-size dilated main body area corresponding to the dilated main body area and a first-size dilated background area corresponding to the dilated background area; and taking the intersection of the region of the first-size original image corresponding to the first main body area and the region corresponding to the first-size dilated background area, so as to determine the main body edge area of the first-size original image.
Optionally, the algorithm for performing pyramid fusion processing is selected from: gaussian pyramid fusion algorithm, laplacian pyramid fusion algorithm, contrast pyramid fusion algorithm and gradient pyramid fusion algorithm.
The embodiment of the invention also provides an image blurring device, which comprises: an original-image downsampling module, configured to downsample the first-size original image to obtain a second-size original image; a regional filtering module, configured to filter the main body edge area of the first-size original image to obtain a first-size edge blurring image, and to filter the main body background area of the second-size original image to obtain a second-size background blurring image; an upsampling module, configured to upsample the second-size background blurring image to obtain a first-size background blurring image; and an image fusion module, configured to perform pyramid fusion processing on the first-size background blurring image and the first-size edge blurring image to obtain a first-size target blurring image.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above image blurring method are performed.
The embodiment of the invention also provides a terminal, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the steps of the image blurring method when running the computer program.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
the embodiment of the invention divides the background blurring of the whole image into blurring processing for an edge region of a subject (may also be referred to as a photographed subject, for example, a person) and a subject background region. Because the accuracy requirement of the edge area of the main body is generally higher, and the area ratio of the edge area is often not large, the edge area is subjected to blurring on the dimension of the original image, the details of the edge area can be reserved on the basis of sacrificing a small degree of efficiency, and the blurring quality of the edge area is improved. Because the background area of the main body has low requirements on precision and the area occupation ratio of the background area is large, the embodiment of the invention performs the blurring of the background area on the small-scale image size and then performs the up-sampling, thereby effectively reducing the operation cost and improving the blurring efficiency.
Further, filtering each pixel point to be filtered with its determined filter kernel includes: for each pixel point to be filtered, determining a filtering weight value for each in-kernel pixel point within its filter kernel; judging, for each in-kernel pixel point, whether the first main body area of the first-size original image contains a pixel point whose CoC value equals the CoC value of that in-kernel pixel point; if so, reducing the filtering weight value of that in-kernel pixel point by a preset multiple; and filtering the pixel point to be filtered based on its filter kernel and the determined filtering weight values of the in-kernel pixel points. For each in-kernel pixel point, the larger the difference between its CoC value and the CoC value of the pixel point to be filtered, the larger the preset multiple.
In the embodiment of the present invention, a larger CoC value difference between the pixel point to be filtered and an in-kernel pixel point belonging to the first main body area means a larger depth value difference between the two pixel points, i.e., in the real shooting scene, the actual position corresponding to the pixel point to be filtered is farther from the actual position of the photographed subject (e.g., a person). In that case, the filtering weight value of the in-kernel pixel point belonging to the main body area is reduced to a greater extent, which reduces the influence of the main body area on the blurring of more distant edge-area pixels during filtering. With this adaptive filtering-weight adjustment, the filtering of the edge area takes into account the distance between the actual position corresponding to the pixel point to be filtered and the photographed subject, further improving the blurring effect and the blurring quality of the main body edge area.
Further, the filtering the main background area of the second size original image includes: filtering the main background area of the second-size original image by adopting preset filtering times to obtain a first preset number of second-size preliminary background blurring images, wherein each filtering adopts filtering kernels with different preset sizes; and carrying out fusion processing on each second-size preliminary background blurring image based on the CoC value of each pixel point in the main background area of the second-size original image and a second preset number of fusion weight values so as to obtain the second-size background blurring image.
In the embodiment of the invention, when blurring the main body background area of the second-size original image, a preset number of filtering passes is used, each pass with a different filter kernel, and the multiple filtering results are then fused. During fusion, the fused second-size background blurring image is determined based on comparisons between the CoC value of each pixel point in the main body background area and a plurality of fusion weight values. The smaller the CoC value of a pixel point, the closer its corresponding actual position is to the photographed subject, and therefore the weaker its blurring degree should be. Thus, when blurring the main body background area, the distance between the position of the region to be blurred in the actual scene and the position of the main body region is also taken into account, and blurring results of different strengths are used for different distances. The blurred part of the image therefore transitions gradually and naturally, improving the blurring quality of the main body background area.
Further, determining the main body edge area of the first-size original image and the main body background area of the second-size original image based on the first-size CoC value graph and the second-size CoC value graph includes: binarizing the CoC values of the pixel points in the second main body area and the second background area to obtain a second-size CoC binary map; performing morphological dilation on the second main body area based on the second-size CoC binary map to determine a dilated main body area, and taking the area outside the dilated main body area as a dilated background area; and taking the region of the second-size original image corresponding to the dilated background area as the main body background area.
In the embodiment of the invention, in determining the main body edge area of the first-size original image and the main body background area of the second-size original image, the CoC values of the pixel points in the second main body area and the second background area are binarized based on the second-size CoC value graph (obtained by linearly transforming the second-size depth map of the second-size original image), and the morphological dilation operation is then performed. Compared with operating directly on the RGB pixel values or depth values of the pixel points of the original image, determining the main body edge area and the main body background area from the binarized values obtained through linear transformation and binarization helps reduce computational complexity and improve computational efficiency, thereby improving image blurring efficiency.
Drawings
FIG. 1 is a flow chart of a method of image blurring in an embodiment of the present invention;
FIG. 2 is a flow chart of one embodiment of step S12 of FIG. 1;
FIG. 3 is a flow chart of one embodiment of step S22 of FIG. 2;
FIG. 4 is a flow chart of another embodiment of step S12 of FIG. 1;
FIG. 5 is a partial flow chart of another image blurring method according to an embodiment of the present invention;
FIG. 6 is a flow chart of one embodiment of step S52 of FIG. 5;
fig. 7 is a schematic structural diagram of an image blurring apparatus according to an embodiment of the present invention.
Detailed Description
As described in the background art, with the development of mobile phone photography, people's photographing demands in different environments keep growing, and the portrait mode was born against this background. How to make its effect closer to the bokeh of a professional camera has become a research hotspot for mobile phone manufacturers.
At present, the mask image used by mainstream mobile phone portrait-mode blurring is mainly a depth map computed by a dual-camera stereo matching algorithm, and the original picture is filtered according to the information of the depth map. Specifically, when blurring a picture there are mainly two routes: filtering at the original size, or filtering on a small-size picture obtained by downsampling the original. Blurring on the original image preserves the edge details of the person well, but because the original image is large, filtering directly on it is computationally expensive and time-consuming. Blurring on the small image requires a subsequent upsampling step that obtains new pixel information through interpolation, so part of the image detail is lost; this is especially obvious at the edges of figures, and the resulting loss of edge detail affects the final blurring effect.
In order to solve the above technical problems, an embodiment of the present invention provides an image blurring method, which specifically includes: downsampling a first-size original image to obtain a second-size original image; filtering the main body edge area of the first-size original image to obtain a first-size edge blurring image, and filtering the main body background area of the second-size original image to obtain a second-size background blurring image; upsampling the second-size background blurring image to obtain a first-size background blurring image; and performing pyramid fusion processing on the first-size background blurring image and the first-size edge blurring image to obtain a first-size target blurring image.
Thus, the embodiment of the invention divides the background blurring of the whole image into the main body edge area and the other background areas, blurs them at the large size and the small size respectively, and then fuses the results. Specifically, considering that the accuracy requirement of the main body edge area is high while its area proportion is small, the edge area is blurred at the original image size, which preserves the details of the edge area at only a small cost in efficiency and improves the blurring quality of the edge area. Considering that the main body background area has a lower accuracy requirement and a generally larger area proportion, the background area is blurred at the small image size and then upsampled, which effectively reduces computation cost and improves blurring efficiency.
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a flowchart of an image blurring method according to an embodiment of the present invention, and the method may be applied to various terminal devices with photographing capability, such as a mobile phone, a tablet computer, a video camera, a smart wearable device (e.g., a smart watch), and so on. The method may include steps S11 to S14:
Step S11: downsampling the first-size original image to obtain a second-size original image;
Step S12: filtering the main body edge area of the first-size original image to obtain a first-size edge blurring image, and filtering the main body background area of the second-size original image to obtain a second-size background blurring image;
Step S13: upsampling the second-size background blurring image to obtain a first-size background blurring image;
Step S14: performing pyramid fusion processing on the first-size background blurring image and the first-size edge blurring image to obtain a first-size target blurring image.
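As a non-limiting illustration before each step is discussed in detail, the following Python sketch shows how steps S11 to S14 could fit together. It is an assumed outline only: simple Gaussian blurs and a plain average stand in for the CoC-driven filtering and the pyramid fusion described below, and all function and variable names (blur_pipeline, edge_mask, bg_mask_small, etc.) are illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def blur_pipeline(original, edge_mask, bg_mask_small):
    # original: first-size BGR image; edge_mask: first-size uint8 mask of the
    # main body edge area; bg_mask_small: second-size (h//4 x w//4) mask of
    # the main body background area. All masks are assumed precomputed.
    h, w = original.shape[:2]
    # S11: downsample the first-size original image to the second size
    small = cv2.resize(original, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
    # S12a: blur only the main body edge area at the first size
    edge_blurred = original.copy()
    full_blur = cv2.GaussianBlur(original, (15, 15), 0)
    edge_blurred[edge_mask > 0] = full_blur[edge_mask > 0]
    # S12b: blur only the main body background area at the second size
    bg_small = small.copy()
    small_blur = cv2.GaussianBlur(small, (15, 15), 0)
    bg_small[bg_mask_small > 0] = small_blur[bg_mask_small > 0]
    # S13: upsample the second-size background blurring image to the first size
    bg_full = cv2.resize(bg_small, (w, h), interpolation=cv2.INTER_LINEAR)
    # S14: fuse the two first-size results (a plain average stands in for
    # the pyramid fusion described in step S14)
    fused = (bg_full.astype(np.float32) + edge_blurred.astype(np.float32)) / 2
    return fused.astype(np.uint8)
```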
In the implementation of step S11, the original image may be an image of a specific scene, not yet subjected to blurring, acquired by any of various devices with image acquisition functions, such as a mobile phone, a tablet computer, a video camera, or a smart wearable device (e.g., a smart watch).
In a specific implementation, the image acquisition process using the above device typically has one or more focus areas or subject areas corresponding to subjects in the specific scene, such as people, buildings, animals, etc.
The purpose of the downsampling process is to reduce the size of the image; that is, downsampling reduces the number of sampling points in the original image. For an N×M image with a downsampling factor of k, a new image is formed by taking one point out of every k points in each row and each column of the original image; this new image is the downsampled image. For example, processing row by row in the row-scanning direction, one point is taken every k points in each row to form the new image; alternatively, processing column by column, one point is taken every k points in each column.
As can be seen from the above, the size of the second size original image (i.e., the second size) is smaller than the size of the first size original image (i.e., the first size). In a specific implementation, the multiple of downsampling may be reasonably set according to the needs of a specific application scenario, for example, by combining software and hardware configuration, image processing efficiency, image definition and other factors. In some non-limiting embodiments, the size of the second size original image may be 1/2, 1/4, 1/8, etc. of the first size original image.
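As a non-limiting illustration of the decimation just described, the following Python sketch keeps one point out of every k along each axis; the function name and the toy 8×8 example are assumptions for illustration.

```python
import numpy as np

def downsample(img: np.ndarray, k: int) -> np.ndarray:
    # Keep every k-th sampling point along rows and columns, as described above
    return img[::k, ::k]

first_size = np.arange(64, dtype=np.uint8).reshape(8, 8)
second_size = downsample(first_size, 2)  # 8x8 becomes 4x4, i.e. 1/2 size
```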
In the implementation of step S12, the first-size original image is divided into a main body area and a background area, where the main body area is the area occupied by the photographed subject, such as a portrait area, and the areas other than the main body area are collectively referred to as the background area. The photographed subject may be determined by the shooting mode; for example, the photographed subject in portrait mode is a person. Alternatively, the subject may be identified automatically from the object in focus at the time of shooting; for example, if the focused object is a person the subject is identified as the person, and if the focused object is some other specific object the subject is identified as that object. The background area may be further divided into a main body edge area, i.e., an area within a preset size range around the main body area, and a main body background area, i.e., the area other than the main body area and the main body edge area. For example, if the main body area of the original image is A and the background area is B, the background area B may be divided into a main body edge area B1 and a main body background area B2.
Referring to fig. 2, fig. 2 is a flowchart of one embodiment of step S12 in fig. 1. The filtering process of the main body edge region of the first size original image in the step S12 may include steps S21 to S22, which will be described below.
In step S21, a filter kernel of each pixel to be filtered is determined according to the CoC value of each pixel to be filtered in the edge region of the main body of the original image with the first size.
Wherein the CoC value may be a circle of confusion (Circle of Confusion) radius value. In a specific implementation, the CoC value of each pixel point may be obtained by performing linear transformation processing on the depth value of the pixel point according to an optical imaging rule.
Specifically, the meaning of CoC is as follows: during image capture, light converges toward the focal point and then diverges beyond it, so the image of a point becomes blurred and forms an enlarged circle, called the circle of confusion (CoC). The diameter of the CoC is proportional to the lens size and to the distance from the focal plane. When the offset from the focal plane is small enough, the CoC becomes smaller than the resolution of the film; within this range the image is in focus and sharp, and anything outside this range is blurred.
In the blurring processing of the portrait mode, the depth values of the pixel points in the depth map of the original image are converted by a formula into permissible circle-of-confusion radius values (i.e., CoC values), and based on the CoC values the radius of the filter kernel and the filtering weight values of the pixel points in the filter kernel are determined in the subsequent filtering process, so that a blurring effect more consistent with optical laws can be obtained.
In a specific implementation, for each pixel to be filtered, the principle of filtering the pixel is as follows: the pixel value of the pixel point to be filtered after the filtering processing is the pixel value weighting result of each pixel point in a certain area range around the filtering pixel point. It can be known that the filtering kernel may be a weight matrix formed by weight values of each pixel point in a specific area around the pixel point to be filtered.
In a specific implementation, the following formula may be adopted to determine the filter kernel of each pixel point to be filtered according to the CoC value of that pixel point in the main body edge area of the first-size original image:
r = floor(C × exp(-CoC² ÷ sigma1²));
where r represents the radius of the filter kernel of the pixel point to be filtered, floor() is the round-down (floor) function, exp() is the exponential function with base e, CoC is the CoC value of the pixel point to be filtered, sigma1 is a first debugging parameter with a preset value, and C is a constant.
sigma1 is a tunable parameter; it may have a preset initial value and be adjusted according to tuning results. Specifically, from the initial image captured by the image sensor to the final rendered image, the data typically passes through multiple algorithm modules, of which image blurring is only one. Each module has many adjustable parameters that can affect sigma1, such as image sharpening parameters, auto white balance parameters, and display highlighting parameters.
It will be appreciated that for the same CoC value, the larger the sigma1, the larger the resulting filter kernel radius r.
In some non-limiting embodiments, the constant C may be set to a suitable value between 12 and 17, for example, in one preferred embodiment, 15; sigma1 may be set to a suitable value in the range 2048 to 4096.
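As a non-limiting illustration, the formula can be sketched in Python as follows; the chosen sigma1 and C values are example picks from the ranges suggested above.

```python
import math

def kernel_radius(coc: float, sigma1: float = 3000.0, c: float = 15.0) -> int:
    # r = floor(C * exp(-CoC^2 / sigma1^2))
    return math.floor(c * math.exp(-(coc ** 2) / (sigma1 ** 2)))

r = kernel_radius(coc=100.0)  # with sigma1 = 3000 and C = 15, r = 14
```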
In step S22, filtering processing is performed on each pixel point to be filtered by using the determined filtering kernel of each pixel point to be filtered, so as to determine the edge blurring image of the first size.
In specific implementation, the filtering processing method may be an existing conventional method, which is not described herein.
Referring to fig. 3, fig. 3 is a flowchart of one embodiment of step S22 in fig. 2. The step S22 may include steps S31 to S34.
In step S31, for each pixel to be filtered, a filtering weight value of each in-core pixel within a filtering core of the pixel to be filtered is determined.
Further, the step S31 may include: for each in-core pixel point in the filtering core of the pixel point to be filtered, determining a filtering weight value of the in-core pixel point according to the distance between the in-core pixel point and the pixel point to be filtered; the larger the distance between the pixel point in the kernel and the pixel point to be filtered is, the smaller the filtering weight value of the pixel point in the kernel is.
Further, for each in-kernel pixel point within the filter kernel of the pixel point to be filtered, the following formula may be used to calculate the filtering weight value of the in-kernel pixel point:
z = exp(-((x - x₀)² + (y - y₀)²) ÷ sigma2²);
where z represents the filtering weight value of the in-kernel pixel point, x₀ and y₀ represent the abscissa and ordinate of the pixel point to be filtered, x and y represent the abscissa and ordinate of the in-kernel pixel point within the filter kernel of the pixel point to be filtered, exp() is the exponential function with base e, and sigma2 is a second debugging parameter with a preset value.
Wherein, sigma2 is also a tunable parameter, and the values of sigma2 and sigma1 may be different. For the value range of sigma2 and its influencing factors, reference is made to the related description about sigma1, which is not repeated here.
In a specific implementation, if the pixel point to be filtered is (x₀, y₀), sigma2 controls how the filtering weight falls off for a neighbouring pixel point (x, y) within its filter kernel. It will be appreciated that the larger sigma2 is, the more the filter weight values spread outward: the weights of peripheral pixels grow while the centre weight shrinks relatively.
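As a non-limiting illustration, the following Python sketch builds the full (2r+1)×(2r+1) weight matrix centred on the pixel point to be filtered, so that (x - x₀, y - y₀) become offsets from the centre; the sigma2 value is an example pick.

```python
import numpy as np

def filter_weights(r: int, sigma2: float = 8.0) -> np.ndarray:
    # z = exp(-((x - x0)^2 + (y - y0)^2) / sigma2^2) for every in-kernel
    # offset (x - x0, y - y0) in [-r, r] x [-r, r]
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-(x ** 2 + y ** 2) / sigma2 ** 2)

w = filter_weights(r=3)  # 7x7 matrix, largest weight at the centre
```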
In step S32, it is judged, for each in-kernel pixel point, whether the first main body area of the first-size original image contains a pixel point whose CoC value equals the CoC value of that in-kernel pixel point.
Here, after the first-size depth map of the first-size original image has been determined, the first main body area of the first-size original image is the area formed by the pixel points whose depth values in the first-size depth map fall within the first preset depth value range.
Wherein, the first main area of the first size original image and the main area described in the step S11 are the same area. In various application scenarios, the first subject region includes, but is not limited to, a portrait region in an image, a building region, a marker region, and so on. The first main body area may be a specific part area of the aforementioned area where the camera focuses in the shooting process.
In a specific implementation, if the CoC value of each pixel point in the first main area of the first size original image is assigned to a preset CoC value (for example, assigned to 1), in the step S32, it may be directly determined whether the CoC value of each pixel point in the kernel is 1.
If the judgment is yes, steps S33 to S34 are executed; if it is no, the process skips to step S34.
It will be appreciated that, for a given in-kernel pixel point, if the first main body area of the first-size original image contains a pixel point whose CoC value equals the CoC value of that in-kernel pixel point, the in-kernel pixel point is indicated to lie within the first main body area.
In step S33, the filtering weight value of the pixel point in the kernel is reduced by a preset multiple.
For each in-core pixel point, the larger the difference between the CoC value of the in-core pixel point and the CoC value of the pixel point to be filtered, the larger the value of the preset multiple.
It can be understood that the greater the CoC value difference between the pixel point to be filtered and an in-kernel pixel point that lies within its filter kernel and belongs to the first main body area, the greater the depth value difference between the two pixel points, i.e., the farther the actual position corresponding to the pixel point to be filtered is from the actual position of the photographed subject (e.g., a person) in the real shooting scene.
In step S34, filtering processing is performed on the pixel to be filtered based on the filtering kernel of the pixel to be filtered and the determined filtering weight value of each in-kernel pixel in the filtering kernel.
In the embodiment of the invention, when filtering the edge area, the above scheme adaptively adjusts the filtering weights according to the distance between the actual position corresponding to the pixel point to be filtered and the photographed subject: during filtering, the filtering weight values of in-kernel pixel points belonging to the main body area are reduced, which reduces the influence of the main body area on the blurring of more distant edge-area pixels. The blurring effect and the blurring quality of the main body edge area can thereby be further improved.
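As a non-limiting illustration, the following Python sketch combines steps S31 to S34 for a single pixel. The text does not specify how the preset multiple grows with the CoC difference, so the linear 1 + scale × |ΔCoC| factor, the scale constant, and the mask-based membership test are assumptions for illustration.

```python
import numpy as np

def filter_edge_pixel(img, coc, subject_mask, i, j, r, weights, scale=4.0):
    # img, coc, subject_mask: first-size single-channel arrays; (i, j) is the
    # pixel to be filtered, assumed at least r pixels away from the border;
    # weights: the (2r+1)x(2r+1) matrix from filter_weights above
    patch = img[i - r:i + r + 1, j - r:j + r + 1].astype(np.float64)
    coc_patch = coc[i - r:i + r + 1, j - r:j + r + 1]
    in_subject = subject_mask[i - r:i + r + 1, j - r:j + r + 1] > 0
    # S32/S33: for in-kernel pixels inside the first main body area, divide
    # the weight by a multiple that grows with the CoC difference
    multiple = 1.0 + scale * np.abs(coc_patch - coc[i, j])
    w = np.where(in_subject, weights / multiple, weights)
    # S34: weighted average over the kernel with the adjusted weights
    return (patch * w).sum() / w.sum()
```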
Referring to fig. 4, fig. 4 is a flowchart of another embodiment of step S12 in fig. 1. The filtering process of the main background area of the second-size original image in step S12 may include steps S41 to S42, which will be described below.
In step S41, filtering is performed on the main background area of the second-size original image by using a preset number of filtering times to obtain a first preset number of second-size preliminary background blurring images, where each filtering uses a filtering kernel with a different preset size.
In step S42, based on the CoC values of the pixel points in the main body background area of the second-size original image and a second preset number of fusion weight values, the second-size preliminary background blurring images are fused to obtain the second-size background blurring image.
In a specific implementation, the filtering times, the number of the fusion weight values and the numerical value setting of each fusion weight value can be reasonably set or adjusted by comprehensively considering software and hardware performance, operation cost and efficiency, image blurring effect and the like. It should be noted that the numerical value setting of the filtering times cannot be too small, otherwise, the number of the preliminary background blurring images with the second size is too small, and the fusion effect is reduced; the numerical value setting of the filtering times cannot be too large, otherwise, the operation cost is increased, and the image blurring efficiency is reduced.
In some non-limiting embodiments, the number of filters may be a suitable number of 3-10 times.
The first preset number is the same as the filtering times, and the second preset number is larger than the first preset number.
As a non-limiting example, the second preset number (i.e., the number of fusion weight values) is the first preset number plus 1. For example, the number of filtering times is 3, the number of fusion weight values is 4, and each fusion weight value may be set to 10, 25, 45, 90, respectively.
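As a non-limiting illustration of step S41, the following Python sketch performs three filtering passes over the second-size image; box blurs and the kernel sizes 5, 11 and 21 are assumed stand-ins for the "filter kernels of different preset sizes".

```python
import cv2

def preliminary_background_blurs(small_img):
    # Three passes over the second-size image, each with a larger kernel,
    # yielding the first preset number (3) of preliminary blurring images
    return [cv2.blur(small_img, (k, k)) for k in (5, 11, 21)]
```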
Further, the step S42 may include: comparing the CoC value of each pixel point in the main body background area of the second-size original image with each fusion weight value, and determining, according to the comparison result, the pixel value of each pixel point in the region of the second-size background blurring image corresponding to the main body background area; and using the original pixel values of the pixel points outside the main body background area of the second-size original image as the pixel values of the pixel points at the same positions in the second-size background blurring image.
As one non-limiting example, the number of second-size preliminary background blurring images is 3 and the number of fusion weight values is 4; comparing the CoC value of each pixel point in the main body background area of the second-size original image with each fusion weight value and determining, according to the comparison result, the pixel value of each pixel point in the region of the second-size background blurring image corresponding to the main body background area may include:
if CoC(i, j) < Weight1, Fusion Image(i, j) = Image1(i, j);
if Weight1 ≤ CoC(i, j) ≤ Weight2, Fusion Image(i, j) = Image1(i, j) × (1 - w1) + Image2(i, j) × w1;
if Weight2 < CoC(i, j) < Weight3, Fusion Image(i, j) = Image2(i, j);
if Weight3 ≤ CoC(i, j) ≤ Weight4, Fusion Image(i, j) = Image2(i, j) × (1 - w2) + Image3(i, j) × w2;
if CoC(i, j) > Weight4, Fusion Image(i, j) = Image3(i, j);
where w1 = (CoC(i, j) - Weight1) ÷ 256 and w2 = (CoC(i, j) - Weight3) ÷ 256;
and Weight1 < Weight2 < Weight3 < Weight4;
where CoC(i, j) represents the CoC value of the pixel point with coordinates (i, j) in the main body background area; Fusion Image(i, j) represents the pixel value of the pixel point with coordinates (i, j) in the region of the second-size background blurring image corresponding to the main body background area; Image1(i, j), Image2(i, j) and Image3(i, j) represent the pixel values of the pixel point with coordinates (i, j) in the first, second and third second-size preliminary background blurring images respectively; Weight1 to Weight4 represent the first to fourth fusion weight values; and w1 and w2 are fusion coefficients.
In the embodiment of the invention, the smaller the CoC value of a pixel point, the closer its corresponding actual position is to the photographed subject, and therefore the weaker its blurring degree should be. Thus, when blurring the main body background area, the distance between the position of the region to be blurred in the actual scene and the position of the main body region is also taken into account, and blurring results of different strengths are used for different distances. The blurred part of the image therefore transitions gradually and naturally, improving the blurring quality of the main body background area.
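As a non-limiting illustration, the piecewise fusion rules above can be sketched in Python as follows, assuming a 3-channel second-size image, an H×W CoC map over the main body background area, and the example weight values 10, 25, 45 and 90.

```python
import numpy as np

def fuse_background(coc, imgs, weights=(10, 25, 45, 90)):
    # coc: HxW CoC map; imgs: the three HxWx3 preliminary blurring images
    img1, img2, img3 = (im.astype(np.float32) for im in imgs)
    t1, t2, t3, t4 = weights
    c = coc.astype(np.float32)[..., None]  # add axis to broadcast over channels
    w1 = (c - t1) / 256.0                  # w1 = (CoC - Weight1) / 256
    w2 = (c - t3) / 256.0                  # w2 = (CoC - Weight3) / 256
    out = np.where(c < t1, img1,
          np.where(c <= t2, img1 * (1 - w1) + img2 * w1,
          np.where(c < t3, img2,
          np.where(c <= t4, img2 * (1 - w2) + img3 * w2, img3))))
    return np.clip(out, 0, 255).astype(np.uint8)
```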
With continued reference to fig. 1, in a specific implementation of step S13, the upsampling process may enlarge the size of the image, i.e. such that the size of the first size background blurred image coincides with the size of the first size edge blurred image.
In particular, the upsampling may be a two-dimensional interpolation operation on the original image. If the upsampling factor is k, then k-1 points are interpolated between each pair of adjacent points n and n+1 of the original image, so that each original point expands to k points. Two-dimensional interpolation means interpolating each column after each row has been interpolated. Interpolation methods are numerous and are generally considered from either the time (spatial) domain or the frequency domain. Time-domain interpolation mainly includes linear interpolation, Hermite interpolation, spline interpolation, and the like. For frequency-domain interpolation, it follows from the properties of the Fourier transform that zero-padding in the frequency domain is equivalent to interpolation in the time domain, so interpolation can be achieved by an appropriate amount of zero-padding in the frequency domain.
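As a non-limiting illustration of the time-domain (linear) interpolation described above, the following Python sketch upsamples along one axis; a two-dimensional upsampling would apply it first per row and then per column.

```python
import numpy as np

def upsample_1d(row: np.ndarray, k: int) -> np.ndarray:
    # Insert k-1 linearly interpolated points between each pair of neighbours
    n = len(row)
    new_x = np.linspace(0, n - 1, (n - 1) * k + 1)
    return np.interp(new_x, np.arange(n), row)

up = upsample_1d(np.array([0.0, 10.0, 20.0]), k=2)  # [0, 5, 10, 15, 20]
```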
In the implementation of step S14, the algorithm for the pyramid fusion processing may be selected from one or more of a Gaussian pyramid fusion algorithm, a Laplacian pyramid fusion algorithm, a contrast pyramid fusion algorithm, and a gradient pyramid fusion algorithm. In a specific implementation, besides the fusion algorithms listed above, other existing fusion algorithms that achieve the same or similar functions may be used, which is not limited in the embodiment of the present invention.
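As a non-limiting illustration of one algorithm from the list, the following Python sketch performs a generic Laplacian pyramid fusion of two first-size images under a 0-to-1 blend mask; it assumes single-channel float inputs and is not claimed to be the patent's exact fusion procedure.

```python
import cv2
import numpy as np

def laplacian_fuse(img_a, img_b, mask, levels=4):
    # img_a, img_b: same-size single-channel images; mask: 0..1 weights
    # (1 selects img_a). Build Gaussian pyramids for both images and the mask.
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Blend the coarsest level, then add back blended Laplacian details
    fused = gm[-1] * ga[-1] + (1 - gm[-1]) * gb[-1]
    for lvl in range(levels - 1, -1, -1):
        size = (ga[lvl].shape[1], ga[lvl].shape[0])
        la = ga[lvl] - cv2.pyrUp(ga[lvl + 1], dstsize=size)
        lb = gb[lvl] - cv2.pyrUp(gb[lvl + 1], dstsize=size)
        fused = cv2.pyrUp(fused, dstsize=size) + gm[lvl] * la + (1 - gm[lvl]) * lb
    return np.clip(fused, 0, 255).astype(np.uint8)
```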
Compared with the prior art: filtering directly at the original image size maintains the edge details of the image main body (e.g., a portrait) well, but because the original image is large, the filtering computation load and time consumption are high; alternatively, filtering a small-size image obtained by downsampling the original and then upsampling the filtered result to the original size loses part of the image detail, especially the detail of the main body edge area, which degrades the blurring effect.
The embodiment of the invention divides the background blurring of the whole image into blurring of the main body edge area and of the other background areas. Because the accuracy requirement of the main body edge area is generally higher while its area proportion is usually small, the edge area is blurred at the original image size to obtain the first-size edge blurring image; this preserves the details of the edge area at only a small cost in efficiency and improves the blurring quality of the edge area. Because the main body background area generally has a lower accuracy requirement and a larger area proportion, the background area is blurred at the small image size and then upsampled to obtain the first-size background blurring image, which effectively reduces computation cost and improves blurring efficiency. Pyramid fusion processing is then performed on the first-size edge blurring image and the first-size background blurring image to obtain the final target blurring image. In this way, blurring efficiency is high and computation cost is low, the edge details of the image main body are protected, the problems of missed blurring and false blurring are alleviated, the blurred transitions become more natural, and the quality of the target blurring image is improved overall.
Referring to fig. 5, fig. 5 is a partial flowchart of another image blurring method according to an embodiment of the present invention. The other image blurring method may include steps S11 to S14 shown in fig. 1, and may further include steps S51 to S52. Wherein, steps S51 to S52 may be performed before step S12. The steps are described below.
In step S51, a first size CoC value map of the first size original image is determined, and a second size CoC value map of the second size original image is determined.
Further, the determining the first size CoC value map of the first size original image in the step S51 may include: determining a first-size depth map of the first-size original image, wherein each pixel point of the first-size depth map has a respective depth value; taking a region formed by each pixel point with a depth value within a first preset depth value range in the first size depth map as a first main body region and taking other regions as a first background region; and respectively carrying out linear transformation processing on the depth values of all the pixel points in the first main body area and the first background area, determining the CoC value of all the pixel points in the first main body area, and determining the CoC value of all the pixel points in the first background area to obtain the first size CoC value graph.
The depth map may be an image representing the actual distance between each point in the scene and the camera, and may generally be obtained by applying a stereo matching algorithm to the captured images.
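As a non-limiting illustration, one common way to obtain such a map is semi-global stereo matching. The sketch below assumes a rectified grayscale stereo pair; the file paths, matcher parameters, focal length and baseline are all illustrative values.

```python
import cv2
import numpy as np

left_gray = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # illustrative paths
right_gray = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
# StereoSGBM returns fixed-point disparity scaled by 16.
disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

f = 1000.0  # focal length in pixels (illustrative)
B = 0.05    # stereo baseline in metres (illustrative)
# Depth is inversely proportional to disparity; invalid pixels are set to 0.
depth = np.where(disparity > 0, f * B / disparity, 0.0)
```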
Wherein the first preset depth value range may be determined according to the depth value of the focus point/center point (typically located within the subject area) at the time of photographing. As a non-limiting example, if the depth value of the focus point is 100, the first preset depth value range may be [90, 110].
Further, performing linear transformation processing on the depth values of the pixel points in the first main body area and determining the CoC value of each pixel point in the first main body area may include: assigning the CoC value of each pixel point in the first main body area to a preset CoC value.
Further, performing linear transformation processing on the depth values of the pixel points in the first background area and determining the CoC value of each pixel point in the first background area may include: performing linear transformation processing on the difference between the depth value of each pixel point in the first background area and the depth value of the focus point, using a preset linear transformation coefficient and a linear transformation formula, so as to determine the CoC value of each pixel point in the first background area.
The linear transformation coefficients and the linear transformation formula may be conventional transformation coefficients and formulas for converting depth values of pixel points into CoC values, which are not described herein.
In some non-limiting embodiments, during the linear transformation processing, the CoC value of each pixel point in the first main body area may be assigned the value 1, while the CoC value of each pixel point in the first background area is greater than 1.
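As a non-limiting illustration, the depth-to-CoC transformation may be sketched as follows; the tolerance tol and slope a are illustrative tuning values, not values given by this document.

```python
import numpy as np

def depth_to_coc(depth: np.ndarray, focus_depth: float,
                 tol: float = 10.0, a: float = 0.5) -> np.ndarray:
    coc = np.ones_like(depth, dtype=np.float32)      # subject region: CoC = 1
    background = np.abs(depth - focus_depth) > tol   # outside the preset depth range
    # Background CoC grows linearly with the distance from the focus depth.
    coc[background] = 1.0 + a * np.abs(depth - focus_depth)[background]
    return coc
```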
Further, the determining the second-size CoC value graph of the second-size original image in the step S51 may include: determining a second-size depth map of the second-size original image, wherein each pixel point of the second-size depth map has a respective depth value; taking a region formed by each pixel point with the depth value within a second preset depth value range in the second size depth map as a second main body region and taking other regions as second background regions; and respectively carrying out linear transformation processing on the depth values of all the pixel points in the second main body area and the second background area, determining the CoC value of all the pixel points in the second main body area, and determining the CoC value of all the pixel points in the second background area to obtain a second size CoC value graph.
In a specific implementation, the setting of the second preset depth value range and the specific method for performing the linear transformation process may refer to the description related to the determination of the first size CoC value map of the first size original image in step S51, which is not repeated herein.
In step S52, a main body edge area of the first-size original image and a main body background area of the second-size original image are determined based on the first-size CoC value map and the second-size CoC value map.
Referring to fig. 6, fig. 6 is a flowchart of one embodiment of step S52 in fig. 5. The determining of the subject background area of the second-size original image based on the first-size CoC value map and the second-size CoC value map in step S52 may include steps S61 to S63.
In step S61, the CoC values of the pixel points in the second main body area and the second background area are binarized to obtain a second-size CoC binary image.
In a specific implementation, when the binarization processing is performed, the pixel value of each pixel point in the second main body area may be assigned to 255 (the second main body area becomes white), and the pixel value of each pixel point in the second background area may be assigned to 0 (the second background area becomes black), so as to obtain the second-size CoC binary image.
In step S62, morphological expansion (dilation) processing is performed on the second main body region based on the second-size CoC binary image to determine an expansion main body region, and the region other than the expansion main body region is taken as the expansion background region.
It can be understood that, compared with performing morphological expansion processing on the second main body area directly in the second-size original image, performing the expansion after binarization, as in the embodiment of the invention, helps reduce the computational complexity and improve the processing efficiency.
In step S63, the region of the second-size original image that is the same as the expansion background region is taken as the main body background area.
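As a non-limiting illustration, steps S61 to S63 may be sketched as follows; subject_mask_small is assumed to be a boolean second-size subject mask derived from the CoC map, and the kernel size is an illustrative choice.

```python
import cv2
import numpy as np

def background_region_mask(subject_mask_small: np.ndarray,
                           kernel_size: int = 15) -> np.ndarray:
    # S61: binarize (255 = subject, white; 0 = background, black).
    binary = np.where(subject_mask_small, 255, 0).astype(np.uint8)
    # S62: morphological expansion (dilation) of the subject region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    dilated = cv2.dilate(binary, kernel)
    # S63: everything outside the expanded subject is the background region.
    return dilated == 0
```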
Further, determining the main body edge area of the first-size original image based on the first-size CoC value map and the second-size CoC value map in step S52 may include: performing upsampling processing on the second-size CoC binary image partitioned into the expansion main body region and the expansion background region, so as to determine a first-size expansion main body region corresponding to the expansion main body region and a first-size expansion background region corresponding to the expansion background region; and taking the intersection of the region of the first-size original image that is the same as the first main body area and the region that is the same as the first-size expansion background area, so as to determine the main body edge area of the first-size original image.
In the embodiment of the invention, after the first main body area (which also corresponds to the first size) and the first-size expansion background area are determined, the main body edge area of the first-size original image can be determined quickly and accurately simply by intersecting the two regions.
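As a non-limiting illustration, the intersection described above may be sketched as follows, under the assumption that subject_mask_full is the precise first-size boolean subject mask (from the first-size depth map) and dilated_small is the second-size binary image after expansion (255 = expanded subject). Read literally, the intersection keeps the fine subject details (for example hair) that the coarse second-size mask misses; all names here are illustrative.

```python
import cv2
import numpy as np

def subject_edge_mask(subject_mask_full: np.ndarray,
                      dilated_small: np.ndarray) -> np.ndarray:
    h, w = subject_mask_full.shape[:2]
    # Upsample the dilated binary map to the first size (nearest keeps it binary).
    dilated_full = cv2.resize(dilated_small, (w, h),
                              interpolation=cv2.INTER_NEAREST)
    expansion_bg_full = dilated_full == 0          # first-size expansion background region
    return subject_mask_full & expansion_bg_full   # intersection per step S52
```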
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image blurring apparatus according to an embodiment of the present invention. The image blurring apparatus may include:
an original image downsampling module 71, configured to downsample the first-size original image to obtain a second-size original image;
the regional filtering module 72 is configured to perform filtering processing on a main body edge region of the first-size original image to obtain a first-size edge blurred image, and perform filtering processing on a main body background region of the second-size original image to obtain a second-size background blurred image;
an upsampling module 73, configured to upsample the second size background virtual image to obtain a first size background virtual image;
the image fusion module 74 is configured to perform pyramid fusion processing on the first size background virtual image and the first size edge virtual image, so as to obtain a first size target virtual image.
Regarding the principle, implementation and beneficial effects of the image blurring apparatus, please refer to the foregoing and the related descriptions of the image blurring method shown in fig. 1 to 6, which are not repeated here.
The embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image blurring method shown in fig. 1 to 6 described above. The computer-readable storage medium may include a non-volatile or non-transitory memory, and may also include an optical disk, a mechanical hard disk, a solid state disk, and the like.
Specifically, in the embodiment of the present invention, the processor may be a central processing unit (CPU), and may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should also be appreciated that the memory in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The embodiment of the invention also provides a terminal, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the steps of the image blurring method shown in the above figures 1 to 6 when running the computer program. The terminal can include, but is not limited to, terminal equipment such as a mobile phone, a computer, a tablet computer, a server, a cloud platform, and the like.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In this context, the character "/" indicates that the associated objects before and after it are in an "or" relationship.
The term "plurality" as used in the embodiments herein refers to two or more.
The descriptions "first", "second", etc. in the embodiments of the present application are only used to illustrate and distinguish the described objects; they imply no order, do not specially limit the number of devices in the embodiments of the present application, and should not be construed as any limitation on the embodiments of the present application.
It should be noted that the serial numbers of the steps in the present embodiment do not represent a limitation on the execution sequence of the steps.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be determined by the appended claims.

Claims (21)

1. A method of image blurring, comprising:
performing downsampling treatment on the first-size original image to obtain a second-size original image;
filtering the main body edge area of the first-size original image to obtain a first-size edge blurring image, and filtering the main body background area of the second-size original image to obtain a second-size background blurring image;
performing up-sampling processing on the second-size background blurring image to obtain a first-size background blurring image;
and carrying out pyramid fusion processing on the first-size background virtual image and the first-size edge virtual image to obtain a first-size target virtual image.
2. The method of claim 1, wherein filtering the body edge region of the first size original image comprises:
determining a filter kernel of each pixel point to be filtered according to the CoC value of each pixel point to be filtered in the main body edge area of the first-size original image;
And filtering each pixel point to be filtered by adopting the determined filter kernel of each pixel point to be filtered so as to determine the first-size edge blurring image.
3. The method of claim 2, wherein the filter kernel of each pixel to be filtered is determined according to the CoC value of each pixel to be filtered in the edge region of the main body of the original image of the first size by using the following formula:
r = floor(C × exp(-(CoC)² ÷ (sigma1)²));
wherein r is used for representing the radius of the filter kernel of the pixel point to be filtered, floor() is used for representing a round-down function, exp() is used for representing an exponential function based on the natural constant e, CoC is used for representing the CoC value of the pixel point to be filtered, sigma1 is used for representing a first tuning parameter having a preset first tuning parameter value, and C is a constant.
4. The method according to claim 2, wherein the filtering each pixel to be filtered using the determined filter kernel of each pixel to be filtered includes:
for each pixel point to be filtered, determining a filtering weight value of each pixel point in the filter kernel of the pixel point to be filtered;
judging whether the CoC value of each in-core pixel point in the filter kernel is equal to the CoC value of the pixel points in the first main body area of the first-size original image;
If the judgment result is yes, reducing the filtering weight value of the pixel point in the kernel by a preset multiple;
filtering the pixel point to be filtered based on the filter kernel of the pixel point to be filtered and the determined filter weight value of each pixel point in the filter kernel;
for each in-core pixel point, the larger the difference between the CoC value of the in-core pixel point and the CoC value of the pixel point to be filtered, the larger the value of the preset multiple.
5. The method of claim 4, wherein, after the first-size depth map of the first-size original image is determined, the first main body area of the first-size original image is the region formed by the pixel points whose depth values are within a first preset depth value range in the first-size depth map.
6. The method of claim 4, wherein for each pixel to be filtered, determining the filter weight value for each in-core pixel within the filter core for that pixel to be filtered comprises:
for each in-core pixel point in the filtering core of the pixel point to be filtered, determining a filtering weight value of the in-core pixel point according to the distance between the in-core pixel point and the pixel point to be filtered;
The larger the distance between the pixel point in the kernel and the pixel point to be filtered is, the smaller the filtering weight value of the pixel point in the kernel is.
7. The method of claim 6, wherein for each in-core pixel point within the filter core of the pixel point to be filtered, calculating a filter weight value for the in-core pixel point using the formula:
z = exp(-((x - x0)² + (y - y0)²) ÷ (sigma2)²);
wherein z is used for representing the filtering weight value of the in-core pixel point, x0 is used for representing the abscissa of the pixel point to be filtered, y0 is used for representing the ordinate of the pixel point to be filtered, x is used for representing the abscissa of the in-core pixel point within the filter kernel of the pixel point to be filtered, y is used for representing the ordinate of the in-core pixel point, exp() is used for representing an exponential function based on the natural constant e, and sigma2 is used for representing a second tuning parameter having a preset second tuning parameter value.
8. The method of claim 1, wherein filtering the subject background region of the second size original image comprises:
filtering the main background area of the second-size original image by adopting preset filtering times to obtain a first preset number of second-size preliminary background blurring images, wherein each filtering adopts filtering kernels with different preset sizes;
And carrying out fusion processing on each second-size preliminary background blurring image based on the CoC value of each pixel point in the main background area of the second-size original image and a second preset number of fusion weight values so as to obtain the second-size background blurring image.
9. The method of claim 8, wherein the first predetermined number is the same as the number of filters, and the second predetermined number is greater than the first predetermined number.
10. The method of claim 8, wherein the fusing each second-size preliminary background blurred image based on the CoC values of each pixel point and a second preset number of fusion weight values in the main background region of the second-size original image to obtain the second-size background blurred image comprises:
comparing the CoC value of each pixel point in the main background area of the second-size original image with each fusion weight value, and determining the pixel value of each pixel point in the same area as the main background area in the second-size background virtual image according to the comparison result;
and adopting the original pixel values of all the pixel points outside the main background area of the second-size original image as the pixel values of the pixel points at the same position in the second-size background blurring image.
11. The method of claim 10, wherein the number of second size preliminary background ghosting images is 3 and the number of fusion weight values is 4;
comparing the CoC value of each pixel point in the main background area of the second-size original image with each fusion weight value, and determining the pixel value of each pixel point in the same area as the main background area in the second-size background virtual image according to the comparison result comprises the following steps:
if CoC(i, j) < Weight1, Fusion Image(i, j) = Image1(i, j);
if Weight1 ≤ CoC(i, j) ≤ Weight2,
Fusion Image(i, j) = Image1(i, j) × (1 - w1) + Image2(i, j) × w1;
if Weight2 < CoC(i, j) < Weight3, Fusion Image(i, j) = Image2(i, j);
if Weight3 ≤ CoC(i, j) ≤ Weight4,
Fusion Image(i, j) = Image2(i, j) × (1 - w2) + Image3(i, j) × w2;
if CoC(i, j) > Weight4, Fusion Image(i, j) = Image3(i, j);
wherein w1 = (CoC(i, j) - Weight1) ÷ 256, and w2 = (CoC(i, j) - Weight3) ÷ 256;
wherein Weight1 < Weight2 < Weight3 < Weight4;
wherein CoC(i, j) is used to represent the CoC value of the pixel point with coordinates (i, j) in the main body background region, Fusion Image(i, j) is used to represent the pixel value of the pixel point with coordinates (i, j) in the region of the second-size background blurring image that is the same as the main body background region, Image1(i, j), Image2(i, j) and Image3(i, j) are used to represent the pixel values of the pixel point with coordinates (i, j) in the first, second and third second-size preliminary background blurring images respectively, Weight1, Weight2, Weight3 and Weight4 are used to represent the first, second, third and fourth fusion weight values respectively, and w1 and w2 are used to represent fusion coefficients.
12. The method of claim 1, wherein before filtering the main body edge region of the first size original image to obtain a first size edge blurred image, and filtering the main body background region of the second size original image to obtain a second size background blurred image, the method further comprises:
determining a first size CoC value graph of the first size original image and determining a second size CoC value graph of the second size original image;
and determining a main body edge area of the first-size original image and a main body background area of the second-size original image based on the first-size CoC value diagram and the second-size CoC value diagram.
13. The method of claim 12, wherein determining a first-size CoC-valued map of the first-size raw image comprises:
determining a first-size depth map of the first-size original image, wherein each pixel point of the first-size depth map has a respective depth value;
taking a region formed by each pixel point with a depth value within a first preset depth value range in the first size depth map as a first main body region and taking other regions as a first background region;
And respectively carrying out linear transformation processing on the depth values of all the pixel points in the first main body area and the first background area, determining the CoC value of all the pixel points in the first main body area, and determining the CoC value of all the pixel points in the first background area to obtain the first size CoC value graph.
14. The method of claim 13, wherein linearly transforming the depth values of each pixel in the first body region, and determining the CoC values of each pixel in the first body region comprises:
and assigning the CoC value of each pixel point in the first main body area as a preset CoC value.
15. The method of claim 13, wherein determining a second-size CoC-valued map of the second-size raw image comprises:
determining a second-size depth map of the second-size original image, wherein each pixel point of the second-size depth map has a respective depth value;
taking a region formed by each pixel point with the depth value within a second preset depth value range in the second size depth map as a second main body region and taking other regions as second background regions;
And respectively carrying out linear transformation processing on the depth values of all the pixel points in the second main body area and the second background area, determining the CoC value of all the pixel points in the second main body area, and determining the CoC value of all the pixel points in the second background area to obtain a second size CoC value graph.
16. The method of claim 15, wherein determining a subject background area of the second size original image based on the first size CoC value map and the second size CoC value map comprises:
binarizing the CoC values of each pixel point in the second main body area and the second background area to obtain a second-size CoC binary image;
performing morphological expansion processing on the second main body area based on the second-size CoC binary image to determine an expansion main body area, and taking the area outside the expansion main body area as an expansion background area;
and taking the area which is the same as the expansion background area in the second-size original image as the main body background area.
17. The method of claim 16, wherein determining a body edge region of the first size raw image based on the first size CoC value map and the second size CoC value map comprises:
performing upsampling processing on the second-size CoC binary image partitioned into the expansion main body region and the expansion background region, so as to determine a first-size expansion main body region corresponding to the expansion main body region and a first-size expansion background region corresponding to the expansion background region;
taking the intersection of the area of the first-size original image that is the same as the first main body area and the area that is the same as the first-size expansion background area, so as to determine the main body edge area of the first-size original image.
18. The method of claim 1, wherein the algorithm for performing pyramid fusion processing is selected from the group consisting of:
gaussian pyramid fusion algorithm, laplacian pyramid fusion algorithm, contrast pyramid fusion algorithm and gradient pyramid fusion algorithm.
19. An image blurring apparatus, comprising:
the original image downsampling module is used for downsampling the first-size original image to obtain a second-size original image;
the regional filtering module is used for carrying out filtering treatment on the main body edge region of the first-size original image to obtain a first-size edge blurring image, and carrying out filtering treatment on the main body background region of the second-size original image to obtain a second-size background blurring image;
The up-sampling module is used for up-sampling the second-size background blurring image to obtain a first-size background blurring image;
and the image fusion module is used for carrying out pyramid fusion processing on the first-size background virtual image and the first-size edge virtual image to obtain a first-size target virtual image.
20. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when run by a processor performs the steps of the image blurring method of any of claims 1 to 18.
21. A terminal comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, characterized in that the processor executes the steps of the image blurring method according to any of claims 1 to 18 when the computer program is executed by the processor.
CN202211102808.1A 2022-09-09 2022-09-09 Image blurring method and device, computer readable storage medium and terminal Pending CN116188284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211102808.1A CN116188284A (en) 2022-09-09 2022-09-09 Image blurring method and device, computer readable storage medium and terminal

Publications (1)

Publication Number Publication Date
CN116188284A (en) 2023-05-30



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination