CN108564528A - Automatic background blurring method for portrait photos based on saliency detection - Google Patents

An automatic background blurring method for portrait photos based on saliency detection

Info

Publication number
CN108564528A
CN108564528A
Authority
CN
China
Prior art keywords
pixel
super-pixel
value
background
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810342812.2A
Other languages
Chinese (zh)
Inventor
牛玉贞 (Niu Yuzhen)
苏超然 (Su Chaoran)
陈羽中 (Chen Yuzhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201810342812.2A priority Critical patent/CN108564528A/en
Publication of CN108564528A publication Critical patent/CN108564528A/en
Pending legal-status Critical Current

Classifications

    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • H04N23/80 Camera processing pipelines; Components thereof
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20024 Filtering details


Abstract

The present invention relates to an automatic background blurring method for portrait photos based on saliency detection, comprising the following steps: 1. Segment the portrait image into N superpixels using the linear spectral clustering superpixel segmentation algorithm, and compute the saliency value of each superpixel using an improved saliency optimization algorithm. 2. Label superpixels whose saliency exceeds an adaptive threshold computed with Otsu's method as foreground, label those whose saliency is below a fixed threshold as background, and mark the rest as unknown, yielding a superpixel-scale trimap. 3. Apply the superpixel-scale GrabCut algorithm to segment the portrait region boundary from the trimap. 4. First blur the background region with the fast guided filtering algorithm, then selectively enhance details in the foreground region according to the saliency detection result, producing the background-blurred output. The method can blur the background quickly relying only on a single portrait image, and improves the quality of the blurred result.

Description

An automatic background blurring method for portrait photos based on saliency detection
Technical field
The present invention relates to the fields of image/video processing and computer vision, and in particular to an automatic background blurring method for portrait photos based on saliency detection.
Background technology
With the rapid spread of smart devices, most photos taken with smartphones are portrait photos. Since most smart-device users lack professional photo post-processing skills, there is strong demand for automatic beautification of portrait photos. Among beautification techniques, background blurring, also known as shallow depth-of-field, highlights the photographic subject and gives the image a layered visual aesthetic. With the rapid development of smart-device hardware, background blurring currently relies mainly on two hardware foundations: first, rear dual cameras on the smart device; second, a front-facing depth sensor on the smart device. Both acquire scene depth information through hardware assistance and use it to synthesize a depth-of-field rendering. When the depth information of a single image is available as a prior, rendering the depth of field of a single photo algorithmically is also relatively easy. However, neither hardware foundation is yet widespread, and a large number of existing smartphones carry only a single camera on each side of the body. Besides hardware implementations, there are also software implementations: in photo beautification applications, a user of an ordinary smart device can manually select the foreground region of a photo and then adjust the blur applied to the background to obtain a similar effect, but this procedure is relatively cumbersome. A background blurring algorithm with low equipment cost and simple operation that relies only on a single image has therefore become a research hotspot in computer vision.
Because portraits occupy characteristic spatial positions in portrait images, algorithms can incorporate this prior to speed up inference. The salient object detection algorithm proposed by Zhu et al. is a saliency optimization algorithm based on background detection (Saliency Optimization, abbreviated SO), whose saliency maps are widely used in various fields as priors for foreground objects. Chen et al. observe that the foreground portrait region in a portrait photo is generally also the salient region of the saliency map. They add facial features to the SO algorithm to optimize the saliency map, and, using the saliency map as prior information, apply a new gradient-domain guided filtering algorithm to simultaneously blur the background and enhance the details of the foreground portrait region in the portrait photo. However, because the algorithm proposed by Chen et al. does not compute the boundary between the foreground portrait region and the background, parts of the portrait region may be blurred.
Summary of the invention
The purpose of the present invention is to provide an automatic background blurring method for portrait photos based on saliency detection, which can blur the background quickly relying only on a single portrait image while improving the quality of the blurred result.
To achieve the above object, the present invention adopts the following technical solution: an automatic background blurring method for portrait photos based on saliency detection, comprising the following steps:
Step S1: Segment the portrait image into N superpixels using the linear spectral clustering superpixel segmentation algorithm, then compute the saliency value of each superpixel using the improved saliency optimization algorithm;
Step S2: Compute an adaptive threshold over the obtained superpixel saliency values using Otsu's method, and label superpixels whose saliency exceeds this adaptive threshold as foreground; at the same time set a fixed threshold, label superpixels whose saliency is below the fixed threshold as background, and mark the remaining superpixels as unknown, thereby obtaining a superpixel-scale trimap;
Step S3: Apply the superpixel-scale GrabCut algorithm to segment the portrait region boundary from the trimap;
Step S4: Based on the portrait region boundary from step S3 and the saliency detection result from step S1, first blur the background region with the fast guided filtering algorithm, then selectively enhance details in the foreground region according to the saliency result, thereby obtaining the background-blurred output.
Further, step S1, in which the portrait image is segmented into N superpixels using the linear spectral clustering superpixel segmentation algorithm and the saliency value of each superpixel is computed using the improved saliency optimization algorithm, comprises the following steps:
Step S11: For an arbitrary portrait image I, segment it into N superpixels {p_i | i = 1, …, N} using the linear spectral clustering superpixel segmentation algorithm, obtaining the superpixel segmentation label set {l_i | i = 1, …, N}; each segmentation label l_i is the set of all pixels contained in the i-th superpixel, and i is the subscript of l_i;
Step S12: For the N superpixels obtained in step S11, compute the corresponding background connectivity prior of each. The average colors z_i of the superpixels in the CIE-Lab color space form the set {z_i | i = 1, …, N}. Build an undirected weighted graph connecting all adjacent superpixels, in which the weight of the edge connecting any two adjacent superpixels is defined as the Euclidean distance between the color values of the two superpixels; from this graph, the geodesic distance d_geo(p_j, p_i) between any two superpixels (p_j, p_i) is computed as the length of the shortest path between them, with subscripts j, i ranging from 1 to N. Following the definition of the background connectivity prior, first assume that the superpixels on the image boundary belong to the background region; on this basis define the spanning area of superpixel p_j in color space as Area(p_j), and the length of that spanning region along the image boundary as L(p_j). The background connectivity prior BndCon(p_j) is then defined as:

BndCon(p_j) = L(p_j) / sqrt(Area(p_j))

where Area(p_j) = Σ_{i=1..N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) and L(p_j) = Σ_{i=1..N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) · δ(p_i ∉ Bnd). σ_clr denotes the standard deviation of the Gaussian applied to the geodesic distance d_geo(p_j, p_i), and Bnd is the set of superpixels initially assumed to belong to the background region. Because the improved saliency optimization algorithm does not directly assume that all boundary superpixels of the portrait image belong to the background, but only those on the left, right, and top boundaries, the set Bnd contains only the superpixels on the left, right, and top boundaries of the image. δ(p_i ∉ Bnd) indicates whether superpixel p_i does not belong to the set Bnd: δ(·) = 0 when the expression in parentheses is true, and δ(·) = 1 when it is false;
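The geodesic distance d_geo over the superpixel adjacency graph is a shortest-path computation; a Dijkstra sketch in Python (the adjacency list, mean colors, and node indices below are illustrative, not from the patent):

```python
import heapq
import math

def geodesic_distances(n, edges, colors, source):
    """Shortest-path (geodesic) distance from `source` to every superpixel,
    where each edge weight is the Euclidean distance between the mean
    CIE-Lab colors of the two adjacent superpixels."""
    def color_dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        w = color_dist(colors[i], colors[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = [math.inf] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Running this once per superpixel gives all pairwise d_geo values needed for Area(p_j) and L(p_j).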
The background connectivity prior of each superpixel is thus obtained; for superpixels that belong to the background region it is numerically much larger than for superpixels that belong to the foreground region;
From the computed background connectivity priors of all superpixels, the probability weight that superpixel p_i belongs to the background region is defined as:

w_i^bg = 1 − exp(−BndCon²(p_i) / (2σ_c²))

where σ_c is a parameter whose value lies between [0.5, 2.5];
and the probability weight that superpixel p_i belongs to the foreground region is defined as:

w_i^fg = Σ_{j=1..N} d_app(p_i, p_j) · exp(−d_s²(p_i, p_j) / (2σ_s²)) · w_j^bg

where d_s(p_i, p_j) is the spatial distance between the center points of superpixels p_j and p_i, and σ_s is the standard deviation of the Gaussian;
The calculation of w_i^bg depends on BndCon(p_i), i.e. on how the set Bnd is determined; optimizing the set Bnd therefore changes the value of w_i^bg, which further affects the solution of the optimization equation and hence the saliency value of every superpixel;
Step S13: Apply Otsu's method to the set of initially computed background probability weights {w_i^bg} to obtain a threshold t_wb; among the superpixels on the four image boundaries, remove from the set Bnd those whose value is below t_wb, and add back to Bnd those whose value is greater than or equal to t_wb, thereby completing the optimization of the set Bnd;
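The "Da-Jin" algorithm is Otsu's method. A minimal pure-Python sketch (the 256-bin histogram granularity is an assumption, not specified by the patent):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of a 1-D set of values in [0, 1]."""
    hist = [0] * bins
    for v in values:
        hist[min(int(v * bins), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = 0        # count of values in the lower class
    sum0 = 0      # sum of bin indices in the lower class
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return (best_t + 1) / bins  # values strictly above this fall in the upper class
```

The same routine serves both step S13 (thresholding boundary weights) and step S2 (the adaptive saliency threshold).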
Step S14: Using the foreground probability weights w_i^fg, the background probability weights w_i^bg, and the smoothness weights w_ij as parameters, solve the following optimization equation for the saliency values s_i of the superpixels:

min_s Σ_{i=1..N} w_i^bg · s_i² + Σ_{i=1..N} w_i^fg · (s_i − 1)² + Σ_{i,j} w_ij · (s_i − s_j)²

where the smoothness weight w_ij = exp(−d_app²(p_i, p_j) / (2σ_clr²)) + μ, d_app(p_i, p_j) is the Euclidean distance between superpixels p_i and p_j in the CIE-Lab color space, and μ is a small regularization constant;
Solving the optimization equation yields the saliency value s_i of each superpixel, and the saliency values are then normalized to the range [0, 1].
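The objective of step S14 is quadratic in s, so setting its gradient to zero gives a linear system; a small dense-solver sketch in pure Python (assuming the smoothness weights form a symmetric matrix with zero diagonal, summed over unordered pairs):

```python
def solve_saliency(w_bg, w_fg, w_smooth):
    """Minimize  sum_i w_bg[i]*s_i^2 + sum_i w_fg[i]*(s_i-1)^2
                 + sum over unordered pairs {i,j} of w_smooth[i][j]*(s_i-s_j)^2
    by solving the normal equations
        (diag(w_bg) + diag(w_fg) + L) s = w_fg,
    where L is the graph Laplacian of the smoothness weights."""
    n = len(w_bg)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = w_bg[i] + w_fg[i] + sum(w_smooth[i])
        for j in range(n):
            if i != j:
                A[i][j] -= w_smooth[i][j]
    b = list(w_fg)
    # Gaussian elimination with partial pivoting (fine for small n).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (b[r] - sum(A[r][c] * s[c] for c in range(r + 1, n))) / A[r][r]
    return s
```

A real implementation would exploit the sparsity of the superpixel graph rather than a dense solve.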
Further, step S3, in which the superpixel-scale GrabCut algorithm segments the portrait region boundary from the trimap, comprises the following steps:
Step S31: The superpixel-scale GrabCut algorithm converts the segmentation problem into the minimization of an energy function, and solves the minimization as a max-flow/min-cut problem on an s-t network; the equation to be solved is defined as:

x* = argmin_x E(x, θ, z)

where x* = {x_i*} denotes the segmentation result of the superpixels: x_i* = 0 indicates that the superpixel belongs to the background region, and x_i* = 1 that it belongs to the foreground region. E(x, θ, z) is the energy function defined by the GrabCut algorithm, comprising a smoothness term V(x, z) and a data term U(x, θ, z): the smoothness term V(x, z) measures the difference between the foreground and background regions, and the data term U(x, θ, z) measures the probability that a superpixel belongs to the foreground or background region. Among the function variables, z = {z_i | i = 1, …, N} is the average color of each superpixel in the CIE-Lab color space, and x = {x_i} denotes the initial labeling assumption of the superpixel-scale GrabCut algorithm: x_i is 1 when the superpixel is labeled foreground, and 0 when it is labeled background or unknown. θ denotes the Gaussian mixture models defined by the GrabCut algorithm; the algorithm uses the superpixels not labeled foreground as input samples for the background Gaussian mixture model, and the superpixels not labeled background as input samples for the foreground Gaussian mixture model;
Step S32: Build the Gaussian mixture models using the Orchard and Bouman algorithm, then solve them with the EM algorithm, and further compute the data term U(x, θ, z) and the smoothness term V(x, z); finally, solve the max-flow/min-cut problem on the s-t network to obtain x*, i.e. the superpixel-scale segmentation result.
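The final max-flow/min-cut step can be illustrated with a small Edmonds-Karp solver on a toy s-t network; the node indices, capacities, and terminal layout below are illustrative, and the data/smoothness terms that would define real capacities are omitted:

```python
from collections import deque

def min_cut_labels(n, s, t, cap):
    """Edmonds-Karp max-flow on an s-t network; once the flow saturates,
    nodes still reachable from s in the residual graph form the
    source (foreground) side of the minimum cut (label 1)."""
    res = dict(cap)                      # residual capacities, updated in place
    for (u, v) in cap:
        res.setdefault((v, u), 0)        # add reverse edges
    adj = {i: set() for i in range(n)}
    for (u, v) in res:
        adj[u].add(v)
    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                return parent
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        return None
    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], t                  # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(res[e] for e in path)    # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= f
            res[(v, u)] += f
    seen, q = {s}, deque([s])            # residual reachability from s
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and res[(u, v)] > 0:
                seen.add(v)
                q.append(v)
    return [1 if i in seen else 0 for i in range(n)]
```

In practice GrabCut implementations use faster specialized solvers (e.g. Boykov-Kolmogorov), but the cut semantics are the same.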
Further, step S4, in which, based on the portrait region boundary and the saliency detection result, the background region is first blurred with the fast guided filtering algorithm and foreground details are then selectively enhanced according to the saliency result to obtain the background-blurred output, comprises the following steps:
Step S41: Map the superpixel-scale saliency values {s_i}, through the corresponding superpixel segmentation labels {l_i}, back to the original image resolution to obtain the saliency map S; similarly, map the superpixel-scale segmentation result x*, through the corresponding labels {l_i}, back to the original resolution to obtain the segmentation mask M;
Step S42: Based on the segmentation result, blur the background region using the fast guided filtering algorithm;
Assume that in the fast guided filtering algorithm the guidance image is G, the image to be filtered is d, and the produced filtering result is q. The kernel of guided filtering is based on a local linear model:

q_i = a_k · G_i + b_k, for all i ∈ ω_k

where i indexes the pixels of the image, q_i is the value of pixel i in the filtered image, G_i is the value of pixel i in the guidance image, k indexes the box filter windows ω of radius r, and ω_k is the set of pixel indices contained in the k-th filter window. a_k and b_k form the closed-form solution obtained by converting the minimization of the post-filtering reconstruction error into a linear regression problem:

a_k = ((1/|ω_k|) Σ_{i∈ω_k} G_i d_i − u_k d̄_k) / (σ_k² + ε),  b_k = d̄_k − a_k u_k

where u_k and σ_k² are the mean and variance of the guidance image G in the k-th filter window, d_i is the value of pixel i in the image to be filtered, d̄_k is the mean value of the image to be filtered over the pixels of the k-th filter window, and ε is a regularization parameter controlling smoothness. Because the algorithm computes every radius-r box filter window in the image and obtains the corresponding a_k and b_k, the parameters a and b of each pixel are taken to be the mean of the parameters over all windows containing that pixel, so that:

q_i = ā_i · G_i + b̄_i

where ā_i denotes the mean of parameter a over all filter windows containing pixel i, and b̄_i denotes the mean of parameter b over all filter windows containing pixel i. Since guided filtering is used here only as a blurring operation, the two inputs required by the fast guided filtering algorithm are identical, both being the image to be filtered d. Moreover, based on the segmentation mask M only the background region is blurred, so:

q_i ← m_i · d_i + (1 − m_i) · q_i

where m_i is the segmentation result of pixel i in the segmentation mask, m_i ∈ {0, 1};
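The background-only guided-filter blur of step S42 can be sketched in 1-D grayscale (self-guided, G = d; the truncated border handling is a simplification of the real 2-D box filter):

```python
def box_mean(vals, r):
    """Mean over a window of radius r, truncated at the borders."""
    n = len(vals)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def guided_blur(d, r, eps):
    """Self-guided filter (guide G = input d), 1-D:
    a_k = var_k / (var_k + eps), b_k = (1 - a_k) * mean_k,
    q_i = mean(a)_i * d_i + mean(b)_i."""
    mean_d = box_mean(d, r)
    mean_dd = box_mean([x * x for x in d], r)
    a, b = [], []
    for m, m2 in zip(mean_d, mean_dd):
        var = m2 - m * m
        ak = var / (var + eps)
        a.append(ak)
        b.append((1 - ak) * m)
    a_bar, b_bar = box_mean(a, r), box_mean(b, r)
    return [ai * di + bi for ai, di, bi in zip(a_bar, d, b_bar)]

def blur_background(d, mask, r, eps):
    """Keep foreground pixels (mask == 1), replace background with the blur."""
    q = guided_blur(d, r, eps)
    return [di if mi == 1 else qi for di, mi, qi in zip(d, mask, q)]
```

A flat region (zero variance) passes through unchanged, which is exactly the edge-preserving property the patent relies on.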
Step S43: Based on the saliency detection result and the filtered image q, selectively enhance the details of the more salient regions of the foreground, so as to obtain a better background-blurred result;
Assume the detail-enhanced image is e. The difference between the blurred result and the original image is taken as the edge-detail layer of the original image, and this difference is amplified according to how salient each region of the original image is, thereby enhancing the details of the original image; the detail enhancement formula is:

e_i = q_i + m_i · (1 + λ · s_i) · (d_i − q_i)

where s_i is the saliency value of pixel i in the saliency map, with value range [0, 1], and λ is a constant term; the detail-enhanced image e is the final result of background blurring of the portrait image.
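The detail-enhancement step admits a direct per-pixel sketch; the formula below is one plausible reading of the description (amplify the detail layer d − q by saliency inside the portrait mask, keep the blur outside), not necessarily the patent's exact expression:

```python
def enhance_details(d, q, sal, mask, lam=0.5):
    """Inside the portrait (mask == 1): original plus a saliency-weighted
    copy of the detail layer (d - q). Outside: the blurred value q."""
    out = []
    for di, qi, si, mi in zip(d, q, sal, mask):
        if mi == 1:
            out.append(di + lam * si * (di - qi))
        else:
            out.append(qi)
    return out
```

Pixels with saliency 0 keep their original value; the most salient pixels get the strongest sharpening.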
Compared with the prior art, the beneficial effects of the invention are as follows: the invention introduces a background-superpixel optimization strategy to improve the quality of saliency detection, and further segments a more complete portrait region using the superpixel-scale GrabCut algorithm. In addition, based on the foreground boundary obtained from saliency detection and image segmentation, details are selectively enhanced in the more salient parts of the foreground while a uniform blurring operation is applied to the background region. The invention can detect and rapidly segment the portrait region relying only on a single portrait image, and brings the blurring effect closer to that of a digital single-lens reflex camera with a large aperture, so it has considerable practical value.
Description of the drawings
Fig. 1 is a flow diagram of the method of the present invention.
Fig. 2 is an implementation flowchart of the overall method in an embodiment of the present invention.
Specific implementation mode
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The present invention provides an automatic background blurring method for portrait photos based on saliency detection, as shown in Figs. 1 and 2, comprising the following steps:
Step S1: Segment the portrait image into N superpixels using the linear spectral clustering (LSC) superpixel segmentation algorithm, then compute the saliency value of each superpixel using the improved saliency optimization algorithm. This specifically includes the following steps:
Step S11: For an arbitrary portrait image I, segment it into N superpixels {p_i | i = 1, …, N} using the linear spectral clustering superpixel segmentation algorithm, obtaining the superpixel segmentation label set {l_i | i = 1, …, N}; each segmentation label l_i is the set of all pixels contained in the i-th superpixel, and i is the subscript of l_i;
Step S12: Since the portrait region in a portrait photo usually touches the lower boundary of the image, the improved saliency optimization algorithm does not directly assume that all boundary superpixels of the portrait image belong to the background region; it only assumes that the superpixels on the left, right, and top boundaries of the portrait image do;
For the N superpixels obtained in step S11, compute the corresponding background connectivity prior of each. The average colors z_i of the superpixels in the CIE-Lab color space form the set {z_i | i = 1, …, N}. Build an undirected weighted graph connecting all adjacent superpixels, in which the weight of the edge connecting any two adjacent superpixels is defined as the Euclidean distance between the color values of the two superpixels; from this graph, the geodesic distance d_geo(p_j, p_i) between any two superpixels (p_j, p_i) is computed as the length of the shortest path between them, with subscripts j, i ranging from 1 to N. Following the definition of the background connectivity prior, first assume that the superpixels on the image boundary belong to the background region; on this basis define the spanning area of superpixel p_j in color space as Area(p_j), and the length of that spanning region along the image boundary as L(p_j). The background connectivity prior BndCon(p_j) is then defined as:

BndCon(p_j) = L(p_j) / sqrt(Area(p_j))

where Area(p_j) = Σ_{i=1..N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) and L(p_j) = Σ_{i=1..N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) · δ(p_i ∉ Bnd). σ_clr denotes the standard deviation of the Gaussian applied to the geodesic distance d_geo(p_j, p_i), and Bnd is the set of superpixels initially assumed to belong to the background region. Because the improved saliency optimization algorithm only assumes that the superpixels on the left, right, and top boundaries of the portrait image belong to the background, the set Bnd contains only the superpixels on the left, right, and top boundaries of the image. δ(p_i ∉ Bnd) indicates whether superpixel p_i does not belong to the set Bnd: δ(·) = 0 when the expression in parentheses is true, and δ(·) = 1 when it is false;
The background connectivity prior of each superpixel is thus obtained; for superpixels that belong to the background region it is numerically much larger than for superpixels that belong to the foreground region;
From the computed background connectivity priors of all superpixels, the probability weight that superpixel p_i belongs to the background region is defined as:

w_i^bg = 1 − exp(−BndCon²(p_i) / (2σ_c²))

where σ_c is a parameter whose value lies between [0.5, 2.5];
and the probability weight that superpixel p_i belongs to the foreground region is defined as:

w_i^fg = Σ_{j=1..N} d_app(p_i, p_j) · exp(−d_s²(p_i, p_j) / (2σ_s²)) · w_j^bg

where d_s(p_i, p_j) is the spatial distance between the center points of superpixels p_j and p_i, and σ_s is the standard deviation of the Gaussian;
As seen from the definitions of the foreground and background probabilities of a superpixel, the calculation of w_i^bg depends on BndCon(p_i), i.e. on how the set Bnd is determined; optimizing the set Bnd therefore changes the value of w_i^bg, which further affects the solution of the optimization equation and hence the saliency value of every superpixel;
Step S13: The present invention proposes a background-superpixel optimization strategy, i.e. using the initially computed background probability weights to optimize the set Bnd. The main idea is that, since not every user shooting portrait photos has professional photographic skill, the portrait region in a photo may easily reach the left or right boundary of the image. In this case, if the superpixels on the left and right boundaries are directly assumed to be background superpixels, the algorithm will wrongly assume that superpixels actually belonging to the portrait region belong to the background, causing errors in the saliency detection of the portrait region; a background-superpixel optimization strategy is therefore needed;
Suppose the number of superpixels in the set Bnd that actually belong to the portrait region is N_f, and the number of remaining superpixels is N_b, with N_f << N_b. Suppose superpixel p_k belongs to the portrait region but has been wrongly placed into the set Bnd by the prior rule, while superpixel p_l belongs to the background region. For a more intuitive explanation, assume the image contains only two colors, one for the foreground portrait region and one for the background region. Then L(p_k) = N_f and L(p_l) = N_b, i.e. L(p_k) << L(p_l), and likewise BndCon(p_k) << BndCon(p_l); it follows that w_k^bg << w_l^bg, so the initial background probability weights can distinguish the wrongly included portrait superpixels;
Apply Otsu's method to the set {w_i^bg} to obtain a threshold t_wb; among the superpixels on the four image boundaries, remove from the set Bnd those whose value is below t_wb, and add back to Bnd those whose value is greater than or equal to t_wb, thereby completing the optimization of the set Bnd;
Step S14: Using the foreground probability weights w_i^fg, the background probability weights w_i^bg, and the smoothness weights w_ij as parameters, solve the following optimization equation for the saliency values s_i of the superpixels:

min_s Σ_{i=1..N} w_i^bg · s_i² + Σ_{i=1..N} w_i^fg · (s_i − 1)² + Σ_{i,j} w_ij · (s_i − s_j)²

where the smoothness weight w_ij = exp(−d_app²(p_i, p_j) / (2σ_clr²)) + μ, d_app(p_i, p_j) is the Euclidean distance between superpixels p_i and p_j in the CIE-Lab color space, and μ is a small regularization constant;
Solving the optimization equation yields the saliency value s_i of each superpixel, and the saliency values are then normalized to the range [0, 1].
Step S2: Compute an adaptive threshold over the obtained superpixel saliency values using Otsu's method, and label superpixels whose saliency exceeds this adaptive threshold as foreground; at the same time set a fixed threshold, label superpixels whose saliency is below the fixed threshold as background, and mark the remaining superpixels as unknown, thereby obtaining a superpixel-scale trimap, as shown in the result of step 2 of Fig. 2.
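The three-way labeling of step S2 can be sketched as follows; the fixed background threshold value used here is illustrative, not specified by the patent:

```python
def build_trimap(saliency, t_otsu, t_bg=0.05):
    """Per-superpixel trimap labels: 'fg' above the adaptive (Otsu)
    threshold, 'bg' below a small fixed threshold, 'unknown' otherwise.
    The default t_bg = 0.05 is an illustrative choice."""
    labels = []
    for s in saliency:
        if s > t_otsu:
            labels.append('fg')
        elif s < t_bg:
            labels.append('bg')
        else:
            labels.append('unknown')
    return labels
```

The resulting labels seed the foreground/background Gaussian mixture models of the superpixel-scale GrabCut in step S3.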
Step S3: Apply the superpixel-scale GrabCut algorithm to segment the portrait region boundary from the trimap. This specifically includes the following steps:
Step S31: The superpixel-scale GrabCut algorithm converts the segmentation problem into the minimization of an energy function, and solves the minimization as a max-flow/min-cut problem on an s-t network; the equation to be solved is defined as:

x* = argmin_x E(x, θ, z)

where x* = {x_i*} denotes the segmentation result of the superpixels: x_i* = 0 indicates that the superpixel belongs to the background region, and x_i* = 1 that it belongs to the foreground region. E(x, θ, z) is the energy function defined by the GrabCut algorithm, comprising a smoothness term V(x, z) and a data term U(x, θ, z): the smoothness term V(x, z) measures the difference between the foreground and background regions, and the data term U(x, θ, z) measures the probability that a superpixel belongs to the foreground or background region. Among the function variables, z = {z_i | i = 1, …, N} is the average color of each superpixel in the CIE-Lab color space, and x = {x_i} denotes the initial labeling assumption of the superpixel-scale GrabCut algorithm: x_i is 1 when the superpixel is labeled foreground, and 0 when it is labeled background or unknown. θ denotes the Gaussian mixture models defined by the GrabCut algorithm; the algorithm uses the superpixels not labeled foreground as input samples for the background Gaussian mixture model, and the superpixels not labeled background as input samples for the foreground Gaussian mixture model;
Step S32: Build the Gaussian mixture models using the Orchard and Bouman algorithm, then solve them with the EM algorithm, and further compute the data term U(x, θ, z) and the smoothness term V(x, z); finally, solve the max-flow/min-cut problem on the s-t network to obtain x*, i.e. the superpixel-scale segmentation result.
Step S4: Based on the portrait region boundary obtained in step S3 and the saliency detection result obtained in step S1, first blur the background region with the fast guided filtering algorithm, then selectively enhance details in the foreground region according to the saliency result, thereby obtaining the background-blurred output. This specifically includes the following steps:
Step S41: Map the superpixel-scale saliency values {s_i}, through the corresponding superpixel segmentation labels {l_i}, back to the original image resolution to obtain the saliency map S; similarly, map the superpixel-scale segmentation result x*, through the corresponding labels {l_i}, back to the original resolution to obtain the segmentation mask M. The result images are shown in steps 1 and 3 of Fig. 2;
Step S42: Based on the segmentation result, blur the background region in a targeted manner using the fast guided filtering algorithm. Because image filtering relies on local correlations within the image, filtering with coefficients that are computed on a downsampled image and then upsampled yields a result very close to guided filtering performed directly on the original image; if the downsampling factor is f, the running time drops to 1/f² of that of the original algorithm. To improve the overall computational efficiency of the method, the original image can likewise be downsampled for the saliency detection and the superpixel-scale GrabCut algorithm, and the downsampling factors used there can be set consistently;
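The downsample-then-upsample trick described above can be sketched in 1-D; the helper names and the nearest-neighbor upsampling below are illustrative choices, not prescribed by the patent:

```python
def subsample(xs, f):
    """Keep every f-th sample."""
    return xs[::f]

def upsample_nearest(xs, f, n):
    """Nearest-neighbor upsampling back to length n."""
    out = []
    for x in xs:
        out.extend([x] * f)
    return out[:n] + [xs[-1]] * max(0, n - len(out))

def fast_self_guided(d, r, eps, f):
    """Fast variant: run the self-guided filter's coefficient pass on a
    signal subsampled by f, upsample the coefficients, and apply them at
    full resolution -- roughly a 1/f cost reduction in 1-D (1/f^2 in 2-D)."""
    small = subsample(d, f)
    def box_mean(vals, r):
        return [sum(vals[max(0, i - r):i + r + 1]) /
                len(vals[max(0, i - r):i + r + 1]) for i in range(len(vals))]
    m = box_mean(small, r)
    m2 = box_mean([x * x for x in small], r)
    a = [(v2 - v * v) / ((v2 - v * v) + eps) for v, v2 in zip(m, m2)]
    b = [(1 - ak) * mk for ak, mk in zip(a, m)]
    # Upsample the smoothed coefficients, then apply the linear model at full resolution.
    a_bar = upsample_nearest(box_mean(a, r), f, len(d))
    b_bar = upsample_nearest(box_mean(b, r), f, len(d))
    return [ai * di + bi for ai, di, bi in zip(a_bar, d, b_bar)]
```

Only the cheap final multiply-add runs at full resolution, which is what makes the fast variant attractive on mobile hardware.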
Let G be the guide image in the fast guided filtering algorithm, d the image to be filtered, and q the generated filter result. The core idea of guided filtering is a local linear model:

q_i = a_k · G_i + b_k, ∀ i ∈ ω_k

wherein i denotes the index of a pixel in the image, q_i denotes the value of pixel i in the filter result image, G_i denotes the value of pixel i in the guide image, k indexes the box filter windows ω of radius r, and ω_k denotes the set of pixel indices contained in the k-th filter window. a_k and b_k are the closed-form solution obtained by converting the problem of minimizing the filtered reconstruction error into a linear regression problem:

a_k = ( (1/|ω|) Σ_{i∈ω_k} G_i · d_i − u_k · d̄_k ) / (σ_k + ε)
b_k = d̄_k − a_k · u_k

wherein u_k and σ_k are the mean and variance of the guide image G in the k-th filter window, d_i denotes the value of pixel i in the image to be filtered, d̄_k denotes the mean value of the pixels of the image to be filtered within the k-th filter window, and ε is a regularization parameter used to control the smoothness. Since the algorithm computes many box filter windows of radius r over the image, obtaining a corresponding a_k and b_k for each, the parameters a and b of each pixel are taken to be the mean of the parameters of all windows containing that pixel, hence:

q_i = ā_i · G_i + b̄_i

wherein ā_i denotes the mean of the parameter a over all filter windows of the image that contain pixel i, and b̄_i denotes the mean of the parameter b over all filter windows of the image that contain pixel i. Since guided filtering is used here only as a blurring operation, the two inputs required by the fast guided filtering algorithm are identical, i.e. both are the image to be filtered d. Moreover, based on the segmentation mask M, only the background region is blurred, so:

q_i = m_i · d_i + (1 − m_i) · (ā_i · d_i + b̄_i)

wherein m_i is the segmentation result of the pixel in the segmentation mask M, m_i ∈ {0, 1};
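The filtering and mask compositing of step S42 can be sketched in Python with NumPy. The radius r, the regularization ε, and the box-filter-by-integral-image implementation below are illustrative choices, not values fixed by the patent; the full fast variant would additionally down-sample by the factor f before computing the parameters a and b.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window with edge padding,
    computed via an integral image (O(1) per pixel)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), r, mode='edge')
    ii = np.zeros((h + 2 * r + 1, w + 2 * r + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * r + 1
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)

def guided_self_blur(d, mask, r=4, eps=1e-3):
    """Guided-filter blur with the guide equal to the input (G = d),
    composited so that foreground pixels (mask == 1) stay sharp."""
    d = d.astype(float)
    u = box_mean(d, r)                    # u_k: window means of the guide
    var = box_mean(d * d, r) - u * u      # window variances of the guide
    a = var / (var + eps)                 # a_k (closed form when G = d)
    b = u - a * u                         # b_k
    a_bar, b_bar = box_mean(a, r), box_mean(b, r)  # average over windows containing each pixel
    q = a_bar * d + b_bar
    return mask * d + (1 - mask) * q
```

With a large ε the filter approaches a plain box blur of the background; with a tiny ε it is nearly the identity, which is the edge-preserving behavior that motivates using it for background blurring.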
Step S43:Based on the saliency detection result and the filtered image q, apply detail enhancement selectively to the more salient regions within the foreground region, so as to obtain a better background-blurred result;
Let e be the detail-enhanced image. The difference between the blurred result and the original image is taken as the edge detail of the original image, and these differences are amplified according to whether the corresponding region of the original image is salient, thereby enhancing the detail of the original image; in the detail enhancement formula, s_i is the saliency value of the pixel in the saliency map, with value range [0, 1], and λ is a constant term. The detail-enhanced image e is the final effect image after background blurring the portrait image, as the result obtained at step 5 of Fig. 2.
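Step S43 reads as saliency-weighted unsharp masking. Since the patent's exact enhancement equation is not reproduced in the text above, the form below, which adds λ·s_i times the detail d_i − blur_i back onto the composited result q_i, is an assumed reading for illustration only.

```python
import numpy as np

def enhance_details(d, q, blur, s, lam=0.6):
    """Saliency-weighted unsharp masking (assumed form of the patent's
    detail-enhancement step): d is the original image, q the mask-composited
    blur result, blur the fully blurred image, s the saliency map in [0, 1],
    and lam plays the role of the constant term lambda."""
    return q + lam * s * (d - blur)
```

Where the saliency map is zero the output is exactly the blurred composite q; where it approaches one, the high-frequency detail of the original image is added back at full strength λ.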
The above are preferred embodiments of the present invention. All changes made according to the technical solution of the present invention fall within the protection scope of the present invention, provided that the functions and effects produced do not go beyond the scope of the technical solution of the present invention.

Claims (4)

1. An automatic background blurring method for portrait photos based on saliency detection, characterized by comprising the following steps:
Step S1:Divide the portrait image into N superpixels using the linear spectral clustering superpixel segmentation algorithm, then calculate the saliency value of each superpixel using an improved saliency optimization algorithm;
Step S2:Compute an adaptive threshold over the saliency values of the obtained superpixels using Otsu's method, and label superpixels whose saliency value is greater than the adaptive threshold as the foreground region; meanwhile, set a fixed threshold and label superpixels whose saliency value is less than the fixed threshold as the background region; the remaining regions are labeled as the unknown region, thereby obtaining a superpixel-scale label trimap;
Step S3:Using the superpixel-scale GrabCut algorithm, segment the label trimap to obtain the portrait region boundary;
Step S4:Based on the portrait region boundary obtained in step S3 and the saliency detection result obtained in step S1, first blur the background region using the fast guided filtering algorithm, then selectively apply detail enhancement to the foreground region according to the saliency detection result, so as to obtain the background blurring effect.
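The trimap construction of step S2, an adaptive Otsu threshold for the foreground plus a fixed threshold for the background, can be sketched as follows; the histogram bin count and the fixed background threshold 0.05 are assumed example values, not values fixed by the claim.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method on a 1-D set of saliency values in [0, 1]:
    choose the cut that maximizes the between-class variance."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # weight of class 0 for each candidate cut
    m = np.cumsum(p * centers)      # cumulative first moment
    mt = m[-1]                      # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    between = np.nan_to_num(between)
    return centers[np.argmax(between)]

def build_trimap(sal, t_bg=0.05):
    """Label each superpixel: 1 = foreground (above the adaptive Otsu
    threshold), 0 = background (below the fixed threshold t_bg),
    0.5 = unknown region."""
    t_fg = otsu_threshold(sal)
    tri = np.full(sal.shape, 0.5)
    tri[sal > t_fg] = 1.0
    tri[sal < t_bg] = 0.0
    return tri
```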
2. The automatic background blurring method for portrait photos based on saliency detection according to claim 1, characterized in that, in said step S1, dividing the portrait image into N superpixels using the linear spectral clustering superpixel segmentation algorithm and then calculating the saliency value of each superpixel using the improved saliency optimization algorithm comprises the following steps:
Step S11:For an arbitrary portrait image I, divide it into N superpixels {p_i} (i = 1, …, N) using the linear spectral clustering superpixel segmentation algorithm, obtaining the superpixel segmentation label set {l_i}; each superpixel segmentation label l_i corresponds to the set of all pixels contained in the i-th superpixel, and i is the index of the segmentation label l_i;
Step S12:For the N superpixels obtained in step S11, calculate their corresponding background connectivity prior values. Let {z_i} be the set of average colors z_i of the superpixels in the CIE-Lab color space. Build an undirected weighted graph connecting all adjacent superpixels, and define the weight of the edge connecting any two adjacent superpixels in this graph as the Euclidean distance between the color values of the two superpixels; thereby the geodesic distance d_geo(p_j, p_i) between any two superpixels (p_j, p_i) is obtained, with the indices j, i taking values from 1 to N. Following the definition of the background connectivity prior, first assume that the superpixels on the image boundary belong to the background region; on this basis, define the spanning area of superpixel p_j in the color space as Area(p_j), and the length of this spanning region along the image boundary as L(p_j); then define the background connectivity prior value BndCon(p_j) as:

BndCon(p_j) = L(p_j) / sqrt(Area(p_j))

wherein Area(p_j) = Σ_{i=1}^{N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) and L(p_j) = Σ_{i=1}^{N} exp(−d_geo²(p_j, p_i) / (2σ_clr²)) · δ(p_i ∉ Bnd); σ_clr denotes the standard deviation of the Gaussian distribution applied to the geodesic distance d_geo(p_j, p_i), and Bnd is the set of superpixels initially assumed to belong to the background region. Since the improved saliency optimization algorithm does not directly assume that all superpixels on the boundary of the portrait image belong to the background region, but only assumes that the superpixels on the left, right, and upper boundaries of the portrait image belong to the background region, the set Bnd contains only the superpixels on the left, right, and upper boundaries of the image. p_i ∉ Bnd indicates that superpixel p_i does not belong to the set Bnd; when the predicate inside the brackets is true, i.e. superpixel p_i does not belong to the set Bnd, δ(·) = 0, and when the predicate is false, δ(·) = 1;
Thereby the background connectivity prior value of each superpixel is obtained, and the background connectivity prior value of a superpixel belonging to the background region is numerically much larger than that of a superpixel belonging to the foreground region;
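The geodesic distance and background connectivity prior of step S12 can be sketched over a superpixel adjacency graph as follows; the toy colors, adjacency structure, and the value of σ_clr are illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_distances(colors, adj):
    """All-pairs geodesic distance on the superpixel graph: edge weight =
    Euclidean color distance between adjacent superpixels, path cost =
    sum of edge weights (Dijkstra from every node)."""
    n = len(colors)
    w = {(i, j): float(np.linalg.norm(colors[i] - colors[j]))
         for i in adj for j in adj[i]}
    d = np.full((n, n), np.inf)
    for src in range(n):
        d[src, src] = 0.0
        pq = [(0.0, src)]
        while pq:
            dist, u = heapq.heappop(pq)
            if dist > d[src, u]:
                continue
            for v in adj[u]:
                nd = dist + w[(u, v)]
                if nd < d[src, v]:
                    d[src, v] = nd
                    heapq.heappush(pq, (nd, v))
    return d

def bnd_con(d_geo, bnd, sigma_clr=10.0):
    """Background connectivity prior: BndCon(p) = L(p) / sqrt(Area(p)),
    where both sums are Gaussian-weighted geodesic similarities and `bnd`
    flags the superpixels assumed to lie on the image boundary set Bnd."""
    sim = np.exp(-d_geo ** 2 / (2 * sigma_clr ** 2))
    area = sim.sum(axis=1)                              # Area(p_j)
    length = (sim * np.asarray(bnd, dtype=float)).sum(axis=1)  # L(p_j)
    return length / np.sqrt(area)
```

A superpixel tightly color-connected to the boundary set receives a large prior value, while one separated from it by a strong color edge receives a value near zero, which is exactly the foreground/background contrast the prior is designed to expose.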
From the background connectivity prior values calculated for all superpixels, the probability weight w_i^bg that superpixel p_i belongs to the background region is defined as:

w_i^bg = 1 − exp(−BndCon²(p_i) / (2σ_c²))

wherein σ_c is a parameter whose value lies between [0.5, 2.5];
and the probability weight w_i^fg that superpixel p_i belongs to the foreground region is defined as:

w_i^fg = Σ_{j=1}^{N} d_E(p_i, p_j) · w_spa(p_i, p_j) · w_j^bg

wherein d_E(p_i, p_j) denotes the Euclidean distance between superpixels p_j and p_i in the CIE-Lab color space, and the spatial weight is

w_spa(p_i, p_j) = exp(−d_s²(p_i, p_j) / (2σ_s²))

wherein d_s(p_i, p_j) is the spatial distance between the center points of superpixels p_j and p_i, and σ_s is the standard deviation of the Gaussian distribution;
The calculation of the background probability weight w_i^bg is related to BndCon(p_i), i.e., related to the determination of the set Bnd; the optimization of the set Bnd therefore affects the value of w_i^bg, which further influences the solution of the optimization equation and thereby influences the saliency value of each superpixel;
Step S13:Obtain a threshold t_wb over the background connectivity prior values using Otsu's method; superpixels on the four boundaries whose prior value is less than t_wb are removed from the set Bnd, and superpixels whose prior value is greater than or equal to t_wb are rejoined into the set Bnd, thereby completing the optimization of the set Bnd;
Step S14:Using the foreground probability weight w_i^fg, the background probability weight w_i^bg, and the smoothness weight w_ij as parameters, solve the optimization equation to obtain the saliency value s_i of each superpixel; the optimization equation is:

{s_i}_{i=1}^{N} = argmin ( Σ_{i=1}^{N} w_i^bg · s_i² + Σ_{i=1}^{N} w_i^fg · (s_i − 1)² + Σ_{i,j} w_ij · (s_i − s_j)² )

wherein the smoothness weight w_ij = exp(−d_app²(p_i, p_j) / (2σ_clr²)), and d_app(p_i, p_j) is the Euclidean distance between superpixels p_i and p_j in the CIE-Lab color space;
The saliency value s_i of each superpixel is obtained by solving the optimization equation, and the saliency values are normalized to the range [0, 1].
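The objective of step S14 is quadratic in the saliency values, so setting its gradient to zero yields a linear system. The sketch below (dense for brevity, with the smoothness term summed over ordered pairs of a symmetric weight matrix) is one way to solve it; a production version would use a sparse solver.

```python
import numpy as np

def optimize_saliency(w_bg, w_fg, W):
    """Minimize  sum_i w_bg[i] * s_i^2 + sum_i w_fg[i] * (s_i - 1)^2
                + sum_{i,j} W[i,j] * (s_i - s_j)^2
    (W symmetric, summed over ordered pairs).  Setting the gradient to zero
    gives (diag(w_bg) + diag(w_fg) + 2L) s = w_fg, with L the graph
    Laplacian of W."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    A = np.diag(w_bg) + np.diag(w_fg) + 2.0 * L
    s = np.linalg.solve(A, np.asarray(w_fg, dtype=float))
    s = np.clip(s, 0.0, 1.0)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]
```

Superpixels with a large background weight are pulled toward 0, those with a large foreground weight toward 1, and the smoothness term propagates these constraints across similar neighbors.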
3. The automatic background blurring method for portrait photos based on saliency detection according to claim 2, characterized in that, in said step S3, using the superpixel-scale GrabCut algorithm to segment the label trimap and obtain the portrait region boundary comprises the following steps:
Step S31:The superpixel-scale GrabCut algorithm converts the segmentation problem into the minimization of an energy function, and solves the minimization problem as a max-flow/min-cut problem on an s-t network; the equation to be solved is defined as:

x̂ = argmin_x E(x, θ, z) = argmin_x ( U(x, θ, z) + V(x, z) )

wherein x̂_i denotes the segmentation result of a superpixel: x̂_i = 0 indicates that the superpixel belongs to the background region, and x̂_i = 1 indicates that the superpixel belongs to the foreground region. E(x, θ, z) denotes the energy function defined by the GrabCut algorithm, comprising a smoothness term V(x, z) and a data term U(x, θ, z); the smoothness term V(x, z) measures the difference between the foreground region and the background region, while the data term U(x, θ, z) measures the probability that a superpixel belongs to the foreground or the background region. In the function, the variable z = {z_i} denotes the average color of each superpixel in the CIE-Lab color space, and x = {x_i} denotes the initial label assumption of the superpixel-scale GrabCut algorithm: when a superpixel belongs to the foreground region, x_i is 1, and when a superpixel belongs to the background region or the unknown region, x_i is 0. θ denotes the Gaussian mixture models defined by the GrabCut algorithm; the GrabCut algorithm takes the superpixels belonging to the non-foreground region as the input samples of the background Gaussian mixture model, and the superpixels belonging to the non-background region as the input samples of the foreground Gaussian mixture model;
Step S32:Build the Gaussian mixture models using the Orchard-Bouman algorithm, then solve them using the EM algorithm, and further calculate the data term U(x, θ, z) and the smoothness term V(x, z); finally, solve the max-flow/min-cut problem of the s-t network, thereby obtaining x̂, i.e. the superpixel-scale segmentation result.
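The max-flow/min-cut solve of steps S31-S32 can be sketched with the Edmonds-Karp algorithm. The toy capacities in the test stand in for the data-term edges to the terminals s and t and the smoothness-term edges between superpixels; they are an illustrative graph, not the patent's construction.

```python
from collections import deque

def min_cut(n, s, t, cap):
    """Edmonds-Karp max-flow on nodes {0..n-1}; `cap` maps (u, v) to edge
    capacity.  Returns the set of nodes on the source side of the minimum
    cut, i.e. the superpixels labeled foreground."""
    c = {}
    for (u, v), w in cap.items():
        c[(u, v)] = c.get((u, v), 0.0) + w
        c.setdefault((v, u), 0.0)          # residual arcs
    adj = {i: [] for i in range(n)}
    for (u, v) in c:
        adj[u].append(v)
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:       # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and c[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                    # recover the path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(c[e] for e in path)
        for (u, v) in path:                # push flow along the path
            c[(u, v)] -= f
            c[(v, u)] += f
    side, q = {s}, deque([s])              # source side = residual-reachable
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side and c[(u, v)] > 1e-12:
                side.add(v)
                q.append(v)
    return side
```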
4. The automatic background blurring method for portrait photos based on saliency detection according to claim 3, characterized in that, in said step S4, based on the portrait region boundary and the saliency detection result, first blurring the background region using the fast guided filtering algorithm and then selectively applying detail enhancement to the foreground region according to the saliency detection result, so as to obtain the background blurring effect, comprises the following steps:
Step S41:For the saliency value set {s_i} at the superpixel scale and the corresponding superpixel segmentation label set {l_i}, map the saliency detection result to the original image resolution to obtain the saliency map S; similarly, for the segmentation result {x̂_i} at the superpixel scale and the corresponding superpixel segmentation label set {l_i}, map the segmentation result to the original image resolution to obtain the segmentation mask M;
Step S42:Based on the segmentation result, blur the background region using the fast guided filtering algorithm;
Let G be the guide image in the fast guided filtering algorithm, d the image to be filtered, and q the generated filter result. The core idea of guided filtering is a local linear model:

q_i = a_k · G_i + b_k, ∀ i ∈ ω_k

wherein i denotes the index of a pixel in the image, q_i denotes the value of pixel i in the filter result image, G_i denotes the value of pixel i in the guide image, k indexes the box filter windows ω of radius r, and ω_k denotes the set of pixel indices contained in the k-th filter window. a_k and b_k are the closed-form solution obtained by converting the problem of minimizing the filtered reconstruction error into a linear regression problem:

a_k = ( (1/|ω|) Σ_{i∈ω_k} G_i · d_i − u_k · d̄_k ) / (σ_k + ε)
b_k = d̄_k − a_k · u_k

wherein u_k and σ_k are the mean and variance of the guide image G in the k-th filter window, d_i denotes the value of pixel i in the image to be filtered, d̄_k denotes the mean value of the pixels of the image to be filtered within the k-th filter window, and ε is a regularization parameter used to control the smoothness. Since the algorithm computes many box filter windows of radius r over the image, obtaining a corresponding a_k and b_k for each, the parameters a and b of each pixel are taken to be the mean of the parameters of all windows containing that pixel, hence:

q_i = ā_i · G_i + b̄_i

wherein ā_i denotes the mean of the parameter a over all filter windows of the image that contain pixel i, and b̄_i denotes the mean of the parameter b over all filter windows of the image that contain pixel i. Since guided filtering is used here only as a blurring operation, the two inputs required by the fast guided filtering algorithm are identical, i.e. both are the image to be filtered d. Moreover, based on the segmentation mask M, only the background region is blurred, so:

q_i = m_i · d_i + (1 − m_i) · (ā_i · d_i + b̄_i)

wherein m_i is the segmentation result of the pixel in the segmentation mask M, m_i ∈ {0, 1};
Step S43:Based on the saliency detection result and the filtered image q, apply detail enhancement to the more salient regions selected within the foreground region, so as to obtain a better background-blurred result;
Let e be the detail-enhanced image. The difference between the blurred result and the original image is taken as the edge detail of the original image, and these differences are amplified according to whether the corresponding region of the original image is salient, thereby enhancing the detail of the original image; in the detail enhancement formula, s_i is the saliency value of the pixel in the saliency map, with value range [0, 1], and λ is a constant term. The detail-enhanced image e is the final effect image after background blurring the portrait image.
CN201810342812.2A 2018-04-17 2018-04-17 A kind of portrait photo automatic background weakening method based on conspicuousness detection Pending CN108564528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810342812.2A CN108564528A (en) 2018-04-17 2018-04-17 A kind of portrait photo automatic background weakening method based on conspicuousness detection


Publications (1)

Publication Number Publication Date
CN108564528A true CN108564528A (en) 2018-09-21

Family

ID=63535651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810342812.2A Pending CN108564528A (en) 2018-04-17 2018-04-17 A kind of portrait photo automatic background weakening method based on conspicuousness detection

Country Status (1)

Country Link
CN (1) CN108564528A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
CN104899877A (en) * 2015-05-20 2015-09-09 中国科学院西安光学精密机械研究所 Method for extracting image foreground based on super pixel and fast trimap image
CN106548185A (en) * 2016-11-25 2017-03-29 三星电子(中国)研发中心 A kind of foreground area determines method and apparatus
CN106981068A (en) * 2017-04-05 2017-07-25 重庆理工大学 A kind of interactive image segmentation method of joint pixel pait and super-pixel
CN107527054A (en) * 2017-09-19 2017-12-29 西安电子科技大学 Prospect extraction method based on various visual angles fusion


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CARSTEN ROTHER et al.: ""GrabCut" - Interactive Foreground Extraction using Iterated Graph Cuts", ACM Transactions on Graphics *
KAIMING HE et al.: "Fast Guided Filter", arXiv *
WEIHAI CHEN et al.: "Automatic Synthetic Background Defocus for a Single Portrait Image", IEEE Transactions on Consumer Electronics *
GAO Zhiyong et al.: "A GrabCut image segmentation method combining visual saliency", Journal of South-Central University for Nationalities (Natural Science Edition) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111239A (en) * 2019-04-28 2019-08-09 叠境数字科技(上海)有限公司 A kind of portrait head background-blurring method based on the soft segmentation of tof camera
CN110288617B (en) * 2019-07-04 2023-02-03 大连理工大学 Automatic human body slice image segmentation method based on shared matting and ROI gradual change
CN110288617A (en) * 2019-07-04 2019-09-27 大连理工大学 Based on the shared sliced image of human body automatic division method for scratching figure and ROI gradual change
CN110706234A (en) * 2019-10-08 2020-01-17 浙江工业大学 Automatic fine segmentation method for image
CN110889459A (en) * 2019-12-06 2020-03-17 北京深境智能科技有限公司 Learning method based on edge and Fisher criterion
CN110889459B (en) * 2019-12-06 2023-04-28 北京深境智能科技有限公司 Learning method based on edge and Fisher criteria
CN112163511B (en) * 2020-09-25 2022-03-29 天津大学 Method for identifying authenticity of image
CN112163511A (en) * 2020-09-25 2021-01-01 天津大学 Method for identifying authenticity of image
CN112215773A (en) * 2020-10-12 2021-01-12 新疆大学 Local motion deblurring method and device based on visual saliency and storage medium
CN112308791A (en) * 2020-10-12 2021-02-02 杭州电子科技大学 Color constancy method based on gray pixel statistics
CN112215773B (en) * 2020-10-12 2023-02-17 新疆大学 Local motion deblurring method and device based on visual saliency and storage medium
CN112308791B (en) * 2020-10-12 2024-02-27 杭州电子科技大学 Color constancy method based on gray pixel statistics
CN112634314A (en) * 2021-01-19 2021-04-09 深圳市英威诺科技有限公司 Target image acquisition method and device, electronic equipment and storage medium
CN116433701A (en) * 2023-06-15 2023-07-14 武汉中观自动化科技有限公司 Workpiece hole profile extraction method, device, equipment and storage medium
CN116433701B (en) * 2023-06-15 2023-10-10 武汉中观自动化科技有限公司 Workpiece hole profile extraction method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180921