CN105809666A - Image matting method and device - Google Patents

Image matting method and device

Info

Publication number
CN105809666A
CN105809666A
Authority
CN
China
Prior art keywords
pixel
image to be processed
image
background
Legal status
Pending
Application number
CN201410857219.3A
Other languages
Chinese (zh)
Inventor
方圆圆
Current Assignee
Leadcore Technology Co Ltd
Original Assignee
Leadcore Technology Co Ltd
Priority date
2014-12-30
Filing date
2014-12-30
Publication date
2016-07-27
Application filed by Leadcore Technology Co Ltd
Priority to CN201410857219.3A
Publication of CN105809666A


Abstract

The invention discloses an image matting method and device. The image matting method comprises the following steps: partitioning an image to be processed into a foreground region, a background region and an unknown region by clustering; and matting the image to be processed based on the foreground region, the background region and the unknown region. This solves the prior-art problem that errors and inaccuracy in manual partitioning lead to poor matting results, and improves the quality and effect of image matting.

Description

Image matting method and device
Technical field
Embodiments of the present invention relate to image processing technology, and in particular to an image matting method and device.
Background technology
Digital matting is an image processing technique that separates a part of an image from the rest of the image, referred to as matting for short. Matting includes blue-screen matting, where the background is known, and natural image matting, where the background is unknown.
Methods used for natural image matting include the Knockout method, the Poisson method, matting based on a perceptual color space, and the Bayesian (Bayes) method, among others.
In the prior art, when the Bayesian method is used for matting, the foreground, background and unknown regions of the image are first divided manually, and the image is then segmented based on the divided foreground region, background region and unknown region to achieve matting.
However, manually dividing the foreground, background and unknown regions of an image is error-prone and inaccurate, so the matting result obtained with the Bayesian method is often unsatisfactory.
Summary of the invention
Embodiments of the present invention provide an image matting method and device to improve matting quality.
In a first aspect, an embodiment of the present invention provides an image matting method, including:
partitioning an image to be processed by clustering to obtain a foreground region, a background region and an unknown region;
matting the image to be processed based on the foreground region, the background region and the unknown region.
In a second aspect, an embodiment of the present invention further provides an image matting device, including:
a partitioning module, configured to partition an image to be processed by clustering to obtain a foreground region, a background region and an unknown region;
a matting module, configured to mat the image to be processed based on the foreground region, the background region and the unknown region.
The image matting method and device provided by the embodiments of the present invention partition an image into a foreground region, a background region and an unknown region by clustering, which solves the prior-art problem of poor matting results caused by errors and inaccuracy in manual partitioning and improves the quality and effect of matting.
Brief description of the drawings
Fig. 1 is a flowchart of an image matting method provided by Embodiment One of the present invention;
Fig. 2 is a flowchart of the image partitioning method used in the image matting method provided by Embodiment Two of the present invention;
Fig. 3 is a flowchart of an image matting method provided by Embodiment Three of the present invention;
Fig. 4 is a structural schematic diagram of an image matting device provided by Embodiment Four of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
The image matting method provided by the embodiments of the present invention may be executed by an image matting device, which may be implemented in software, in hardware, or in a combination of hardware and software.
Embodiment one
Referring to Fig. 1, the image matting method provided by this embodiment specifically includes operation 11 and operation 12.
In operation 11, the image to be processed is partitioned by clustering to obtain a foreground region, a background region and an unknown region.
For the clustering, one line may, for example, be drawn on the foreground and one on the background of the image. Specifically, this may include:
receiving a drawing operation instruction for a first area and a second area of the image to be processed;
according to the drawing operation instruction, drawing a first line in the first area of the image to be processed and a second line in the second area;
clustering the pixels in the image to be processed with the first line and the second line as initial cluster centers;
partitioning the image to be processed according to the clustering result, which improves the accuracy of the partition.
Partitioning the image by clustering makes the partition result more accurate, and fewer contour lines are needed: only the first line and the second line are required here.
In operation 12, the image to be processed is matted based on the foreground region, the background region and the unknown region.
Here, the traditional Bayesian method or the improved Bayesian method (described in Embodiment Three below) may be used to mat the image to be processed according to the foreground region, background region and unknown region obtained in operation 11.
In the image matting method provided by this embodiment, the image is partitioned into a foreground region, a background region and an unknown region by clustering, which avoids the errors of manual partitioning, solves the prior-art problem of poor matting results caused by errors and inaccuracy in manual partitioning, and improves the quality and effect of matting.
Exemplarily, clustering the pixels in the image to be processed with the first line and the second line as initial cluster centers includes:
dividing the image to be processed into a foreground pixel set, a background pixel set and an unknown pixel set according to the first line and the second line, where the GraphCut algorithm may be used for this segmentation;
clustering the foreground pixel set and the background pixel set.
Exemplarily, clustering the foreground pixel set and the background pixel set includes:
clustering the pixels in the foreground pixel set and the background pixel set into foreground classes and background classes, respectively, by a partitioning method such as the k-means clustering algorithm, which makes the partition result more accurate and requires fewer contour lines: traditional methods need three or more, while the method provided by this embodiment needs only two;
obtaining the minimum distance from each pixel in the image to be processed to the foreground classes and to the background classes by the following formulas:
C_ij = ||C(i) - C(j)||^2
d_i^F = min_n ||C(i) - K_n^F||
d_i^B = min_n ||C(i) - K_n^B||
where C(i) and C(j) are the cluster centroids of pixel i and pixel j in the image to be processed, respectively, C_ij is the squared distance between them, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, {K_n^F} is the set of mean colors of the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, and {K_n^B} is the set of mean colors of the background classes.
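As an illustration of the clustering and distance computation just described, the following is a minimal Python sketch (not the patented implementation): it clusters the scribbled foreground and background pixels with k-means and computes, for every pixel, the minimum color distance to the foreground and background cluster means. The function names, the cluster count and the use of scikit-learn are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_mean_colors(pixels, n_clusters=4):
    """pixels: (N, 3) colors under one scribble; returns the cluster mean colors {K_n}."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels.astype(np.float64))
    return km.cluster_centers_

def min_distances(image, fg_means, bg_means):
    """image: (H, W, 3); returns d^F and d^B, each of shape (H, W)."""
    flat = image.reshape(-1, 1, 3).astype(np.float64)
    d_f = np.linalg.norm(flat - fg_means[None, :, :], axis=2).min(axis=1)  # d_i^F
    d_b = np.linalg.norm(flat - bg_means[None, :, :], axis=2).min(axis=1)  # d_i^B
    return d_f.reshape(image.shape[:2]), d_b.reshape(image.shape[:2])
```

A caller would typically pass the colors under the first line to cluster_mean_colors to obtain {K_n^F} and those under the second line to obtain {K_n^B}, then feed both into min_distances.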
Exemplarily, partitioning the image to be processed according to the clustering result includes:
determining image partition weights according to the clustering result;
segmenting the image to be processed according to the image partition weights.
By setting partition weights, the influence of distance and color on the partition result is fully taken into account during partitioning, making the partition more accurate.
Exemplarily, determining the image partition weights according to the clustering result includes:
determining the image partition weights by the following formulas:
E1(x_j = 1) = d_j^F / (d_j^F + d_j^B)
E1(x_i = 1) = d_i^B / (d_i^F + d_i^B)
E2(x_i, x_j) = |x_i - x_j| · g(C_ij)
g(ξ) = 1 / (ξ + 1)
where E1(x_j = 1) is the distance weight of pixel j in the image to be processed when x_j = 1, d_j^F is the minimum distance from pixel j in the image to be processed to the foreground classes, d_j^B is the minimum distance from pixel j in the image to be processed to the background classes, E1(x_i = 1) is the distance weight of pixel i in the image to be processed when x_i = 1, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, E2(x_i, x_j) is the color weight between pixel i and pixel j, C_ij is the squared distance between the cluster centroids of pixel i and pixel j, g(·) is a weighting function, and ξ is its argument, used to determine the relative weights of E1 and E2.
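The weight definitions above can be written down directly. The sketch below assumes the common reading that the cost of giving a pixel a label grows with its distance to that label's clusters; the helper names are illustrative, not part of the patent.

```python
import numpy as np

def data_weights(d_f, d_b, eps=1e-12):
    """E1 per pixel: returns (cost of label 1 / foreground, cost of label 0 / background)."""
    total = d_f + d_b + eps
    return d_f / total, d_b / total

def smoothness_weight(color_i, color_j):
    """E2 between neighboring pixels when their labels differ:
    |x_i - x_j| * g(C_ij), with C_ij = ||C(i) - C(j)||^2 and g(xi) = 1 / (xi + 1)."""
    c_ij = float(np.sum((np.asarray(color_i, float) - np.asarray(color_j, float)) ** 2))
    return 1.0 / (c_ij + 1.0)
```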
Exemplarily, segmenting the image to be processed according to the image partition weights includes:
segmenting the image to be processed according to E1 and E2. According to the weights, the image is divided into three regions, and the area of the unknown region is reduced.
Embodiment two
On the basis of the above embodiments, this embodiment provides a method for partitioning an image to be processed by clustering.
Referring to Fig. 2, the method for partitioning an image to be processed by clustering provided by Embodiment Two specifically includes operations 21 to 25.
In operation 21, two lines are drawn on the input image.
Here, the input image is the image to be processed, i.e. the image to be matted. The two lines represent the foreground and the background of the image, respectively. Since the user can see the foreground region and the background region of the image, the user can mark one line in the foreground region and one line in the background region by operating the terminal.
In operation 22, the pixel sets F and B are divided.
Specifically, according to the two lines, the GraphCut algorithm is used to partition the image into a foreground pixel set F, a background pixel set B and an unknown pixel set U.
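A rough sketch of operation 22 follows. The patent names the GraphCut algorithm; as an assumption, this sketch substitutes OpenCV's GrabCut (a graph-cut based segmenter) initialized from a mask built out of the two strokes, and forms the unknown set U as a narrow band around the resulting foreground/background boundary. The band width and function names are illustrative choices, not the patented algorithm.

```python
import cv2
import numpy as np

def split_from_strokes(image, fg_stroke, bg_stroke, iters=5, band=9):
    """image: (H, W, 3) uint8; fg_stroke, bg_stroke: (H, W) bool masks of the two lines."""
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, np.uint8)  # start as "probably background"
    mask[fg_stroke] = cv2.GC_FGD                              # line 1: definite foreground
    mask[bg_stroke] = cv2.GC_BGD                              # line 2: definite background
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    kernel = np.ones((band, band), np.uint8)
    unknown = cv2.dilate(fg, kernel) ^ cv2.erode(fg, kernel)  # narrow band around the cut boundary
    U = unknown.astype(bool)
    F = fg.astype(bool) & ~U                                  # foreground pixel set F
    B = ~fg.astype(bool) & ~U                                 # background pixel set B
    return F, B, U
```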
In operation 23, the K-means method is used to cluster the pixels of F and B.
First, the nodes of F and B are clustered by the K-means method and the mean color of each class is calculated; {K_n^F} denotes the set of mean colors of all foreground classes, and {K_n^B} that of the background classes. The minimum distance d_i^F from each pixel i in the image to the foreground classes and the corresponding background distance d_i^B are calculated by the following formulas:
C_ij = ||C(i) - C(j)||^2
d_i^F = min_n ||C(i) - K_n^F||
d_i^B = min_n ||C(i) - K_n^B||
where C(i) and C(j) are the cluster centroids of pixel i and pixel j in the image to be processed, respectively, C_ij is the squared distance between them, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, {K_n^F} is the set of mean colors of the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, and {K_n^B} is the set of mean colors of the background classes.
In operation 24, the image partition weights E1 and E2 are defined.
E2 is defined in terms of a gradient function; its effect is to reduce the possibility of a label change between pixels whose colors are close, so that label changes occur as far as possible only at boundaries. The computation formulas are as follows:
E1(x_j = 1) = d_j^F / (d_j^F + d_j^B)
E1(x_i = 1) = d_i^B / (d_i^F + d_i^B)
E2(x_i, x_j) = |x_i - x_j| · g(C_ij)
g(ξ) = 1 / (ξ + 1)
where E1(x_j = 1) is the distance weight of pixel j in the image to be processed when x_j = 1, d_j^F is the minimum distance from pixel j in the image to be processed to the foreground classes, d_j^B is the minimum distance from pixel j in the image to be processed to the background classes, E1(x_i = 1) is the distance weight of pixel i in the image to be processed when x_i = 1, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, E2(x_i, x_j) is the color weight between pixel i and pixel j, C_ij is the squared distance between the cluster centroids of pixel i and pixel j, g(·) is a weighting function, and ξ is its argument.
In operation 25, the image is partitioned into a foreground region, a background region and an unknown region according to E1 and E2.
Specifically, the image is segmented: the pixels in the unknown pixel set U are assigned to the foreground set or the background set, yielding the image region partition result, namely a foreground region, a background region and an unknown region.
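As an illustration of operation 25, the sketch below builds a two-terminal graph from the E1 and E2 weights and runs a min-cut to assign each pixel to the foreground or background side. It assumes the third-party PyMaxflow package, a 4-connected neighborhood and no extra balance factor between E1 and E2; reading the sink side of the cut as foreground is a convention of this sketch, not a statement about the patent.

```python
import numpy as np
import maxflow  # PyMaxflow

def graph_cut_labels(image, d_f, d_b, eps=1e-12):
    """image: (H, W, 3) float; d_f, d_b: (H, W) distances from the k-means step.
    Returns a boolean (H, W) array, True where a pixel is labeled foreground."""
    h, w = d_f.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))
    e1_fg = d_f / (d_f + d_b + eps)          # cost of labeling a pixel foreground
    e1_bg = d_b / (d_f + d_b + eps)          # cost of labeling a pixel background
    g.add_grid_tedges(nodes, e1_fg, e1_bg)   # t-links carry the data term E1
    img = image.astype(np.float64)
    for dy, dx in ((0, 1), (1, 0)):          # n-links carry the smoothness term E2
        diff = img[dy:, dx:] - img[:h - dy, :w - dx]
        wgt = 1.0 / (np.sum(diff ** 2, axis=2) + 1.0)   # g(C_ij) = 1 / (||C(i)-C(j)||^2 + 1)
        for y in range(h - dy):              # explicit loop for clarity; slow on large images
            for x in range(w - dx):
                g.add_edge(nodes[y, x], nodes[y + dy, x + dx], wgt[y, x], wgt[y, x])
    g.maxflow()
    return g.get_grid_segments(nodes)        # True: sink side, read here as foreground
```

The labels produced this way correspond to assigning the pixels of U to the foreground or background set as described above.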
In the technical solution provided by this embodiment, the image is partitioned into a foreground region, a background region and an unknown region by clustering, which solves the problem of errors and inaccuracy in manual partitioning in the prior art and improves the accuracy of region partitioning.
Embodiment three
On the basis of the above embodiments, this embodiment provides another image matting method.
Referring to Fig. 3, the image matting method provided by this embodiment specifically includes operations 31 to 34.
In operation 31, image region partitioning is performed.
Specifically, the whole picture may be divided into a foreground region, a background region, and an unknown region whose α values need to be calculated. Here, the α value refers to the proportion occupied by the foreground color, i.e. the degree of opacity. In a composite image, C(x, y) = α·F(x, y) + (1 - α)·B(x, y), where C, F and B denote the composite image, the foreground image and the background image, respectively, and correspondingly C(x, y), F(x, y) and B(x, y) denote the composite color, the foreground color and the background color at that point.
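The compositing model that defines the α value can be illustrated in a few lines of Python; the array names are assumptions.

```python
import numpy as np

def composite(foreground, background, alpha):
    """C(x, y) = alpha * F(x, y) + (1 - alpha) * B(x, y).
    foreground, background: (H, W, 3) float images; alpha: (H, W) matte in [0, 1]."""
    a = alpha[..., None]
    return a * foreground + (1.0 - a) * background
```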
In this operation, the image partitioning may be performed by combining the GraphCut algorithm with clustering; refer to the description in the above embodiments, which is not repeated here.
In operation 32, the unknown region is sampled and the samples are clustered.
Specifically, for each pixel of the unknown region, samples are first taken from the foreground and background regions with a variable-size circular window (the window slides toward the unknown region in an "onion-peeling" manner); after enough known background and foreground points have been collected, the sample points are clustered, and the points in each cluster obey an oriented Gaussian distribution in color space.
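A minimal sketch of the sampling step for a single unknown pixel is shown below. It grows a square window outward step by step (the patent describes a variable-size circular window slid in an onion-peeling manner) until enough known foreground and background samples are collected, then clusters each sample set; the window sizes, the sample budget and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_clusters(image, F, B, y, x, min_samples=25, max_radius=60, step=3):
    """image: (H, W, 3); F, B: bool masks of known foreground/background pixels.
    Returns the mean colors of the foreground and background sample clusters."""
    h, w = F.shape
    fg = bg = np.empty((0, 3))
    for r in range(step, max_radius + 1, step):        # grow the window outward
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        patch = image[y0:y1, x0:x1]
        fg = patch[F[y0:y1, x0:x1]]
        bg = patch[B[y0:y1, x0:x1]]
        if len(fg) >= min_samples and len(bg) >= min_samples:
            break
    def means(samples):
        if len(samples) == 0:                           # no known pixels found within the window
            return np.empty((0, 3))
        k = max(1, min(3, len(samples)))
        return KMeans(n_clusters=k, n_init=5, random_state=0).fit(samples).cluster_centers_
    return means(fg), means(bg)
```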
In operation 33, color estimation is performed.
A reasonable Bayesian framework is defined, and a color estimation model and a computation model of the α value are established; maximum a posteriori (MAP) estimation is then used to solve the optimization. In the MAP estimation, for a given unknown-region pixel C, the most probable estimates of F, B and α are found. The problem is formulated as an optimization over the probability distribution P(F, B, α | C).
In operation 34, the projection method is used to obtain the matting result.
Specifically, the projection method is used to obtain the new α value and complete the matting. The initial α value is taken as the mean of the α values of nearby known pixels. If a sample has multiple clusters, the foreground and background sample clusters are solved in one-to-one correspondence. In the Bayesian framework, the traditional Bayesian method defines only the log likelihoods L(C | F, B, α), L(F) and L(B), and does not define L(α). When the foreground and background Gaussian distributions overlap, the estimated α value is unstable, and impulse noise is often produced on the generated opacity channel. Therefore, the improved Bayesian method defines L(α) as a one-dimensional Gaussian distribution model, which makes the resulting alpha channel smoother without increasing the time complexity. Here, the alpha channel represents the transparency of the image. Furthermore, fully considering the influence of the proportion of foreground and background cluster sample points on the accuracy of the α value estimation, an expression of L(α) containing a weighting coefficient is proposed. The α value obtained by projection is as follows:
α = (C - B)^T (F - B) / ||F - B||^2
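To make operations 33 and 34 concrete, the following is a simplified per-pixel sketch in the spirit of classic Bayesian matting: with α fixed it solves a small linear system for the most probable F and B under Gaussian color models, and with F and B fixed it updates α by the projection formula above. A single foreground/background cluster pair, a fixed camera variance and the initial α are assumptions; the patent's improved one-dimensional Gaussian term L(α) and its weighting coefficient are not modeled here.

```python
import numpy as np

def bayes_pixel(C, f_mean, f_cov, b_mean, b_cov, sigma_c=0.01, iters=20):
    """C: observed color (3,); f_/b_mean: cluster means (3,); f_/b_cov: (3, 3) covariances.
    Returns the estimated foreground color F, background color B and alpha."""
    I3 = np.eye(3)
    inv_f, inv_b = np.linalg.inv(f_cov), np.linalg.inv(b_cov)
    alpha = 0.5                        # in practice: mean alpha of nearby known pixels
    F, B = f_mean.astype(float).copy(), b_mean.astype(float).copy()
    for _ in range(iters):
        # MAP colors with alpha fixed: solve the 6x6 system for F and B.
        a11 = inv_f + I3 * (alpha ** 2) / sigma_c ** 2
        a12 = I3 * alpha * (1.0 - alpha) / sigma_c ** 2
        a22 = inv_b + I3 * ((1.0 - alpha) ** 2) / sigma_c ** 2
        A = np.block([[a11, a12], [a12, a22]])
        rhs = np.concatenate([inv_f @ f_mean + C * alpha / sigma_c ** 2,
                              inv_b @ b_mean + C * (1.0 - alpha) / sigma_c ** 2])
        F, B = np.split(np.linalg.solve(A, rhs), 2)
        # Projection update: alpha = (C - B) . (F - B) / ||F - B||^2, clipped to [0, 1].
        fb = F - B
        alpha = float(np.clip(np.dot(C - B, fb) / (np.dot(fb, fb) + 1e-12), 0.0, 1.0))
    return F, B, alpha
```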
In the technical solution provided by this embodiment, image region partitioning is performed by combining the GraphCut algorithm with clustering, and the Bayesian method is used for matting, i.e. the statistical distribution of colors is used to perform color estimation on the input image and the solution is optimized iteratively, so that a better balance is achieved between matting speed and matting quality. Furthermore, by defining L(α) as a one-dimensional Gaussian distribution model, the resulting alpha channel is made smoother without increasing the time complexity. Even for images with relatively complex backgrounds, the technical solution provided by this embodiment can obtain a fairly good matting result.
Embodiment four
On the basis of the above embodiments, this embodiment provides an image matting device for implementing any one of the above image matting methods.
Referring to Fig. 4, the image matting device provided by this embodiment specifically includes a partitioning module 41 and a matting module 42.
The partitioning module 41 is configured to partition an image to be processed by clustering to obtain a foreground region, a background region and an unknown region.
The matting module 42 is configured to mat the image to be processed based on the foreground region, the background region and the unknown region.
Exemplarily, the partitioning module 41 includes:
a receiving submodule, configured to receive a drawing operation instruction for a first area and a second area of the image to be processed;
a drawing submodule, configured to, according to the drawing operation instruction, draw a first line in the first area of the image to be processed and a second line in the second area;
a first clustering submodule, configured to cluster the pixels in the image to be processed with the first line and the second line as initial cluster centers;
a partitioning submodule, configured to partition the image to be processed according to the clustering result.
Exemplarily, the first clustering submodule includes:
a first segmentation submodule, configured to divide the image to be processed into a foreground pixel set, a background pixel set and an unknown pixel set according to the first line and the second line;
a second clustering submodule, configured to cluster the foreground pixel set and the background pixel set.
Exemplarily, the second clustering submodule is specifically configured to:
cluster the pixels in the foreground pixel set and the background pixel set into foreground classes and background classes, respectively, by a partitioning method;
obtain the minimum distance from each pixel in the image to be processed to the foreground classes and to the background classes by the following formulas:
C_ij = ||C(i) - C(j)||^2
d_i^F = min_n ||C(i) - K_n^F||
d_i^B = min_n ||C(i) - K_n^B||
where C(i) and C(j) are the cluster centroids of pixel i and pixel j in the image to be processed, respectively, C_ij is the squared distance between them, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, {K_n^F} is the set of mean colors of the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, and {K_n^B} is the set of mean colors of the background classes.
Exemplarily, the partitioning submodule includes:
a weight determination submodule, configured to determine image partition weights according to the clustering result;
a second segmentation submodule, configured to segment the image to be processed according to the image partition weights.
Exemplarily, the weight determination submodule is specifically configured to:
determine the image partition weights by the following formulas:
E1(x_j = 1) = d_j^F / (d_j^F + d_j^B)
E1(x_i = 1) = d_i^B / (d_i^F + d_i^B)
E2(x_i, x_j) = |x_i - x_j| · g(C_ij)
g(ξ) = 1 / (ξ + 1)
where E1(x_j = 1) is the distance weight of pixel j in the image to be processed when x_j = 1, d_j^F is the minimum distance from pixel j in the image to be processed to the foreground classes, d_j^B is the minimum distance from pixel j in the image to be processed to the background classes, E1(x_i = 1) is the distance weight of pixel i in the image to be processed when x_i = 1, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, E2(x_i, x_j) is the color weight between pixel i and pixel j, C_ij is the squared distance between the cluster centroids of pixel i and pixel j, g(·) is a weighting function, and ξ is its argument, used to determine the relative weights of E1 and E2.
Exemplarily, the second segmentation submodule is specifically configured to:
segment the image to be processed according to E1 and E2.
The above image matting device can execute the image matting method provided by any embodiment of the present invention and has the functional modules corresponding to the operations of the image matting method, as well as the corresponding beneficial effects.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments and may include other equivalent embodiments without departing from the concept of the present invention; the scope of the present invention is determined by the appended claims.

Claims (14)

1. An image matting method, characterized by comprising:
partitioning an image to be processed by clustering to obtain a foreground region, a background region and an unknown region;
matting the image to be processed based on the foreground region, the background region and the unknown region.
2. The method according to claim 1, characterized in that partitioning the image to be processed by clustering comprises:
receiving a drawing operation instruction for a first area and a second area of the image to be processed;
according to the drawing operation instruction, drawing a first line in the first area of the image to be processed and a second line in the second area;
clustering the pixels in the image to be processed with the first line and the second line as initial cluster centers;
partitioning the image to be processed according to the clustering result.
3. The method according to claim 2, characterized in that clustering the pixels in the image to be processed with the first line and the second line as initial cluster centers comprises:
dividing the image to be processed into a foreground pixel set, a background pixel set and an unknown pixel set according to the first line and the second line;
clustering the foreground pixel set and the background pixel set.
4. The method according to claim 3, characterized in that clustering the foreground pixel set and the background pixel set comprises:
clustering the pixels in the foreground pixel set and the background pixel set into foreground classes and background classes, respectively, by a partitioning method;
obtaining the minimum distance from each pixel in the image to be processed to the foreground classes and to the background classes by the following formulas:
C_ij = ||C(i) - C(j)||^2
d_i^F = min_n ||C(i) - K_n^F||
d_i^B = min_n ||C(i) - K_n^B||
where C(i) and C(j) are the cluster centroids of pixel i and pixel j in the image to be processed, respectively, C_ij is the squared distance between them, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, {K_n^F} is the set of mean colors of the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, and {K_n^B} is the set of mean colors of the background classes.
5. The method according to any one of claims 2-4, characterized in that partitioning the image to be processed according to the clustering result comprises:
determining image partition weights according to the clustering result;
segmenting the image to be processed according to the image partition weights.
6. The method according to claim 5, characterized in that determining the image partition weights according to the clustering result comprises:
determining the image partition weights by the following formulas:
E1(x_j = 1) = d_j^F / (d_j^F + d_j^B)
E1(x_i = 1) = d_i^B / (d_i^F + d_i^B)
E2(x_i, x_j) = |x_i - x_j| · g(C_ij)
g(ξ) = 1 / (ξ + 1)
where E1(x_j = 1) is the distance weight of pixel j in the image to be processed when x_j = 1, d_j^F is the minimum distance from pixel j in the image to be processed to the foreground classes, d_j^B is the minimum distance from pixel j in the image to be processed to the background classes, E1(x_i = 1) is the distance weight of pixel i in the image to be processed when x_i = 1, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, E2(x_i, x_j) is the color weight between pixel i and pixel j, C_ij is the squared distance between the cluster centroids of pixel i and pixel j, g(·) is a weighting function, and ξ is its argument, used to determine the relative weights of E1 and E2.
7. The method according to claim 6, characterized in that segmenting the image to be processed according to the image partition weights comprises:
segmenting the image to be processed according to E1 and E2.
8. An image matting device, characterized by comprising:
a partitioning module, configured to partition an image to be processed by clustering to obtain a foreground region, a background region and an unknown region;
a matting module, configured to mat the image to be processed based on the foreground region, the background region and the unknown region.
9. The device according to claim 8, characterized in that the partitioning module comprises:
a receiving submodule, configured to receive a drawing operation instruction for a first area and a second area of the image to be processed;
a drawing submodule, configured to, according to the drawing operation instruction, draw a first line in the first area of the image to be processed and a second line in the second area;
a first clustering submodule, configured to cluster the pixels in the image to be processed with the first line and the second line as initial cluster centers;
a partitioning submodule, configured to partition the image to be processed according to the clustering result.
10. The device according to claim 9, characterized in that the first clustering submodule comprises:
a first segmentation submodule, configured to divide the image to be processed into a foreground pixel set, a background pixel set and an unknown pixel set according to the first line and the second line;
a second clustering submodule, configured to cluster the foreground pixel set and the background pixel set.
11. The device according to claim 10, characterized in that the second clustering submodule is specifically configured to:
cluster the pixels in the foreground pixel set and the background pixel set into foreground classes and background classes, respectively, by a partitioning method;
obtain the minimum distance from each pixel in the image to be processed to the foreground classes and to the background classes by the following formulas:
C_ij = ||C(i) - C(j)||^2
d_i^F = min_n ||C(i) - K_n^F||
d_i^B = min_n ||C(i) - K_n^B||
where C(i) and C(j) are the cluster centroids of pixel i and pixel j in the image to be processed, respectively, C_ij is the squared distance between them, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, {K_n^F} is the set of mean colors of the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, and {K_n^B} is the set of mean colors of the background classes.
12. The device according to any one of claims 9-11, characterized in that the partitioning submodule comprises:
a weight determination submodule, configured to determine image partition weights according to the clustering result;
a second segmentation submodule, configured to segment the image to be processed according to the image partition weights.
13. The device according to claim 12, characterized in that the weight determination submodule is specifically configured to:
determine the image partition weights by the following formulas:
E1(x_j = 1) = d_j^F / (d_j^F + d_j^B)
E1(x_i = 1) = d_i^B / (d_i^F + d_i^B)
E2(x_i, x_j) = |x_i - x_j| · g(C_ij)
g(ξ) = 1 / (ξ + 1)
where E1(x_j = 1) is the distance weight of pixel j in the image to be processed when x_j = 1, d_j^F is the minimum distance from pixel j in the image to be processed to the foreground classes, d_j^B is the minimum distance from pixel j in the image to be processed to the background classes, E1(x_i = 1) is the distance weight of pixel i in the image to be processed when x_i = 1, d_i^F is the minimum distance from pixel i in the image to be processed to the foreground classes, d_i^B is the minimum distance from pixel i in the image to be processed to the background classes, E2(x_i, x_j) is the color weight between pixel i and pixel j, C_ij is the squared distance between the cluster centroids of pixel i and pixel j, g(·) is a weighting function, and ξ is its argument, used to determine the relative weights of E1 and E2.
14. The device according to claim 13, characterized in that the second segmentation submodule is specifically configured to:
segment the image to be processed according to E1 and E2.
CN201410857219.3A (filed 2014-12-30, priority date 2014-12-30) Image matting method and device, Pending, published as CN105809666A (en)

Priority Applications (1)

Application Number: CN201410857219.3A; Priority Date: 2014-12-30; Filing Date: 2014-12-30; Title: Image matting method and device

Publications (1)

Publication Number: CN105809666A; Publication Date: 2016-07-27

Family ID: 56465013

Family Applications (1)

Application Number: CN201410857219.3A (pending); Priority Date: 2014-12-30; Filing Date: 2014-12-30; Title: Image matting method and device

Country Status (1)

Country: CN; Publication: CN105809666A (en)


Patent Citations (3)

CN1564198A (priority 2004-04-13, published 2005-01-12): Natural image matting method based on a perceptual color space
CN103473780A (priority 2013-09-22, published 2013-12-25): Portrait background cutout method
CN103578107A (priority 2013-11-07, published 2014-02-12): Method for interactive image segmentation

Non-Patent Citations (3)

吕巨建: 自然图像抠图方法的研究 (Research on natural image matting methods), Wanfang Data, dissertation
张展鹏 et al.: 数字抠像的最新研究进展 (Recent advances in digital matting), Acta Automatica Sinica
章卫祥 et al.: 一种改进的Graph Cuts交互图像分割方法 (An improved Graph Cuts interactive image segmentation method), 2006 Annual Meeting of the China Association for Science and Technology, Symposium on Digital Imaging Technology and Imaging Materials Science



Legal Events

C06 / PB01: Publication (application publication date: 2016-07-27)
C10 / SE01: Entry into force of request for substantive examination
EE01: Entry into force of recordation of patent licensing contract
  Assignee: Shanghai Li Ke Semiconductor Technology Co., Ltd.
  Assignor: Leadcore Technology Co., Ltd.
  Contract record no.: 2018990000159
  Denomination of invention: Image matting method and device
  License type: Common License
  Record date: 2018-06-15
WD01: Invention patent application deemed withdrawn after publication