CN107516319A - High-accuracy simple interactive image matting method, storage device and terminal - Google Patents


Info

Publication number
CN107516319A
CN107516319A
Authority
CN
China
Prior art keywords
sample point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710791442.6A
Other languages
Chinese (zh)
Other versions
CN107516319B (en)
Inventor
孙鹏
王灿进
邱东海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201710791442.6A priority Critical patent/CN107516319B/en
Publication of CN107516319A publication Critical patent/CN107516319A/en
Application granted granted Critical
Publication of CN107516319B publication Critical patent/CN107516319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a high-accuracy simple interactive image matting method, a storage device and a terminal, comprising: inputting the original image to be matted; outputting prompt information according to the use environment; obtaining the user operation command and determining the operation mode; computing a trimap; obtaining the foreground and background sample-point sets of each unknown-region pixel; drawing sample pairs and estimating the opacity of each unknown-region pixel, taking the opacity with the highest confidence as the matting opacity; performing a smoothing operation over the neighborhood of each unknown-region pixel to obtain its smoothed matting opacity; and extracting the foreground object from the original image according to the smoothed matting opacity and the color values of the sample pairs. The invention is simple to operate, achieves high matting accuracy, and is suitable for the field of image processing.

Description

High-accuracy simple interactive image matting method, storage device and terminal
Technical field
The present invention relates to the field of image processing, and in particular to a high-accuracy simple interactive image matting method, a storage device and a terminal.
Background art
Image matting refers to the process of precisely separating a foreground region of interest from the background of an image or video; it is widely applied in fields such as video editing, virtual reality and film production. After matting, a simple fusion step can combine the foreground object with arbitrary backgrounds at will, greatly reducing the workload of scene arrangement.
Natural image matting theory holds that each pixel in an image can be represented by formula (9):
I = αF + (1 - α)B (9)
where I represents the color value in the real image, F the foreground color value, B the background color value, and α the foreground opacity.
The input image to be matted can be divided into three parts: a foreground region, a background region and an unknown region. The opacity of the foreground region is α = 1 and that of the background region is α = 0. The goal of the matting operation is to determine the opacity of the unknown region and its corresponding foreground and background color values.
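As a minimal illustration of the compositing model of formula (9), the blend of one pixel can be sketched in plain Python (the function name and per-pixel tuples are illustrative assumptions, not part of the patent):

```python
def composite(alpha, fg, bg):
    """Blend one RGB pixel by the matting equation I = alpha*F + (1 - alpha)*B."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# A half-transparent pixel mixing a pure-red foreground with a pure-blue background:
print(composite(0.5, (255, 0, 0), (0, 0, 255)))  # (127.5, 0.0, 127.5)
# alpha = 1 returns the foreground color, alpha = 0 the background color.
print(composite(1.0, (255, 0, 0), (0, 0, 255)))  # (255.0, 0.0, 0.0)
```

In a full matte the same blend runs per channel over the whole image, with α = 1 in the foreground region and α = 0 in the background region as stated above.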
At present, many matting algorithms and matting software packages have appeared, but they remain some distance from the goal of being simple and practical. Traditional blue-screen matting is simple and easy, but requires the target to be placed against a blue background, limiting the shooting scene and range of use; Knockout matting suits regions of gentle color transition, but mats local detail poorly; Bayesian matting and Poisson matting suit situations where foreground and background differ greatly, but mat poorly when the foreground color distribution is complex; Robust matting is stable, but is often helpless at non-connected edges. In addition, the above algorithms generally suffer from high computational complexity and are difficult to apply in real time.
Existing matting software such as Photoshop is relatively complex to operate, requires certain professional knowledge, and is time- and labor-consuming; the foregrounds extracted by TouchRetouch and similar matting apps often have edges that are not fine enough, with many stray artifacts.
Summary of the invention
In view of the deficiencies in the related art, the technical problem to be solved by the present invention is to provide a matting method, a storage device and a terminal that are simple to operate and have high matting accuracy.
To solve the above technical problem, the technical solution adopted by the present invention is a high-accuracy simple interactive matting method, comprising: S101, inputting the original image to be matted; S102, outputting prompt information to the user according to the use environment; S103, obtaining the different user operation commands and determining the corresponding operation mode; S104, computing a trimap according to the determined operation mode; S105, performing gradient computation on each pixel of the unknown region of the trimap, and obtaining the foreground and background sample-point sets of the current unknown-region pixel according to the gradient direction; S106, drawing one sample from each of the two sample-point sets to form a sample pair, estimating different opacities of the unknown-region pixel with the different sample pairs, and taking the opacity with the highest confidence as the matting opacity; S107, performing a smoothing operation over the neighborhood of the current unknown-region pixel according to the above matting opacity, to obtain the smoothed matting opacity of the pixel; S108, extracting the foreground object from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the sample pair corresponding to each matting opacity.
Preferably, after the foreground object is extracted from the original image, the method further comprises: S1091, directly saving and outputting the extracted foreground object; or S1092, inputting a new background image, synthesizing the extracted foreground object with the new background image, and saving and outputting the composite image.
Preferably, the operation modes include a background-difference mode and a manual-marking mode; computing the trimap according to the determined operation mode specifically comprises:
If the operation mode is the background-difference mode, the foreground image Id is first obtained by the difference formula Id = |I - Ibg|, where I is the image to be matted containing both the foreground and background objects, and Ibg is the background image containing no foreground object. An opening operation is applied to the foreground image Id to obtain the corrected foreground image Ido; an erosion of size re is applied to Ido to obtain the foreground region Fg of the trimap, and a dilation of size rd is applied to Ido to obtain the background region Bg of the trimap. The region between the foreground region and the background region is the unknown region. The trimap It of the image I to be matted under the background-difference mode is thus obtained.
If the operation mode is the manual-marking mode, the user operation command is first obtained; one point each, Pfg and Pbg, is selected in the foreground region and background region of the image I to be matted as initial growth points, and the following region-growing algorithm diffuses to neighboring pixels:
gne(j) = 1, if |Ii - Ij| < Tv and |Grai - Graj| < Tg; gne(j) = 0, otherwise
where i is the current pixel, j a neighborhood pixel of i, Ii and Ij the color values at points i and j, Grai and Graj the gradient values at points i and j, and Tv and Tg the color threshold and gradient threshold of the region growing. When gne(j) is 1, pixel j and pixel i belong to the same region; when gne(j) is 0, they belong to different regions. After region growing ends, the rough foreground/background segmentation Igrow is obtained; an erosion of size re is applied to Igrow to obtain the foreground region Fg of the trimap, and a dilation of size rd to obtain the background region Bg of the trimap. The region between the foreground region and the background region is the unknown region. The trimap It of the image I to be matted under the manual-marking mode is thus obtained.
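The region-growing test described above can be sketched as a breadth-first flood fill in Python (a toy sketch under assumed data structures: a 2-D list image, a precomputed gradient map, and 4-connected neighbors; none of these representation details are fixed by the patent):

```python
from collections import deque

def region_grow(img, grad, seed, t_v, t_g):
    """Grow a region from `seed`: a 4-neighbor j joins when its color and
    gradient differ from the current pixel i by less than the thresholds."""
    h, w = len(img), len(img[0])
    grown = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            j = (y + dy, x + dx)
            if 0 <= j[0] < h and 0 <= j[1] < w and j not in grown:
                if (abs(img[j[0]][j[1]] - img[y][x]) < t_v
                        and abs(grad[j[0]][j[1]] - grad[y][x]) < t_g):
                    grown.add(j)
                    queue.append(j)
    return grown

# Toy 1x6 "image": a bright foreground strip followed by a dark background strip.
img = [[200, 198, 197, 40, 38, 35]]
grad = [[0, 0, 0, 0, 0, 0]]
print(sorted(region_grow(img, grad, (0, 0), t_v=10, t_g=10)))
# [(0, 0), (0, 1), (0, 2)]
```

Growing from a second seed in the dark strip would produce the background region, and the two grown regions together form the rough segmentation Igrow.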
Preferably, performing the gradient computation on each pixel of the unknown region of the trimap, and obtaining the foreground and background sample-point sets of the current unknown-region pixel according to the gradient direction, specifically comprises:
For each pixel i(xi, yi) of the unknown region, its gradient value Grai is computed, and the gradient direction is denoted θi, computed as θi = arctan(Gy(i)/Gx(i)), where Gx(i) and Gy(i) are the horizontal and vertical gradient components at i.
A line is drawn through i along the direction θi; its first intersections with the foreground region and the background region are taken as the first sample point of the foreground sample-point set and the first sample point of the background sample-point set, respectively. Around each of these two first sample points, several points are taken whose spatial distance to it is closest and whose pixel difference from it exceeds Tv/2, generating the foreground sample-point set and the background sample-point set respectively, where Tv is the color threshold of the region.
Preferably, around the first sample point of the foreground sample-point set and the first sample point of the background sample-point set, 4 points are taken respectively whose spatial distance is closest and whose pixel difference exceeds Tv/2, so that the foreground sample-point set and the background sample-point set each contain 5 sample points.
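The rule for completing each sample set (the 4 spatially closest candidates whose color differs from the first sample point by more than Tv/2) can be sketched as follows; the candidate-list representation of ((y, x), color) tuples is an assumption for illustration:

```python
def neighbour_samples(points, first, t_v, k=4):
    """Around a first sample point, keep the k spatially closest candidates
    whose color differs from it by more than t_v/2 (the diversity rule above).
    `points` is a list of ((y, x), color) candidates near the first point."""
    (fy, fx), fcol = first
    eligible = [p for p in points if abs(p[1] - fcol) > t_v / 2]
    eligible.sort(key=lambda p: (p[0][0] - fy) ** 2 + (p[0][1] - fx) ** 2)
    return [first] + eligible[:k]

first = ((0, 0), 100)
cands = [((0, 1), 130), ((0, 2), 101), ((1, 0), 60), ((3, 3), 170)]
# ((0, 2), 101) is rejected: its color differs by only 1 < Tv/2 = 20.
print(neighbour_samples(cands, first, t_v=40))
```

With enough candidates this yields a 5-point set per region, hence 5 x 5 = 25 candidate sample pairs per unknown pixel, as the patent later states.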
Preferably, drawing one sample from each of the above two sample-point sets to form a sample pair, estimating different opacities of the unknown-region pixel with the different sample pairs, and taking the opacity with the highest confidence as the matting opacity, specifically comprises:
One sample is drawn from the foreground sample-point set and one from the background sample-point set to form a sample pair; a line is drawn through the sample pair, each pixel i of the unknown region is projected onto the line, and the opacity of each pixel i is estimated:
α̂i(m,n) = ((Ii - Bi(n)) · (Fi(m) - Bi(n))) / |Fi(m) - Bi(n)|²
where Fi(m) and Bi(n) represent the color values of the m-th foreground sample point and the n-th background sample point, respectively.
For the multiple different opacity estimates obtained for each unknown-region pixel, the opacity with the highest confidence is chosen as the matting opacity of that unknown-region pixel, and the sample pair corresponding to this matting opacity serves as the final matting sample pair.
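A sketch of the projection estimate: the pixel color I is projected onto the line between the sampled foreground and background colors, giving alpha = ((I - B)·(F - B)) / |F - B|². This is the standard sampling-based form; the patent's exact expression was lost from this page, so treat the formula here as an assumption:

```python
def estimate_alpha(i_color, f_color, b_color):
    """Project pixel color I onto the F-B line in color space and return
    the clamped opacity ((I - B).(F - B)) / |F - B|^2."""
    fb = [f - b for f, b in zip(f_color, b_color)]
    ib = [i - b for i, b in zip(i_color, b_color)]
    denom = sum(c * c for c in fb)
    if denom == 0:
        return 0.0  # degenerate pair: foreground and background samples coincide
    alpha = sum(a * b for a, b in zip(ib, fb)) / denom
    return min(1.0, max(0.0, alpha))

# A pixel exactly halfway between the sampled foreground and background colors:
print(estimate_alpha((100, 100, 100), (200, 200, 200), (0, 0, 0)))  # 0.5
```

Running this for all 25 sample pairs of a pixel yields the 25 candidate opacities among which the highest-confidence one is kept.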
Preferably, the computation of the confidence specifically comprises:
for each sample pair taken from the foreground sample-point set and the background sample-point set, computing the error of the sample pair relative to the linear model shown in formula (3), i.e. the linear similarity;
computing the foreground color similarity according to the color values of the unknown-region pixel i and the sampled pair;
computing the background color similarity likewise;
and computing, from the linear similarity and color similarities obtained above, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample pair,
where σ1 and σ2 are used to adjust the weights between the different similarities.
Preferably, when the smoothing operation is performed over the neighborhood of the current unknown-region pixel, the smoothed opacity is computed with the following formula:
αi = Σj wij αj / Σj wij, with wij = exp(-|Pi - Pj|²/σp²) · exp(-|Ii - Ij|²/σc²) · exp(-(Grai - Graj)²/σg²)
where Pi and Pj represent the coordinates of points i and j, Ii and Ij their colors, Grai and Graj their gradients, and σp, σc and σg are used to adjust the weights among the three.
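Assuming the smoothing step is the weighted neighborhood average its parameters suggest (Gaussian fall-off in position, color and gradient, a bilateral-style kernel; the patent's exact kernel was lost from this page), it can be sketched as:

```python
import math

def smooth_alpha(alphas, pos, colors, grads, i, sigma_p, sigma_c, sigma_g):
    """Weighted average of neighboring alphas: weights fall off with spatial
    distance, color difference and gradient difference from pixel i."""
    num = den = 0.0
    for j in range(len(alphas)):
        w = (math.exp(-((pos[i][0] - pos[j][0]) ** 2
                        + (pos[i][1] - pos[j][1]) ** 2) / sigma_p ** 2)
             * math.exp(-((colors[i] - colors[j]) ** 2) / sigma_c ** 2)
             * math.exp(-((grads[i] - grads[j]) ** 2) / sigma_g ** 2))
        num += w * alphas[j]
        den += w
    return num / den

# Three pixels with identical color and gradient: the outlier alpha in the
# middle is pulled toward its similar neighbors.
alphas = [1.0, 0.0, 1.0]
pos = [(0, 0), (0, 1), (0, 2)]
colors = [10.0, 10.0, 10.0]
grads = [0.0, 0.0, 0.0]
print(round(smooth_alpha(alphas, pos, colors, grads, 1, 1.0, 5.0, 5.0), 3))  # 0.424
```

This matches the stated intent: the closer two pixels are in position, color and texture, the closer their opacities become, suppressing isolated opacity singularities.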
Correspondingly, the present invention also provides a storage device storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the high-accuracy simple interactive matting method described above.
Correspondingly, the present invention also provides a terminal, comprising:
a processor adapted to implement each instruction; and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to perform the high-accuracy simple interactive matting method described above.
The advantageous effects of the present invention are:
1. The present invention is simple and convenient to operate, and can prompt the user to perform simple interactive operations according to the use environment; after the final matting opacity is obtained, a smoothing operation is also performed, so that the invention achieves high matting accuracy.
2. The matting result finally obtained by the present invention can either be saved and output separately as post-production material, serving as input to an image-processing module or other software, or be synthesized with a newly input background image before output, improving the practicality and versatility of the invention.
3. The operation modes of the invention include the background-difference mode and the manual-marking mode, fully accounting for differences in the use environment; only a small amount of manual intervention is needed to achieve fast matting of complex scenes. In either the background-difference mode or the manual-marking mode, when computing the trimap the invention applies morphological operations to the obtained coarse trimap to obtain a relatively accurate fine trimap, further improving matting accuracy.
4. When choosing the foreground and background sample-point sets, the invention first performs gradient computation on each unknown-region pixel and draws a line along the gradient direction, taking its first intersections with the foreground region and the background region as the first sample points of the foreground and background sample-point sets. The sample pair so obtained lies in different texture regions of the image, i.e. with high probability the two points lie in the true foreground region and background region. Then, around each of the two first sample points, several points are taken that are spatially closest and whose pixel difference exceeds Tv/2, finally generating the foreground and background sample-point sets. This ensures a certain distinctness and spatial similarity among sample points, i.e. with high probability the sets contain the true sample pair, improving matting accuracy.
5. The invention sets the number of additional sample points around each of the two first sample points to 4, so that the number of sample pairs finally obtained is 25. Such a sampling policy limits the number of sample pairs, shortens matting time and reduces the complexity of subsequent matting computation, further improving matting accuracy and user experience.
6. The method the invention uses to smooth the initially obtained opacity takes into account the influence of spatial position, color and gradient, so that pixels that are spatially closer, more similar in color and closer in texture have closer opacities, consistent with the subjective perception of the human eye. This effectively eliminates opacity singularities and greatly improves matting accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of embodiment one of a high-accuracy simple interactive matting method provided by the invention;
Fig. 2 is a flowchart of embodiment two of a high-accuracy simple interactive matting method provided by the invention;
Fig. 3 is a flowchart of embodiment one of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 4 is a flowchart of embodiment two of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 5 is a flowchart of embodiment three of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 6 is a flowchart of embodiment four of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 7 is a flowchart of embodiment five of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 8 is a flowchart of embodiment six of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 9 is a flowchart of embodiment seven of a high-accuracy simple interactive matting apparatus provided by the invention;
Fig. 10 is a hardware structure diagram of a high-accuracy simple interactive matting apparatus provided by the invention;
In the figures: 101 is the original-image input module, 102 the prompt-information output module, 103 the operation-mode determination module, 104 the trimap computation module, 105 the sample-point-set acquisition module, 106 the sample-pair screening module, 107 the opacity smoothing module, 108 the foreground-object extraction module, 109 the first result output module, 110 the new-image input module, 111 the synthesis module, 112 the second result output module, 1041 the coarse-trimap acquisition unit, 1042 the fine-trimap acquisition unit, 1051 the gradient computation unit, 1052 the sampling unit, 1061 the opacity estimation unit, 1062 the sample-pair screening unit, and 1063 the confidence computation unit.
Detailed description of the embodiments
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical scheme in the embodiments of the present invention is described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are a part of the embodiments of the present invention, rather than all of them; based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the scope of protection of the present invention.
The invention provides an embodiment of a high-accuracy simple interactive matting method; as shown in Fig. 1, the method comprises:
S101, inputting the original image to be matted.
S102, outputting prompt information to the user according to the use environment.
S103, obtaining the different user operation commands and determining the corresponding operation mode.
S104, computing the trimap according to the determined operation mode.
S105, performing gradient computation on each pixel of the unknown region of the trimap, and obtaining the foreground and background sample-point sets of the current unknown-region pixel according to the gradient direction.
S106, drawing one sample from each of the above two sample-point sets to form a sample pair, estimating different opacities of the unknown-region pixel with the different sample pairs, and taking the opacity with the highest confidence as the matting opacity.
S107, performing a smoothing operation over the neighborhood of the current unknown-region pixel according to the above matting opacity, to obtain the smoothed matting opacity of the current unknown-region pixel.
S108, extracting the foreground object from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the sample pair corresponding to each matting opacity.
In a specific implementation, the concrete operation for step S101 may be: collect the color value of each pixel of the original image to form a single-frame image; then, according to the collected image color values and the different user operation commands, determine the different operation modes and compute the trimap. The concrete operation for step S108 may be: create a new image of the same size as the original image and set it to all black; then, at pixel positions corresponding to the foreground region, substitute the color values of the original image, and at pixel positions corresponding to the unknown region, use the color values computed by formula (9), finally obtaining the matting result.
In this example, the foreground object may be any object; it only needs to have an obvious distinguishing boundary from the background. The embodiments of the invention place no limit on the concrete form of the foreground image.
This embodiment is simple and convenient to operate, and can prompt the user to perform simple interactive operations according to the use environment; after the final matting opacity is obtained, a smoothing operation is also performed, so that high matting accuracy is achieved.
Further, the operation modes may include a background-difference mode and a manual-marking mode.
In a specific implementation, if the shooting environment meets the following conditions: the scene is indoors or in another relatively stable environment, the photographed background and foreground are relatively fixed, the foreground is easy to remove, and a second shot is easy, then the background-difference mode is selected. In this working mode the camera must be placed on a relatively stable base; a background image Ibg containing no foreground object is first shot and saved, the camera is kept still, the foreground object is placed in the field of view, and one image I containing both the foreground and background objects is shot. If the shooting environment meets the following conditions: the scene is outdoors or in another less stable environment, the photographed target moves relative to the background and cannot be removed from the field of view, and a second shot is difficult, then the manual-marking mode is selected. In this working mode the camera need not be fixed; only one picture I containing both the foreground and background objects need be shot.
Further, computing the trimap according to the determined operation mode may specifically comprise:
If the operation mode is the background-difference mode, the foreground image Id is first obtained by the difference formula Id = |I - Ibg|, where I is the image to be matted containing both the foreground and background objects, and Ibg is the background image containing no foreground object. An opening operation is applied to the foreground image Id to obtain the corrected foreground image Ido; an erosion of size re is applied to Ido to obtain the foreground region Fg of the trimap, and a dilation of size rd is applied to Ido to obtain the background region Bg of the trimap. The region between the foreground region and the background region is the unknown region. The trimap It of the image I to be matted under the background-difference mode is thus obtained.
If the operation mode is the manual-marking mode, the user operation command is first obtained; one point each, Pfg and Pbg, is selected in the foreground region and background region of the image I to be matted as initial growth points, and the region-growing algorithm given above diffuses to neighboring pixels, where i is the current pixel, j a neighborhood pixel of i, Ii and Ij the color values at points i and j, Grai and Graj the gradient values at points i and j, and Tv and Tg the color threshold and gradient threshold of the region growing; when gne(j) is 1, pixel j and pixel i belong to the same region, and when gne(j) is 0 they belong to different regions. After region growing ends, the rough foreground/background segmentation Igrow is obtained; an erosion of size re is applied to Igrow to obtain the foreground region Fg of the trimap, and a dilation of size rd to obtain the background region Bg of the trimap. The region between the foreground region and the background region is the unknown region. The trimap It of the image I to be matted under the manual-marking mode is thus obtained.
In a specific implementation, if the user selects the background-difference mode, the user is prompted to fix the camera on a base, first shoot a background image Ibg without the foreground object and save it, keep the camera still, place the foreground object in the field of view, and shoot one image I containing both the foreground and background objects. For this operation mode, the extraction of the foreground object can be obtained from the difference of the two images, i.e. Id = |I - Ibg|. The difference image may include some shadow and noise, and the white region obtained is not strictly the foreground-object region; to obtain an accurate trimap, an opening operation is first applied to the image Id to connect partially discontinuous regions and remove holes, yielding the corrected foreground image Ido, and a series of morphological operations is then applied to the corrected foreground image Ido. If the user selects the manual-marking mode, the user is prompted to first shoot one picture I containing the foreground and background objects, and the user then needs to select one initial growth point each, Pfg and Pbg, in the foreground region and the background region.
This embodiment fully accounts for the differences of the use environment; only a small amount of manual intervention is needed to achieve fast matting of complex scenes. In either the background-difference mode or the manual-marking mode, when computing the trimap this embodiment applies morphological operations to the obtained coarse trimap to obtain an accurate fine trimap, further improving matting accuracy.
It should be noted that for the morphological operations under the different working modes (opening, erosion, dilation), the shape and size of the morphological operator should be chosen according to the size and shape of image content such as the foreground object, and may default to a rectangle. When performing morphological operations, assume the gray value of the foreground region is 1 and that of the background region is 0; erosion then shrinks the foreground region, dilation shrinks the background region, and subtracting the two finally yields an unknown region containing the foreground boundary.
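The erosion/dilation step that turns a coarse binary mask into a trimap can be sketched with square structuring elements in plain Python (toy 2-D lists for clarity; a real implementation would use an image-processing library):

```python
def erode(mask, r):
    """Binary erosion with a (2r+1)x(2r+1) square: keep a pixel only if its
    whole window is foreground (shrinks the foreground region)."""
    h, w = len(mask), len(mask[0])
    return [[int(all(mask[y + dy][x + dx]
                     for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def dilate(mask, r):
    """Binary dilation: a pixel becomes foreground if any window pixel is."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[y + dy][x + dx]
                     for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

# Trimap: sure foreground = eroded mask, unknown = dilated minus eroded.
mask = [[0, 1, 1, 1, 0]]
fg = erode(mask, 1)
unknown = [[d - e for d, e in zip(dr, er)] for dr, er in zip(dilate(mask, 1), fg)]
print(fg, unknown)  # [[0, 0, 1, 0, 0]] [[1, 1, 0, 1, 1]]
```

As the note above describes, erosion shrinks the foreground, dilation shrinks the background, and their difference is the unknown band straddling the foreground boundary.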
Further, performing the gradient computation on each pixel of the unknown region of the trimap, and obtaining the foreground and background sample-point sets of the current unknown-region pixel according to the gradient direction, may specifically comprise:
For each pixel i(xi, yi) of the unknown region, its gradient value Grai is computed and the gradient direction is denoted θi.
A line is drawn through i along the direction θi; its first intersections with the foreground region and the background region are taken as the first sample point of the foreground sample-point set and the first sample point of the background sample-point set, respectively. Around each of these two first sample points, several points are taken whose spatial distance to it is closest and whose pixel difference from it exceeds Tv/2, generating the foreground sample-point set and the background sample-point set respectively, where Tv is the color threshold of the region.
In this embodiment, the first sample pair is searched on the line along the gradient direction, and the remaining groups of sample pairs are then searched around that sample pair. The search algorithm here is not specifically limited in this embodiment (e.g. KNN or a search tree); it only needs to satisfy the conditions.
Further, before computing the gradient, the original color image must be converted to a gray-level image. Taking RGB-to-gray conversion as an example: Y = 0.299R + 0.587G + 0.114B, where R, G and B are the red, green and blue components of the original image and Y is the converted gray value. Of course, the original image may also be in any format such as YUV; the embodiments of the invention place no limit on the output image format of the camera.
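The RGB-to-gray conversion above, together with one common definition of the gradient direction (atan2 of the vertical and horizontal gradient components; an assumption, since the patent's exact formula is not shown on this page), can be sketched as:

```python
import math

def to_gray(r, g, b):
    """ITU-R BT.601 luma, as in the embodiment: Y = 0.299R + 0.587G + 0.114B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gradient_direction(gx, gy):
    """Angle of the gradient vector from its horizontal/vertical components."""
    return math.atan2(gy, gx)

print(round(to_gray(255, 255, 255), 6))        # 255.0 (white stays white)
print(round(gradient_direction(1.0, 1.0), 4))  # 0.7854 (45 degrees in radians)
```

The gradient components gx and gy would come from any finite-difference or Sobel-style operator applied to the gray image.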
For the present embodiment when choosing prospect sample point set and background sample point set, the method for use is first to zone of ignorance Each pixel carries out gradient calculation, makees straight line, cut-off line and first friendship of foreground area and background area along gradient direction Point, the first sample point of first sample point and background sample point set respectively as prospect sample point set, the sample so obtained Point is to positioned at the different texture region of image, that is, being respectively at real foreground area and background area with can guarantee that greater probability Domain;Then around above-mentioned two first sample point, take respectively it is multiple space length is closest therewith and margin of image element is more than Tv/ 2 point, ultimately generate prospect sample point set and background sample point set, so in turn ensure that have between sample point it is certain Ensure to contain real sample point pair, so as to improve the accuracy of stingy figure distinction and space similarity, i.e. greater probability.
Still further, around the first sample point of the foreground sample set and the first sample point of the background sample set, 4 points may each be taken that are spatially closest and whose pixel difference exceeds T_v/2, so that the foreground and background sample sets each contain 5 sample points, and the number of sample pairs finally obtained is 25. Such a sampling policy bounds the number of sample pairs, shortens the matting time and reduces the complexity of the subsequent matting computation; it not only further improves the matting accuracy but also further improves the user experience.
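With 5 samples on each side, the candidate pairs per pixel are simply the Cartesian product of the two sets — a small sketch:

```python
from itertools import product

fg_samples = [f"F{k}" for k in range(5)]  # first sample point + 4 neighbours
bg_samples = [f"B{k}" for k in range(5)]
pairs = list(product(fg_samples, bg_samples))  # every (foreground, background) pairing
```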
Further, drawing one sample from each of the two sample sets to form a sample pair, estimating a different opacity of the unknown-region pixel with each different sample pair, and taking the opacity of highest confidence as the matting opacity may specifically include:
drawing one sample from each of the foreground sample set and the background sample set to form a sample pair, constructing the straight line through the pair, projecting each unknown-region pixel i onto that line, and estimating the opacity of each pixel i:
$$\alpha_i^{(m,n)} = \frac{(I_i - B_i^{(n)})^T (F_i^{(m)} - B_i^{(n)})}{\left| F_i^{(m)} - B_i^{(n)} \right|} \quad (3)$$
where F_i^(m) and B_i^(n) denote the color values of the m-th foreground sample and the n-th background sample, respectively;
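Formula (3) projects the pixel color onto the F–B line. The sketch below uses the squared norm in the denominator — the conventional form that makes the ratio dimensionless and keeps alpha in [0, 1]; the single norm printed in claim 6 may be an OCR loss of the exponent — and the clamping to [0, 1] is likewise an assumption:

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Project color I onto the line through B and F; return the clamped ratio."""
    d = F - B
    a = float(np.dot(I - B, d) / np.dot(d, d))
    return min(max(a, 0.0), 1.0)

# A pixel that is an exact 30/70 blend of F and B recovers alpha = 0.3.
F = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
I = 0.3 * F + 0.7 * B
alpha = estimate_alpha(I, F, B)
```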
choosing, among the multiple different opacity estimates obtained for each unknown-region pixel, the opacity of highest confidence as the matting opacity of that pixel, the sample pair corresponding to this matting opacity then serving as the final matting sample pair.
In this embodiment, 25 different opacity estimates are obtained in total for each unknown-region pixel i, from which the one of highest confidence must be selected to extract the foreground target. The criterion for evaluating the best sample pair here is: minimum error with respect to the linear model of formula (3), and minimum error with respect to the current pixel color value. The computation of the confidence may therefore specifically include:
taking sample pairs in turn from the foreground and background sample sets, and computing for each pair its error with respect to the linear model of formula (3), i.e. the linear similarity:
$$r_i^{(m,n)} = -\frac{\left| I_i - \alpha_i^{(m,n)} F_i^{(m)} - \left(1 - \alpha_i^{(m,n)}\right) B_i^{(n)} \right|}{\left| F_i^{(m)} - B_i^{(n)} \right|} \quad (4)$$
computing, from the color values of the unknown-region pixel i and of the sampled pair, the foreground color similarity:
and the background color similarity:
and computing, from the linear similarity and color similarities obtained above, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample pair:
where σ_1 and σ_2 are weights that adjust the balance between the different similarities.
After the opacity has been computed pixel by pixel over the unknown region, it cannot be guaranteed that the confidence of every pixel meets the requirement, and noise points may appear during sampling; locally the opacity is then not smooth enough and the final matte shows color artifacts. The unknown-region opacity therefore needs local smoothing. The factors to consider when smoothing are the color difference, the spatial-position difference and the gradient difference: the larger the local color difference, the farther the local spatial position, and the larger the texture difference, the smaller the weight.
Hence, to balance the influence of the spatial domain, the color domain and the texture domain, when the smoothing operation is performed over the neighborhood of the current unknown-region pixel, the smoothed opacity may be computed with the following formula:
where P_i, P_j are the coordinates of points i and j, I_i, I_j their colors, Gra_i, Gra_j their gradients, and σ_p, σ_c, σ_g adjust the weight among the three.
In a specific implementation, the smoothing operation jointly considers the color difference, the spatial-position difference and the gradient difference, and finally computes the weight coefficient of the neighborhood opacities. By changing σ_p, σ_c and σ_g, the proportion of the three in the weight coefficient can be adjusted. For example, if texture information is to be emphasized, the proportion of the gradient difference should grow, i.e. σ_g should exceed σ_p and σ_c.
The method this embodiment uses to smooth the initially obtained opacity takes the influence of spatial position, color and gradient into account, so that pixels that are spatially closer, more similar in color and closer in texture have closer opacities. This agrees with the subjective perception of the human eye, effectively eliminates opacity outliers, and greatly improves the precision of the matte.
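The smoothing formula itself is not legible in the text; a plausible reading — an assumption — treats the three weights as Gaussian kernels over position, color and gradient, and takes a weighted neighbourhood average of the opacities:

```python
import numpy as np

def smooth_weight(Pi, Pj, Ii, Ij, Gi, Gj, sp=3.0, sc=10.0, sg=5.0):
    """Weight decays with position, color and gradient differences (assumed Gaussian form)."""
    dp = float(np.sum((Pi - Pj) ** 2))
    dc = float(np.sum((Ii - Ij) ** 2))
    dg = float((Gi - Gj) ** 2)
    return float(np.exp(-dp / sp**2 - dc / sc**2 - dg / sg**2))

def smooth_alpha(alphas, weights):
    """Weighted average of the neighbourhood opacities."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, alphas) / np.sum(w))

# Identical pixels get weight 1, so equal opacities average unchanged.
w_same = smooth_weight(np.zeros(2), np.zeros(2), np.zeros(3), np.zeros(3), 0.0, 0.0)
a = smooth_alpha([0.5, 0.5], [w_same, w_same])
```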
The present invention also provides a second embodiment of the high-accuracy, simply-interactive matting method. As shown in Fig. 2, on the basis of embodiment one, the method may further include, after the foreground target has been extracted from the original image:
S1091, directly saving and outputting the extracted foreground target; or
S1092, inputting a new background image, compositing the extracted foreground target with the new background image, and saving and outputting the composite image.
The matting result finally obtained in this embodiment can thus either be saved and output on its own as post-production material — serving as input to an image processing module or to other software — or be composited with an input new background image before being output, which improves the practicality and versatility of the invention. The image processing module mentioned above may be an image compositing unit; the other software may be, for example, Photoshop.
In a specific implementation, the compositing algorithm involved may be the opacity-based method: from the previously extracted foreground target and the smoothed opacity α, combined with the supplied new background image, a new image is synthesized. Concretely: with the region and color F of the known foreground target and the color B of the background, compositing is performed pixel by pixel on the foreground and background targets according to the previously computed opacity α, i.e. I = αF + (1 − α)B, where I denotes the color value in the composite image. The new background image mentioned above may either be captured on site or imported from outside; the embodiment of the present invention does not limit the source of the background image.
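The per-pixel compositing step I = αF + (1 − α)B can be sketched as:

```python
import numpy as np

def composite(F, B, alpha):
    """I = alpha*F + (1 - alpha)*B, broadcast over an H x W x 3 image."""
    a = alpha[..., None]  # lift alpha to a per-channel weight
    return a * F + (1.0 - a) * B

F = np.full((2, 2, 3), 200.0)  # foreground color
B = np.full((2, 2, 3), 50.0)   # new background color
alpha = np.array([[1.0, 0.0], [0.5, 0.25]])
I = composite(F, B, alpha)
```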
The present invention also provides a storage device in which a plurality of instructions are stored, the instructions being adapted to be loaded by a processor to perform the high-accuracy, simply-interactive matting method described above.
The storage device may be a computer-readable storage medium and may include: ROM, RAM, a magnetic disk, an optical disc, etc.
The present invention also provides a terminal including a processor and a storage device, the processor being adapted to implement the instructions and the storage device being adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to perform the high-accuracy, simply-interactive matting method described above.
The terminal may be any matting apparatus with a simple interactive function, and may be any kind of terminal equipment — e.g. a PC, a mobile phone, a tablet computer — implemented specifically by software and/or hardware.
The present invention also provides a first embodiment of a matting apparatus capable of carrying out the above matting method. As shown in Fig. 3, a high-accuracy, simply-interactive matting apparatus includes:
an original-image input module 101: for inputting the original image to be matted;
a prompt-message output module 102: for outputting prompt messages to the user according to different use environments;
an operating-mode determining module 103: for obtaining different user operation commands and determining the corresponding operating mode;
a trimap computing module 104: for computing the trimap according to the determined operating mode;
a sample-set acquisition module 105: for performing a gradient computation at each pixel of the unknown region in the trimap, and obtaining the foreground and background sample sets of the current unknown-region pixel according to the gradient direction;
a sample-pair screening module 106: for drawing one sample from each of the two sample sets to form a sample pair, estimating a different opacity of the unknown-region pixel with each different sample pair, and taking the opacity of highest confidence as the matting opacity;
an opacity smoothing module 107: for performing, according to the above matting opacity, a smoothing operation over the neighborhood of the current unknown-region pixel to obtain the smoothed matting opacity of that pixel;
a foreground extraction module 108: for extracting the foreground target from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the sample pair corresponding to each matting opacity.
In a specific implementation, when the smoothing operation is performed over the neighborhood of the current unknown-region pixel, the opacity may be smoothed with the following formula:
where P_i, P_j are the coordinates of points i and j, I_i, I_j their colors, Gra_i, Gra_j their gradients, and σ_p, σ_c, σ_g adjust the weight among the three.
The present invention also provides a second embodiment of the high-accuracy, simply-interactive matting apparatus. As shown in Fig. 4, on the basis of embodiment one, the matting apparatus may further include:
a first result output module 109: for directly outputting the extracted foreground target.
The present invention also provides a third embodiment of the high-accuracy, simply-interactive matting apparatus. As shown in Fig. 5, on the basis of embodiment one, the matting apparatus may further include:
a new-image input module 110: for inputting a new background image;
a compositing module 111: for compositing the extracted foreground target with the new background image;
a second result output module 112: for outputting the composite image.
The present invention also provides a fourth embodiment of the high-accuracy, simply-interactive matting apparatus. This embodiment can essentially be based on either embodiment two or embodiment three; since the apparatus structure is similar in both cases, only embodiment four based on embodiment two is shown here for brevity. As shown in Fig. 6, on the basis of embodiment two, the operating modes include a background-subtraction mode and a manual-marking mode, and the trimap computing module 104 may then specifically include:
a coarse-trimap acquiring unit 1041: for, when the operating mode is the background-subtraction mode, obtaining the foreground image I_d by the difference formula I_d = |I − I_bg|, where I is the image to be matted containing the foreground and background targets and I_bg is the background image not containing the foreground target; and for, when the operating mode is the manual-marking mode, obtaining the user's operation command, selecting one point each in the foreground and background regions of the image I to be matted, P_fg and P_bg, as initial growth points, and diffusing into the neighborhood with the following region-growing rule:
where i is the current pixel, j a neighborhood pixel of i, I_i, I_j the color values at points i and j, Gra_i, Gra_j the gradient values at points i and j, and T_v, T_g the color threshold and gradient threshold of region growing; g_ne(j) = 1 indicates that pixel j and pixel i belong to the same region, and g_ne(j) = 0 that they belong to different regions. After region growing ends, the coarse segmentation map I_grow of the foreground and background images is obtained.
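The growth rule g_ne(j) is not reproduced legibly here; under the stated semantics — a neighbour joins the region when both its color difference and its gradient difference stay within T_v and T_g — a flood-fill sketch (the strict-inequality test and 4-connectivity are assumptions) looks like:

```python
from collections import deque
import numpy as np

def region_grow(I, Gra, seed, Tv, Tg):
    """Grow a region from `seed` over 4-neighbours passing the color/gradient test."""
    h, w = I.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if (abs(I[ny, nx] - I[y, x]) < Tv
                        and abs(Gra[ny, nx] - Gra[y, x]) < Tg):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Left half dark, right half bright: the seed only fills its own half.
img = np.zeros((4, 4)); img[:, 2:] = 100.0
grown = region_grow(img, np.zeros((4, 4)), (0, 0), Tv=10.0, Tg=10.0)
```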
a fine-trimap acquiring unit 1042: for, when the operating mode is the background-subtraction mode, applying one morphological opening to the foreground image I_d to obtain a corrected foreground image I_do, applying an erosion of size r_e to I_do to obtain the foreground region F_g of the trimap, and applying a dilation of size r_d to I_do to obtain the background region B_g of the trimap — the region between the foreground and background regions being the unknown region — thereby obtaining the trimap I_t of the image I to be matted in the background-subtraction mode; and for, when the operating mode is the manual-marking mode, applying an erosion of size r_e to the coarse segmentation map I_grow to obtain the foreground region F_g of the trimap and a dilation of size r_d to I_grow to obtain the background region B_g of the trimap — the region between them being the unknown region — thereby obtaining the trimap I_t of the image I to be matted in the manual-marking mode.
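The erosion/dilation step that turns a binary foreground map into a trimap can be sketched with a minimal square-window morphology; a real implementation would use an image library, and the window shape and border handling here are assumptions:

```python
import numpy as np

def _morph(mask, r, op, fill):
    """Apply op (AND = erosion, OR = dilation) over a (2r+1)^2 window."""
    pad = np.pad(mask, r, constant_values=fill)
    h, w = mask.shape
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = op(out, pad[r + dy:r + dy + h, r + dx:r + dx + w])
    return out

def make_trimap(fg_mask, re, rd):
    """Erode by re for sure-foreground, dilate by rd for sure-background."""
    sure_fg = _morph(fg_mask, re, np.logical_and, True)
    dilated = _morph(fg_mask, rd, np.logical_or, False)
    trimap = np.full(fg_mask.shape, 128, dtype=np.uint8)  # unknown band
    trimap[sure_fg] = 255                                 # sure foreground
    trimap[~dilated] = 0                                  # sure background
    return trimap

mask = np.zeros((7, 7), dtype=bool); mask[1:6, 1:6] = True
t = make_trimap(mask, re=1, rd=1)
```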
The present invention also provides a fifth embodiment of the high-accuracy, simply-interactive matting apparatus. This embodiment can essentially be based on either embodiment two or embodiment three; since the apparatus structure is similar in both cases, only embodiment five based on embodiment two is shown here for brevity. As shown in Fig. 7, on the basis of embodiment two, the sample-set acquisition module 105 may specifically include:
a gradient computing unit 1051: for computing, for each pixel i(x_i, y_i) of the unknown region, its gradient value Gra_i, the gradient direction being computed by the formula:
a sampling unit 1052: for drawing a straight line along the gradient direction, taking its first intersections with the foreground region and the background region as the first sample point of the foreground sample set and of the background sample set respectively, and, around each of these two first sample points, taking several points that are spatially closest to it and whose pixel difference exceeds T_v/2, thereby generating the foreground sample set and the background sample set, where T_v is the region color threshold.
Specifically, around the first sample point of the foreground sample set and the first sample point of the background sample set, 4 points may each be taken that are spatially closest and whose pixel difference exceeds T_v/2, so that the foreground and background sample sets each contain 5 sample points.
The present invention also provides a sixth embodiment of the high-accuracy, simply-interactive matting apparatus. This embodiment can essentially be based on either embodiment two or embodiment three; since the apparatus structure is similar in both cases, only embodiment six based on embodiment two is shown here for brevity. As shown in Fig. 8, on the basis of embodiment two, the sample-pair screening module 106 may specifically include:
an opacity estimating unit 1061: for drawing one sample from each of the foreground sample set and the background sample set to form a sample pair, constructing the straight line through the pair, projecting each unknown-region pixel i onto that line, and estimating the opacity of each pixel i:
where F_i^(m) and B_i^(n) denote the color values of the m-th foreground sample and the n-th background sample, respectively;
a sample-pair screening unit 1062: for choosing, among the multiple different opacity estimates obtained for each unknown-region pixel, the opacity of highest confidence as the matting opacity of that pixel, the sample pair corresponding to this matting opacity then serving as the final matting sample pair.
The present invention also provides a seventh embodiment of the high-accuracy, simply-interactive matting apparatus. As shown in Fig. 9, on the basis of embodiment six, the sample-pair screening module 106 may further include:
a confidence computing unit 1063: for taking sample pairs in turn from the foreground and background sample sets, and computing for each pair its error with respect to the linear model of formula (3), i.e. the linear similarity:
computing, from the color values of the unknown-region pixel i and of the sampled pair, the foreground color similarity:
and the background color similarity:
and computing, from the linear similarity and color similarities obtained above, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample pair:
where σ_1 and σ_2 are weights that adjust the balance between the different similarities.
The present invention also provides a hardware structure diagram of the above matting apparatus. As shown in Fig. 10, the hardware structure of a high-accuracy, simply-interactive matting apparatus may generally include an image acquisition part, an image processing part and an image output part; the connections among the modules of these three parts are evident from Fig. 10 and are not repeated here.
Image acquisition part: the light emitted or reflected by the outside scene is projected through the lens onto the photosensitive array and converted into an analog electrical signal. After some filtering and enhancement, the analog signal is converted into a digital signal by an A/D module and fed into a digital signal processor (DSP). The choice of sensor device is not specifically limited here; it may be either CCD or CMOS. The advantages of CCD are high sensitivity and low noise; its drawbacks are a complex manufacturing process, high cost and high power consumption. The advantages of CMOS are high integration, low cost and low power consumption; its drawbacks are low sensitivity and high noise. In general, a sensor with high resolution, sensitivity and signal-to-noise ratio should be preferred in order to obtain good image quality.
Image processing part: inside the digital image processor the main image processing work — the matting and compositing operations — is carried out. The DSP can receive commands from the user interaction interface, such as foreground/background region selection, image storage and other functions. The user interaction interface includes a touch LCD screen, buttons, etc. After the image processor completes the operation, it may, according to the user's instructions, store or output the image, so the image processor should be able to access external storage devices directly. The choice of image processor is not specifically limited here; it may be ARM, DSP or FPGA, as long as it meets the processing requirements.
Image output part: including an image encoder and an output interface, it encodes the processed image signal in a given format (e.g. JPEG, H.264) and outputs it via an external interface to a computer or a display. The external interface may be HDMI, VGA, a network port, etc.; its bandwidth merely needs to exceed the video rate.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related description of the other embodiments.
It will be understood that related features in the above method, apparatus, storage device and terminal may refer to one another. Moreover, "first", "second", etc. in the above embodiments serve to distinguish the embodiments and do not indicate their relative merit.
Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes of the apparatus and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The algorithms and displays provided here are not inherently related to any particular computer, virtual system or other equipment. Various general-purpose systems may also be used with the teaching herein. As described above, the structure required to construct such devices is obvious. Furthermore, the present invention is not directed to any specific programming language; it should be understood that the content of the invention described herein can be realized in various programming languages, and the description given above for a specific language serves to disclose the best mode of the invention.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be realized in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation — multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced, without making the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A high-accuracy, simply-interactive image matting method, characterized by including:
    S101, inputting the original image to be matted;
    S102, outputting prompt messages to the user according to different use environments;
    S103, obtaining different user operation commands and determining the corresponding operating mode;
    S104, computing the trimap according to the determined operating mode;
    S105, performing a gradient computation at each pixel of the unknown region in the trimap, and obtaining the foreground and background sample sets of the current unknown-region pixel according to the gradient direction;
    S106, drawing one sample from each of the two sample sets to form a sample pair, estimating a different opacity of the unknown-region pixel with each different sample pair, and taking the opacity of highest confidence as the matting opacity;
    S107, performing, according to the above matting opacity, a smoothing operation over the neighborhood of the current unknown-region pixel to obtain the smoothed matting opacity of the pixel;
    S108, extracting the foreground target from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the sample pair corresponding to each matting opacity.
  2. The high-accuracy, simply-interactive image matting method according to claim 1, characterized in that, after the foreground target has been extracted from the original image, it further includes:
    S1091, directly saving and outputting the extracted foreground target; or
    S1092, inputting a new background image, compositing the extracted foreground target with the new background image, and saving and outputting the composite image.
  3. The high-accuracy, simply-interactive image matting method according to claim 2, characterized in that the operating modes include a background-subtraction mode and a manual-marking mode, and that computing the trimap according to the determined operating mode specifically includes:
    if the operating mode is the background-subtraction mode, first obtaining the foreground image I_d by the difference formula I_d = |I − I_bg|, where I is the image to be matted containing the foreground and background targets and I_bg is the background image not containing the foreground target; applying one morphological opening to I_d to obtain the corrected foreground image I_do; applying an erosion of size r_e to I_do to obtain the foreground region F_g of the trimap and a dilation of size r_d to I_do to obtain the background region B_g of the trimap, the region between the foreground and background regions being the unknown region; thereby obtaining the trimap I_t of the image I to be matted in the background-subtraction mode;
    if the operating mode is the manual-marking mode, first obtaining the user's operation command, selecting one point each in the foreground and background regions of the image I to be matted, P_fg and P_bg, as initial growth points, and diffusing into the neighborhood with the following region-growing rule:
    where i is the current pixel, j a neighborhood pixel of i, I_i, I_j the color values at points i and j, Gra_i, Gra_j the gradient values at points i and j, and T_v, T_g the color threshold and gradient threshold of region growing; g_ne(j) = 1 indicates that pixel j and pixel i belong to the same region, and g_ne(j) = 0 that they belong to different regions; after region growing ends, obtaining the coarse segmentation map I_grow of the foreground and background, then applying an erosion of size r_e to I_grow to obtain the foreground region F_g of the trimap and a dilation of size r_d to I_grow to obtain the background region B_g of the trimap, the region between the foreground and background regions being the unknown region; thereby obtaining the trimap I_t of the image I to be matted in the manual-marking mode.
  4. The high-accuracy, simply-interactive image matting method according to claim 2, characterized in that performing a gradient computation at each pixel of the unknown region in the trimap and obtaining the foreground and background sample sets of the current unknown-region pixel according to the gradient direction specifically includes:
    for each pixel i(x_i, y_i) of the unknown region, computing its gradient value Gra_i, the gradient direction being given by the formula:
    $$\theta = \arctan\frac{I(x_i,\, y_i - 1) - I(x_i,\, y_i + 1)}{I(x_i - 1,\, y_i) - I(x_i + 1,\, y_i)} \quad (2)$$
    drawing a straight line along the gradient direction, taking its first intersections with the foreground region and the background region as the first sample point of the foreground sample set and of the background sample set respectively, and, around each of these two first sample points, taking several points that are spatially closest to it and whose pixel difference exceeds T_v/2, thereby generating the foreground sample set and the background sample set, where T_v is the region color threshold.
  5. The high-accuracy, simply-interactive image matting method according to claim 4, characterized in that, around the first sample point of the foreground sample set and the first sample point of the background sample set, 4 points are each taken that are spatially closest and whose pixel difference exceeds T_v/2, so that the foreground and background sample sets each contain 5 sample points.
  6. The high-accuracy, simply-interactive image matting method according to claim 2, characterized in that drawing one sample from each of the two sample sets to form a sample pair, estimating a different opacity of the unknown-region pixel with each different sample pair, and taking the opacity of highest confidence as the matting opacity specifically includes:
    drawing one sample from each of the foreground sample set and the background sample set to form a sample pair, constructing the straight line through the pair, projecting each unknown-region pixel i onto that line, and estimating the opacity of each pixel i:
    $$\alpha_i^{(m,n)} = \frac{(I_i - B_i^{(n)})^T (F_i^{(m)} - B_i^{(n)})}{\left| F_i^{(m)} - B_i^{(n)} \right|} \quad (3)$$
    where F_i^{(m)} and B_i^{(n)} denote the color values of the m-th foreground sample point and the n-th background sample point, respectively;
    For the multiple different opacity estimates obtained for each unknown-region pixel, the opacity with the highest confidence is chosen as the matting opacity of that pixel, and the sample point pair corresponding to that opacity serves as the final matting sample point pair.
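A minimal NumPy sketch of the projection in equation (3); the squared-norm denominator and the clamp to [0, 1] are conventional assumptions on our part, and `estimate_alpha` is an illustrative name:

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Opacity of pixel color I obtained by projecting it onto the line
    through foreground sample F and background sample B (equation (3))."""
    I, F, B = (np.asarray(v, dtype=float) for v in (I, F, B))
    d = F - B
    denom = float(np.dot(d, d))  # squared distance between the samples
    if denom == 0.0:
        return 0.0  # degenerate pair: F and B coincide
    alpha = float(np.dot(I - B, d)) / denom
    return float(np.clip(alpha, 0.0, 1.0))
```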
  7. The high-accuracy simple interactive matting method according to claim 6, characterized in that the confidence calculation specifically comprises:
    Sample point pairs are taken from the foreground sample point set and the background sample point set, and for each sample point pair the error relative to the linear model of formula (3), i.e. the linear similarity, is calculated:
    $$r_i^{(m,n)} = \frac{\left|I_i - \alpha_i^{(m,n)} F_i^{(m)} - \left(1 - \alpha_i^{(m,n)}\right) B_i^{(n)}\right|}{\left|F_i^{(m)} - B_i^{(n)}\right|} \qquad (4)$$
    From the color values of the unknown-region pixel i and the sampled sample point pair, the foreground color similarity is calculated:
    $$f_i^{(m)} = \exp\left\{-\frac{\left|I_i - F_i^{(m)}\right|}{\sigma_2^{2}}\right\} \qquad (5)$$
    and the background color similarity:
    $$b_i^{(n)} = \exp\left\{-\frac{\left|I_i - B_i^{(n)}\right|}{\sigma_2^{2}}\right\} \qquad (6)$$
    From the linear similarity and the color similarities obtained above, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample point pair is calculated:
    $$c_i^{(m,n)} = \exp\left\{-\frac{r_i^{(m,n)} f_i^{(m)} b_i^{(n)}}{\sigma_1^{2}}\right\} \qquad (7)$$
    where σ1 and σ2 adjust the weights between the different similarities.
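Equations (4) through (7) chain together as follows; this is a hedged sketch in which the default σ values, the clamped α from equation (3), and the function name `pair_confidence` are illustrative assumptions:

```python
import numpy as np

def pair_confidence(I, F, B, sigma1=0.1, sigma2=0.1):
    """Confidence that the (F, B) sample pair explains pixel color I:
    linear-fit error (eq. 4) combined with color similarities (eqs. 5-6)
    into the exponential of eq. 7."""
    I, F, B = (np.asarray(v, dtype=float) for v in (I, F, B))
    d = F - B
    dist = float(np.linalg.norm(d))
    if dist == 0.0:
        return 0.0  # degenerate pair
    alpha = float(np.clip(np.dot(I - B, d) / dist ** 2, 0.0, 1.0))
    # eq. (4): residual of the linear compositing model, normalized
    r = float(np.linalg.norm(I - alpha * F - (1.0 - alpha) * B)) / dist
    # eqs. (5)-(6): color similarity to the foreground / background sample
    f = float(np.exp(-np.linalg.norm(I - F) / sigma2 ** 2))
    b = float(np.exp(-np.linalg.norm(I - B) / sigma2 ** 2))
    # eq. (7): fold everything into a confidence in (0, 1]
    return float(np.exp(-r * f * b / sigma1 ** 2))
```

In use, the pair with the highest confidence for a pixel would be kept as the final matting pair for that pixel.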
  8. The high-accuracy simple interactive matting method according to claim 2, characterized in that when the smoothing operation is performed on the neighborhood of the current unknown-region pixel, the smoothed opacity is calculated by the following equation:
    $$\tilde{\alpha}_i = \sum_{j \in NE_i} \omega_{ij}\,\alpha_j \Big/ \sum_{j \in NE_i} \omega_{ij} \qquad (8)$$
    where P_i and P_j denote the coordinates of pixels i and j, I_i and I_j their colors, Gra_i and Gra_j their gradients, and σ_p, σ_c and σ_g adjust the weights among the three terms.
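The weighted average in equation (8) is a few lines of NumPy. The claim names spatial, color and gradient terms for ω_ij but the formula itself is not reproduced in this text, so the `pair_weight` helper below is a hypothetical bilateral-style stand-in (its Gaussian form and the σ defaults are our assumptions):

```python
import numpy as np

def smooth_alpha(alpha_neighbors, weights):
    """Equation (8): weight-normalized average of the neighborhood opacities."""
    a = np.asarray(alpha_neighbors, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * a) / np.sum(w))

def pair_weight(Pi, Pj, Ii, Ij, Gi, Gj, sp=3.0, sc=0.1, sg=0.1):
    """Hypothetical weight combining the spatial (P), color (I) and
    gradient (Gra) terms named in the claim; NOT the patent's own formula."""
    sq = lambda a, b: float(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))
    return float(np.exp(-sq(Pi, Pj) / sp ** 2
                        - sq(Ii, Ij) / sc ** 2
                        - sq(Gi, Gj) / sg ** 2))
```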
  9. A storage device in which a plurality of instructions are stored, characterized in that the instructions are adapted to be loaded by a processor to execute the high-accuracy simple interactive matting method according to any one of claims 1 to 8.
  10. A terminal, characterized by comprising:
    a processor adapted to execute each instruction; and
    a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the high-accuracy simple interactive matting method according to any one of claims 1 to 8.
CN201710791442.6A 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal Active CN107516319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710791442.6A CN107516319B (en) 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal


Publications (2)

Publication Number Publication Date
CN107516319A true CN107516319A (en) 2017-12-26
CN107516319B CN107516319B (en) 2020-03-10

Family

ID=60725074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710791442.6A Active CN107516319B (en) 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal

Country Status (1)

Country Link
CN (1) CN107516319B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070013813A1 (en) * 2005-07-15 2007-01-18 Microsoft Corporation Poisson matting for images
US20070070200A1 (en) * 2005-09-29 2007-03-29 Wojciech Matusik Video matting using camera arrays
CN103473780A (en) * 2013-09-22 2013-12-25 广州市幸福网络技术有限公司 Portrait background cutout method
CN104036517A (en) * 2014-07-01 2014-09-10 成都品果科技有限公司 Image matting method based on gradient sampling
US20150254868A1 (en) * 2014-03-07 2015-09-10 Pelican Imaging Corporation System and methods for depth regularization and semiautomatic interactive matting using rgb-d images
US20160110876A1 (en) * 2014-10-15 2016-04-21 Postech Academy-Industry Foundation Matting method for extracting foreground object and apparatus for performing the matting method
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 A kind of stingy drawing method based on manifold learning
CN106952270A (en) * 2017-03-01 2017-07-14 湖南大学 A kind of quickly stingy drawing method of uniform background image


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHRISTOPH RHEMANN ET AL.: "High Resolution Matting via Interactive Trimap Segmentation", IEEE *
SWETA SINGH ET AL.: "Automatic Trimap and Alpha-Matte Generation For Digital Image Matting", IEEE *
VIKAS GUPTA ET AL.: "Automatic Trimap Generation for Image Matting", ARXIV *
WEI SUN ET AL.: "A novel interactive matting system based on texture information", IEEE *
YU MING ET AL.: "Research on a fast video matting algorithm based on background difference", Journal of Hebei University of Technology *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447064A (en) * 2018-02-28 2018-08-24 苏宁易购集团股份有限公司 Image processing method and device
CN108961279A (en) * 2018-06-28 2018-12-07 Oppo(重庆)智能科技有限公司 Image processing method, device and mobile terminal
US11443436B2 (en) 2018-08-26 2022-09-13 Gaoding (Xiamen) Technology Co. Ltd Interactive image matting method, computer readable memory medium, and computer device
WO2020043063A1 (en) * 2018-08-29 2020-03-05 稿定(厦门)科技有限公司 Interactive image matting method, medium, and computer apparatus
CN109389611A (en) * 2018-08-29 2019-02-26 稿定(厦门)科技有限公司 Interactive matting method, medium and computer equipment
CN110659562A (en) * 2018-09-26 2020-01-07 惠州学院 Deep learning (DNN) classroom learning behavior analysis method and device
CN110516534A (en) * 2018-09-26 2019-11-29 惠州学院 Video processing method and device based on semantic analysis
CN110335288A (en) * 2018-09-26 2019-10-15 惠州学院 Video foreground target extraction method and device
CN110363788A (en) * 2018-09-26 2019-10-22 惠州学院 Video target trajectory extraction method and device
CN110378867A (en) * 2018-09-26 2019-10-25 惠州学院 Method for obtaining a transparency mask from foreground-background pixel pairs and grayscale information
WO2020062898A1 (en) * 2018-09-26 2020-04-02 惠州学院 Video foreground target extraction method and apparatus
CN111435282A (en) * 2019-01-14 2020-07-21 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN109829925A (en) * 2019-01-23 2019-05-31 清华大学深圳研究生院 Method for extracting a clean foreground in matting tasks and model training method
CN110047034A (en) * 2019-03-27 2019-07-23 北京大生在线科技有限公司 Matting and background replacement method for online education scenes, client and system
CN110097560A (en) * 2019-04-30 2019-08-06 上海艾麒信息科技有限公司 Matting method and device
CN110111342A (en) * 2019-04-30 2019-08-09 贵州民族大学 Optimized selection method and device for matting algorithms
CN110111342B (en) * 2019-04-30 2021-06-29 贵州民族大学 Optimized selection method and device for matting algorithm
CN110400323A (en) * 2019-07-30 2019-11-01 上海艾麒信息科技有限公司 Automatic matting system, method and device
CN110400323B (en) * 2019-07-30 2020-11-24 上海艾麒信息科技股份有限公司 Automatic matting system, method and device
CN110503704A (en) * 2019-08-27 2019-11-26 北京迈格威科技有限公司 Trimap construction method, device and electronic equipment
CN110503704B (en) * 2019-08-27 2023-07-21 北京迈格威科技有限公司 Method and device for constructing a trimap and electronic equipment
CN110717925A (en) * 2019-09-18 2020-01-21 贵州民族大学 Foreground mask extraction method and device, computer equipment and storage medium
CN110717925B (en) * 2019-09-18 2022-05-06 贵州民族大学 Foreground mask extraction method and device, computer equipment and storage medium
CN111275804A (en) * 2020-01-17 2020-06-12 腾讯科技(深圳)有限公司 Image illumination removing method and device, storage medium and computer equipment
CN112149592A (en) * 2020-09-28 2020-12-29 上海万面智能科技有限公司 Image processing method and device and computer equipment
CN112330692A (en) * 2020-11-11 2021-02-05 北京文香信息技术有限公司 Matting method, device, equipment and storage medium
CN112598694A (en) * 2020-12-31 2021-04-02 深圳市即构科技有限公司 Video image processing method, electronic device and storage medium
CN113487630A (en) * 2021-07-14 2021-10-08 辽宁向日葵教育科技有限公司 Image matting method based on material analysis technology

Also Published As

Publication number Publication date
CN107516319B (en) 2020-03-10

Similar Documents

Publication Publication Date Title
CN107516319A (en) A kind of high accuracy simple interactive stingy drawing method, storage device and terminal
CN107679497B (en) Video face mapping special effect processing method and generating system
CN107452010A Automatic matting algorithm and device
CN102881011B (en) Region-segmentation-based portrait illumination transfer method
CN108492343A Image synthesis method for expanding target-recognition training data
WO2024007478A1 (en) Three-dimensional human body modeling data collection and reconstruction method and system based on single mobile phone
US20180322367A1 (en) Image processing method, non-transitory computer readable storage medium and image processing system
WO2018082389A1 (en) Skin colour detection method and apparatus, and terminal
JPWO2009020047A1 (en) Composition analysis method, image apparatus having composition analysis function, composition analysis program, and computer-readable recording medium
CN110263768A Face recognition method based on deep residual network
CN112241933A (en) Face image processing method and device, storage medium and electronic equipment
WO2018082388A1 (en) Skin color detection method and device, and terminal
CN110276831B (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN109064525A Picture format conversion method, device, equipment and storage medium
CN110473221A Automatic target-object scanning system and method
CN110136166A Automatic tracking method for multi-channel pictures
CN109523622A Unstructured light-field rendering method
CN111724317A (en) Method for constructing Raw domain video denoising supervision data set
WO2023197780A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN114862725B Method and device for realizing motion-aware blur special effect based on optical flow method
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
Mukherjee et al. Backward compatible object detection using hdr image content
CN113052783A (en) Face image fusion method based on face key points
CN113808277A (en) Image processing method and related device
CN101729739A Method for rectifying image deviation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant