CN107452010A - Automatic image matting algorithm and device - Google Patents
Automatic image matting algorithm and device
- Publication number
- CN107452010A (application CN201710638979.9A)
- Authority
- CN
- China
- Prior art keywords
- color
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T5/70—
- G06T7/90—Determination of colour characteristics
Abstract
An automatic image matting algorithm and device, relating to the field of digital image processing, comprising: obtaining an original image to be matted and computing its matting visual saliency; separating the foreground and background regions with spatial-domain filtering and a threshold-segmentation algorithm, and combining morphological operations to obtain a trimap; computing the gradient of each pixel in the unknown region, and sampling foreground and background sample-point sets for the current unknown-region pixel according to gradient direction and saliency magnitude; computing the opacity and confidence of each sample pair, and taking the pair with the highest confidence as the final optimal sample pair for matting; smoothing the local region of the opacity to obtain the final opacity estimate; and finally, performing the matting operation on the original image according to the estimated opacity and the colours of the optimal sample pair, extracting the foreground target. The invention also discloses an automatic matting device. Embodiments of the invention require no user interaction, are easy to use, and achieve high matting precision and success rate.
Description
Technical field
The present invention relates to the field of digital image processing, and in particular to an automatic image matting algorithm and device.
Background art
In daily life it is often desirable to extract a target of interest from a background image, either as independent material or for synthesis with a new background image, obtaining a complete, realistic background replacement. This technique is widely used in image editing, film and television special effects and related fields, and permeates everyday life. With its broad application prospects and commercial value, digital image matting has become a focus of computer vision research in recent years.
Digital matting algorithms model each pixel of a natural image as a linear combination of foreground and background colours:

I = αF + (1 - α)B (1)

where I is the colour value in the real image, F the foreground colour value, B the background colour value, and α the foreground opacity, with range [0, 1]: α = 1 in the foreground region, α = 0 in the background region, and α ∈ (0, 1) in the unknown region, i.e. the edge region of the foreground target. Matting is the process of recovering the foreground F, background B and opacity α given the known real image I. Since I, F and B are three-dimensional vectors, the equation must recover 7 unknowns from 3 known quantities, so matting is a highly under-constrained problem.
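The compositing model of equation (1) can be sketched in a few lines of numpy; the colours and alpha below are arbitrary illustrative values, not taken from the patent.

```python
import numpy as np

def composite(F, B, alpha):
    # Equation (1): I = alpha*F + (1 - alpha)*B, applied per pixel.
    # I, F, B are 3-vectors, so one pixel yields 3 equations in
    # 7 unknowns (3 for F, 3 for B, 1 for alpha): under-constrained.
    return alpha * F + (1.0 - alpha) * B

F = np.array([0.8, 0.2, 0.1])   # hypothetical foreground colour
B = np.array([0.1, 0.1, 0.9])   # hypothetical background colour
I = composite(F, B, 0.6)        # observed colour of a mixed edge pixel
```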
The matting technique most widely used by film and media production companies is blue-screen matting, whose principle is to restrict the background to a single blue colour, compressing the unknowns of the equation to 4. Blue-screen matting is simple to operate, but it imposes strong restrictions on the background, and when blue appears in the foreground the target cannot be extracted completely.
The natural-image matting algorithms currently studied can be roughly divided into two classes:

(1) Sampling-based algorithms. These assume the image is locally continuous, and estimate the foreground and background components of the current pixel from known sample points near the unknown region. For example, invention CN105225245 proposes a natural-image matting method based on a texture-distribution assumption and a regularization strategy, improving on Bayesian matting; however, sampling-based methods suffer from poor connectivity in the resulting alpha maps, and generally require image priors and substantial user annotation.

(2) Propagation-based algorithms. These require the user first to mark the foreground and background (e.g. with points or lines); the unknown region is then treated as a field whose boundary corresponds to the known regions, a Laplacian matrix is built to describe the relations between alpha values, and the matting process is converted into solving that Laplacian system. The defects are heavy computation and poor results on disconnected regions.

In addition, there are algorithms that combine sampling and propagation to exploit the advantages of both, such as robust matting. However, these algorithms still generally suffer from complex user interaction, excessive image priors and heavy computation, which limits their application and increases the difficulty of use.
Summary of the invention
To solve the problems of the prior art, the invention provides an automatic matting algorithm and device that compute a matting visual saliency from the input image and complete fully automatic matting of natural-scene images, without restricting the background or requiring image priors and without user interaction, while guaranteeing high matting precision and success rate.
The technical solution adopted by the invention to solve the above technical problem is as follows:

An automatic matting algorithm, comprising the following steps:
Step 1: Obtain the original image to be matted and compute its matting visual saliency.
Step 2: According to the matting visual saliency map, separate the foreground and background regions using spatial-domain filtering and a threshold-segmentation algorithm, and combine morphological operations to obtain a trimap.
Step 3: According to the trimap, compute the gradient of each pixel in the unknown region, and sample foreground and background sample-point sets for the current unknown-region pixel according to gradient direction and saliency magnitude.
Step 4: From the foreground and background sample-point sets of the current unknown-region pixel, compute the opacity and confidence of each sample pair, and take the pair with the highest confidence as the final optimal sample pair for matting. Then smooth the local region of the opacity to obtain the final opacity estimate.
Step 5: According to the final opacity estimate and the colour values of the optimal sample pair, perform the matting operation on the original image and extract the foreground target.
An automatic matting device, comprising:
an image acquisition module, for acquiring the colour values of a single image;
a matting visual saliency computation module, for computing the matting visual saliency of the image from the colour values obtained by the image acquisition module;
a trimap computation module, for separating the foreground and background regions from the matting visual saliency map obtained by the matting visual saliency computation module, using spatial-domain filtering and a threshold-segmentation algorithm, and combining morphological operations to compute the trimap;
a sample-point-set acquisition module, for computing the gradient of each unknown-region pixel from the trimap obtained by the trimap computation module, and sampling foreground and background sample-point sets for the current unknown-region pixel according to gradient direction and saliency magnitude;
an opacity computation module, for computing the opacity and confidence of each sample pair from the foreground and background sample-point sets obtained by the sample-point-set acquisition module, taking the pair with the highest confidence as the final optimal sample pair for matting, and then smoothing the local region of the opacity to obtain the final opacity estimate;
a foreground extraction module, for performing the matting operation on the original image according to the final opacity estimate and the colour values of the optimal sample pair, and extracting the foreground target.
The beneficial effects of the invention are as follows: the matting visual saliency computation proposed by the invention simulates the visual attention mechanism of the human eye and can extract the foreground target automatically, eliminating complex user interaction and completing the matting process fully automatically, with simple and convenient operation; limiting the number of sample pairs shortens the matting time; and smoothing the saliency map and the opacity improves matting precision.
Brief description of the drawings
Fig. 1 is a flow diagram of an automatic matting algorithm of the present invention.
Fig. 2 is a flow diagram of the region-saliency computation of the present invention.
Fig. 3 is a structural diagram of an automatic matting device of the present invention.
Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and examples.
Fig. 1 is a flow diagram of an embodiment of the automatic matting algorithm of the present invention. The embodiment provides an automatic matting method which can be executed by any matting device with image storage and display functions; the device may be any of various terminal devices, such as a PC, mobile phone or tablet computer, or a digital camera, video camera, etc., and may be implemented in software and/or hardware. As shown in Fig. 1, the method of this embodiment includes:
Step 1: Obtain the original image to be matted and compute its matting visual saliency.

The foreground target to be extracted usually has the following characteristics: a complete target region with clear contrast against the surrounding background; a fairly uniform colour distribution; higher brightness over most of its area; and a distinct edge separating it from the background. The visual matting saliency computation proposed in this embodiment therefore considers the colour, brightness and region-completeness characteristics of the foreground target. Assuming the acquired image is in RGB format, a grey-scale map Igray is first computed from the r, g, b colour channels:

Igray = (r + g + b)/3 (2)

The original image may also be in YUV or any other format; the embodiment places no restriction on the camera's output image format, and the corresponding colour-to-grey formula is adjusted accordingly.
Igray is then low-pass filtered and down-sampled: the original grey image Igray serves as scale layer 0 of the pyramid; scale layer 1 is obtained by convolving layer 0 with a low-pass filter and then sampling by 1/2 in the x and y directions, and so on, each remaining layer having half the resolution of the layer above. The low-pass filter here may be a Gaussian, Laplacian or Gabor filter; the embodiment places no specific restriction on the form of the low-pass filter used to generate the scale pyramid.
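A minimal sketch of the scale-pyramid construction, using a 3-tap binomial kernel as a stand-in for whichever low-pass filter (Gaussian, Laplacian, Gabor) is chosen:

```python
import numpy as np

def lowpass(img):
    # separable [1,2,1]/4 binomial filter with edge replication;
    # a stand-in for the unspecified low-pass filter
    p = np.pad(img, 1, mode='edge')
    v = (p[:-2, :] + 2.0 * p[1:-1, :] + p[2:, :]) / 4.0      # vertical pass
    return (v[:, :-2] + 2.0 * v[:, 1:-1] + v[:, 2:]) / 4.0   # horizontal pass

def scale_pyramid(gray, levels=8):
    # layer 0 is the original; each further layer is low-pass filtered
    # and then sampled by 1/2 in the x and y directions
    pyr = [gray]
    for _ in range(levels):
        pyr.append(lowpass(pyr[-1])[::2, ::2])
    return pyr

pyr = scale_pyramid(np.ones((64, 64)), levels=3)
```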
Consistent with the characteristics of human vision, regions of very low brightness hardly attract the eye's attention, so threshold suppression is applied to the luminance component of the scale pyramid: in regions below 5% of the maximum brightness Igray_max, the luminance component is set to 0, which effectively suppresses dark, weak background interference. Points of low local brightness on object edges may also be removed in this way, but on the one hand those positions are retained in the region saliency map, and on the other hand the edge region will be assigned to the unknown region, so the completeness of the final matte is not affected.
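The threshold suppression above can be sketched as follows (the 5% fraction is the embodiment's value):

```python
import numpy as np

def suppress_dark(gray, frac=0.05):
    # zero out luminance below frac * Igray_max to remove dark,
    # weak background interference
    out = gray.copy()
    out[out < frac * out.max()] = 0.0
    return out

g = np.array([[0.01, 0.2], [0.5, 1.0]])
s = suppress_dark(g)
```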
After the scale pyramid is built, the brightness saliency map is computed. The specific method is: the image at a fine scale c serves as the centre of the visual field and the image at a coarse scale s as its surround; with centre scales c ∈ {2, 3, 4} and surround scales s = c + δ, δ ∈ {3, 4}, six centre-surround combinations are obtained: {2-5, 2-6, 3-6, 3-7, 4-7, 4-8}. The image at scale s is interpolated up to scale c, and according to

I(c, s) = |I(c) Θ I(s)| (3)

the interpolated image is subtracted from the image at scale c to obtain a luminance difference map, where I(σ) is the image pyramid, σ = 0, 1, ..., 8 indexes the scales, and Θ denotes the centre-surround difference operator. These feature maps express the brightness difference between a position and its local neighbourhood: the larger this difference, the higher the local brightness saliency and the more easily the position attracts the eye. After the six luminance difference maps are computed in this way, they are fused, discarding redundant features, to generate the final brightness saliency map. Because the orders of magnitude of the difference maps at different scales do not reflect saliency information, the maps at different scales cannot simply be added. In this embodiment the difference maps are normalized; the specific steps of the normalization function are:

(1) normalize all six difference maps to the interval [0, 1];
(2) compute the local variance of each difference map;
(3) take fusion weights positively correlated with local variance: the larger the local variance, the more information the difference map contains, and the larger its weight in the weighted combination.

Then, according to formula (4), the weighted combination of the luminance difference maps, using the across-scale addition operator, yields the brightness saliency map.
Next the colour saliency map is computed. According to Ewald Hering's opponent-colour model, in the centre of the human visual receptive field a neuron activated by R is suppressed by G, and one activated by B is suppressed by Y; the image is therefore converted from the three rgb channels to the four RGBY channels according to formulas (5)-(8):

R = r - (g + b)/2 (5)
G = g - (r + b)/2 (6)
B = b - (g + r)/2 (7)
Y = (r + g)/2 - |r - g|/2 - b (8)
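Equations (5)-(8) for a single pixel:

```python
def rgby(r, g, b):
    # broad-band opponent channels of equations (5)-(8)
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (g + r) / 2.0
    Y = (r + g) / 2.0 - abs(r - g) / 2.0 - b
    return R, G, B, Y

R, G, B, Y = rgby(1.0, 0.0, 0.0)   # a pure-red pixel activates only R
```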
Then, following the characteristics of human visual cells, the red-green and blue-yellow opponent colour pairs RG and BY are computed. The detailed process for RG is: at scale c and scale s respectively, subtract the R and G channels of the image pixel-by-pixel and take the absolute value; interpolate the result at scale s up to scale c; and finally take the pixel-wise difference with the earlier result at scale c. BY is obtained analogously. Repeating these subtractions yields six colour-difference saliency maps each for RG and BY. Then, by the weighted combination of the colour difference maps, the colour saliency map is obtained.
Next the region saliency map is computed. The foreground target is segmented into superpixels; the normalized colour histogram of each superpixel is computed, and the superpixels are clustered by colour histogram, dividing the image into several regions {r1, r2, ..., rk} with cluster centres cei, i = 1, ..., k. The region saliency map VAr of region ri can then be computed, where w(ri) is the weight of the area of region ri, computed from its pixel count PN(ri): the larger the region area, the larger the weight, i.e. regions of large area around region ri influence its saliency more than regions of small area. Dr(rj, ri) is the distance between the cluster centres of regions rj and ri, where cei(m) and cej(n) denote the m-th and n-th colour components of histograms cei and cej, and Dc(m, n) is the Euclidean distance between colours m and n in the Lab colour space.
This embodiment proposes a region-saliency extraction method combining superpixel segmentation and clustering; the implementation steps of Fig. 2 are described below.

Step A: Superpixel segmentation of the input image. Available superpixel segmentation methods include normalized cuts (NC), graph cuts (GS), quick shift (QS), simple linear iterative clustering (SLIC), etc. In view of speed and practicality, this embodiment selects the SLIC algorithm, whose specific steps are:
1) Let the total number of pixels in the image be N, and pre-divide the image into K*K superpixels: the whole image is first divided uniformly into K*K blocks, and the centre of each block is taken as an initial point. The pixel gradient is computed in the 3*3 neighbourhood of each initial point, and the point of minimum gradient becomes an initial centre Oi, i = 0, 1, ..., K*K-1, of the superpixel segmentation algorithm; each initial centre is assigned its own label.
2) Each pixel is expressed as a five-dimensional vector {l, a, b, x, y} of CIELAB colour and XY coordinates, and the distance Dis between each pixel and its nearest centres is computed, where dlab is the colour difference, dxy the positional difference, S the centre spacing and m a balance parameter. Each pixel is assigned the label of its nearest centre.
3) The centres of the pixels of each label are recomputed to update Oi, and the difference between the new and old centres is computed; if the difference is below a threshold the algorithm terminates, otherwise return to step 2).
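The distance formula of step 2) is not reproduced in this text; the sketch below uses the standard SLIC form (colour term plus spatial term scaled by the grid spacing S and compactness m) as an assumed stand-in:

```python
import math

def slic_distance(p, q, S, m=10.0):
    # p, q: (l, a, b, x, y) vectors; standard SLIC metric, assumed here
    # because the patent's exact formula is not reproduced in this text
    d_lab = math.dist(p[:3], q[:3])   # colour difference d_lab
    d_xy = math.dist(p[3:], q[3:])    # positional difference d_xy
    return math.sqrt(d_lab ** 2 + (d_xy / S) ** 2 * m ** 2)

d = slic_distance((0, 0, 0, 0, 0), (3, 0, 0, 0, 4), S=2.0, m=1.0)
```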
Step B: Compute the normalized colour histogram of each superpixel. Each dimension of the Lab colour space is divided into several bins, the probability of the superpixel's pixel colours falling into each bin is counted, and the resulting histogram is normalized.
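Step B as a sketch; the Lab value ranges and bin count below are illustrative assumptions:

```python
import numpy as np

def normalized_lab_hist(lab_pixels, bins=8):
    # joint histogram over the three Lab dimensions, then normalized;
    # the value ranges and bin count are assumed, not from the patent
    hist, _ = np.histogramdd(
        np.asarray(lab_pixels, float),
        bins=(bins, bins, bins),
        range=[(0, 100), (-128, 128), (-128, 128)])
    return hist.ravel() / hist.sum()

h = normalized_lab_hist([[50, 0, 0], [50, 0, 0], [90, 10, -10]])
```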
Step C: Cluster the superpixels, dividing the image into several connected regions. Any clustering method based on partitioning, models, hierarchy, grids or density may be used; this embodiment uses the density-based DBSCAN clustering algorithm. The specific steps are: choose one superpixel as a seed point, search for all superpixels density-reachable from it under the given thresholds Eps and MinPts, and judge whether the point is a core point. If it is a core point, it forms a cluster region with its density-reachable points; if it is neither a core point nor a boundary point, another point is reselected as seed and the above steps are repeated; if it is not a core point but is a boundary point, it is regarded as a noise point and discarded. The steps are repeated until all points have been visited, eventually forming several clustered regions. DBSCAN copes well with noise points and can partition clusters of arbitrary shape, and is therefore suitable for this embodiment.
Step D: Compute the region saliency. According to the superpixel clustering result and formulas (12)-(14), the region saliency of each region is computed.

Finally, according to the importance of the colour, region and brightness information, the three classes of saliency map are fused using VA = αgVAg + αcVAc + αrVAr (18) with αg + αc + αr = 1 (19), giving the final matting visual saliency map. In this embodiment, αc = 0.5, αr = 0.3 and αg = 0.2.
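The fusion of equations (18)-(19), with the embodiment's weights as defaults:

```python
def fuse_saliency(VAg, VAc, VAr, ag=0.2, ac=0.5, ar=0.3):
    # VA = ag*VAg + ac*VAc + ar*VAr, with ag + ac + ar = 1
    assert abs(ag + ac + ar - 1.0) < 1e-12
    return ag * VAg + ac * VAc + ar * VAr

va = fuse_saliency(1.0, 0.5, 0.0)   # scalar example; arrays work identically
```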
Step 2: According to the matting visual saliency map, separate the foreground and background regions using spatial-domain filtering and a threshold-segmentation algorithm, and combine morphological operations to obtain the trimap.

The matting visual saliency map is first smoothed by a spatial-domain filter to remove noise; a threshold Tva is then computed by a threshold-segmentation algorithm (such as Otsu's method). In the saliency map, pixels with saliency greater than Tva correspond to the foreground region and pixels below it to the background region, giving the coarse trimap Itc.

The spatial-domain filtering ensures that no saliency singularities occur in local regions while smoothing the saliency values; median, bilateral or Gaussian filtering may be selected. To ensure computational efficiency, this embodiment selects a Gaussian filter with a 3*3 window as the smoothing filter for the matting visual saliency map.

Because the matting visual saliency map jointly considers brightness, colour and region saliency, the saliency of the foreground region is guaranteed to be much larger than that of the background, so no strict demands are placed on the threshold-segmentation algorithm: the threshold Tva can take values over a fairly wide range without affecting the foreground segmentation result. In this embodiment we choose the Otsu thresholding algorithm, whose specific steps are:
(1) Let the grey levels of the matting visual saliency map VA be 0, 1, ..., L-1 and the total pixel count be N. Compute its grey-level histogram: if the number of pixels of grey level i in VA is Ni, i = 0, 1, ..., L-1, the histogram value at level i is Ni/N.
(2) Sweep the threshold Tva from 0 to L-1, dividing the pixels into two classes, below Tva and greater than or equal to Tva, and compute the between-class variance of the two classes:

g = ω0(μ0 - μ)² + ω1(μ1 - μ)² (20)

where ω0 and ω1 are the proportions of pixels below and greater than or equal to Tva, μ0 and μ1 are the mean values of the pixels below and greater than or equal to Tva, and μ is the global mean.
(3) After the sweep, the Tva at which the between-class variance g is maximal is taken as the final segmentation threshold.
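Otsu's sweep of steps (1)-(3) as a direct sketch:

```python
import numpy as np

def otsu_threshold(img, L=256):
    # grey-level histogram Ni/N, then sweep Tva maximizing the
    # between-class variance g of equation (20)
    hist, _ = np.histogram(img, bins=L, range=(0, L))
    p = hist / hist.sum()
    levels = np.arange(L)
    mu = float(np.dot(levels, p))            # global mean
    best_g, best_t = -1.0, 0
    for t in range(1, L):
        w0 = p[:t].sum()
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        mu0 = float(np.dot(levels[:t], p[:t])) / w0
        mu1 = (mu - w0 * mu0) / w1
        g = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t

img = np.array([10] * 50 + [200] * 50)       # a clearly bimodal "image"
t = otsu_threshold(img)
```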
The shape and size of the morphological operator should be chosen according to the image content, such as the image resolution and the size and shape of the foreground region; in this embodiment it defaults to a disc, to ensure uniformity in all directions. To avoid small holes and burrs after threshold segmentation, the following morphological operations are applied to Itc: first an opening, to connect partially discontinuous regions and remove holes; then an erosion of size re, giving the foreground region Fg of the trimap; then a dilation of size rd, giving the background part Bg of the trimap. The region between foreground and background is the unknown region, yielding the refined trimap It of the image I to be matted.

Assuming the grey value of the foreground region is 1 and that of the background region 0, the morphological kernel is convolved with the binary map: erosion shrinks the border of the white foreground region, while dilation shrinks the black background region and enlarges the foreground border; the band left between the final foreground and background is the unknown region.
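The trimap construction can be sketched with naive binary morphology; a 4-neighbour cross element is used here for brevity, rather than the disc of the embodiment:

```python
import numpy as np

def erode(mask, r):
    # naive erosion with a cross structuring element, applied r times
    for _ in range(r):
        p = np.pad(mask, 1, mode='edge')
        mask = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return mask

def dilate(mask, r):
    for _ in range(r):
        p = np.pad(mask, 1, mode='edge')
        mask = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                | p[1:-1, :-2] | p[1:-1, 2:])
    return mask

def trimap(fg_mask, re=1, rd=1):
    # sure foreground Fg by erosion, sure background Bg by dilation;
    # the band in between is the unknown region (0.5 here)
    tri = np.full(fg_mask.shape, 0.5)
    tri[erode(fg_mask, re)] = 1.0
    tri[~dilate(fg_mask, rd)] = 0.0
    return tri

mask = np.zeros((11, 11), dtype=bool)
mask[3:8, 3:8] = True
tri = trimap(mask)
```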
Step 3: According to the trimap, compute the gradient of each pixel in the unknown region, and sample foreground and background sample-point sets for the current unknown-region pixel according to gradient direction and saliency magnitude.

For each unknown-region pixel I(xi, yi), its gradient magnitude Grai is computed, with the gradient direction denoted θ.

In this embodiment, the reference foreground and background sample pairs are searched for on the straight line along the gradient direction. Around each sample pair, the size of the search region is determined by the matting visual saliency at the current pixel location: the larger the saliency, the smaller the search range, reflecting that around highly salient pixels the true foreground and background points lie closer to the pixel. Then, according to spatial distance and visual saliency, 5 foreground and 5 background sample pairs are searched out; the detailed process is:

1) set the initial search radius rs to 1 and the count count = 0;
2) on the circle of radius rs centred on the reference foreground/background sample point, test every pixel p for the condition |VA(p) - VA(p0)| < Tvap, where p0, the centre of the search region, is the reference foreground or background sample point and p is a point on the search circle; if the condition is met, increment count;
3) if count > 5 the search stops; otherwise increment rs and return to step 2).

This sampling policy on the one hand generates fewer sample pairs, reducing the complexity of the subsequent matting computation; on the other hand, sampling along the gradient direction ensures with high probability that the foreground and background points lie in different texture regions, while sampling by neighbourhood saliency ensures a degree of spatial and saliency similarity between sample points. The sampling method therefore guarantees with high probability that the true sample pair is included, improving matting accuracy.
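Gradient magnitude and direction per pixel can be sketched with central differences:

```python
import numpy as np

def gradient_field(gray):
    # central differences; theta = atan2(gy, gx) gives the direction
    # along which foreground/background samples are searched
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

ramp = np.tile(np.arange(5.0), (5, 1))   # brightness increasing left-to-right
mag, theta = gradient_field(ramp)
```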
Step 4: From the foreground and background sample-point sets of the current unknown-region pixel, compute the opacity and confidence of each sample pair, and take the pair with the highest confidence as the final optimal sample pair for matting. Then smooth the local region of the opacity to obtain the final opacity estimate.

Taking any point from the foreground sample set and any point from the background sample set, the opacity is estimated from the imaging linear model as

α̂ = ((I - Bⁿ)·(Fᵐ - Bⁿ)) / ‖Fᵐ - Bⁿ‖²

where Fᵐ and Bⁿ denote the colour values of the m-th foreground and n-th background sample points. For each unknown-region pixel, 25 different opacity estimates are thus obtained, from which the estimate with the highest confidence must be selected for extracting the foreground target.
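Selecting among the candidate pairs can be sketched as below; as a simplified stand-in for the full confidence function, candidates are ranked here by the linear-model residual (criterion 1) alone:

```python
import numpy as np

def alpha_hat(I, F, B):
    # least-squares alpha for I = a*F + (1-a)*B: projection of (I - B)
    # onto (F - B), clamped to [0, 1]
    d = F - B
    den = float(np.dot(d, d))
    return 0.5 if den < 1e-12 else float(np.clip(np.dot(I - B, d) / den, 0.0, 1.0))

def best_pair(I, fg_samples, bg_samples):
    # evaluate every foreground/background pair (5*5 = 25 in the text),
    # ranking by the residual ||I - (a*F + (1-a)*B)||
    best = None
    for F in fg_samples:
        for B in bg_samples:
            a = alpha_hat(I, F, B)
            res = float(np.linalg.norm(I - (a * F + (1.0 - a) * B)))
            if best is None or res < best[0]:
                best = (res, a)
    return best[1]

fg = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.1])]
bg = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.1, 0.9])]
I = 0.3 * fg[0] + 0.7 * bg[0]            # synthetic mixed pixel
a = best_pair(I, fg, bg)
```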
The requirement of optimal prospect background point pair is:1) there is minimum error for the linear model of formula (1);2) prospect
Sample point and background sample point have larger colour-difference;3) prospect or the color value of background sample point and current pixel relatively connect
Closely;4) space length of foreground and background sample point and current pixel is smaller.
According to criteria 1) and 2), the linear color-difference similarity is defined as

$$lc_i^{(m,n)} = -\sigma_1 \frac{\left| I_i - \alpha_i^{(m,n)} Fg_i^{(m)} - \left( 1 - \alpha_i^{(m,n)} \right) Bg_i^{(n)} \right|}{\left| Fg_i^{(m)} - Bg_i^{(n)} \right|}$$

According to criterion 3), the color similarity is defined as

$$co_i^{(m,n)} = \exp\left\{ -\frac{\sigma_2^2 \cdot \max_j \left| I_i - Fg_j^{(m)} \right| \cdot \max_j \left| I_i - Bg_j^{(n)} \right|}{\left| I_i - Fg_i^{(m)} \right| \cdot \left| I_i - Bg_i^{(n)} \right|} \right\}$$

According to criterion 4), the spatial-distance similarity is defined as

$$ds_i^{(m,n)} = \exp\left\{ -\frac{\sigma_3^2 \left| x_i - x_{Fg_i^{(m)}} \right| \cdot \left| x_i - x_{Bg_i^{(n)}} \right|}{D_i^{2}} \right\}$$

The confidence function is defined as

$$c_i^{(m,n)} = \exp\left\{ -\frac{lc_i^{(m,n)} \, co_i^{(m,n)} \, ds_i^{(m,n)}}{\sigma_4^{2}} \right\}$$
where $lc_i^{(m,n)}$, $co_i^{(m,n)}$ and $ds_i^{(m,n)}$ are the linear color-difference similarity, color similarity and spatial-distance similarity, respectively; $D_i$ is the foreground-background sampling radius of the unknown pixel, determined by its matting visual saliency (the larger the visual saliency, the smaller $D_i$); and σ1, σ2 and σ3 adjust the weights among the different similarities. The α with the highest confidence is chosen as the opacity estimate of the current unknown pixel, and the corresponding foreground and background samples become the final optimal foreground-background sample pair used for matting.
For each pixel of the unknown region, the opacity is estimated point by point as above. Some pixels, however, may have confidence values that are too low, so their estimated opacity has a large error and color defects appear in the final matte. The opacity of the unknown region therefore needs local smoothing. The factors to consider when smoothing are color difference, spatial-position difference and saliency difference: the larger the local color difference, the farther the spatial position, and the larger the saliency difference, the smaller the weight. To balance the influence of the spatial domain, the color domain and the saliency domain, the opacity smoothing method of the present invention is as follows:

$$\omega_{ij} = \exp\left\{ -\frac{\left| P_i - P_j \right|}{2\sigma_p^{2}} - \frac{\left| I_i - I_j \right|}{2\sigma_c^{2}} - \frac{\left| VA_i - VA_j \right|}{2\sigma_{va}^{2}} \right\}$$
where $P_i$ and $P_j$ denote the coordinates of points i and j, $I_i$ and $I_j$ their colors, and $VA_i$ and $VA_j$ their matting visual saliency; σp, σc and σva adjust the weights among the three. Because the opacity computed in this way fully accounts for the influence of spatial position, color and saliency, pixels that are spatially closer, more similar in color and closer in saliency obtain closer opacities, which is consistent with the subjective perception of the human eye. Singular points of the opacity are thereby effectively eliminated and the matting precision improved.
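The local smoothing with the ω weights can be sketched as below, assuming a grayscale image and hypothetical parameter names (`sp`, `sc`, `sva` for σp, σc, σva); only unknown-region pixels (where `mask` is True) are smoothed:

```python
import numpy as np

def smooth_alpha(alpha, img, va, mask, radius=2, sp=4.0, sc=0.1, sva=0.1):
    """Local weighted smoothing of alpha over unknown pixels: weights fall off
    with spatial distance, color difference and matting visual-saliency
    difference, following
    w_ij = exp(-|Pi-Pj|/2sp^2 - |Ii-Ij|/2sc^2 - |VAi-VAj|/2sva^2)."""
    h, w = alpha.shape
    out = alpha.copy()
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
        x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
        py, px = np.mgrid[y0:y1, x0:x1]
        dist = np.hypot(py - y, px - x)                    # |Pi - Pj|
        dcol = np.abs(img[y0:y1, x0:x1] - img[y, x])      # |Ii - Ij|
        dva = np.abs(va[y0:y1, x0:x1] - va[y, x])         # |VAi - VAj|
        wgt = np.exp(-dist / (2 * sp**2) - dcol / (2 * sc**2)
                     - dva / (2 * sva**2))
        out[y, x] = np.sum(wgt * alpha[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```

A uniform alpha map is left unchanged, while an isolated outlier is pulled toward its similar neighbours.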
According to the four criteria for the optimal foreground-background pair, its metric function is determined as shown in formula (30), which jointly considers four indices: the degree of fit to the linear model, the foreground-background color difference, the color differences between the target pixel and the foreground and background samples, and the spatial distance. The weights of the different similarities are adjusted by setting the values of σ1, σ2 and σ3. Then the opacity is smoothed according to formula (31) to obtain the final matting opacity. In this embodiment, the smoothing operation jointly considers color difference, spatial-position difference and saliency difference; by changing σp, σc and σva, the proportion of the three in the weight coefficient can be adjusted. For example, to emphasize saliency information, σva should be larger than σp and σc.
Step 5: According to the finally estimated opacity and the color values of the optimal sample pairs, perform the matting operation on the original image and extract the foreground target.
The specific operating procedure is: create a new image of the same size as the original as the background, and composite the computed opacity and foreground pixel values with this new background according to formula (1) to obtain the final matting result.
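The compositing step can be sketched as follows; using the original image directly as the foreground layer is a simplification of using the optimal pair's foreground color values:

```python
import numpy as np

def composite(src, alpha, new_bg):
    """Composite the extracted foreground onto a new background of the same
    size using formula (1): out = alpha*F + (1-alpha)*B."""
    a = alpha[..., None]  # broadcast alpha over the color channels
    return a * src + (1.0 - a) * new_bg
```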
In this embodiment, matting is performed on a natural image; the particular content of the foreground target and background is not limited, requiring only that the foreground and background have an obvious boundary distinguishable by the naked eye.
An embodiment of the invention provides an automatic matting algorithm: obtain the original image to be matted and calculate its matting visual saliency; then, according to the matting visual saliency map, separate the foreground and background regions using spatial-domain filtering and a threshold segmentation algorithm, and obtain a trimap with morphological operations; according to the trimap, perform a gradient calculation on each pixel of the unknown region and sample by gradient direction and saliency magnitude to obtain the foreground and background sample point sets of the current unknown-region pixel; according to these sets, calculate the opacity and confidence of each sample pair and take the pair with the highest confidence as the final optimal sample pair for matting; then smooth the local region of the opacity to obtain the final estimated opacity; finally, according to the estimated opacity and the color values of the optimal sample pairs, perform the matting operation on the original image and extract the foreground target. The matting visual-saliency calculation method proposed by the present invention simulates the visual attention mechanism of the human eye, so the foreground target can be extracted automatically, eliminating complex user interaction and completing the matting process automatically with simple, convenient operation; limiting the number of sample pairs shortens the matting time; and smoothing the saliency map and the opacity improves the matting precision.
Fig. 3 is a structural representation of an automatic matting device provided in an embodiment of the present invention. The device includes:
an image acquisition module, for acquiring the color values of a single image;
a matting visual-saliency computing module, for calculating the matting visual saliency of the image obtained by the image acquisition module;
a trimap computing module, for separating the foreground and background regions from the matting visual saliency map obtained by the matting visual-saliency computing module, using spatial-domain filtering and a threshold segmentation algorithm, and computing the trimap with morphological operations;
a sample point set acquisition module, for performing a gradient calculation on each pixel of the unknown region of the trimap obtained by the trimap computing module, and sampling by gradient direction and saliency magnitude to obtain the foreground and background sample point sets of the current unknown-region pixel;
an opacity computing module, for calculating the opacity and confidence of each sample pair from the foreground and background sample point sets obtained by the sample point set acquisition module, taking the pair with the highest confidence as the final optimal sample pair for matting, and then smoothing the local region of the opacity to obtain the final estimated opacity;
a foreground extraction module, for performing the matting operation on the original image according to the finally estimated opacity and the color values of the optimal sample pairs, and extracting the foreground target.
Specifically, the matting visual-saliency computing module includes:
a scale pyramid generation unit, for smoothing and down-sampling the acquired image to be matted to generate a scale pyramid;
a luminance saliency computing unit, for calculating the luminance saliency map from the scale pyramid obtained by the scale pyramid generation unit, taking the image at a fine scale as the central region of vision and the image at a coarse scale as the peripheral region;
a color saliency computing unit, for calculating the color saliency map from the scale pyramid obtained by the scale pyramid generation unit, taking the image at a fine scale as the central region of vision and the image at a coarse scale as the peripheral region;
a region saliency computing unit, for performing super-pixel segmentation of the foreground target on the image to be matted obtained by the image acquisition module, clustering the super-pixels by color histogram, and calculating the color saliency of each cluster region;
a saliency fusion unit, for fusing the luminance saliency map obtained by the luminance saliency computing unit, the color saliency map obtained by the color saliency computing unit and the region saliency map obtained by the region saliency computing unit into the matting visual saliency map of the image to be matted.
The trimap computing module includes:
a spatial-domain filtering unit, for choosing a suitable spatial-domain filtering method and smoothing the matting visual saliency map;
a threshold segmentation unit, for segmenting the smoothed matting visual saliency map obtained by the spatial-domain filtering unit with a threshold segmentation algorithm into foreground and background regions, yielding a rough trimap;
a morphology operations unit, for applying morphological operations to the rough trimap obtained by the threshold segmentation unit to fill holes, yielding the foreground, background and unknown regions, i.e. the accurate trimap.
The sample point set acquisition module includes:
a gradient calculation unit, for obtaining the gradient of each unknown pixel from the gray values of the image to be matted;
a sampling unit, for drawing a straight line along the gradient direction obtained by the gradient calculation unit, taking the first intersections of the line with the foreground and background regions as initial search points, and searching, from near to far within the neighborhood of each search point, for sample points whose saliency value differs from that of the unknown pixel by less than a threshold.
The opacity computing module includes:
a linear color-difference similarity computing unit, for taking out sample points from the sample point set obtained by the sample point set acquisition module and calculating their linear color-difference similarity;
a color similarity computing unit, for taking out sample points from the sample point set obtained by the sample point set acquisition module and calculating their color similarity;
a spatial-distance similarity computing unit, for taking out sample points from the sample point set obtained by the sample point set acquisition module and calculating their spatial-distance similarity;
a sample screening unit, for calculating, from the similarity values obtained by the linear color-difference similarity, color similarity and spatial-distance similarity computing units, the confidence of each sample pair relative to the current unknown pixel, and choosing the opacity with the highest confidence as the opacity estimate of the current pixel position;
a smoothing unit, for locally smoothing the opacity obtained by the sample screening unit.
An embodiment of the invention provides an automatic matting device: obtain the original image to be matted and calculate its matting visual saliency; then, according to the matting visual saliency map, separate the foreground and background regions using spatial-domain filtering and a threshold segmentation algorithm, and obtain a trimap with morphological operations; according to the trimap, perform a gradient calculation on each pixel of the unknown region and sample by gradient direction and saliency magnitude to obtain the foreground and background sample point sets of the current unknown-region pixel; according to these sets, calculate the opacity and confidence of each sample pair and take the pair with the highest confidence as the final optimal sample pair for matting; then smooth the local region of the opacity to obtain the final estimated opacity; finally, according to the estimated opacity and the color values of the optimal sample pairs, perform the matting operation on the original image and extract the foreground target. The matting visual-saliency calculation method proposed by the present invention simulates the visual attention mechanism of the human eye, so the foreground target can be extracted automatically, eliminating complex user interaction and completing the matting process automatically with simple, convenient operation; limiting the number of sample pairs shortens the matting time; and smoothing the saliency map and the opacity improves the matting precision.
An embodiment of the present invention additionally provides a computer program product for automatic matting.
Claims (10)
1. An automatic matting algorithm, characterized in that the method comprises the following steps:
Step 1: obtain the original image to be matted and calculate its matting visual saliency;
Step 2: according to the matting visual saliency map of Step 1, separate the foreground and background regions using spatial-domain filtering and a threshold segmentation algorithm, and obtain a trimap with morphological operations;
Step 3: perform a gradient calculation on each pixel of the unknown region of the trimap of Step 2, and sample by gradient direction and saliency magnitude to obtain the foreground and background sample point sets of the current unknown-region pixel;
Step 4: according to the foreground and background sample point sets of Step 3, calculate the opacity and confidence of each sample pair, take the pair with the highest confidence as the final optimal sample pair for matting, and then smooth the local region of the opacity to obtain the final estimated opacity;
Step 5: according to the finally estimated opacity of Step 4 and the color values of the optimal sample pairs, perform the matting operation on the original image and extract the foreground target.
2. The automatic matting algorithm according to claim 1, characterized in that the specific steps of calculating the matting visual saliency of the original image are:
Step (1): calculate the gray-scale map Igray and successively smooth and down-sample it to generate an n-layer scale pyramid;
Step (2): taking the image at fine scale c as the central region of vision and the image at coarse scale s as the peripheral region, first calculate the luminance difference feature maps:
I(c, s) = |I(c) Θ I(s)|
where I(σ) is the image pyramid, σ = 0, 1, …, 8 denotes the different scales, c ∈ {2, 3, 4} is the central scale, s = c + δ with δ ∈ {3, 4} is the surrounding scale, and Θ denotes the center-surround difference operator, finally giving 6 luminance difference feature maps; the luminance saliency map is the normalized weighted sum of the 6 luminance difference feature maps:
$$VA_g = \bigoplus_{c=2}^{4} \bigoplus_{s=c+3}^{c+4} N\left( I(c,s) \right)$$
where N(·) denotes the normalization function and ⊕ denotes the across-scale addition operator;
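The pyramid and center-surround difference can be sketched as below; the 2×2 box blur, nearest-neighbour upsampling and power-of-two image size are simplifying assumptions, not the patent's exact filters:

```python
import numpy as np

def gaussian_pyramid(gray, levels=9):
    """Smooth-and-downsample pyramid; a simplified stand-in for the patent's
    scale pyramid, assuming a power-of-two image size."""
    pyr = [gray]
    for _ in range(levels - 1):
        g = pyr[-1]
        # 2x2 box blur combined with factor-2 decimation as a minimal step
        g = (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2]) / 4.0
        pyr.append(g)
    return pyr

def center_surround(pyr, c, s):
    """|I(c) - I(s)|: upsample the coarse level s to the size of fine level c
    by nearest-neighbour repetition, then take the absolute difference."""
    fine, coarse = pyr[c], pyr[s]
    factor = 2 ** (s - c)
    up = np.kron(coarse, np.ones((factor, factor)))[:fine.shape[0], :fine.shape[1]]
    return np.abs(fine - up)
```

With c ∈ {2, 3, 4} and s = c + 3 or c + 4, this yields the 6 luminance difference maps that are normalized and summed into the luminance saliency map.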
Step (3): taking the image at fine scale c as the central region of vision and the image at coarse scale s as the peripheral region, first convert the image from the RGB channels into the four RGBY color channels, then calculate the red-green channel color-difference map RG and the blue-yellow channel color-difference map BY:
RG(c, s) = |R(c) − G(c)| Θ |G(s) − R(s)|
BY(c, s) = |B(c) − Y(c)| Θ |Y(s) − B(s)|
The color saliency map is the normalized weighted sum of the 12 color-difference feature maps:
$$VA_c = \bigoplus_{c=2}^{4} \bigoplus_{s=c+3}^{c+4} \left( N\left( RG(c,s) \right) + N\left( BY(c,s) \right) \right)$$
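The RGB-to-RGBY conversion is not spelled out in the claim; a common choice is the broadly tuned channels of Itti et al., sketched here under that assumption:

```python
import numpy as np

def rgby_channels(img):
    """Convert an RGB image (float, HxWx3) to four broadly tuned R, G, B, Y
    channels used for the red-green and blue-yellow opponency maps
    (the Itti-Koch definitions, assumed here)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b
    return R, G, B, Y
```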
Step (4): perform super-pixel segmentation of the foreground target, then compute the normalized color histogram of each super-pixel and cluster the super-pixels by color histogram, dividing the image into several regions {r1, r2, …, rk} with cluster centers ce_i, i = 1, …, k; the region saliency value VAr of region r_i is then calculated as follows:
$$VA_r(r_i) = \sum_{j \neq i} w(r_j)\, D_r(r_j, r_i)$$
where w(r_j) denotes the area weight of the region and Dr(r_j, r_i) denotes the distance between the cluster centers of regions r_j and r_i;
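The region saliency formula can be sketched directly, assuming `centers` holds the k cluster-center colors and `areas` the region-area weights w(r_j), with the Euclidean distance between cluster centers as Dr:

```python
import numpy as np

def region_saliency(centers, areas):
    """VAr(ri) = sum_{j != i} w(rj) * Dr(rj, ri): each region's saliency is
    the area-weighted color distance from every other region's cluster
    center. centers: (k, 3) cluster-center colors; areas: (k,) weights."""
    k = len(centers)
    va = np.zeros(k)
    for i in range(k):
        for j in range(k):
            if j != i:
                va[i] += areas[j] * np.linalg.norm(centers[j] - centers[i])
    return va
```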
Step (5): synthesize the luminance saliency, color saliency and region saliency:
VA = αg VAg + αc VAc + αr VAr
αg + αc + αr = 1
where αg, αc and αr are the weight coefficients of the different saliencies.
3. The automatic matting algorithm according to claim 1, characterized in that the specific steps of the method for obtaining the trimap are:
First smooth the matting visual saliency map with a spatial-domain filter to remove noise, then compute the threshold Tva with a threshold segmentation algorithm to obtain the rough trimap Itc; then apply the following morphological operations to Itc: first an opening operation, to connect partially disconnected regions and remove holes; then an erosion of size re, giving the foreground region Fg of the trimap; and a dilation of size rd, giving the background part Bg of the trimap; the region between foreground and background is the unknown region, yielding the refined trimap It of the image I to be matted.
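The erosion/dilation construction can be sketched as follows, using square structuring elements and wrap-around `np.roll` shifts as pure-NumPy stand-ins for the morphological operations; `make_trimap` and the 1/0.5/0 coding are assumptions:

```python
import numpy as np

def make_trimap(fg_mask, re=3, rd=3):
    """Build a trimap from a binary foreground mask: erode by re to get the
    sure foreground, dilate by rd to bound the sure background; everything
    in between is the unknown region."""
    def erode(m, r):
        out = m.copy()
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out &= np.roll(np.roll(m, dy, 0), dx, 1)
        return out
    def dilate(m, r):
        out = m.copy()
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out |= np.roll(np.roll(m, dy, 0), dx, 1)
        return out
    sure_fg = erode(fg_mask, re)
    sure_bg = ~dilate(fg_mask, rd)
    trimap = np.full(fg_mask.shape, 0.5)   # unknown region
    trimap[sure_fg] = 1.0                  # foreground Fg
    trimap[sure_bg] = 0.0                  # background Bg
    return trimap
```

The wrap-around shifts are only safe when the foreground does not touch the image border; a production version would pad the mask or use a morphology library.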
4. The automatic matting algorithm according to claim 1, characterized in that the sample point set acquisition method specifically includes:
For each pixel I(xi, yi) of the unknown region, calculate its gradient magnitude Grai; the gradient direction is denoted θ, and the calculation formula of θ is
$$\theta = \arctan \frac{I(x_i, y_i - 1) - I(x_i, y_i + 1)}{I(x_i - 1, y_i) - I(x_i + 1, y_i)}$$
Draw a straight line along the θ direction and obtain the first intersections of the line with the foreground region and the background region respectively, as initial search centers. In the neighborhood of each intersection point, search from near to far for 5 points whose saliency value differs from that of the unknown pixel by less than the threshold Tvap, finally generating 5*5 = 25 sample pairs in total.
5. The automatic matting algorithm according to claim 1, characterized in that the opacity calculation method specifically includes:
Take any point from the foreground sample point set and any point from the background sample point set; according to the linear imaging model, the opacity is estimated as
$$\alpha_i^{(m,n)} = \frac{\left( I_i - Bg_i^{(n)} \right)^{T} \left( Fg_i^{(m)} - Bg_i^{(n)} \right)}{\left| Fg_i^{(m)} - Bg_i^{(n)} \right|^{2}}$$
where $Fg_i^{(m)}$ and $Bg_i^{(n)}$ denote the color values of the m-th foreground sample point and the n-th background sample point, respectively; for each unknown-region pixel i, 25 different opacity estimates are thus obtained, from which the estimate with the highest confidence must be selected to extract the foreground target.
The linear color-difference similarity is defined as
$$lc_i^{(m,n)} = -\sigma_1 \frac{\left| I_i - \alpha_i^{(m,n)} Fg_i^{(m)} - \left( 1 - \alpha_i^{(m,n)} \right) Bg_i^{(n)} \right|}{\left| Fg_i^{(m)} - Bg_i^{(n)} \right|}$$
The color similarity is defined as
$$co_i^{(m,n)} = \exp\left\{ -\frac{\sigma_2^2 \cdot \max_j \left| I_i - Fg_j^{(m)} \right| \cdot \max_j \left| I_i - Bg_j^{(n)} \right|}{\left| I_i - Fg_i^{(m)} \right| \cdot \left| I_i - Bg_i^{(n)} \right|} \right\}$$
The spatial-distance similarity is defined as
$$ds_i^{(m,n)} = \exp\left\{ -\frac{\sigma_3^2 \left| x_i - x_{Fg_i^{(m)}} \right| \cdot \left| x_i - x_{Bg_i^{(n)}} \right|}{D_i^{2}} \right\}$$
The confidence function is defined as
$$c_i^{(m,n)} = \exp\left\{ -\frac{lc_i^{(m,n)} \, co_i^{(m,n)} \, ds_i^{(m,n)}}{\sigma_4^{2}} \right\}$$
where $lc_i^{(m,n)}$, $co_i^{(m,n)}$ and $ds_i^{(m,n)}$ are the linear color-difference similarity, color similarity and spatial-distance similarity, respectively; $D_i$ is the foreground-background sampling radius of unknown pixel i, determined by its matting visual saliency (the larger the visual saliency, the smaller $D_i$); and σ1, σ2 and σ3 adjust the weights among the different similarities; the α with the highest confidence is chosen as the opacity estimate of the current unknown pixel, and the corresponding foreground and background samples become the final foreground-background sample pair used for matting;
Finally, the opacity is smoothed:
$$\omega_{ij} = \exp\left\{ -\frac{\left| P_i - P_j \right|}{2\sigma_p^{2}} - \frac{\left| I_i - I_j \right|}{2\sigma_c^{2}} - \frac{\left| VA_i - VA_j \right|}{2\sigma_{va}^{2}} \right\}$$
where $P_i$ and $P_j$ denote the coordinates of points i and j, $I_i$ and $I_j$ their colors, and $VA_i$ and $VA_j$ their matting visual saliency; σp, σc and σva adjust the weights among the three.
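The confidence of a single pair can be sketched by combining the three similarity terms above; the max_j factors of the color-similarity term are dropped here for brevity, and all names (`pair_confidence`, `s1`..`s4`) are hypothetical:

```python
import numpy as np

def pair_confidence(I, F, B, alpha, xF, xB, xI, Di,
                    s1=1.0, s2=1.0, s3=1.0, s4=1.0):
    """Confidence of one (F, B) sample pair for unknown pixel color I:
    combine the linear color-difference term lc, a simplified color
    similarity co and the spatial term ds, then c = exp(-lc*co*ds / s4^2)."""
    eps = 1e-8
    # lc: residual of the linear model, normalized by the F-B color difference
    resid = np.linalg.norm(I - alpha * F - (1 - alpha) * B)
    lc = -s1 * resid / (np.linalg.norm(F - B) + eps)
    # co: simplified color similarity (max_j factors dropped)
    co = np.exp(-s2**2 / (np.linalg.norm(I - F) * np.linalg.norm(I - B) + eps))
    # ds: spatial-distance similarity with sampling radius Di
    ds = np.exp(-s3**2 * np.linalg.norm(xI - xF) * np.linalg.norm(xI - xB)
                / (Di**2 + eps))
    return float(np.exp(-lc * co * ds / s4**2))
```

A pair that reproduces the pixel color exactly under the linear model (zero residual, so lc = 0) obtains the maximum confidence of 1; the estimate with the highest confidence over all 25 pairs is kept.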
6. An automatic matting device, characterized in that the device includes:
an image acquisition module, for acquiring the color values of a single image;
a matting visual-saliency computing module, for calculating the matting visual saliency of the image from the color values obtained by the image acquisition module;
a trimap computing module, for separating the foreground and background regions from the matting visual saliency map obtained by the matting visual-saliency computing module, using spatial-domain filtering and a threshold segmentation algorithm, and computing the trimap with morphological operations;
a sample point set acquisition module, for performing a gradient calculation on each pixel of the unknown region of the trimap obtained by the trimap computing module, and sampling by gradient direction and saliency magnitude to obtain the foreground and background sample point sets of the current unknown-region pixel;
an opacity computing module, for calculating the opacity and confidence of each sample pair from the foreground and background sample point sets obtained by the sample point set acquisition module, taking the pair with the highest confidence as the final optimal sample pair for matting, and then smoothing the local region of the opacity to obtain the final estimated opacity;
a foreground extraction module, for performing the matting operation on the original image according to the finally estimated opacity and the color values of the optimal sample pairs, and extracting the foreground target.
7. The automatic matting device according to claim 6, characterized in that the matting visual-saliency computing module includes:
a scale pyramid generation unit, for smoothing and down-sampling the acquired image to be matted to generate a scale pyramid;
a luminance saliency computing unit, for calculating the luminance saliency map from the scale pyramid obtained by the scale pyramid generation unit, taking the image at a fine scale as the central region of vision and the image at a coarse scale as the peripheral region;
a color saliency computing unit, for calculating the color saliency map from the scale pyramid obtained by the scale pyramid generation unit, taking the image at a fine scale as the central region of vision and the image at a coarse scale as the peripheral region;
a region saliency computing unit, for performing super-pixel segmentation of the foreground target on the image to be matted obtained by the image acquisition module, clustering the super-pixels by color histogram, and calculating the color saliency of each cluster region;
a saliency fusion unit, for fusing the luminance saliency map obtained by the luminance saliency computing unit, the color saliency map obtained by the color saliency computing unit and the region saliency map obtained by the region saliency computing unit into the matting visual saliency map of the image to be matted.
8. The automatic matting device according to claim 6, characterized in that the trimap computing module includes:
a spatial-domain filtering unit, for choosing a suitable spatial-domain filtering method and smoothing the matting visual saliency map;
a threshold segmentation unit, for segmenting the smoothed matting visual saliency map obtained by the spatial-domain filtering unit with a threshold segmentation algorithm into foreground and background regions, yielding a rough trimap;
a morphology operations unit, for applying morphological operations to the rough trimap obtained by the threshold segmentation unit to fill holes, yielding the foreground, background and unknown regions, i.e. the accurate trimap.
9. The automatic matting device according to claim 6, characterized in that the sample point set acquisition module comprises:
a gradient computing unit, configured to obtain the gradient of each unknown pixel from the gray values of the image to be matted;
a sampling unit, configured to draw a straight line along the gradient direction obtained by the gradient computing unit, take the first intersection points of the line with the foreground region and with the background region as initial search points, and, within the neighborhood of each search point, search from near to far for sample points whose saliency difference from the unknown pixel is less than a threshold.
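The sampling unit's ray-shooting step might look like the sketch below, assuming a label image with 1 = foreground, 0 = background, 2 = unknown. The central-difference gradient, the one-pixel step and the `max_steps` cap are illustrative choices; the subsequent near-to-far, saliency-screened neighborhood search from the claim is omitted for brevity.

```python
import numpy as np

def first_hit(labels, y, x, dy, dx, target, max_steps=200):
    """Walk from (y, x) in direction (dy, dx); return the first pixel whose
    trimap label equals target (1 = foreground, 0 = background), else None."""
    h, w = labels.shape
    fy, fx = float(y), float(x)
    for _ in range(max_steps):
        fy += dy
        fx += dx
        iy, ix = int(round(fy)), int(round(fx))
        if not (0 <= iy < h and 0 <= ix < w):
            return None
        if labels[iy, ix] == target:
            return (iy, ix)
    return None

def sample_pair(gray, labels, y, x):
    """Shoot a ray along the gray-level gradient at an unknown pixel and take
    its first intersections with the foreground and background regions as the
    initial search points."""
    h, w = gray.shape
    gy = gray[min(y + 1, h - 1), x] - gray[max(y - 1, 0), x]
    gx = gray[y, min(x + 1, w - 1)] - gray[y, max(x - 1, 0)]
    n = np.hypot(gy, gx) or 1.0   # avoid dividing by zero in flat regions
    dy, dx = gy / n, gx / n
    fg = first_hit(labels, y, x, dy, dx, 1) or first_hit(labels, y, x, -dy, -dx, 1)
    bg = first_hit(labels, y, x, -dy, -dx, 0) or first_hit(labels, y, x, dy, dx, 0)
    return fg, bg
```

Searching along the gradient tends to cross the foreground/background boundary by the shortest route, which is why the claim uses it to seed the sample search.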
10. The automatic matting device according to claim 6, characterized in that the opacity computing module comprises:
a linear color-difference similarity computing unit, configured to take sample points from the sample point set obtained by the sample point set acquisition module and compute their linear color-difference similarity;
a color similarity computing unit, configured to take sample points from the sample point set obtained by the sample point set acquisition module and compute their color similarity;
a spatial distance similarity computing unit, configured to take sample points from the sample point set obtained by the sample point set acquisition module and compute their spatial distance similarity;
a sample screening unit, configured to compute, from the similarity values obtained by the linear color-difference similarity computing unit, the color similarity computing unit and the spatial distance similarity computing unit, the confidence of each sample pair with respect to the current unknown pixel, and to select the opacity with the highest confidence as the estimate of the opacity at the current pixel location;
a smoothing unit, configured to apply local smoothing to the opacity obtained by the sample screening unit, the factors considered in smoothing including color difference, spatial location difference and saliency difference.
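The confidence-based screening of this claim can be sketched as below: the opacity comes from projecting the unknown pixel onto the F-B color line, and each sample pair's confidence multiplies a linear color-difference term (fit to that line), a color-similarity term and a spatial-distance term. The exponential weighting and the `sigma` scale are assumptions, not the patent's formulas.

```python
import numpy as np

def estimate_alpha(pixel, fg, bg):
    """Opacity estimate: project the pixel color onto the F-B color line."""
    fb = fg - bg
    denom = fb @ fb
    if denom < 1e-10:  # degenerate pair: F and B nearly identical
        return 1.0 if np.linalg.norm(pixel - fg) < np.linalg.norm(pixel - bg) else 0.0
    return float(np.clip((pixel - bg) @ fb / denom, 0.0, 1.0))

def pair_confidence(pixel, pos, fg, fg_pos, bg, bg_pos, sigma=0.1):
    """Score one (F, B) sample pair for the unknown pixel at position `pos`."""
    a = estimate_alpha(pixel, fg, bg)
    fit = np.linalg.norm(pixel - (a * fg + (1.0 - a) * bg))            # linear color-difference
    col = np.linalg.norm(pixel - fg) + np.linalg.norm(pixel - bg)      # color similarity
    spa = np.linalg.norm(pos - fg_pos) + np.linalg.norm(pos - bg_pos)  # spatial distance
    return a, np.exp(-fit / sigma) * np.exp(-col) * np.exp(-spa / 100.0)

def best_alpha(pixel, pos, pairs):
    """Select the opacity from the highest-confidence sample pair."""
    scored = [pair_confidence(pixel, pos, f, fp, b, bp) for f, fp, b, bp in pairs]
    return max(scored, key=lambda s: s[1])[0]
```

A final pass would then smooth the resulting alpha map locally, weighting neighbors by the color, spatial and saliency differences the smoothing unit prescribes.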
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710638979.9A CN107452010B (en) | 2017-07-31 | 2017-07-31 | Automatic cutout algorithm and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107452010A true CN107452010A (en) | 2017-12-08 |
CN107452010B CN107452010B (en) | 2021-01-05 |
Family
ID=60490577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710638979.9A Active CN107452010B (en) | 2017-07-31 | 2017-07-31 | Automatic cutout algorithm and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107452010B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101945223A (en) * | 2010-09-06 | 2011-01-12 | 浙江大学 | Video consistent fusion processing method |
CN102651135A (en) * | 2012-04-10 | 2012-08-29 | 电子科技大学 | Optimized direction sampling-based natural image matting method |
CN104036517A (en) * | 2014-07-01 | 2014-09-10 | 成都品果科技有限公司 | Image matting method based on gradient sampling |
US9569855B2 (en) * | 2015-06-15 | 2017-02-14 | Electronics And Telecommunications Research Institute | Apparatus and method for extracting object of interest from image using image matting based on global contrast |
Non-Patent Citations (3)
Title |
---|
Sun Wei: "Research on Natural Image Matting Algorithms Guided by Visual Perception Characteristics", China Doctoral Dissertations Full-text Database, Information Science and Technology Series * |
Luo Jiao: "Research and Implementation of an Automatic Matting Technique", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Hao Kai: "Research on Interactive Matting Techniques for Images and Image Sequences", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108134937A (en) * | 2017-12-21 | 2018-06-08 | 西北工业大学 | Compressed domain significance detection method based on HEVC |
CN108134937B (en) * | 2017-12-21 | 2021-07-13 | 西北工业大学 | Compressed domain significance detection method based on HEVC |
CN108320294A (en) * | 2018-01-29 | 2018-07-24 | 袁非牛 | Intelligent full-automatic portrait background replacement method for second-generation identity card photos |
CN108320294B (en) * | 2018-01-29 | 2021-11-05 | 袁非牛 | Intelligent full-automatic portrait background replacement method for second-generation identity card photos |
CN108596913A (en) * | 2018-03-28 | 2018-09-28 | 众安信息技术服务有限公司 | Image matting method and device |
CN108460383A (en) * | 2018-04-11 | 2018-08-28 | 四川大学 | Image significance refinement method based on neural network and image segmentation |
CN108460383B (en) * | 2018-04-11 | 2021-10-01 | 四川大学 | Image significance refinement method based on neural network and image segmentation |
CN109493363A (en) * | 2018-09-11 | 2019-03-19 | 北京达佳互联信息技术有限公司 | Image matting method and apparatus based on geodesic distance, and image processing device |
CN109493363B (en) * | 2018-09-11 | 2019-09-27 | 北京达佳互联信息技术有限公司 | Image matting method and apparatus based on geodesic distance, and image processing device |
CN109785329A (en) * | 2018-10-29 | 2019-05-21 | 重庆师范大学 | Based on the purple soil image segmentation extracting method for improving SLIC algorithm |
CN109785329B (en) * | 2018-10-29 | 2023-05-26 | 重庆师范大学 | Purple soil image segmentation and extraction method based on improved SLIC algorithm |
CN109461158A (en) * | 2018-11-19 | 2019-03-12 | 第四范式(北京)技术有限公司 | Color image segmentation method and system |
CN111383232B (en) * | 2018-12-29 | 2024-01-23 | Tcl科技集团股份有限公司 | Matting method, matting device, terminal equipment and computer readable storage medium |
CN111383232A (en) * | 2018-12-29 | 2020-07-07 | Tcl集团股份有限公司 | Matting method, matting device, terminal equipment and computer-readable storage medium |
CN111435282A (en) * | 2019-01-14 | 2020-07-21 | 阿里巴巴集团控股有限公司 | Image processing method and device and electronic equipment |
CN109540925A (en) * | 2019-01-23 | 2019-03-29 | 南昌航空大学 | Complex ceramic tile surface defect detection method based on difference method and local variance measurement operator |
CN109540925B (en) * | 2019-01-23 | 2021-09-03 | 南昌航空大学 | Complex ceramic tile surface defect detection method based on difference method and local variance measurement operator |
CN110111342B (en) * | 2019-04-30 | 2021-06-29 | 贵州民族大学 | Optimized selection method and device for matting algorithm |
CN110111342A (en) * | 2019-04-30 | 2019-08-09 | 贵州民族大学 | Optimized selection method and device for matting algorithm |
CN110288617A (en) * | 2019-07-04 | 2019-09-27 | 大连理工大学 | Automatic human body slice image segmentation method based on shared matting and ROI gradual change |
CN110288617B (en) * | 2019-07-04 | 2023-02-03 | 大连理工大学 | Automatic human body slice image segmentation method based on shared matting and ROI gradual change |
CN110298861A (en) * | 2019-07-04 | 2019-10-01 | 大连理工大学 | Fast three-dimensional image segmentation method based on shared sampling |
CN110415273A (en) * | 2019-07-29 | 2019-11-05 | 肇庆学院 | Efficient robot motion tracking method and system based on visual saliency |
CN110400323A (en) * | 2019-07-30 | 2019-11-01 | 上海艾麒信息科技有限公司 | Automatic image matting system, method and device |
CN110503704A (en) * | 2019-08-27 | 2019-11-26 | 北京迈格威科技有限公司 | Trimap construction method and device, and electronic equipment |
CN110503704B (en) * | 2019-08-27 | 2023-07-21 | 北京迈格威科技有限公司 | Method and device for constructing three-dimensional graph and electronic equipment |
CN110751654A (en) * | 2019-08-30 | 2020-02-04 | 稿定(厦门)科技有限公司 | Image matting method, medium, equipment and device |
CN110751655A (en) * | 2019-09-16 | 2020-02-04 | 南京工程学院 | Automatic cutout method based on semantic segmentation and significance analysis |
CN110751655B (en) * | 2019-09-16 | 2021-04-20 | 南京工程学院 | Automatic cutout method based on semantic segmentation and significance analysis |
CN111784726A (en) * | 2019-09-25 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Image matting method and device |
CN110956681B (en) * | 2019-11-08 | 2023-06-30 | 浙江工业大学 | Portrait background automatic replacement method combining convolution network and neighborhood similarity |
CN110956681A (en) * | 2019-11-08 | 2020-04-03 | 浙江工业大学 | Portrait background automatic replacement method combining convolutional network and neighborhood similarity |
CN111028259B (en) * | 2019-11-15 | 2023-04-28 | 广州市五宫格信息科技有限责任公司 | Foreground extraction method adapted through image saliency improvement |
CN111028259A (en) * | 2019-11-15 | 2020-04-17 | 广州市五宫格信息科技有限责任公司 | Foreground extraction method for improving adaptability through image saliency |
CN113052755A (en) * | 2019-12-27 | 2021-06-29 | 杭州深绘智能科技有限公司 | High-resolution image intelligent matting method based on deep learning |
CN111161286B (en) * | 2020-01-02 | 2023-06-20 | 大连理工大学 | Interactive natural image matting method |
CN111161286A (en) * | 2020-01-02 | 2020-05-15 | 大连理工大学 | Interactive natural image matting method |
CN111462027B (en) * | 2020-03-12 | 2023-04-18 | 中国地质大学(武汉) | Multi-focus image fusion method based on multi-scale gradient and matting |
CN111462027A (en) * | 2020-03-12 | 2020-07-28 | 中国地质大学(武汉) | Multi-focus image fusion method based on multi-scale gradient and matting |
CN111563908A (en) * | 2020-05-08 | 2020-08-21 | 展讯通信(上海)有限公司 | Image processing method and related device |
CN111862110A (en) * | 2020-06-30 | 2020-10-30 | 辽宁向日葵教育科技有限公司 | Green curtain image matting method, system, equipment and readable storage medium |
CN111932447A (en) * | 2020-08-04 | 2020-11-13 | 中国建设银行股份有限公司 | Picture processing method, device, equipment and storage medium |
CN111932447B (en) * | 2020-08-04 | 2024-03-22 | 中国建设银行股份有限公司 | Picture processing method, device, equipment and storage medium |
CN111931688A (en) * | 2020-08-27 | 2020-11-13 | 珠海大横琴科技发展有限公司 | Ship recognition method and device, computer equipment and storage medium |
CN112183248A (en) * | 2020-09-14 | 2021-01-05 | 北京大学深圳研究生院 | Video salient object detection method based on channel-by-channel space-time characterization learning |
CN112149592A (en) * | 2020-09-28 | 2020-12-29 | 上海万面智能科技有限公司 | Image processing method and device and computer equipment |
CN112200826A (en) * | 2020-10-15 | 2021-01-08 | 北京科技大学 | Industrial weak defect segmentation method |
CN112200826B (en) * | 2020-10-15 | 2023-11-28 | 北京科技大学 | Industrial weak defect segmentation method |
CN112101370A (en) * | 2020-11-11 | 2020-12-18 | 广州卓腾科技有限公司 | Automatic pure-color background image matting algorithm, computer-readable storage medium and equipment |
CN112101370B (en) * | 2020-11-11 | 2021-08-24 | 广州卓腾科技有限公司 | Automatic image matting method for pure-color background image, computer-readable storage medium and equipment |
CN112634312A (en) * | 2020-12-31 | 2021-04-09 | 上海商汤智能科技有限公司 | Image background processing method and device, electronic equipment and storage medium |
CN112634312B (en) * | 2020-12-31 | 2023-02-24 | 上海商汤智能科技有限公司 | Image background processing method and device, electronic equipment and storage medium |
CN112634314A (en) * | 2021-01-19 | 2021-04-09 | 深圳市英威诺科技有限公司 | Target image acquisition method and device, electronic equipment and storage medium |
CN112801896A (en) * | 2021-01-19 | 2021-05-14 | 西安理工大学 | Backlight image enhancement method based on foreground extraction |
CN112801896B (en) * | 2021-01-19 | 2024-02-09 | 西安理工大学 | Backlight image enhancement method based on foreground extraction |
CN113271394A (en) * | 2021-04-07 | 2021-08-17 | 福建大娱号信息科技股份有限公司 | AI intelligent image matting method and terminal without blue-green natural background |
CN113487630A (en) * | 2021-07-14 | 2021-10-08 | 辽宁向日葵教育科技有限公司 | Image matting method based on material analysis technology |
CN114078139A (en) * | 2021-11-25 | 2022-02-22 | 四川长虹电器股份有限公司 | Image post-processing method based on portrait segmentation model generation result |
CN114078139B (en) * | 2021-11-25 | 2024-04-16 | 四川长虹电器股份有限公司 | Image post-processing method based on human image segmentation model generation result |
CN114677394A (en) * | 2022-05-27 | 2022-06-28 | 珠海视熙科技有限公司 | Matting method, matting device, image pickup apparatus, conference system, electronic apparatus, and medium |
CN114677394B (en) * | 2022-05-27 | 2022-09-30 | 珠海视熙科技有限公司 | Matting method, matting device, image pickup apparatus, conference system, electronic apparatus, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN107452010B (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107452010A (en) | Automatic matting algorithm and device | |
CN109919869B (en) | Image enhancement method and device and storage medium | |
CN107833220B (en) | Fabric defect detection method based on deep convolutional neural network and visual saliency | |
CN108537239B (en) | Method for detecting image saliency target | |
CN107862698B (en) | Light field foreground segmentation method and device based on K mean cluster | |
US20180374199A1 (en) | Sky Editing Based On Image Composition | |
CN107516319B (en) | High-precision simple interactive matting method, storage device and terminal | |
CN104134234B (en) | Fully automatic three-dimensional scene construction method based on a single image | |
CN111640125B (en) | Aerial photography graph building detection and segmentation method and device based on Mask R-CNN | |
CN110853026B (en) | Remote sensing image change detection method integrating deep learning and region segmentation | |
Shen et al. | Depth-aware image seam carving | |
CN108492343A (en) | Image synthesis method for expanding target recognition training data | |
CN103839223A (en) | Image processing method and image processing device | |
CN108230338A (en) | Stereo image segmentation method based on convolutional neural networks | |
CN110381268B (en) | Method, device, storage medium and electronic equipment for generating video | |
CN103914699A (en) | Automatic lip gloss image enhancement method based on color space | |
CN110634147A (en) | Image matting method based on bilateral guided upsampling | |
CN108596923A (en) | Acquisition methods, device and the electronic equipment of three-dimensional data | |
CN106991686A (en) | Level set contour tracking method based on superpixel optical flow field | |
CN116583878A (en) | Method and system for personalizing 3D head model deformation | |
CN116997933A (en) | Method and system for constructing facial position map | |
CN108596992B (en) | Rapid real-time lip gloss makeup method | |
CN109741358B (en) | Superpixel segmentation method based on adaptive hypergraph learning | |
CN117157673A (en) | Method and system for forming personalized 3D head and face models | |
CN111832508B (en) | DIE _ GA-based low-illumination target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2022-09-21
Address after: No. 333, Feiyue East Road, High-tech Industrial Development Zone, Changchun City, Jilin Province, 130012
Patentee after: Changchun Changguang Qiheng Sensing Technology Co., Ltd.
Address before: 130033, No. 3888 Southeast Lake Road, Changchun, Jilin
Patentee before: CHANGCHUN INSTITUTE OF OPTICS, FINE MECHANICS AND PHYSICS, CHINESE ACADEMY OF SCIENCES