CN107452010B - Automatic cutout algorithm and device - Google Patents

Automatic cutout algorithm and device

Info

Publication number
CN107452010B
CN107452010B
Authority
CN
China
Prior art keywords
image
foreground
region
matting
color
Prior art date
Legal status
Active
Application number
CN201710638979.9A
Other languages
Chinese (zh)
Other versions
CN107452010A (en)
Inventor
王灿进
孙涛
王挺峰
王锐
陈飞
田玉珍
Current Assignee
Changchun Changguang Qiheng Sensing Technology Co ltd
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date
Filing date
Publication date
Application filed by Changchun Institute of Optics, Fine Mechanics and Physics of CAS
Priority to CN201710638979.9A
Publication of CN107452010A
Application granted
Publication of CN107452010B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G06T 7/90 - Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Cosmetics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An automatic matting method and device relate to the field of digital image processing and comprise the following steps: acquiring an original image to be matted and calculating its matting visual saliency; separating the foreground and background regions with a spatial-domain filtering and threshold segmentation algorithm, and obtaining a trimap by combining morphological operations; performing a gradient calculation on each pixel of the unknown region, and sampling according to the gradient direction and the saliency to obtain the foreground and background sample point sets of the current unknown-region pixel; calculating the opacity and confidence of each sample pair, taking the pair with the highest confidence as the optimal sample pair for final matting, and smoothing the opacity over a local region to obtain the finally estimated opacity; and finally, according to the finally estimated opacity and the colors of the optimal sample pair, performing the matting operation in the original image to extract the foreground object. The invention also discloses an automatic image matting device. The embodiments of the invention require no user interaction, are simple and convenient to use, and achieve high matting precision and success rate.

Description

Automatic cutout algorithm and device
Technical Field
The invention relates to the field of digital image processing, in particular to an automatic cutout algorithm and an automatic cutout device.
Background
In real life, a target of interest is often extracted from a background image and used as independent material, or composited with a new background image, so as to obtain a complete and realistic background-replacement effect. Owing to its good application prospects and commercial value, digital image matting has become a hot topic in computer vision research in recent years.
The digital matting algorithm models each pixel in a natural image as a linear model of foreground and background colors, namely:
I=αF+(1-α)B (1)
where I denotes the color value in the actual image, F denotes the foreground color value, B denotes the background color value, and α, called the foreground opacity, takes values in the range [0,1]: the opacity α of the foreground region is 1, the opacity α of the background region is 0, and α lies in (0,1) in the unknown region, that is, the edge region of the foreground object. The process of finding the foreground F, the background B and the opacity α when only the actual image I is known is called matting. I, F and B are three-dimensional vectors, so the equation must recover 7 unknowns from 3 known quantities, making matting a highly under-constrained problem.
The keying technology most widely used by film, television and media production companies at present is blue-screen keying. Its principle is to restrict the background to a single blue color, compressing the number of unknowns in the equation to 4. Blue-screen matting is simple to operate, but it imposes strong restrictions on the background, and when the foreground itself contains blue, the target cannot be completely extracted.
The natural image matting algorithms currently studied can be roughly divided into two categories:
(1) Sampling-based algorithms. These assume that the image is locally continuous and estimate the foreground and background components of the current pixel from known sample points near the unknown region. For example, invention CN105225245 proposes a natural image matting method based on a texture-distribution assumption and a regularization strategy, improving Bayesian matting; however, sampling-based methods tend to produce alpha maps with poor connectivity and often require image prior knowledge and a large number of user marks.
(2) Propagation-based algorithms. The user first provides marks (such as points or lines) identifying the foreground and background; the unknown region is then treated as a field whose boundary corresponds to the known regions, a Laplacian matrix is built to describe the relationship between alpha values, and the matting process is converted into solving this Laplacian system.
In addition, algorithms combining sampling and propagation, such as robust matting, have been proposed to exploit the advantages of both. These algorithms still generally suffer from complex user interaction, excessive prior assumptions about the image and a large amount of computation, which limits their range of application and increases the difficulty of use.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an automatic matting algorithm and device that compute a matting visual saliency from the input image and complete fully automatic matting of natural scene images without restricting the background, without requiring image prior knowledge and without user interaction, while maintaining high matting precision and success rate.
The technical scheme adopted by the invention for solving the technical problem is as follows:
an automatic matting algorithm, the method comprising the steps of:
the method comprises the following steps: acquiring an original image to be subjected to matting, and calculating the matting visual saliency;
step two: according to the matting visual saliency map, separating the foreground region and the background region by using a spatial-domain filtering and threshold segmentation algorithm, and obtaining a trimap by combining morphological operations;
step three: according to the trimap, performing a gradient calculation on each pixel of the unknown region, and sampling according to the gradient direction and the saliency to obtain the foreground sample point set and the background sample point set of the current unknown-region pixel;
step four: calculating the opacity and the confidence of each sample point according to the foreground and background sample point sets of the current unknown-region pixel, taking the sample pair with the highest confidence as the optimal sample pair for final matting, and then smoothing the opacity over a local region to obtain the finally estimated opacity;
step five: and carrying out matting operation in the original image according to the finally estimated opacity and the color value of the optimal sample pair, and extracting a foreground target.
An automatic matting device, the device comprising:
the image acquisition module is used for acquiring the color value of a single image;
the matting visual saliency calculation module is used for calculating the matting visual saliency of the image according to the image color value acquired by the image acquisition module;
the trisection image calculation module is used for separating a foreground region and a background region by using a spatial domain filtering and threshold segmentation algorithm according to the sectional image visual saliency acquired by the sectional image visual saliency calculation module, and calculating to acquire a trisection image by combining morphological operation;
the sample point set acquisition module is used for carrying out gradient calculation on each pixel of the unknown region according to the trimap image acquired by the trimap image calculation module and obtaining a foreground sample point set and a background sample point set of the current unknown region pixel by sampling according to the gradient direction and the significance;
and the opacity calculation module is used for calculating the opacity and the confidence of each sample point according to the foreground and background sample point sets acquired by the sample point set acquisition module, taking the sample pair with the highest confidence as the optimal sample pair for final matting, and then smoothing the opacity over a local region to obtain the finally estimated opacity;
and the foreground matting module is used for carrying out matting operation in the original image according to the finally estimated opacity and the color value of the optimal sample pair to extract a foreground target.
The invention has the beneficial effects that: the cutout vision significance calculation method provided by the invention simulates a vision attention mechanism of human eyes, can automatically extract a foreground target, avoids complex user interaction operation, completes a full-automatic cutout process, and is simple and convenient to operate; by limiting the number of sample point pairs, the matting time is shortened; and the saliency map and the opacity are smoothed, so that the matting precision is improved.
Drawings
FIG. 1 is a schematic flow chart of an automatic matting algorithm according to the present invention
FIG. 2 is a schematic flow chart of calculating the saliency of an area according to the present invention
FIG. 3 is a schematic structural diagram of an automatic matting device according to the present invention
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Fig. 1 is a schematic flow diagram of an embodiment of the automatic matting algorithm of the present invention. The embodiment provides an automatic matting method which can be executed by any matting device with image storage and display functions; the device can be any of various terminal devices, for example a personal computer, a mobile phone, a tablet computer, a digital camera or a video camera, and may be implemented in software and/or hardware. As shown in fig. 1, the method of the present embodiment includes:
the method comprises the following steps: and acquiring an original image to be subjected to matting, and calculating the matting visual saliency of the original image.
Considering that the foreground objects that generally need to be extracted have the following features: a complete target area with obvious contrast against the surrounding background; a fairly uniform color distribution; relatively high brightness over most of the area; and a clear edge distinction from the background; the matting visual saliency calculation provided by the embodiment of the invention accounts for the color, brightness and region-completeness characteristics of the foreground object. Assuming the acquired image is in rgb format, the gray-level image I_gray is first computed from the r, g and b color channels as:
I_gray = (r + g + b) / 3 (2)
the original image can be in any format such as YUV and the like, the output image format of the camera is not limited by the embodiment of the invention, and the corresponding formula for converting color into gray scale also needs to be adjusted.
Low-pass filtering and down-sampling are then applied to I_gray: the original gray image I_gray serves as the 0th scale layer of the pyramid; the 1st scale layer is obtained by convolving the 0th layer with a low-pass filter and then sampling by 1/2 in the x and y directions respectively; the remaining layers follow by analogy, so the resolution of each layer is half that of the previous layer. The low-pass filter may be a Gaussian, Laplacian or Gabor filter; the embodiment of the invention does not specifically limit the form of the low-pass filter used to generate the scale pyramid.
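As an illustrative sketch only (not part of the patent text), the gray-scale conversion of formula (2) and the scale-pyramid construction described above could be written as follows; the Gaussian kernel width and the choice of 9 levels are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramid(rgb, levels=9, sigma=1.0):
    """Gray conversion (formula (2)) followed by repeated low-pass filtering and 1/2 subsampling."""
    gray = rgb[..., :3].mean(axis=2)                     # I_gray = (r + g + b) / 3
    pyramid = [gray]                                     # level 0: original gray image
    for _ in range(1, levels):
        smoothed = gaussian_filter(pyramid[-1], sigma)   # low-pass filter (Gaussian assumed here)
        pyramid.append(smoothed[::2, ::2])               # subsample by 2 in both x and y
    return pyramid
```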
In accordance with the visual characteristics of the human eye, the brightness component of the scale pyramid is thresholded, because regions of very low brightness are hardly noticed by the human eye: pixels whose brightness lies below 5% of the maximum brightness value I_gray_max have their brightness component set to 0, which effectively suppresses dark, weak background interference. Although parts of the foreground may thereby be eliminated, these parts are on the one hand retained in the region saliency map, and on the other hand the edge region is assigned to the unknown region, so the integrity of the final matting is not affected.
After the scale pyramid is built, a luminance saliency map is computed. Specifically, the image at a fine scale c is taken as the visual center region and the image at a coarse scale s as the visual peripheral region, with the center scale c ∈ {2,3,4} and the peripheral scale s = c + δ, δ ∈ {3,4}, giving the 6 center-periphery combinations {2-5, 2-6, 3-6, 3-7, 4-7, 4-8}. The image at scale s is interpolated to scale c and subtracted pixel by pixel from the image at scale c according to
I(c,s) = |I(c) Θ I(s)| (3)
to obtain a brightness difference map, where I(σ) is the image pyramid, σ = 0,1,...,8 denotes the different scales, and Θ denotes the center-periphery difference operator. Each feature map represents the brightness difference between a position in the image and its local neighborhood: the larger the difference, the higher the brightness saliency in the local range and the more easily it is noticed by the human eye. After the 6 brightness difference maps are computed as above, they are fused, redundant features are discarded, and the final brightness saliency map is generated. Since the absolute values of the difference maps do not reflect saliency information across different scales, the difference maps at different scales cannot simply be added. In the embodiment of the invention the difference maps are normalized; the normalization function N(·) consists of the following steps:
(1) normalizing all 6 difference maps to an interval [0,1 ];
(2) respectively calculating the local variance of each difference image;
(3) the fusion weight value is in positive correlation with the local variance, that is, the larger the local variance is, the larger the information amount contained in the difference graph is, and the greater the weight value should be given to the difference graph in the process of weighted combination.
The brightness difference maps are then weighted and combined into the luminance saliency map according to
VA_g = ⊕_{c=2..4} ⊕_{s=c+3..c+4} N(I(c,s)) (4)
where N(·) is the normalization function above and ⊕ denotes the cross-scale addition operator.
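The center-periphery differencing of formula (3) and the normalization and cross-scale fusion of formula (4) might be sketched as below; the local-variance window size and the use of the mean local variance as the fusion weight are assumptions of this sketch, not details fixed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import resize

def center_surround(pyr, c, s):
    """|I(c) Θ I(s)| (formula (3)): interpolate level s up to the size of level c, subtract pixel-wise."""
    fine = pyr[c]
    up = resize(pyr[s], fine.shape, order=1)
    return np.abs(fine - up)

def normalize_and_fuse(maps, out_shape, win=16):
    """N(.) plus cross-scale addition: scale each map to [0,1], weight it by its local variance
    (a proxy for information content), resize to a common shape and sum."""
    fused = np.zeros(out_shape)
    for d in maps:
        d = (d - d.min()) / (d.max() - d.min() + 1e-8)                   # step (1)
        var = uniform_filter(d ** 2, win) - uniform_filter(d, win) ** 2  # step (2): local variance
        fused += var.mean() * resize(d, out_shape, order=1)              # step (3): variance-weighted sum
    return fused

# luminance saliency map from the 6 center-periphery pairs {2-5, 2-6, 3-6, 3-7, 4-7, 4-8}:
# VA_g = normalize_and_fuse([center_surround(pyr, c, c + d) for c in (2, 3, 4) for d in (3, 4)],
#                           out_shape=pyr[2].shape)
```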
A color saliency map is then calculated. According to Ewald Hering's color-opponency model, a neuron in the center of the human visual receptive field that is activated by the R color is inhibited by the G color, and one activated by the B color is inhibited by the Y color; the image is therefore converted from the three rgb channels to the four RGBY channels according to formulas (5)-(8):
R=r-(g+b)/2 (5)
G=g-(r+b)/2 (6)
B=b-(g+r)/2 (7)
Y=(r+g)/2-|r-g|/2-b (8)
Then, in accordance with the characteristics of the human visual cells, the red-green and blue-yellow color-opponent pairs RG and BY are calculated:
RG(c,s) = |R(c) - G(c)| Θ |G(s) - R(s)| (9)
BY(c,s) = |B(c) - Y(c)| Θ |Y(s) - B(s)| (10)
the specific process of obtaining RG is as follows: and (3) subtracting the R, G channels of the image pixel by pixel on the scale c and the scale s respectively to obtain an absolute value, then interpolating the calculation result on the scale s to the scale c, and finally performing pixel-by-pixel difference on the calculation result and the original result on the scale c. The BY finding process is similar. BY subtracting the images for a plurality of times, 6 color difference significance maps on RG and BY can be obtained respectively.
The color difference maps are then weighted and combined into the color saliency map according to
VA_c = ⊕_{c=2..4} ⊕_{s=c+3..c+4} [N(RG(c,s)) + N(BY(c,s))] (11)
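A minimal sketch of the RGBY conversion of formulas (5)-(8); the RG and BY center-periphery maps would then be fused with the same routine used for the luminance saliency map.

```python
import numpy as np

def rgby(rgb):
    """rgb -> broadly tuned R, G, B, Y channels (formulas (5)-(8))."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    R = r - (g + b) / 2
    G = g - (r + b) / 2
    B = b - (g + r) / 2
    Y = (r + g) / 2 - np.abs(r - g) / 2 - b
    return R, G, B, Y

# Build pyramids of R, G, B, Y as for the luminance channel, then for every (c, s) pair compute
# RG(c, s) and BY(c, s) as in formulas (9)-(10) and fuse the 12 maps with normalize_and_fuse().
```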
A region saliency map is then computed. Superpixel segmentation is performed on the foreground target, the normalized color histogram of each superpixel is counted, the superpixels are clustered according to their color histograms, and the image is segmented into several regions {r_1, r_2, ..., r_k}, the cluster center of each region being ce_i, i = 1, ..., k. The region saliency VA_r of region r_i can then be calculated as:
VA_r(r_i) = Σ_{r_j ≠ r_i} w(r_j) · D_r(r_j, r_i) (12)
where w(r_i) denotes the area weight of region r_i, calculated as:
w(r_i) = PN(r_i) / N (13)
where PN(r_i) denotes the number of pixels in region r_i. The expression above indicates that the larger a region's area, the greater its weight, i.e. surrounding regions with a large area influence the saliency of region r_i more than regions with a small area.
D_r(r_j, r_i) denotes the distance between the cluster centers of region r_j and region r_i, i.e.:
D_r(r_j, r_i) = Σ_m Σ_n ce_j(m) · ce_i(n) · D_c(m, n) (14)
where ce_i(m), ce_j(n) denote the m-th and n-th bins of the color histograms ce_i, ce_j, and D_c(m, n) denotes the Euclidean distance between the m-th and n-th colors in the LAB color space.
The embodiment of the invention provides a region saliency extraction method combining superpixel segmentation and clustering, and specific implementation steps are described in the following fig. 2.
Step A) superpixel segmentation is performed on the input image. Alternative superpixel segmentation methods include the normalized cuts (NC) algorithm, the graph-based segmentation (GS) algorithm, the quick shift (QS) algorithm and the simple linear iterative clustering (SLIC) algorithm. Considering speed and practicality, the embodiment of the invention selects the SLIC algorithm, whose specific steps are:
1) assume the total number of pixels in the image is N and that it is to be divided into K×K superpixels. The whole image is first divided evenly into K×K small blocks, and the center of each block is taken as an initial point. The pixel gradient is computed in the 3×3 neighborhood of each initial point, and the point with the minimum gradient becomes an initial center O_i, i = 0, ..., K×K-1, of the superpixel segmentation algorithm; each initial center is given a separate label;
2) each pixel is expressed as a five-dimensional vector {l, a, b, x, y} of its CIELAB color and XY coordinates, and the distance between each pixel and its nearest center is computed as:
d_lab = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2) (15)
d_xy = sqrt((x_j - x_i)^2 + (y_j - y_i)^2) (16)
Dis = d_lab + (m / S) · d_xy (17)
where d_lab is the color difference, d_xy is the position difference, S is the center-to-center spacing, m is the balance parameter, and Dis is the distance between pixels. Each pixel is assigned the label of its closest center;
3) the center of the pixels carrying each label is recalculated, O_i is updated, and the difference between the new and old centers is computed; if the difference is smaller than a threshold the algorithm terminates, otherwise return to step 2).
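For illustration, the superpixel step could rely on an off-the-shelf SLIC implementation such as the one in scikit-image; the parameter values below are assumptions, not values prescribed by the patent.

```python
from skimage.segmentation import slic
from skimage.color import rgb2lab

def superpixels(rgb, k=40, m=10.0):
    """SLIC superpixel labels, using K*K initial seeds and balance parameter m as in steps 1)-3)."""
    labels = slic(rgb, n_segments=k * k, compactness=m, start_label=0)
    lab = rgb2lab(rgb)          # CIELAB representation used for the per-superpixel histograms
    return labels, lab
```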
Step B: the normalized color histogram of each superpixel is counted. That is, each dimension of the Lab color space is divided into a number of bins, the probability that the color of a pixel inside the superpixel falls into each bin is counted, and the resulting histogram is normalized.
Step C: the superpixels are clustered and the image is divided into several continuous regions. The clustering method may be any of partition-based, model-based, hierarchical, grid-based or density-based clustering; this embodiment uses the density-based DBSCAN algorithm. The specific steps are: one superpixel is selected as a seed point and, given the error thresholds Eps and MinPts, all density-reachable superpixels are searched and it is judged whether the point is a core point. If it is a core point, a cluster region is formed with its density-reachable points; if it is not a core point but is a boundary point, another point is reselected as the seed point and the steps are repeated; if it is neither a core point nor a boundary point, it is regarded as a noise point and discarded. This is repeated until all points have been visited, finally forming several segmented regions. DBSCAN copes well with noise points and can segment clusters of arbitrary shape, and is therefore suitable for this embodiment.
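Steps B and C might be sketched as follows with the DBSCAN implementation from scikit-learn; the number of histogram bins and the Eps/MinPts values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_superpixels(labels, lab, nbins=8, eps=0.3, min_pts=3):
    """Per-superpixel normalized Lab histogram (step B), then DBSCAN clustering of the histograms (step C).
    eps / min_pts play the role of the Eps / MinPts thresholds mentioned above."""
    n_sp = labels.max() + 1
    # quantize each Lab channel into nbins bins and form a joint bin index per pixel
    l = np.clip(((lab[..., 0] / 100.0) * nbins).astype(int), 0, nbins - 1)
    a = np.clip((((lab[..., 1] + 128) / 256.0) * nbins).astype(int), 0, nbins - 1)
    b = np.clip((((lab[..., 2] + 128) / 256.0) * nbins).astype(int), 0, nbins - 1)
    bins = l * nbins * nbins + a * nbins + b
    hists = np.zeros((n_sp, nbins ** 3))
    for sp in range(n_sp):
        h = np.bincount(bins[labels == sp], minlength=nbins ** 3)
        hists[sp] = h / max(h.sum(), 1)                  # normalized histogram of this superpixel
    regions = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(hists)  # -1 marks noise superpixels
    return regions, hists
```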
Step D: and calculating the area saliency. And calculating the region saliency of each region according to the super-pixel clustering result and the formulas (12), (13) and (14).
Finally, according to the relative importance of the color, region and brightness information, the three saliency maps are fused into the final matting visual saliency map using
VA = α_g·VA_g + α_c·VA_c + α_r·VA_r (18)
α_g + α_c + α_r = 1 (19)
In this embodiment, α_c = 0.5, α_r = 0.3 and α_g = 0.2.
Step two: according to the cutout visual saliency map, a foreground region and a background region are separated by using a spatial domain filtering and threshold segmentation algorithm, and a trisection map is obtained by combining morphological operation.
First, the matting visual saliency map is smoothed with a spatial-domain filter to remove noise, and a threshold segmentation algorithm (such as the Otsu method) is used to compute a threshold T_va. In the saliency map, pixels whose saliency value is greater than T_va correspond to the foreground region and pixels below T_va to the background region, which yields a rough trimap I_tc.
The spatial-domain filtering ensures that no saliency singularities appear in a local region while the saliency values are smoothed; a median filter, a bilateral filter or a Gaussian filter can be selected. To ensure the computational efficiency of the algorithm, in this embodiment a Gaussian filter with a 3×3 window is chosen as the smoothing filter for the matting visual saliency map.
Because the matting visual saliency map jointly considers brightness, color and region saliency, the saliency of the foreground region is guaranteed to be far greater than that of the background region; the performance of the threshold segmentation algorithm is therefore not critical, and the threshold T_va can be taken over a fairly wide range without affecting the result of the foreground segmentation. In this embodiment the Otsu threshold algorithm is selected, with the following steps:
(1) assume the gray levels of the matting visual saliency map VA are 0, 1, ..., L-1 and the total number of pixels is N; the gray histogram is computed, i.e. if N_i is the number of pixels with gray level i, i = 0, 1, ..., L-1, the value of the histogram at gray level i is N_i / N;
(2) the threshold T_va is swept from 0 to L-1, dividing the pixels into those below T_va and those greater than or equal to T_va, and the between-class difference of the two classes is computed:
g = ω_0·(μ_0 - μ)^2 + ω_1·(μ_1 - μ)^2 (20)
μ = ω_0·μ_0 + ω_1·μ_1 (21)
where ω_0, ω_1 are the proportions of pixels below T_va and greater than or equal to T_va respectively, and μ_0, μ_1 are the corresponding pixel means;
(3) after the sweep, the T_va for which the between-class difference g attains its maximum is taken as the final segmentation threshold.
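A sketch of the Otsu threshold of steps (1)-(3) applied to a continuous-valued saliency map; the 256-level quantization is an assumption.

```python
import numpy as np

def otsu_threshold(va, levels=256):
    """Otsu threshold on the quantized matting visual saliency map, following steps (1)-(3)."""
    q = np.round((va - va.min()) / (va.max() - va.min() + 1e-8) * (levels - 1)).astype(int)
    hist = np.bincount(q.ravel(), minlength=levels) / q.size     # step (1): normalized gray histogram
    grays = np.arange(levels)
    mu = (hist * grays).sum()                                    # global mean
    best_t, best_g = 0, -1.0
    for t in range(1, levels):                                   # step (2): sweep the threshold
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * grays[:t]).sum() / w0
        mu1 = (hist[t:] * grays[t:]).sum() / w1
        g = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2          # formula (20)
        if g > best_g:
            best_g, best_t = g, t                                # step (3): keep the best threshold
    return best_t / (levels - 1) * (va.max() - va.min()) + va.min()
```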
The shape and size of the morphological operator should be chosen according to the image content, such as the image resolution and the size and shape of the foreground region; in this embodiment the shape defaults to a disc to ensure uniformity in every direction. To avoid small holes and burrs after threshold segmentation, the following morphological operations are applied to I_tc: first an opening operation connects locally discontinuous areas and removes holes; then an erosion of size r_e yields the foreground region F_g of the trimap, and a dilation of size r_d yields the background portion B_g of the trimap; the area between the foreground and the background is the unknown region. This gives the refined trimap I_t of the image I to be matted.
Assuming the gray value of the foreground region is 1 and that of the background region is 0, the morphological operator is convolved with the binary image: erosion shrinks the boundary of the white foreground region, dilation of the foreground shrinks the black background region, and the band left between foreground and background becomes the unknown region.
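A sketch of the trimap construction from the thresholded saliency map, using SciPy morphology; the disc radii and the 0 / 0.5 / 1 trimap encoding are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_erosion, binary_dilation

def make_trimap(va, t_va, r_e=5, r_d=5):
    """Rough foreground/background split at threshold t_va, refined by open, erode and dilate."""
    def disc(r):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y) <= r * r
    fg_rough = va > t_va                                     # rough trimap I_tc
    fg_rough = binary_opening(fg_rough, structure=disc(2))   # opening: connect gaps, remove holes
    fg = binary_erosion(fg_rough, structure=disc(r_e))       # certain foreground F_g
    bg = ~binary_dilation(fg_rough, structure=disc(r_d))     # certain background B_g
    trimap = np.full(va.shape, 0.5)                          # 0.5 marks the unknown band
    trimap[fg] = 1.0
    trimap[bg] = 0.0
    return trimap
```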
Step three: and according to the three-part graph, gradient calculation is carried out on each pixel of the unknown region, and a foreground sample point set and a background sample point set of the current pixel of the unknown region are obtained through sampling according to the gradient direction and the significance.
For each pixel I(x_i, y_i) of the unknown region, its gradient value Gra_i is calculated, and its gradient direction, denoted θ, is given by
θ = arctan(g_y / g_x) (22)
where g_x and g_y are the horizontal and vertical gradient components at the pixel.
In this embodiment, reference foreground and background sample points are searched for along the straight line in the gradient direction, and the size of the search area around each reference point is determined by the matting visual saliency of the pixel at the current position: the larger the saliency, the smaller the search range, because around a pixel with high saliency the true foreground and background points tend to lie closer to that pixel. Then 5 foreground and 5 background sample pairs are selected according to spatial distance and visual saliency, as follows:
1) let the search radius r_s = 1 and count = 0;
2) with the reference foreground/background sample point as the center, consider the circle of radius r_s; for each pixel p on the search circle, check whether the condition |VA(p) - VA(p_0)| < T_vap is satisfied, where p_0 is the reference foreground or background sample point at the center of the search area; each time the condition is met, count is incremented by 1;
3) if count > 5, stop the search; otherwise increment r_s and return to step 2).
This sampling strategy has two benefits. On the one hand, it generates fewer sample point pairs, reducing the complexity of the subsequent matting computation; on the other hand, sampling along the gradient direction ensures with high probability that the foreground and background points lie in different texture regions, while sampling by neighborhood saliency ensures a certain spatial and saliency similarity between the sample points. The sampling method therefore contains the true sample pair with high probability, improving the matting accuracy.
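A hedged sketch of the gradient-direction search for the initial foreground/background sample points; the step limit and the 0 / 0.5 / 1 trimap encoding are assumptions, and the saliency-guided circular search of steps 1)-3) would then be run around the points returned here.

```python
import numpy as np

def initial_sample_points(trimap, gray, i, j, max_steps=200):
    """For unknown pixel (i, j): step along +/- the gradient direction until the first foreground
    and the first background pixel are met; these serve as centers of the saliency-guided search."""
    gy, gx = np.gradient(gray)
    theta = np.arctan2(gy[i, j], gx[i, j])                 # gradient direction of the current pixel
    di, dj = np.sin(theta), np.cos(theta)
    fg_pt = bg_pt = None
    for sign in (+1, -1):
        for s in range(1, max_steps):
            y, x = int(round(i + sign * s * di)), int(round(j + sign * s * dj))
            if not (0 <= y < trimap.shape[0] and 0 <= x < trimap.shape[1]):
                break
            if trimap[y, x] == 1.0 and fg_pt is None:      # first certain-foreground pixel hit
                fg_pt = (y, x)
            if trimap[y, x] == 0.0 and bg_pt is None:      # first certain-background pixel hit
                bg_pt = (y, x)
    return fg_pt, bg_pt
```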
Step four: the opacity and the confidence of each sample point are calculated from the foreground and background sample point sets of the current unknown-region pixel, the sample pair with the highest confidence is taken as the optimal sample pair for final matting, and the opacity is then smoothed over a local region to obtain the finally estimated opacity.
from the foreground sample point set and the background sample point set, one point is selected, and according to the imaging linear model, the opacity is estimated to be
Figure GDA0001404902760000101
Wherein
Figure GDA0001404902760000102
And
Figure GDA0001404902760000103
respectively representing the color values of the mth foreground sample point and the nth background sample point. Thus, for each unknown region pixel, 25 different opacity estimates are obtained, and then the opacity with the highest confidence level needs to be selected for matting out the foreground object.
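A sketch of the per-pair opacity estimate, assuming the least-squares form of formula (23):

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Alpha for one foreground/background sample pair under the linear model (formula (1)):
    the projection of (I - B) onto (F - B), clipped to [0, 1]."""
    fb = F - B
    alpha = np.dot(I - B, fb) / (np.dot(fb, fb) + 1e-8)
    return float(np.clip(alpha, 0.0, 1.0))

# For an unknown pixel with 5 foreground and 5 background samples, the 25 candidate alphas are
# [estimate_alpha(I, F[m], B[n]) for m in range(5) for n in range(5)]; the pair with the highest
# confidence is kept as the optimal sample pair.
```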
The requirements of the optimal foreground background point pair are as follows: 1) the linear model for equation (1) has the smallest error; 2) the foreground sample point and the background sample point have larger color difference; 3) the foreground or background sample point is closer to the color value of the current pixel; 4) the spatial distance between the foreground and background sample points and the current pixel is small.
According to criteria 1) and 2), the linear-chromatic aberration similarity is defined as
S_1(F^m, B^n) = exp( - ||I_i - (α̂·F_i^m + (1 - α̂)·B_i^n)||^2 / (σ_1^2 · ||F_i^m - B_i^n||^2) ) (24)
According to criterion 3), the color similarity is defined as
S_2(F^m, B^n) = exp( - min(||I_i - F_i^m||^2, ||I_i - B_i^n||^2) / σ_2^2 ) (25)
According to criterion 4), the spatial distance similarity is defined as
S_3(F^m, B^n) = exp( - (||x_i - x_F^m||^2 + ||x_i - x_B^n||^2) / (σ_3^2 · D_i^2) ) (26)
The confidence function is defined as
f(F^m, B^n) = S_1 · S_2 · S_3 (27)
where S_1, S_2 and S_3 are respectively the linear-chromatic aberration similarity, the color similarity and the spatial distance similarity; x_i, x_F^m and x_B^n are the image positions of the unknown pixel i, the m-th foreground sample point and the n-th background sample point; D_i is the foreground and background sampling radius of the unknown pixel, determined by its matting visual saliency (the greater the visual saliency, the smaller D_i); and σ_1, σ_2 and σ_3 adjust the weights between the different similarities. The α̂ with the highest confidence is selected as the opacity estimate of the current unknown pixel, and the corresponding foreground and background sample pair is used as the optimal pair for the final matting.
For each pixel of the unknown region, the opacity is estimated point by point as above. The confidence of some pixels may nevertheless be too low, leading to large errors in the estimated opacity and color artifacts in the final matte, so the opacity of the unknown region must be smoothed locally. The factors to consider in the smoothing are the color difference, the spatial-position difference and the saliency difference: the larger the local color difference, the farther apart the spatial positions, or the larger the saliency difference, the smaller the weight. To balance the influence of the spatial domain, the color range and the saliency range, the opacity smoothing of the invention proceeds as follows:
α̃_i = ( Σ_{j∈N(i)} ω_ij · α_j ) / ( Σ_{j∈N(i)} ω_ij ) (28)
ω_ij = exp( - ||P_i - P_j||^2 / σ_p^2 - ||I_i - I_j||^2 / σ_c^2 - (VA_i - VA_j)^2 / σ_va^2 ) (29)
where P_i, P_j are the coordinates of the two pixels i and j, I_i, I_j are their colors, VA_i, VA_j are their matting visual saliency values, N(i) is a local neighborhood of pixel i, ω_ij is the weight applied to the opacity of pixel j when smoothing pixel i, and σ_p, σ_c and σ_va adjust the weights among the three terms. The opacity calculated in this way fully accounts for spatial position, color and saliency, so that pixels which are spatially closer, more similar in color and closer in saliency obtain closer opacities, in keeping with the subjective perception of the human eye; this effectively removes opacity outliers and improves the matting accuracy.
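A hedged sketch of the local opacity smoothing, assuming the Gaussian-weighted form of formulas (28)-(29); the window size and the σ values are assumptions.

```python
import numpy as np

def smooth_alpha(alpha, img, va, unknown_mask, win=5, sp=3.0, sc=0.1, sva=0.1):
    """Smooth the estimated opacity over a local window, weighting neighbours by their
    spatial, color and saliency proximity to the center pixel."""
    h, w = alpha.shape
    out = alpha.copy()
    r = win // 2
    ys, xs = np.nonzero(unknown_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        d_p = (yy - y) ** 2 + (xx - x) ** 2                              # spatial difference
        d_c = np.sum((img[y0:y1, x0:x1] - img[y, x]) ** 2, axis=2)       # color difference
        d_v = (va[y0:y1, x0:x1] - va[y, x]) ** 2                         # saliency difference
        wgt = np.exp(-d_p / sp ** 2 - d_c / sc ** 2 - d_v / sva ** 2)
        out[y, x] = np.sum(wgt * alpha[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```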
According to the 4 optimality criteria for foreground and background point pairs, the measurement function is determined as in formula (27), which jointly considers four indicators: conformity to the linear model, the foreground-background color difference, the color difference between the target pixel and the foreground/background, and the spatial distance. The weights of the different similarities are adjusted by setting σ_1, σ_2 and σ_3. The opacity is then smoothed according to formulas (28) and (29) to obtain the opacity used for the final matting. In this embodiment the smoothing operation jointly considers the color difference, the spatial-position difference and the saliency difference, and their relative weights can be adjusted by varying σ_p, σ_c and σ_va; for example, if the saliency information is to be emphasized, σ_va should be greater than σ_p and σ_c.
Step five: and carrying out matting operation in the original image according to the finally estimated opacity and the color value of the optimal sample pair, and extracting a foreground target.
The specific operation is: a new image with the same size as the original image is created as the background, and the calculated opacity and foreground pixel values are composited with this new background according to formula (1) to obtain the final matting result.
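A minimal sketch of this final compositing step (formula (1)):

```python
import numpy as np

def composite(alpha, fg_color, new_bg):
    """I_out = alpha * F + (1 - alpha) * B_new, using the smoothed alpha map and the foreground
    colors of the selected optimal sample pairs."""
    a = alpha[..., None]                     # broadcast alpha over the color channels
    return a * fg_color + (1.0 - a) * new_bg
```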
In the embodiment of the invention, matting is performed on natural images; the specific contents of the foreground object and the background are not limited, and it is only required that the foreground and background have an obvious boundary distinguishable by the naked eye.
The embodiment of the invention provides an automatic matting algorithm: the original image to be matted is acquired and its matting visual saliency is calculated; then, according to the matting visual saliency map, the foreground region and the background region are separated by using a spatial-domain filtering and threshold segmentation algorithm, and a trimap is obtained by combining morphological operations; according to the trimap, a gradient calculation is performed on each pixel of the unknown region, and the foreground and background sample point sets of the current unknown-region pixel are obtained by sampling according to the gradient direction and the saliency; the opacity and the confidence of each sample point are calculated from these sample point sets, the sample pair with the highest confidence is taken as the optimal sample pair for final matting, and the opacity is then smoothed over a local region to obtain the finally estimated opacity; finally, according to the finally estimated opacity and the color values of the optimal sample pair, the matting operation is carried out in the original image and the foreground object is extracted. The matting visual saliency calculation proposed by the invention simulates the visual attention mechanism of the human eye and can extract the foreground object automatically, avoiding complex user interaction and completing a fully automatic matting process that is simple and convenient to operate; by limiting the number of sample point pairs, the matting time is shortened; and by smoothing the saliency map and the opacity, the matting precision is improved.
Fig. 3 is a schematic structural diagram of an automatic matting device provided in an embodiment of the present invention, where the device includes:
the image acquisition module is used for acquiring the color value of a single image;
the matting visual saliency calculation module is used for calculating the matting visual saliency of the image acquired by the image acquisition module;
the trisection image calculation module is used for separating a foreground region and a background region by using a spatial domain filtering and threshold segmentation algorithm according to the sectional image visual saliency acquired by the sectional image visual saliency calculation module, and calculating to acquire a trisection image by combining morphological operation;
the sample point set acquisition module is used for carrying out gradient calculation on each pixel of the unknown region according to the trimap image acquired by the trimap image calculation module and obtaining a foreground sample point set and a background sample point set of the current unknown region pixel by sampling according to the gradient direction and the significance;
and the opacity calculation module is used for calculating the opacity and the confidence of each sample point according to the foreground and background sample point sets acquired by the sample point set acquisition module, taking the sample pair with the highest confidence as the optimal sample pair for final matting, and then smoothing the opacity over a local region to obtain the finally estimated opacity;
and the foreground matting module is used for carrying out matting operation in the original image according to the finally estimated opacity and the color value of the optimal sample pair to extract a foreground target.
Specifically, the cutout visual saliency calculation module includes:
the scale pyramid generating unit is used for smoothing and down-sampling according to the acquired image to be scratched to generate a scale pyramid;
the brightness saliency calculation unit is used for calculating a brightness saliency map by taking the image on the fine scale as a visual central area and the image on the coarse scale as a visual peripheral area according to the scale pyramid obtained by the scale pyramid generation unit;
the color saliency calculation unit is used for calculating a color saliency map by taking the image on the fine scale as a visual central area and the image on the coarse scale as a visual peripheral area according to the scale pyramid obtained by the scale pyramid generation unit;
the region saliency calculation unit is used for carrying out superpixel segmentation on the foreground target according to the image to be scratched acquired by the image acquisition module, clustering superpixels according to the color histogram and calculating the color saliency of each clustering region;
and the saliency fusion unit is used for obtaining the cutout visual saliency map of the image to be cutout through fusion according to the brightness saliency map acquired by the brightness saliency calculation unit, the color saliency map acquired by the color saliency calculation unit and the region saliency map acquired by the region saliency calculation unit.
The trimap image calculation module comprises:
the spatial domain filtering unit is used for selecting a proper spatial domain filtering method to smooth the cutout visual saliency map;
the threshold segmentation unit is used for selecting a threshold segmentation algorithm to segment and acquire a foreground region and a background region according to the smooth matting visual saliency map acquired by the spatial domain filtering unit to acquire a rough trimap image;
a morphology calculation unit: and the method is used for performing morphological operation to fill holes according to the rough trisection image acquired by the threshold segmentation unit to obtain a foreground, a background and an unknown area, namely an accurate trisection image.
The sample point set acquisition module comprises:
the gradient calculation unit is used for acquiring the gradient of each unknown pixel according to the gray value of the image to be scratched;
and the sampling unit is used for drawing a straight line according to the gradient direction obtained by the gradient calculation unit, taking a first intersection point pair of the straight line, the foreground area and the background area as an initial search point, and searching a sample point with the significance difference of the unknown pixel being less than a threshold value in the neighborhood of the search point from near to far.
The opacity calculation module includes:
a linear-chromatic aberration similarity calculation unit: the system comprises a sample point set acquisition module, a linear color similarity calculation module and a color similarity calculation module, wherein the sample point set acquisition module is used for acquiring sample points in a sample point set;
color similarity calculation unit: the device is used for taking out sample points pair by pair according to the sample point set obtained by the sample point set obtaining module and calculating the color similarity of the sample points;
a spatial distance similarity calculation unit: the system comprises a sample point set acquisition module, a spatial distance similarity calculation module and a spatial distance similarity calculation module, wherein the sample point set acquisition module is used for acquiring sample points in a sample point set;
a sample screening unit: the device is used for calculating the confidence coefficient of each pair of sample points relative to the current unknown pixel according to the similarity values acquired from the linear-chromatic aberration similarity calculation unit, the color similarity calculation unit and the spatial distance similarity calculation unit; and selecting the opacity with the highest confidence as the opacity estimation of the current position pixel.
A smoothing unit: for locally smoothing the opacity obtained by the sample screening unit.
The embodiment of the invention provides an automatic matting device: the original image to be matted is acquired and its matting visual saliency is calculated; then, according to the matting visual saliency map, the foreground region and the background region are separated by using a spatial-domain filtering and threshold segmentation algorithm, and a trimap is obtained by combining morphological operations; according to the trimap, a gradient calculation is performed on each pixel of the unknown region, and the foreground and background sample point sets of the current unknown-region pixel are obtained by sampling according to the gradient direction and the saliency; the opacity and the confidence of each sample point are calculated from these sample point sets, the sample pair with the highest confidence is taken as the optimal sample pair for final matting, and the opacity is then smoothed over a local region to obtain the finally estimated opacity; finally, according to the finally estimated opacity and the color values of the optimal sample pair, the matting operation is carried out in the original image and the foreground object is extracted. The matting visual saliency calculation proposed by the invention simulates the visual attention mechanism of the human eye and can extract the foreground object automatically, avoiding complex user interaction and completing a fully automatic matting process that is simple and convenient to operate; by limiting the number of sample point pairs, the matting time is shortened; and by smoothing the saliency map and the opacity, the matting precision is improved.
The embodiment of the invention also provides a computer program product for automatic cutout.

Claims (8)

1. An automatic matting algorithm, characterized in that it comprises the following steps:
the method comprises the following steps: acquiring an original image to be subjected to matting, and calculating the matting visual saliency;
step two: separating a foreground region and a background region by using a spatial domain filtering and threshold segmentation algorithm according to the sectional visual saliency map of the first step, and obtaining a trisection map by combining morphological operation;
step three: performing gradient calculation on each pixel of the unknown region of the trisection image in the step two, and sampling according to the gradient direction and the significance to obtain a foreground sample point set and a background sample point set of the current unknown region pixel;
step four: calculating the opacity and confidence of each sample point according to the foreground and background sample point sets of the current unknown region pixels, taking the sample pair with the highest confidence as the optimal sample pair for final image matting, and smoothing the local region of the opacity to obtain the final estimated opacity; the opacity calculation method specifically comprises the following steps:
from the foreground sample point set, any point in the background sample point set is selected, and according to the imaging linear model, the opacity is estimated to be
Figure FDA0002764043990000011
Wherein
Figure FDA0002764043990000012
And
Figure FDA0002764043990000013
respectively representing color values of an mth foreground sample point and an nth background sample point; i isiThe color value of a pixel i in an unknown area; thus, for each unknown region pixel i, 25 different opacity estimates are obtained, and then the opacity with the highest confidence coefficient needs to be selected for matting out the foreground object;
define the linear-chromatic aberration similarity as
Figure FDA0002764043990000014
Defining a color similarity as
Figure FDA0002764043990000015
Defining spatial distance similarity as
Figure FDA0002764043990000016
Defining a confidence function as
Figure FDA0002764043990000021
Wherein
Figure FDA0002764043990000022
Respectively, the linear-chromatic aberration similarity, the color similarity and the spatial distance similarity, Di is the foreground and background sampling radius of an unknown pixel i and is determined by the matting visual saliency of the unknown pixel i, the greater the visual saliency is, the smaller Di is, and the sigma is1、σ2And σ3The weight value between different similarity degrees is adjusted; selecting alpha with highest confidence as the opacity estimation of the current unknown pixel, and the corresponding alphaThe foreground and background sample pairs are used as the foreground and background sample pairs for final matting; x is the number ofiIs the position of the unknown pixel i in the image,
Figure FDA0002764043990000023
is the position of the mth foreground sample point in the image,
Figure FDA0002764043990000024
is the position of the mth background sample point in the image;
finally, carrying out smoothing treatment on the opacity;
Figure FDA0002764043990000025
Figure FDA0002764043990000026
wherein, Pi、PjRepresenting the coordinates of two points I, j, Ii、IjRepresenting the color of two points i, j, VAi、VAjRepresenting the visual saliency of the sectional image of i and jp、σcAnd σvaUsed for adjusting the weight among the three components; omegaijIs the weighted weight of the opacity of the two points i and j;
step five: and C, according to the finally estimated opacity and the color value of the optimal sample pair, carrying out matting operation in the original image and extracting a foreground target.
2. The automatic matting algorithm according to claim 1 is characterized in that the specific steps of calculating the matting visual saliency of the original image are as follows:
step (1) calculating a gray-level image I_gray, and smoothing and down-sampling I_gray layer by layer to generate an n-layer scale pyramid;
step (2), taking the image on the fine scale c as a visual central area and the image on the coarse scale s as a visual peripheral area, firstly calculating a brightness difference characteristic diagram:
I(c,s)=|I(c)ΘI(s)|
wherein I(c), I(s) are levels of the image pyramid, c ∈ {2,3,4} is the central scale, s = c + δ, δ ∈ {3,4} is the peripheral scale, and Θ denotes the center-periphery difference operator, finally giving 6 luminance difference feature maps; the luminance saliency map is the normalized weighted sum of the 6 luminance difference feature maps:
VA_g = ⊕_{c=2..4} ⊕_{s=c+3..c+4} N(I(c,s))
wherein N(·) represents the normalization function and ⊕ represents the cross-scale addition operator;
step (3), taking the image at the fine scale c as the visual center region and the image at the coarse scale s as the visual peripheral region, first converting the image from the rgb channels to the four RGBY color channels, and then calculating the red-green channel color difference map RG and the blue-yellow channel color difference map BY:
RG(c,s)=|R(c)-G(c)|Θ|G(s)-R(s)|
BY(c,s)=|B(c)-Y(c)|Θ|Y(s)-B(s)|
wherein R(c), G(c), B(c), Y(c) are the RGBY components of the image at scale c, and R(s), G(s), B(s), Y(s) are the RGBY components of the image at scale s;
the color saliency map is the normalized weighted sum of the 12 color difference feature maps:
VA_c = ⊕_{c=2..4} ⊕_{s=c+3..c+4} [N(RG(c,s)) + N(BY(c,s))]
step (4), carrying out superpixel segmentation on the foreground object, then counting the normalized color histogram of each superpixel, clustering the superpixels according to the color histograms, and dividing the image into several regions {r_1, r_2, ..., r_k}, wherein the cluster center of each region is ce_i, i = 1, ..., k; the region saliency value VA_r of region r_i can be calculated as follows:
VA_r(r_i) = Σ_{r_j ≠ r_i} w(r_j) · D_r(r_j, r_i)
wherein w(r_i) represents the area weight of region r_i, and D_r(r_j, r_i) represents the distance between the cluster centers of region r_j and region r_i;
and (5) synthesizing the brightness significance, the color significance and the region significance:
VA = α_g·VA_g + α_c·VA_c + α_r·VA_r
α_g + α_c + α_r = 1
wherein α_g, α_c, α_r are the weighting coefficients of the different saliencies.
3. The automatic cutout algorithm as claimed in claim 1, wherein the method for obtaining the trisection image comprises the following specific steps:
firstly, smoothing the matting visual saliency map by using a spatial-domain filter to remove noise, and then calculating a threshold T_va by using a threshold segmentation algorithm to obtain a rough trimap I_tc; I_tc is subsequently subjected to the following morphological operations: first an opening operation is performed to connect locally discontinuous areas and remove holes; then an erosion operation of size r_e is carried out to obtain the foreground region F_g of the trimap; and a dilation operation of size r_d is performed to obtain the background portion B_g of the trimap, the area between the foreground and the background being the unknown region, thereby obtaining the refined trimap I_t of the image I to be matted.
4. The automatic matting algorithm according to claim 1, wherein the sample point set obtaining method specifically includes:
for each pixel I(x_i, y_i) of the unknown region, calculating its gradient value Gra_i and its gradient direction θ, where
θ = arctan(g_y / g_x)
with g_x and g_y the horizontal and vertical gradient components; a straight line is drawn along the θ direction, and its first intersection points with the foreground region and with the background region are respectively taken as initial search centers; in the neighborhood of each intersection point, 5 points whose saliency difference is smaller than the threshold T_vap are searched from near to far, finally generating 5 to 25 sample point pairs.
5. An automatic matting device, characterized in that the device comprises:
the image acquisition module is used for acquiring the color value of a single image;
the matting visual saliency calculation module is used for calculating the matting visual saliency of the image according to the image color value acquired by the image acquisition module;
the trisection image calculation module is used for separating a foreground region and a background region by using a spatial domain filtering and threshold segmentation algorithm according to the sectional image visual saliency acquired by the sectional image visual saliency calculation module, and calculating to acquire a trisection image by combining morphological operation;
the sample point set acquisition module is used for carrying out gradient calculation on each pixel of the unknown region according to the trimap image acquired by the trimap image calculation module and obtaining a foreground sample point set and a background sample point set of the current unknown region pixel by sampling according to the gradient direction and the significance;
the opacity calculation module is used for calculating the opacity and the confidence coefficient of each sample point according to the foreground and background sample point sets acquired by the sample point set acquisition module, and taking the sample pair with the highest confidence coefficient as the optimal sample pair for final image matting; then, smoothing the local region of the opacity to obtain the final estimated opacity; the opacity calculation module includes:
a linear-chromatic aberration similarity calculation unit: used for taking out sample points pair by pair from the sample point set obtained by the sample point set acquisition module and calculating their linear-chromatic aberration similarity;
color similarity calculation unit: the device is used for taking out sample points pair by pair according to the sample point set obtained by the sample point set obtaining module and calculating the color similarity of the sample points;
a spatial distance similarity calculation unit: used for taking out sample points pair by pair from the sample point set obtained by the sample point set acquisition module and calculating their spatial distance similarity;
a sample screening unit: the device is used for calculating the confidence coefficient of each pair of sample points relative to the current unknown pixel according to the similarity values acquired from the linear-chromatic aberration similarity calculation unit, the color similarity calculation unit and the spatial distance similarity calculation unit; selecting the opacity with the highest confidence coefficient as the estimation of the opacity of the pixel at the current position;
a smoothing unit: the sample screening unit is used for obtaining opacity of the sample; factors considered in smoothing include: color value difference, spatial position difference, saliency difference;
and the foreground matting module is used for carrying out matting operation in the original image according to the finally estimated opacity and the color value of the optimal sample pair to extract a foreground target.
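The opacity and confidence computations described for the opacity calculation module can be illustrated with the standard sampling-based estimate, in which the observed color is projected onto the line joining a candidate foreground/background pair. The sketch below is an assumption-laden illustration: the α projection formula is the one commonly used in sampling-based matting, and the confidence simply multiplies Gaussian scores of the three terms the claim lists (linear color difference, color similarity, spatial distance); the exact weighting used in the patent is not given in the claims, so the constants are placeholders.

import numpy as np

def estimate_alpha(c, f, b):
    """Project observed color c onto the line between candidate foreground f
    and background b (standard sampling-based opacity estimate)."""
    fb = f - b
    denom = float(np.dot(fb, fb)) + 1e-8
    return float(np.clip(np.dot(c - b, fb) / denom, 0.0, 1.0))

def pair_confidence(c, f, b, p, pf, pb, sigma_c=10.0, sigma_d=20.0):
    """Illustrative confidence for one (foreground, background) sample pair.

    Combines the three terms listed in the claim as products of Gaussian
    scores; smaller distances give higher confidence.
    """
    alpha = estimate_alpha(c, f, b)
    distortion = np.linalg.norm(c - (alpha * f + (1 - alpha) * b))  # linear color difference
    colour_dist = np.linalg.norm(f - c) + np.linalg.norm(b - c)     # color distance (similarity term)
    spatial = np.linalg.norm(p - pf) + np.linalg.norm(p - pb)       # spatial distance term
    conf = np.exp(-distortion / sigma_c) * np.exp(-colour_dist / sigma_c) \
           * np.exp(-spatial / sigma_d)
    return alpha, conf

def best_pair(c, p, pairs):
    """pairs: iterable of (f_color, b_color, f_pos, b_pos).
    Returns (alpha, confidence) of the most confident pair."""
    return max((pair_confidence(c, f, b, p, pf, pb) for f, b, pf, pb in pairs),
               key=lambda ac: ac[1])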
6. The automatic matting device according to claim 5, wherein the matting visual saliency calculation module includes:
the scale pyramid generating unit is used for smoothing and down-sampling the acquired image to be matted to generate a scale pyramid;
the brightness saliency calculation unit is used for calculating a brightness saliency map by taking the image on the fine scale as a visual central area and the image on the coarse scale as a visual peripheral area according to the scale pyramid obtained by the scale pyramid generation unit;
the color saliency calculation unit is used for calculating a color saliency map by taking the image on the fine scale as a visual central area and the image on the coarse scale as a visual peripheral area according to the scale pyramid obtained by the scale pyramid generation unit;
the region saliency calculation unit is used for carrying out superpixel segmentation on the foreground target according to the image to be matted acquired by the image acquisition module, clustering the superpixels according to their color histograms, and calculating the color saliency of each clustered region;
and the saliency fusion unit is used for obtaining the matting visual saliency map of the image to be matted by fusing the brightness saliency map acquired by the brightness saliency calculation unit, the color saliency map acquired by the color saliency calculation unit and the region saliency map acquired by the region saliency calculation unit.
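A minimal sketch of the pyramid-based center-surround saliency in claim 6 is given below, assuming a Gaussian pyramid, fine levels as the visual center and coarser levels (resized back to full resolution) as the surround. The specific center/surround level pairs, the Lab color space, and fusion by simple summation are illustrative choices; the superpixel-based region saliency term is omitted here.

import cv2
import numpy as np

def center_surround_saliency(img, levels=4):
    """Accumulate per-pixel center-surround differences across pyramid levels.

    img: 8-bit BGR image; returns a saliency map normalized to [0, 1].
    """
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
    pyramid = [lab]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))   # smooth + down-sample

    h, w = lab.shape[:2]
    saliency = np.zeros((h, w), np.float32)
    for center_lvl in (0, 1):                      # fine scales = visual center
        for surround_lvl in (center_lvl + 2, center_lvl + 3):   # coarse = surround
            center = cv2.resize(pyramid[center_lvl], (w, h))
            surround = cv2.resize(pyramid[surround_lvl], (w, h))
            diff = np.abs(center - surround)
            saliency += diff[..., 0]                 # luminance (L) contribution
            saliency += diff[..., 1] + diff[..., 2]  # color (a, b) contribution
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)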
7. The automatic matting device according to claim 5, wherein the trisection map calculation module includes:
the spatial domain filtering unit is used for selecting a proper spatial domain filtering method to smooth the cutout visual saliency map;
the threshold segmentation unit is used for selecting a threshold segmentation algorithm to separate a foreground region and a background region from the smoothed matting visual saliency map acquired by the spatial domain filtering unit, obtaining a rough trimap image;
and the morphology calculation unit is used for performing morphological operations on the rough trimap image acquired by the threshold segmentation unit to fill holes, obtaining the foreground, background and unknown regions, namely an accurate trimap image.
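The trimap construction of claim 7 can be sketched as follows: smooth the matting visual saliency map, threshold it into a rough foreground/background split, close small holes with morphology, and erode both sides to leave an unknown band. Gaussian filtering, Otsu thresholding and the kernel sizes are illustrative stand-ins for the "suitable" spatial filter and threshold segmentation algorithm the claim leaves open.

import cv2
import numpy as np

def trimap_from_saliency(saliency, k_open=5, k_unknown=15):
    """saliency: float map in [0, 1]; returns a trimap with
    255 = foreground, 0 = background, 128 = unknown."""
    smooth = cv2.GaussianBlur((saliency * 255).astype(np.uint8), (5, 5), 0)
    _, fg = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k_open, k_open))
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)   # fill holes in the rough mask

    band = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k_unknown, k_unknown))
    sure_fg = cv2.erode(fg, band)        # shrink foreground to its confident core
    sure_bg = cv2.erode(255 - fg, band)  # shrink background likewise

    trimap = np.full(fg.shape, 128, np.uint8)   # everything else stays unknown
    trimap[sure_fg == 255] = 255
    trimap[sure_bg == 255] = 0
    return trimap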
8. The automatic matting device according to claim 5, wherein the sample point set obtaining module includes:
the gradient calculation unit is used for acquiring the gradient of each unknown pixel according to the gray values of the image to be matted;
and the sampling unit is used for drawing a straight line along the gradient direction obtained by the gradient calculation unit, taking the first intersection points of the straight line with the foreground region and the background region as initial search centers, and searching, from near to far in the neighborhood of each search center, for sample points whose saliency difference from the unknown pixel is smaller than a threshold.
CN201710638979.9A 2017-07-31 2017-07-31 Automatic cutout algorithm and device Active CN107452010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710638979.9A CN107452010B (en) 2017-07-31 2017-07-31 Automatic cutout algorithm and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710638979.9A CN107452010B (en) 2017-07-31 2017-07-31 Automatic cutout algorithm and device

Publications (2)

Publication Number Publication Date
CN107452010A CN107452010A (en) 2017-12-08
CN107452010B (en) 2021-01-05

Family

ID=60490577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710638979.9A Active CN107452010B (en) 2017-07-31 2017-07-31 Automatic cutout algorithm and device

Country Status (1)

Country Link
CN (1) CN107452010B (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108134937B (en) * 2017-12-21 2021-07-13 西北工业大学 Compressed domain significance detection method based on HEVC
CN108320294B (en) * 2018-01-29 2021-11-05 袁非牛 Intelligent full-automatic portrait background replacement method for second-generation identity card photos
CN108596913A * 2018-03-28 2018-09-28 众安信息技术服务有限公司 A kind of image matting method and device
CN108460383B (en) * 2018-04-11 2021-10-01 四川大学 Image significance refinement method based on neural network and image segmentation
CN109493363B * 2018-09-11 2019-09-27 北京达佳互联信息技术有限公司 A kind of image matting method, apparatus and image processing equipment based on geodesic distance
CN109785329B (en) * 2018-10-29 2023-05-26 重庆师范大学 Purple soil image segmentation and extraction method based on improved SLIC algorithm
CN109461158A (en) * 2018-11-19 2019-03-12 第四范式(北京)技术有限公司 Color image segmentation method and system
CN111383232B (en) * 2018-12-29 2024-01-23 Tcl科技集团股份有限公司 Matting method, matting device, terminal equipment and computer readable storage medium
CN111435282A (en) * 2019-01-14 2020-07-21 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN109540925B (en) * 2019-01-23 2021-09-03 南昌航空大学 Complex ceramic tile surface defect detection method based on difference method and local variance measurement operator
CN110111342B (en) * 2019-04-30 2021-06-29 贵州民族大学 Optimized selection method and device for matting algorithm
CN110288617B (en) * 2019-07-04 2023-02-03 大连理工大学 Automatic human body slice image segmentation method based on shared matting and ROI gradual change
CN110298861A (en) * 2019-07-04 2019-10-01 大连理工大学 A kind of quick three-dimensional image partition method based on shared sampling
CN110415273B (en) * 2019-07-29 2020-09-01 肇庆学院 Robot efficient motion tracking method and system based on visual saliency
CN110400323B (en) * 2019-07-30 2020-11-24 上海艾麒信息科技股份有限公司 Automatic cutout system, method and device
CN112396610A (en) * 2019-08-12 2021-02-23 阿里巴巴集团控股有限公司 Image processing method, computer equipment and storage medium
CN110503704B (en) * 2019-08-27 2023-07-21 北京迈格威科技有限公司 Method and device for constructing three-dimensional graph and electronic equipment
CN110751654B (en) * 2019-08-30 2022-06-28 稿定(厦门)科技有限公司 Image matting method, medium, equipment and device
CN110751655B (en) * 2019-09-16 2021-04-20 南京工程学院 Automatic cutout method based on semantic segmentation and significance analysis
CN111784726B (en) * 2019-09-25 2024-09-24 北京沃东天骏信息技术有限公司 Portrait matting method and device
CN110956681B (en) * 2019-11-08 2023-06-30 浙江工业大学 Portrait background automatic replacement method combining convolution network and neighborhood similarity
CN111028259B (en) * 2019-11-15 2023-04-28 广州市五宫格信息科技有限责任公司 Foreground extraction method adapted through image saliency improvement
CN113052755A (en) * 2019-12-27 2021-06-29 杭州深绘智能科技有限公司 High-resolution image intelligent matting method based on deep learning
CN111161286B (en) * 2020-01-02 2023-06-20 大连理工大学 Interactive natural image matting method
CN111462027B (en) * 2020-03-12 2023-04-18 中国地质大学(武汉) Multi-focus image fusion method based on multi-scale gradient and matting
CN111563908B (en) * 2020-05-08 2023-04-28 展讯通信(上海)有限公司 Image processing method and related device
CN111862110A (en) * 2020-06-30 2020-10-30 辽宁向日葵教育科技有限公司 Green curtain image matting method, system, equipment and readable storage medium
CN111932447B (en) * 2020-08-04 2024-03-22 中国建设银行股份有限公司 Picture processing method, device, equipment and storage medium
CN111931688A (en) * 2020-08-27 2020-11-13 珠海大横琴科技发展有限公司 Ship recognition method and device, computer equipment and storage medium
CN112183248A (en) * 2020-09-14 2021-01-05 北京大学深圳研究生院 Video salient object detection method based on channel-by-channel space-time characterization learning
CN112149592A (en) * 2020-09-28 2020-12-29 上海万面智能科技有限公司 Image processing method and device and computer equipment
CN112200826B (en) * 2020-10-15 2023-11-28 北京科技大学 Industrial weak defect segmentation method
CN112101370B (en) * 2020-11-11 2021-08-24 广州卓腾科技有限公司 Automatic image matting method for pure-color background image, computer-readable storage medium and equipment
CN112634312B (en) * 2020-12-31 2023-02-24 上海商汤智能科技有限公司 Image background processing method and device, electronic equipment and storage medium
CN112634314A (en) * 2021-01-19 2021-04-09 深圳市英威诺科技有限公司 Target image acquisition method and device, electronic equipment and storage medium
CN112801896B (en) * 2021-01-19 2024-02-09 西安理工大学 Backlight image enhancement method based on foreground extraction
CN113271394A (en) * 2021-04-07 2021-08-17 福建大娱号信息科技股份有限公司 AI intelligent image matting method and terminal without blue-green natural background
CN113487630B (en) * 2021-07-14 2022-03-22 辽宁向日葵教育科技有限公司 Matting method, device, equipment and storage medium based on material analysis technology
CN113902656A (en) * 2021-08-17 2022-01-07 浙江大华技术股份有限公司 Wide dynamic image fusion method, device and computer readable storage medium
CN113870298A (en) * 2021-09-07 2021-12-31 中国人民解放军海军航空大学 Method and device for extracting water surface target shadow, electronic equipment and storage medium
CN114078139B (en) * 2021-11-25 2024-04-16 四川长虹电器股份有限公司 Image post-processing method based on human image segmentation model generation result
CN114677394B (en) * 2022-05-27 2022-09-30 珠海视熙科技有限公司 Matting method, matting device, image pickup apparatus, conference system, electronic apparatus, and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945223B (en) * 2010-09-06 2012-04-04 浙江大学 Video consistent fusion processing method
CN102651135B (en) * 2012-04-10 2015-06-17 电子科技大学 Optimized direction sampling-based natural image matting method
CN104036517B (en) * 2014-07-01 2017-02-15 成都品果科技有限公司 Image matting method based on gradient sampling
KR102115328B1 (en) * 2015-06-15 2020-05-26 한국전자통신연구원 Apparatus for extracting object of interest in image using image matting based on global contrast and method using the same

Also Published As

Publication number Publication date
CN107452010A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN107452010B (en) Automatic cutout algorithm and device
CN113781402B (en) Method and device for detecting scratch defects on chip surface and computer equipment
CN108537239B (en) Method for detecting image saliency target
CN109522908B (en) Image significance detection method based on region label fusion
CN107516319B (en) High-precision simple interactive matting method, storage device and terminal
CN110111338B (en) Visual tracking method based on superpixel space-time saliency segmentation
CN109389129B (en) Image processing method, electronic device and storage medium
CN109035253A A deep learning automatic image matting method guided by semantic segmentation information
CA3021795A1 (en) System and method for detecting plant diseases
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN110738676A GrabCut automatic segmentation algorithm combined with RGBD data
CN109035196B (en) Saliency-based image local blur detection method
TW200834459A (en) Video object segmentation method applied for rainy situations
CN105809716B (en) Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method
CN108596923A (en) Acquisition methods, device and the electronic equipment of three-dimensional data
CN108320294B (en) Intelligent full-automatic portrait background replacement method for second-generation identity card photos
CN109087330A A moving target detection method based on coarse-to-fine image segmentation
CN110147816B (en) Method and device for acquiring color depth image and computer storage medium
CN107886471B (en) Method for removing redundant objects of photo based on super-pixel voting model
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN107622280B (en) Modularized processing mode image saliency detection method based on scene classification
CN106157330A (en) A kind of visual tracking method based on target associating display model
CN113705579A (en) Automatic image annotation method driven by visual saliency
CN110569859A (en) Color feature extraction method for clothing image
CN110910497B (en) Method and system for realizing augmented reality map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20220921
Address after: No. 333, Feiyue East Road, High-tech Industrial Development Zone, Changchun City, Jilin Province, 130012
Patentee after: Changchun Changguang Qiheng Sensing Technology Co.,Ltd.
Address before: 130033, 3888 southeast Lake Road, Jilin, Changchun
Patentee before: CHANGCHUN INSTITUTE OF OPTICS, FINE MECHANICS AND PHYSICS, CHINESE ACADEMY OF SCIENCE