CN107516319B - High-precision simple interactive matting method, storage device and terminal

Info

Publication number: CN107516319B (granted); application number CN201710791442.6A
Other versions: CN107516319A (application publication); document language: Chinese (zh)
Authority: CN (China)
Prior art keywords: sample point, foreground, region, background, pixel
Legal status: Active
Inventors: 孙鹏, 王灿进, 邱东海
Original and current assignee: North University of China

Classifications

    • G06T7/194: Image analysis; segmentation; edge detection involving foreground-background segmentation (G: Physics; G06: Computing; G06T: Image data processing or generation, in general)
    • G06T2207/20092: Interactive image processing based on input by user (under G06T2207/20, special algorithmic details in the indexing scheme for image analysis or enhancement)

Abstract

The invention discloses a high-precision, simple interactive matting method, a storage device and a terminal. The method comprises the following steps: inputting an original image to be matted; outputting prompt information according to the use environment; acquiring a user operation command and determining an operation mode; computing a trimap; acquiring foreground and background sample point sets for each pixel in the unknown region; extracting sample point pairs, estimating the opacity of each unknown-region pixel, and taking the opacity with the highest confidence as the matting opacity; smoothing over the neighborhood of each unknown-region pixel to obtain the smoothed matting opacity; and extracting the foreground target from the original image according to the smoothed matting opacity and the color values of the sample point pairs. The method is simple to operate, achieves high matting accuracy, and is suitable for the field of image processing.

Description

High-precision simple interactive matting method, storage device and terminal
Technical Field
The invention relates to the field of image processing, in particular to a high-precision simple interactive matting method, a storage device and a terminal.
Background
Matting refers to the process of accurately separating a foreground region of interest from the background of an image or video, and is widely applied in fields such as video editing, virtual reality and movie production. After matting, the foreground target can be freely combined with various backgrounds through simple fusion processing, greatly reducing the workload of scene arrangement.
The natural image matting theory considers that each pixel point in an image can be represented by formula (9):
I=αF+(1-α)B (9)
where I represents the color value in the actual image, F represents the foreground color value, B represents the background color value, and α is called the foreground opacity.
The input image to be matted can be divided into three parts: a foreground region, a background region and an unknown region, wherein the opacity α of the foreground region equals 1 and the opacity α of the background region equals 0.
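As a concrete illustration of formula (9), the following sketch composites a single pixel from made-up color values; everything here (the colors, α and the RGB space) is hypothetical and serves only to show the relation that matting later inverts:

```python
import numpy as np

# Hypothetical single-pixel example of I = alpha*F + (1 - alpha)*B (formula (9)).
F = np.array([200.0, 30.0, 30.0])    # assumed foreground color (reddish)
B = np.array([20.0, 20.0, 220.0])    # assumed background color (bluish)
alpha = 0.7                          # assumed foreground opacity

I = alpha * F + (1.0 - alpha) * B    # observed mixed color
print(I)                             # [146.  27.  87.]
```

Matting runs this relation in reverse: given I and candidate values of F and B, it recovers α.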
Many matting algorithms and matting tools exist at present, but all fall short of being both simple and practical. Traditional blue-screen matting is simple and easy to implement, but requires the target to be placed in front of a blue background, limiting the shooting scene and range of use. Knockout matting suits regions with smooth color transitions, but handles local detail poorly. Bayesian and Poisson matting suit cases where the foreground and background differ strongly, but degrade when the foreground color distribution is complex. Robust matting has better stability, but often lacks robustness at non-connected edges. In addition, the above algorithms generally suffer from high computational complexity and are difficult to apply in real time.
Existing matting software such as Photoshop is complex to operate, requires certain professional knowledge, and is time-consuming and labor-intensive; the foregrounds extracted by tools such as TouchRetouch and twitch often have edges that are not fine enough and carry many burrs.
Disclosure of Invention
In view of the defects in the related art, the technical problem to be solved by the invention is to provide a matting method, a storage device and a terminal that are simple to operate and achieve high matting accuracy.
In order to solve the above technical problems, the technical solution adopted by the invention is as follows. A high-precision simple interactive matting method comprises the following steps: S101, inputting an original image to be matted; S102, outputting prompt information to the user according to the use environment; S103, acquiring different user operation commands and determining the corresponding operation mode; S104, computing a trimap according to the determined operation mode; S105, performing gradient calculation on each pixel of the unknown region in the trimap and acquiring the foreground sample point set and the background sample point set of the current unknown-region pixel according to the gradient direction; S106, extracting one sample point from each of the two sample point sets to form a sample point pair, estimating different opacities of the unknown-region pixel with different sample point pairs, and taking the opacity with the highest confidence as the matting opacity; S107, smoothing over the neighborhood of the current unknown-region pixel according to the matting opacity to obtain the smoothed matting opacity of that pixel; and S108, extracting the foreground target from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the corresponding sample point pairs.
Preferably, after the foreground object is extracted from the original image, the method further includes: s1091, directly storing and outputting the extracted foreground target; or, S1092, inputting a new background image, synthesizing the extracted foreground object with the new background image, and storing and outputting the synthesized image.
Preferably, the operation mode includes a background difference mode and a manual marking mode, and computing the trimap according to the determined operation mode specifically comprises:
If the operation mode is the background difference mode, a foreground image I_d is obtained through the difference formula I_d = |I - I_bg|, wherein I is the image to be matted, containing both the foreground target and the background, and I_bg is a background image containing no foreground target. An opening operation is performed on the foreground image I_d to obtain a corrected foreground image I_do. An erosion operation with a structuring element of size r_e is performed on I_do to obtain the foreground region F_g of the trimap, and a dilation operation with a structuring element of size r_d is performed on I_do to obtain the background region B_g of the trimap; the region between the foreground region and the background region is the unknown region. This yields the trimap I_t of the image I to be matted in the background difference mode.
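A minimal sketch of this branch using OpenCV follows; the binarization threshold, kernel shapes and the 255/128/0 trimap coding are assumptions rather than values fixed by the text:

```python
import cv2
import numpy as np

def trimap_background_difference(img, bg, thresh=30, r_e=5, r_d=5):
    """Background-difference branch: difference, opening, then erosion
    (definite foreground F_g) and dilation (complement = background B_g)."""
    diff = cv2.absdiff(img, bg)                              # I_d = |I - I_bg|
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # opening connects local regions and removes small noise -> I_do
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    fg = cv2.erode(mask, np.ones((r_e, r_e), np.uint8))      # foreground F_g
    grown = cv2.dilate(mask, np.ones((r_d, r_d), np.uint8))
    trimap = np.full(mask.shape, 128, np.uint8)              # unknown band
    trimap[fg == 255] = 255
    trimap[grown == 0] = 0                                   # background B_g
    return trimap
```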
If the operation mode is the manual marking mode, an operation command of the user is first acquired, and one point P_fg and one point P_bg are selected in the foreground region and the background region of the image I to be matted, respectively, as initial growing points. The following region growing rule, formula (1), is used to diffuse into the neighborhood:

g_ne(j) = 1, if |I_i - I_j| < T_v and |Gra_i - Gra_j| < T_g; otherwise g_ne(j) = 0    (1)

wherein i is the current pixel, j is a neighborhood pixel of i, I_i and I_j represent the color values at points i and j, Gra_i and Gra_j represent the gradient values at points i and j, and T_v and T_g are the color threshold and gradient threshold for region growing, respectively; g_ne(j) = 1 indicates that pixel j is in the same region as pixel i, and g_ne(j) = 0 indicates that pixel j and pixel i are in different regions. After region growing finishes, a rough foreground/background map I_grow is obtained. An erosion operation with a structuring element of size r_e is then performed on I_grow to obtain the foreground region F_g of the trimap, and a dilation operation with a structuring element of size r_d is performed on I_grow to obtain the background region B_g; the region between the foreground region and the background region is the unknown region, yielding the trimap I_t of the image I to be matted in the manual marking mode.
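The growing rule of formula (1) can be implemented as a breadth-first flood over the 4-neighbourhood; the sketch below is one reading of it, with the seed point and the thresholds T_v, T_g supplied by the caller:

```python
import numpy as np
from collections import deque

def region_grow(color, grad, seed, t_v, t_g):
    """Grow a region from `seed`: a neighbour j joins pixel i's region when
    |I_i - I_j| < T_v and |Gra_i - Gra_j| < T_g, i.e. g_ne(j) = 1."""
    color = color.astype(np.float32)          # avoid uint8 wrap-around
    h, w = grad.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if (np.linalg.norm(color[ny, nx] - color[y, x]) < t_v
                        and abs(grad[ny, nx] - grad[y, x]) < t_g):
                    mask[ny, nx] = True       # same region as pixel i
                    queue.append((ny, nx))
    return mask
```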
Preferably, performing gradient calculation on each pixel of the unknown region in the trimap and obtaining the foreground sample point set and the background sample point set of the current unknown-region pixel according to the gradient direction specifically comprises:

For each pixel i(x_i, y_i) of the unknown region, its gradient value Gra_i is calculated, and the gradient direction is denoted θ, calculated by formula (2):

θ = arctan(Gra_y(i) / Gra_x(i))    (2)

where Gra_x(i) and Gra_y(i) are the horizontal and vertical gradient components at pixel i.

A straight line is made along the θ direction, and the first intersection points of this line with the foreground region and the background region are taken as the first sample point of the foreground sample point set and the first sample point of the background sample point set, respectively. Around each of the two first sample points, several points that are closest to it in spatial distance and whose pixel values differ from it by more than T_v/2 are then taken, generating the foreground sample point set and the background sample point set, respectively, wherein T_v is the color threshold used in region growing.
Preferably, around the first sample point of the foreground sample point set and the first sample point of the background sample point set, 4 points that are closest to the respective first sample point in spatial distance and whose pixel values differ from it by more than T_v/2 are taken, so that the foreground sample point set and the background sample point set each contain 5 sample points.
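The two steps (casting a ray along θ to find the first definite pixels, then collecting the 4 nearest sufficiently distinct neighbours) might look like the sketch below; the 255/128/0 trimap coding, the step limit, and reading the T_v/2 test as a difference relative to the first sample point are all assumptions:

```python
import numpy as np

def first_hits(trimap, y0, x0, theta, max_steps=500):
    """Walk from the unknown pixel (y0, x0) along the gradient direction and
    its opposite; record the first definite foreground/background pixel hit."""
    h, w = trimap.shape
    dy, dx = np.sin(theta), np.cos(theta)
    fg_pt = bg_pt = None
    for sign in (1, -1):
        for s in range(1, max_steps):
            y = int(round(y0 + sign * s * dy))
            x = int(round(x0 + sign * s * dx))
            if not (0 <= y < h and 0 <= x < w):
                break
            if trimap[y, x] == 255:
                fg_pt = fg_pt or (y, x)       # first foreground hit
                break
            if trimap[y, x] == 0:
                bg_pt = bg_pt or (y, x)       # first background hit
                break
    return fg_pt, bg_pt                       # either may be None (degenerate)

def gather_samples(first_pt, region_pts, color, t_v, k=4):
    """Around `first_pt`, keep the k nearest points of the same region whose
    color differs from it by more than T_v / 2; region_pts is an (N, 2) array
    of that region's pixel coordinates."""
    base = color[first_pt].astype(np.float32)
    dist = np.linalg.norm(region_pts - np.asarray(first_pt), axis=1)
    samples = [tuple(first_pt)]
    for idx in np.argsort(dist):              # nearest first
        y, x = map(int, region_pts[idx])
        if (y, x) == tuple(first_pt):
            continue
        if np.linalg.norm(color[y, x].astype(np.float32) - base) > t_v / 2:
            samples.append((y, x))
            if len(samples) == k + 1:         # 5 per set -> 25 pairs overall
                break
    return samples
```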
Preferably, extracting one sample point from each of the two sample point sets to form a sample point pair, estimating different opacities of the unknown-region pixel with different sample point pairs, and taking the opacity with the highest confidence as the matting opacity specifically comprises:

One sample point is extracted from the foreground sample point set and one from the background sample point set to form a sample point pair; a straight line through the pair is drawn in color space, each pixel i of the unknown region is projected onto this line, and its opacity is estimated by formula (3):

α_i = ((I_i - B_i^(n)) · (F_i^(m) - B_i^(n))) / ||F_i^(m) - B_i^(n)||²    (3)

wherein F_i^(m) and B_i^(n) represent the color values of the m-th foreground sample point and the n-th background sample point, respectively, and I_i is the color value of pixel i.

For the multiple opacity estimates thus obtained for each unknown-region pixel, the opacity with the highest confidence is selected as the matting opacity of that pixel, and the sample point pair corresponding to it is used as the final matting sample point pair.
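A sketch of the projection estimate of formula (3) follows; clipping the result to [0, 1] and the guard against a degenerate pair (F ≈ B) are added assumptions:

```python
import numpy as np

def alpha_estimate(I, F, B):
    """Project the pixel color I onto the color-space line through the sample
    pair (F, B); the normalized position along that line is the opacity."""
    I, F, B = (np.asarray(v, dtype=np.float32) for v in (I, F, B))
    denom = float(np.dot(F - B, F - B))
    if denom < 1e-6:                  # degenerate pair: F and B coincide
        return 0.5
    return float(np.clip(np.dot(I - B, F - B) / denom, 0.0, 1.0))
```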
Preferably, the calculation of the confidence specifically includes:
Sample point pairs are taken pair by pair from the foreground and background sample point sets, and the error of each pair relative to the linear model of formula (3), i.e. the linear similarity, is calculated by formula (4):

ε_i^(m,n) = ||I_i - (α_i F_i^(m) + (1 - α_i) B_i^(n))|| / ||F_i^(m) - B_i^(n)||    (4)
The foreground color similarity (5) and the background color similarity (6) are calculated from the color value of the unknown-region pixel i and the sampled pair:

D_F^(m) = ||F_i^(m) - I_i||    (5)

D_B^(n) = ||B_i^(n) - I_i||    (6)

From the obtained linear similarity and color similarities, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample point pair is calculated by formula (7):

c_i^(m,n) = exp(-σ1 (ε_i^(m,n))² - σ2 D_F^(m) D_B^(n))    (7)
wherein σ1 and σ2 are used to adjust the weights between the different similarities.
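Under the reconstructed forms of formulas (4) to (7) above, scoring all 25 pairs and keeping the best estimate might look like this sketch (it reuses alpha_estimate from the previous sketch; the σ defaults are placeholders):

```python
import numpy as np

def pair_confidence(I, F, B, sigma1, sigma2):
    """Score one sample pair: a small linear-fit error (4) and sample colors
    close to the observed pixel (5), (6) give a confidence near 1 in (7)."""
    I, F, B = (np.asarray(v, dtype=np.float32) for v in (I, F, B))
    a = alpha_estimate(I, F, B)                        # formula (3)
    eps = np.linalg.norm(I - (a * F + (1 - a) * B)) \
          / (np.linalg.norm(F - B) + 1e-6)             # linear similarity (4)
    d_f = np.linalg.norm(F - I)                        # foreground term (5)
    d_b = np.linalg.norm(B - I)                        # background term (6)
    return float(np.exp(-sigma1 * eps ** 2 - sigma2 * d_f * d_b)), a

def best_alpha(I, fg_colors, bg_colors, sigma1=1.0, sigma2=0.01):
    """Evaluate the 5 x 5 = 25 pairs (color values of the sampled points)
    and keep the most confident opacity estimate and its pair."""
    scored = [(pair_confidence(I, F, B, sigma1, sigma2), F, B)
              for F in fg_colors for B in bg_colors]
    (conf, alpha), F, B = max(scored, key=lambda t: t[0][0])
    return alpha, (F, B)
```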
Preferably, when the smoothing operation is performed on the neighborhood of the current unknown-region pixel, the smoothed opacity is calculated by formula (8):

α_i = Σ_{j∈N(i)} w_ij α_j / Σ_{j∈N(i)} w_ij    (8)

wherein N(i) is the neighborhood of pixel i and the weight w_ij is:

w_ij = exp(-σ_p ||P_i - P_j||² - σ_c ||I_i - I_j||² - σ_g ||Gra_i - Gra_j||²)

P_i and P_j represent the coordinates of points i and j, I_i and I_j represent the colors at points i and j, Gra_i and Gra_j denote the gradients at points i and j, and σ_p, σ_c and σ_g are used to adjust the weights among the three terms.
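A per-pixel sketch of formula (8) follows, with the neighbourhood passed in explicitly; treating σ_p, σ_c and σ_g as multiplicative weights inside a single exponential is the reading adopted above and remains an assumption:

```python
import numpy as np

def smooth_alpha_at(i, neighbours, alpha, color, grad, s_p, s_c, s_g):
    """Weighted mean of neighbourhood opacities; the weight w_ij decays with
    the spatial, color and gradient differences between pixels i and j.
    `color` and `grad` are float arrays, `alpha` the per-pixel opacity map."""
    yi, xi = i
    num = den = 0.0
    for yj, xj in neighbours:
        d_p = float((yi - yj) ** 2 + (xi - xj) ** 2)            # ||P_i - P_j||^2
        d_c = float(np.sum((color[yi, xi] - color[yj, xj]) ** 2))
        d_g = float((grad[yi, xi] - grad[yj, xj]) ** 2)
        w = float(np.exp(-s_p * d_p - s_c * d_c - s_g * d_g))   # w_ij
        num += w * alpha[yj, xj]
        den += w
    return num / den if den > 0 else alpha[yi, xi]
```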
Accordingly, the invention also provides a storage device storing a plurality of instructions, the instructions being adapted to be loaded by a processor and to perform the high-precision simple interactive matting method described above.
Correspondingly, the invention also provides a terminal, comprising: a processor adapted to execute instructions; and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor and to perform the high-precision simple interactive matting method described above.
The invention has the following beneficial technical effects:
1. The method is simple and convenient to operate and prompts the user to perform simple interactive operations according to the use environment; after the final matting opacity is obtained, a smoothing operation is also performed, so the method achieves higher matting accuracy.
2. The final matting result can be stored and output on its own as post-production material, can be fed into the image processing module of this or other software, or can be synthesized with an input new background image before output, improving the practicality and versatility of the invention.
3. The operation modes of the invention include a background difference mode and a manual marking mode, which fully accounts for differences in the use environment; quick matting of complex scenes can be achieved with only a small amount of manual intervention. In either mode, morphological operations are applied to the coarse trimap when the trimap is computed, yielding a more accurate refined trimap and further improving matting precision.
4. When the foreground and background sample point sets are selected, a gradient is computed for each pixel of the unknown region and a straight line is made along the gradient direction; the first intersection points of this line with the foreground region and the background region are taken as the first sample points of the foreground and background sample point sets, respectively. Sample point pairs obtained this way have a higher probability of lying in different texture regions of the image, that is, in the true foreground region and the true background region. Then, around each of the two first sample points, several points that are closest to it in spatial distance and whose pixel values differ from it by more than T_v/2 are selected, finally generating the foreground and background sample point sets. This guarantees that the sample points have both a degree of distinctiveness and spatial similarity, i.e. that the true sample point pair is contained with higher probability, further improving matting accuracy.
5. The sampling strategy sets the number of additional sample points around each of the two first sample points to 4, so that the number of sample point pairs finally obtained is 25. Limiting the number of sample point pairs shortens the matting time and reduces the complexity of the subsequent matting computation, which not only further improves matting accuracy but also improves the user experience.
6. The smoothing applied to the previously obtained opacity considers the influence of spatial position, color and gradient, so that the closer the spatial positions, the more similar the colors and the closer the textures, the closer the opacities. This is consistent with the subjective perception of human eyes, effectively eliminating opacity outliers and greatly improving matting accuracy.
Drawings
FIG. 1 is a flowchart of a first embodiment of a high-precision simple interactive matting method provided by the present invention;
FIG. 2 is a flowchart of a second embodiment of a high-precision simple interactive matting method provided by the present invention;
FIG. 3 is a flowchart of a first embodiment of a high-precision simple interactive matting device provided by the invention;
FIG. 4 is a flowchart of a second embodiment of a high-precision simple interactive matting device provided by the invention;
FIG. 5 is a flow chart of a third embodiment of a high-precision simple interactive matting device provided by the invention;
FIG. 6 is a flowchart of a fourth embodiment of a high-precision simple interactive matting device provided by the invention;
FIG. 7 is a flow chart of a fifth embodiment of a high-precision simple interactive matting device provided by the present invention;
FIG. 8 is a flowchart of a sixth embodiment of a high-precision simple interactive matting device provided by the invention;
FIG. 9 is a flow chart of a seventh embodiment of a high-precision simple interactive matting device provided by the present invention;
FIG. 10 is a hardware block diagram of a high-precision simple interactive matting device provided by the present invention;
in the figure: 101 is an original image input module, 102 is a prompt information output module, 103 is an operation mode determination module, 104 is a trimap computation module, 105 is a sample point set acquisition module, 106 is a sample point pair screening module, 107 is an opacity smoothing module, 108 is a foreground object matting module, 109 is a first result output module, 110 is a new image input module, 111 is a synthesis module, 112 is a second result output module, 1041 is a coarse trimap acquisition unit, 1042 is a fine trimap acquisition unit, 1051 is a gradient computation unit, 1052 is a sampling unit, 1061 is an opacity estimation unit, 1062 is a sample point pair screening unit, and 1063 is a confidence computation unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention; all other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a first embodiment of a high-precision simple interactive matting method, which comprises the following steps:
S101, inputting an original image to be matted.
And S102, outputting prompt information to the user according to different use environments.
S103, acquiring different user operation commands and determining corresponding operation modes.
And S104, computing a trimap according to the determined operation mode.
And S105, performing gradient calculation on each pixel of the unknown region in the trimap image, and acquiring a foreground sample point set and a background sample point set of the current unknown region pixel according to the gradient direction.
And S106, extracting one sample point from each of the two sample point sets to form a sample point pair, estimating different opacities of the unknown-region pixels with different sample point pairs, and taking the opacity with the highest confidence as the matting opacity.
And S107, according to the opacity for matting, performing smoothing operation on the neighborhood of the pixels of the current unknown region to obtain the opacity for matting after the pixels of the current unknown region are smoothed.
And S108, extracting a foreground object from the original image according to the smooth matting opacity of each pixel of the unknown region and the color value of the sample point pair corresponding to each matting opacity.
In specific implementation, the specific operations for step S101 may be: collecting the color value of each pixel in the original image to form a single-frame image. Then, according to the collected image color values and the different user operation commands, different operation modes are determined and the trimap is computed. For step S108, the specific operations may be: creating a new image of the same size as the original image and setting it to all black; then replacing the color values at the pixel positions corresponding to the foreground region with those of the original image, and calculating the color values at the pixel positions corresponding to the unknown region using formula (9), finally obtaining the matting result.
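A sketch of this extraction step, assuming the 255/128/0 trimap coding used in the earlier sketches and a per-pixel foreground color F taken from each pixel's chosen sample pair (fg_color, an H x W x 3 array, is an assumed lookup):

```python
import numpy as np

def extract_on_black(img, trimap, alpha, fg_color):
    """S108 sketch: all-black canvas, original colors over the definite
    foreground, alpha-weighted foreground color over the unknown band,
    i.e. formula (9) with B = 0."""
    out = np.zeros_like(img, dtype=np.float32)     # new all-black image
    out[trimap == 255] = img[trimap == 255]        # definite foreground
    unknown = trimap == 128
    out[unknown] = alpha[unknown][:, None] * fg_color[unknown]
    return out.astype(img.dtype)
```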
In this example, the foreground target may be any object; it only needs to have an obvious boundary distinguishing it from the background. The embodiment of the present invention does not limit the specific form of the foreground image.
The method is simple and convenient to operate, and can prompt a user to perform simple interactive operation according to different use environments; after the opacity for final matting is obtained, smooth operation is also performed, so that the method has higher matting accuracy.
Further, the operation mode may include a background difference mode and a manual marking mode.
In specific implementation, if the shooting environment satisfies the following conditions: the scene is indoor or another stable environment, the background and foreground being shot are relatively fixed, the foreground is easy to move, and a second shot is easy to take, the background difference mode is selected. In this operation mode, the camera is placed on a relatively stable base; a background image I_bg containing no foreground target is first shot and stored, then, keeping the camera still, the foreground target is placed in the field of view and an image I containing both the foreground target and the background is shot. If the shooting environment satisfies the following conditions: the scene is outdoor or another relatively unstable environment, the shot target moves relative to the background, the target cannot be moved out of the field of view, and a second shot is difficult, the manual marking mode is selected. In this operation mode, the camera need not be fixed; only one picture I containing the foreground target and the background needs to be taken.
Further, computing the trimap according to the determined operation mode may specifically include:
If the operation mode is the background difference mode, a foreground image I_d is obtained through the difference formula I_d = |I - I_bg|, wherein I is the image to be matted, containing both the foreground target and the background, and I_bg is a background image containing no foreground target. An opening operation is performed on I_d to obtain a corrected foreground image I_do; an erosion operation with a structuring element of size r_e is performed on I_do to obtain the foreground region F_g of the trimap, and a dilation operation with a structuring element of size r_d is performed on I_do to obtain the background region B_g; the region between the foreground region and the background region is the unknown region, yielding the trimap I_t of the image I to be matted in the background difference mode.

If the operation mode is the manual marking mode, an operation command of the user is first acquired, and one point P_fg and one point P_bg are selected in the foreground region and the background region of the image I to be matted, respectively, as initial growing points. Region growing according to formula (1) is used to diffuse into the neighborhood, wherein i is the current pixel, j is a neighborhood pixel of i, I_i and I_j represent the color values at points i and j, Gra_i and Gra_j represent the gradient values at points i and j, and T_v and T_g are the color and gradient thresholds for region growing; g_ne(j) = 1 indicates that pixel j is in the same region as pixel i, and g_ne(j) = 0 indicates that they are in different regions. After region growing finishes, a rough foreground/background map I_grow is obtained; an erosion operation with a structuring element of size r_e is then performed on I_grow to obtain the foreground region F_g of the trimap, and a dilation operation with a structuring element of size r_d is performed on I_grow to obtain the background region B_g; the region between them is the unknown region, yielding the trimap I_t of the image I to be matted in the manual marking mode.
In specific implementation, if the user selects the background difference mode, the user is prompted to fix the camera on the base, shoot and store a background image I_bg without the foreground target, then, keeping the camera still, place the foreground target in the field of view and shoot an image I containing both the foreground target and the background. For this operation mode, the foreground target can be extracted from the difference of the two images, i.e. I_d = |I - I_bg|. The difference image may contain some shadow and noise, so the white region obtained is not strictly the foreground target region. To obtain a more accurate trimap, an opening operation is first performed on I_d to connect locally discontinuous regions and remove holes, obtaining a corrected foreground image I_do, on which the series of morphological operations described above is then performed. If the user selects the manual marking mode, the user is prompted to shoot a picture I containing the foreground target and the background, and then to select initial growing points P_fg and P_bg in the foreground region and the background region, respectively.
The difference of the use environment is fully considered, and the rapid cutout of the complex scene can be realized only by a small amount of manual intervention; regardless of the background difference mode or the manual marking mode, in the embodiment, when the trisection image is calculated, morphological operation is performed on the obtained coarse trisection image, so that a more accurate fine trisection image is obtained, and the matting precision is further improved.
It should be noted that, for morphological operations such as opening operation, erosion, dilation and the like in different working modes, the shape and size of the morphological operator should be selected according to the image content, such as the size and shape of the foreground object, and may be a rectangle by default. In the morphological operation, the gray value of the foreground region is assumed to be 1, the gray value of the background region is assumed to be 0, so that the erosion can reduce the foreground region, the expansion can reduce the background region, and finally, the subtraction can obtain the unknown region containing the foreground boundary.
Further, performing gradient calculation on each pixel of the unknown region in the trimap and obtaining the foreground and background sample point sets of the current unknown-region pixel according to the gradient direction may specifically include:

For each pixel i(x_i, y_i) of the unknown region, its gradient value Gra_i is calculated and its gradient direction θ is obtained by formula (2). A straight line is made along the θ direction, and the first intersection points of this line with the foreground region and the background region are taken as the first sample point of the foreground sample point set and the first sample point of the background sample point set, respectively. Around each of the two first sample points, several points that are closest to it in spatial distance and whose pixel values differ from it by more than T_v/2 are taken, generating the foreground sample point set and the background sample point set, respectively, wherein T_v is the color threshold used in region growing.
In this embodiment, the first sample point pair is searched for on the straight line in the gradient direction, and the remaining sample points of each set are then searched for around these first points. The search algorithm is not particularly limited in this embodiment (e.g., KNN or a search tree may be used), as long as the above conditions are satisfied.
Further, before the gradient is calculated, the original color image needs to be converted into a gray image. Taking RGB-to-gray conversion as an example: Y = 0.299R + 0.587G + 0.114B, where R, G and B are the red, green and blue components of the original image and Y is the converted gray value. Of course, the original image may also be in any format such as YUV; the embodiment of the present invention does not limit the output image format of the camera.
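For reference, a minimal sketch of this conversion (channel order assumed to be RGB):

```python
def to_gray(rgb):
    """Y = 0.299 R + 0.587 G + 0.114 B, applied per pixel."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```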
In this embodiment, when the foreground and background sample point sets are selected, a gradient is computed for each pixel of the unknown region and a straight line is made along the gradient direction; the first intersection points of this line with the foreground region and the background region are taken as the first sample points of the foreground and background sample point sets. Sample point pairs obtained this way have a higher probability of lying in different texture regions of the image, namely in the true foreground region and the true background region. Then, around each of the two first sample points, several points that are closest to it in spatial distance and whose pixel values differ from it by more than T_v/2 are selected, finally generating the foreground and background sample point sets. This guarantees that the sample points have a degree of distinctiveness and spatial similarity, i.e. that the true sample point pair is contained with higher probability, further improving matting accuracy.

Still further, around the first sample point of the foreground sample point set and the first sample point of the background sample point set, 4 points that are closest to the respective first sample point in spatial distance and whose pixel values differ from it by more than T_v/2 may be selected, so that the foreground and background sample point sets each contain 5 sample points and the number of sample point pairs finally obtained is 25. This sampling strategy limits the number of sample point pairs, shortens the matting time and reduces the complexity of the subsequent matting computation, further improving matting accuracy while also improving the user experience.
Further, extracting one sample point from each of the two sample point sets to form a sample point pair, estimating different opacities of the unknown-region pixel with different sample point pairs, and taking the opacity with the highest confidence as the matting opacity may specifically include:

One sample point is extracted from the foreground sample point set and one from the background sample point set to form a sample point pair; a straight line through the pair is drawn in color space, each pixel i of the unknown region is projected onto this line, and its opacity is estimated by formula (3), wherein F_i^(m) and B_i^(n) represent the color values of the m-th foreground sample point and the n-th background sample point, respectively.

For the multiple opacity estimates obtained for each unknown-region pixel, the opacity with the highest confidence is selected as the matting opacity of that pixel, and the sample point pair corresponding to it is used as the final matting sample point pair.
In this embodiment, 25 different opacity estimates are obtained in total for each unknown-region pixel i, and the one with the highest confidence must be selected for matting out the foreground target. The criteria for evaluating the optimal sample point pair are: the error relative to the linear model of formula (3) is smallest, and the color difference from the current pixel is smallest. The calculation of the confidence may therefore specifically include:
Sample point pairs are taken pair by pair from the foreground and background sample point sets, and the error of each pair relative to the linear model of formula (3), i.e. the linear similarity, is calculated by formula (4). The foreground color similarity and the background color similarity are calculated from the color value of the unknown-region pixel i and the sampled pair by formulas (5) and (6). From the obtained linear similarity and color similarities, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample point pair is calculated by formula (7), wherein σ1 and σ2 are used to adjust the weights between the different similarities.
After the opacity is calculated point by point for the unknown region, it needs to be locally smoothed, because the confidence of every pixel cannot be guaranteed to meet the requirement and noise points may occur during sampling, leaving local opacity regions insufficiently smooth and causing color differences in the final matte. The factors to consider in smoothing are color value difference, spatial position difference and gradient difference: the larger the local color difference, the farther the spatial position and the larger the texture difference, the smaller the weight.
Therefore, to balance the influence of the spatial domain, the color value domain and the texture value domain, when the neighborhood of the current unknown-region pixel is smoothed, the smoothed opacity may be calculated by formula (8), with the weight w_ij defined as above, wherein P_i and P_j represent the coordinates of points i and j, I_i and I_j the colors at points i and j, Gra_i and Gra_j the gradients at points i and j, and σ_p, σ_c and σ_g adjust the weights among the three terms.
In specific implementation, the smoothing operation comprehensively considers color value difference, spatial position difference and gradient difference, and finally computes the weighting coefficient of the neighborhood opacities; changing σ_p, σ_c and σ_g adjusts the weights of the three terms in the weighting coefficient. For example, if texture information is emphasized, the gradient difference should carry more weight, i.e. σ_g should be greater than σ_p and σ_c.
The smoothing of the previously obtained opacity adopted in this embodiment considers the influence of spatial position, color and gradient, so that the closer the spatial positions, the more similar the colors and the closer the textures, the closer the opacities. This is consistent with the subjective perception of human eyes, effectively eliminating opacity outliers and greatly improving matting accuracy.
The invention further provides a second embodiment of the high-precision simple interactive matting method, as shown in fig. 2, on the basis of the first embodiment, after extracting a foreground object from an original image, the high-precision simple interactive matting method may further include:
s1091, directly storing and outputting the extracted foreground target; alternatively, the first and second electrodes may be,
and S1092, inputting a new background image, synthesizing the extracted foreground target and the new background image, and storing and outputting the synthesized image.
The final cutout result obtained in the embodiment can be independently stored and output as a post-production material, can be input by an image processing module of self-contained or other software, and can be output after being synthesized with an input new background image, so that the practicability and the versatility of the invention are improved. The image processing module mentioned above may be an image synthesis module, and the other software may be Photoshop and the like software.
In specific implementation, the synthesis algorithm involved in the synthesis process may adopt a compositing method based on the opacity α: given the extracted foreground target and the smoothed opacity α, a new image is synthesized with the provided new background image. The specific operation is to composite pixel by pixel, using the calculated opacity α together with the known foreground target region and color F and the background color B, i.e. I = αF + (1 - α)B, where I represents the color value in the synthesized image.
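A vectorized sketch of this compositing, where alpha is the smoothed H x W opacity map and fg the extracted foreground image (both assumed to come from the earlier steps):

```python
import numpy as np

def composite(fg, new_bg, alpha):
    """Pixel-wise I = alpha*F + (1 - alpha)*B over whole images."""
    a = alpha[..., None].astype(np.float32)    # H x W x 1 for broadcasting
    out = a * fg.astype(np.float32) + (1.0 - a) * new_bg.astype(np.float32)
    return out.astype(fg.dtype)
```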
The invention also provides a storage device, wherein a plurality of instructions are stored in the storage device, and the instructions are suitable for being loaded by a processor and executing the high-precision simple interactive matting method.
The storage device may be a computer-readable storage medium, and may include: ROM, RAM, magnetic or optical disks, and the like.
The present invention also provides a terminal, including: a processor and a storage device, the processor being adapted to execute instructions and the storage device being adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor and to perform the high-precision simple interactive matting method.
The terminal can be any matting device with simple interaction function, and the device can be various terminal devices, such as: personal computers, mobile phones, tablet computers, and the like, which may be implemented by software and/or hardware.
The invention also provides a first embodiment of a matting device capable of implementing the above matting method. As shown in fig. 3, a high-precision simple interactive matting device comprises:
original image input module 101: for inputting an original image to be scratched.
The prompt information output module 102: and the prompt information is output to the user according to different use environments.
The operation mode determination module 103: the method is used for acquiring different user operation commands and determining corresponding operation modes.
The trimap computation module 104: used for computing a trimap according to the determined operation mode.
The sample point set acquisition module 105: the gradient calculation method is used for performing gradient calculation on each pixel of the unknown region in the trimap image, and obtaining a foreground sample point set and a background sample point set of the current unknown region pixel according to the gradient direction.
Sample point pair screening module 106: and the method is used for respectively extracting a sample point from the two sample point sets to form a sample point pair, estimating different opacities of pixels in the unknown region by adopting different sample point pairs, and taking the opacities with the highest confidence level as the opacities for matting.
Opacity smoothing module 107: and the method is used for carrying out smoothing operation on the neighborhood of the pixel of the current unknown region according to the opacity for matting to obtain the opacity for matting after the pixel of the current unknown region is smoothed.
The foreground object extraction module 108: and extracting a foreground object from the original image according to the matte opacity of each pixel of the unknown region after smoothing and the color value of the sample point pair corresponding to each matte opacity.
In specific implementation, when the smoothing operation is performed on the neighborhood of the current unknown-region pixel, the opacity may be smoothed using formula (8) with the weight w_ij defined above, wherein P_i and P_j represent the coordinates of points i and j, I_i and I_j the colors at points i and j, Gra_i and Gra_j the gradients at points i and j, and σ_p, σ_c and σ_g adjust the weights among the three terms.
The invention also provides a second embodiment of a high-precision simple interactive matting device, as shown in fig. 4, on the basis of the first embodiment, the matting device may further include:
the first result output module 109: and the foreground object extracting module is used for directly outputting the extracted foreground object.
The invention also provides a third embodiment of a high-precision simple interactive matting device, as shown in fig. 5, on the basis of the first embodiment, the matting device may further include:
the new image input block 110: for inputting a new background image.
A synthesis module 111: and the foreground object and the new background image are synthesized.
The second result output module 112: for outputting the composite image.
The present invention further provides a fourth embodiment of a high-precision simple interactive matting device, which can be substantially based on the second embodiment or the third embodiment, and the device structure of the present embodiment is similar regardless of the second embodiment or the third embodiment, so that only the fourth embodiment based on the second embodiment is shown here for saving space. As shown in fig. 6, on the basis of the second embodiment, the operation modes include: a background difference mode and a manual marking mode; the trimap computation module 104 may specifically include:
Coarse trimap acquisition unit 1041: used, when the operation mode is the background difference mode, for obtaining a foreground image I_d through the difference formula I_d = |I - I_bg|, wherein I is the image to be matted, containing both the foreground target and the background, and I_bg is a background image containing no foreground target; and, when the operation mode is the manual marking mode, for acquiring an operation command of the user, selecting one point P_fg and one point P_bg in the foreground region and the background region of the image I to be matted, respectively, as initial growing points, and diffusing into the neighborhood by region growing according to formula (1), wherein i is the current pixel, j is a neighborhood pixel of i, I_i and I_j represent the color values at points i and j, Gra_i and Gra_j the gradient values at points i and j, and T_v and T_g the color and gradient thresholds for region growing; g_ne(j) = 1 indicates that pixel j is in the same region as pixel i, and g_ne(j) = 0 indicates that they are in different regions; after region growing finishes, a rough foreground/background map I_grow is obtained.

Fine trimap acquisition unit 1042: used, when the operation mode is the background difference mode, for performing an opening operation on the foreground image I_d to obtain a corrected foreground image I_do, performing an erosion operation with a structuring element of size r_e on I_do to obtain the foreground region F_g of the trimap, and performing a dilation operation with a structuring element of size r_d on I_do to obtain the background region B_g, the region between them being the unknown region, thereby obtaining the trimap I_t of the image I to be matted in the background difference mode; and, when the operation mode is the manual marking mode, for performing an erosion operation with a structuring element of size r_e on the rough map I_grow to obtain the foreground region F_g of the trimap and a dilation operation with a structuring element of size r_d on I_grow to obtain the background region B_g, the region between them being the unknown region, thereby obtaining the trimap I_t of the image I to be matted in the manual marking mode.
The present invention further provides a fifth embodiment of a high-precision simple interactive matting device, which can be substantially based on the second embodiment or the third embodiment, and the device structure of the present embodiment is similar regardless of the second embodiment or the third embodiment, so that only the fifth embodiment based on the second embodiment is shown here for saving space. As shown in fig. 7, on the basis of the second embodiment, the sample point set obtaining module 105 may specifically include:
Gradient calculation unit 1051: used for calculating, for each pixel i(x_i, y_i) of the unknown region, its gradient value Gra_i and its gradient direction θ according to formula (2).

Sampling unit 1052: used for making a straight line along the θ direction, taking the first intersection points of this line with the foreground region and the background region as the first sample point of the foreground sample point set and the first sample point of the background sample point set, respectively, and taking, around each of the two first sample points, several points that are closest to it in spatial distance and whose pixel values differ from it by more than T_v/2, generating the foreground sample point set and the background sample point set, respectively, wherein T_v is the color threshold used in region growing.

Specifically, around the first sample point of the foreground sample point set and the first sample point of the background sample point set, 4 points that are closest to the respective first sample point in spatial distance and whose pixel values differ from it by more than T_v/2 may be taken, so that the foreground sample point set and the background sample point set each contain 5 sample points.
The present invention further provides a sixth embodiment of a high-precision simple interactive matting device, which can be substantially based on the second embodiment or the third embodiment, and the device structure of the present embodiment is similar regardless of the second embodiment or the third embodiment, so that only the sixth embodiment based on the second embodiment is shown here for saving space. As shown in fig. 8, based on the second embodiment, the sample point pair screening module 106 may specifically include:
Opacity estimation unit 1061: used for extracting one sample point from the foreground sample point set and one from the background sample point set to form a sample point pair, drawing a straight line through the pair in color space, projecting each pixel i of the unknown region onto this line, and estimating its opacity according to formula (3), wherein F_i^(m) and B_i^(n) represent the color values of the m-th foreground sample point and the n-th background sample point, respectively.

Sample point pair screening unit 1062: used for selecting the opacity with the highest confidence as the matting opacity of the unknown-region pixel, with the sample point pair corresponding to it used as the final matting sample point pair.
The present invention further provides a seventh embodiment of a high-precision simple interactive matting device, as shown in fig. 9, on the basis of the sixth embodiment, the sample point pair screening module 106 may further specifically include:
Confidence calculation unit 1063: used for taking sample point pairs pair by pair from the foreground and background sample point sets and calculating the error of each pair relative to the linear model of formula (3), i.e. the linear similarity, according to formula (4); for calculating the foreground color similarity and the background color similarity from the color value of the unknown-region pixel i and the sampled pair according to formulas (5) and (6); and for calculating, from the obtained linear similarity and color similarities, the confidence of the opacity estimate of the current unknown-region pixel i corresponding to each sample point pair according to formula (7), wherein σ1 and σ2 are used to adjust the weights between the different similarities.
The invention also provides a hardware structure diagram of the matting device, as shown in fig. 10, a hardware structure of a high-precision simple interactive matting device can substantially include: the image acquisition part, the image processing part and the image output part can clearly see the modules included in the three parts and the connection relationship among the modules from fig. 10, and therefore, the description is omitted here.
For the image acquisition part: the light signal emitted or reflected by the external scene is projected onto the photosensitive array through the lens and converted into an analog electrical signal. After some filtering and enhancement, the analog signal is converted into a digital signal by an A/D module and input into a digital signal processor (DSP). The choice of photosensitive device is not particularly limited; it may be either a CCD or a CMOS sensor. The CCD has the advantages of high sensitivity and low noise, and the drawbacks of a complex production process, high cost and high power consumption; the CMOS sensor has the advantages of high integration, low cost and low power consumption, and the drawbacks of lower sensitivity and higher noise. In general, a photosensitive device with high resolution, sensitivity and signal-to-noise ratio should be preferred for better imaging quality.
For the image processing section: the main image processing work, namely the matting and synthesis operations on the image, is completed inside the digital image processor. The DSP may receive commands from the user interaction interface, such as foreground and background region selection, image storage and other functions. The user interaction interface comprises a touch-enabled LCD screen, keys, etc. After the image processor completes its operations, the image can be stored or output according to the user's instruction, so the image processor should have direct access to the external storage device. The choice of image processor is not particularly limited; it may be an ARM, a DSP or an FPGA, as long as the processing requirements are satisfied.
For the image output section: the processed image signal is encoded in a certain format (such as JPEG or H.264) and output to a computer or a display through an external interface. The external interface can be HDMI, VGA, a network port or the like, provided its bandwidth exceeds the video rate.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method, apparatus, storage device and terminal described above are mutually referenced. In addition, "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent merits of the embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A high-precision simple interactive matting method, characterized in that the method comprises the following steps:
S101, inputting an original image to be matted;
S102, outputting prompt information to the user according to different use environments;
S103, acquiring different user operation commands and determining the corresponding operation mode;
S104, computing a trimap according to the determined operation mode;
S105, performing gradient calculation on each pixel of the unknown region in the trimap, and acquiring a foreground sample point set and a background sample point set of the current unknown-region pixel according to the gradient direction;
S106, extracting one sample point from each of the two sample point sets to form a sample point pair, estimating different opacities of the unknown-region pixel with different sample point pairs, and taking the opacity with the highest confidence as the matting opacity;
S107, performing, according to the matting opacity, a smoothing operation on the neighborhood of the current unknown-region pixel to obtain the smoothed matting opacity of that pixel;
S108, extracting the foreground object from the original image according to the smoothed matting opacity of each unknown-region pixel and the color values of the sample point pair corresponding to each matting opacity;
the gradient calculation is performed on each pixel of the unknown region in the three-segment graph, and a foreground sample point set and a background sample point set of the current pixel of the unknown region are obtained according to the gradient direction, and the method specifically includes the following steps:
for each pixel i (x) of the unknown regioni,yi) Calculating the gradient value Gra thereofiThe direction of the gradient is marked as theta, and the calculation formula of the theta is as follows:
Figure FDA0002171747210000011
making a straight line along the theta direction, taking a first intersection point of the straight line, the foreground region and the background region as a first sample point of the foreground sample point set and a first sample point of the background sample point set respectively, and taking a plurality of first sample points which are closest to the first sample points in space distance and have pixel value difference larger than T around the two first sample points respectivelyvAnd/2, respectively generating a foreground sample point set and a background sample point set, wherein: t isvIs a regionA color threshold;
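As a rough illustration of this sampling step, here is a minimal Python/numpy sketch (the function names and the stepping scheme are illustrative, not from the patent) that computes the gradient direction at an unknown-region pixel and walks along it to the first intersection with a region mask:

```python
import numpy as np

def gradient_and_direction(gray, x, y):
    # Central-difference gradient at (x, y); `gray` is assumed to be a
    # float-valued 2-D array. Returns (magnitude Gra_i, direction theta).
    gx = (gray[y, x + 1] - gray[y, x - 1]) / 2.0
    gy = (gray[y + 1, x] - gray[y - 1, x]) / 2.0
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def first_hit(x, y, theta, region_mask, max_steps=1000):
    # Walk from (x, y) along direction theta and return the first pixel
    # inside `region_mask` (a boolean array), or None if the walk exits
    # the image without hitting the region.
    dx, dy = np.cos(theta), np.sin(theta)
    h, w = region_mask.shape
    for step in range(1, max_steps):
        px, py = int(round(x + step * dx)), int(round(y + step * dy))
        if not (0 <= px < w and 0 <= py < h):
            return None
        if region_mask[py, px]:
            return (px, py)
    return None
```

Searching along $+\theta$ toward the foreground and $\theta + \pi$ toward the background is one possible convention; the claim only states that the line runs along the $\theta$ direction. The remaining sample points would then be gathered around each first hit by spatial proximity and the $T_v/2$ color-difference test.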
the method includes the steps of respectively extracting a sample point from the two sample point sets to form a sample point pair, estimating different opacities of pixels in an unknown region by adopting different sample point pairs, and taking the opacities with the highest confidence degrees as the opacities for matting, wherein the method specifically includes the following steps:
respectively extracting a sample point from the foreground sample point set and the background sample point set to form a sample point pair, drawing a straight line according to the sample point pair, projecting each pixel i of the unknown region onto the straight line, and estimating the opacity of each pixel i:
Figure FDA0002171747210000021
wherein: fi (m)And
Figure FDA0002171747210000022
respectively representing color values of an mth foreground sample point and an nth background sample point;
For the plurality of different opacity estimates obtained for each unknown-region pixel, select the opacity with the highest confidence as the matting opacity of that pixel, and use the corresponding sample point pair as the final matting sample point pair;
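As a sketch of the projection behind the opacity formula above (assuming the reconstructed projection form; names are illustrative), the alpha estimate for one pixel and one sample pair is:

```python
import numpy as np

def estimate_alpha(I, F, B):
    # Project pixel color I onto the line through B and F in color space;
    # the normalized projection length is the opacity estimate, clipped to
    # the valid range [0, 1].
    I, F, B = (np.asarray(v, dtype=float) for v in (I, F, B))
    d = F - B
    denom = float(d @ d)
    if denom < 1e-12:  # degenerate pair: F and B are (nearly) identical
        return 0.5
    return float(np.clip(((I - B) @ d) / denom, 0.0, 1.0))

# Example: a pixel halfway between a bright foreground sample and a dark
# background sample yields an alpha of about 0.5.
print(estimate_alpha([110, 110, 110], [200, 200, 200], [20, 20, 20]))
```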
the calculation of the confidence coefficient specifically comprises the following steps:
taking out sample point pairs from the foreground sample point set and the background sample point set pair by pair, and calculating the error of each sample point pair relative to the linear model shown in the formula (3), namely the linear similarity:
Figure FDA0002171747210000023
calculating the foreground color similarity according to the color values of the pixel i in the unknown area and the sampled sample point pair:
Figure FDA0002171747210000024
and background color similarity:
Figure FDA0002171747210000025
From the obtained linear similarity and color similarities, calculate the confidence of the opacity estimate of the current unknown-region pixel $i$ for each sample point pair:

$$C_i^{(m,n)} = \exp\left(-\varepsilon_{lin}^{(m,n)}\right)\,\exp\left(-\frac{\varepsilon_F^{(m)}}{\sigma_1}\right)\,\exp\left(-\frac{\varepsilon_B^{(n)}}{\sigma_2}\right)$$

where $\sigma_1$ and $\sigma_2$ adjust the weights among the different similarities;
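The exact confidence expression is only available in the patent as a formula image; the sketch below implements one plausible combination consistent with the claim's wording — a linear-model error term and two color-distance terms whose relative weights are set by σ1 and σ2 — and should not be read as the patent's exact formula:

```python
import numpy as np

def pair_confidence(I, F, B, alpha, sigma1=1.0, sigma2=1.0):
    # Error of the pair (F, B) with estimate alpha relative to the linear
    # (compositing) model I = alpha*F + (1 - alpha)*B.
    I, F, B = (np.asarray(v, dtype=float) for v in (I, F, B))
    lin_err = np.linalg.norm(I - (alpha * F + (1.0 - alpha) * B))
    # Color distances between the pixel and each sample point.
    e_f = np.linalg.norm(I - F)
    e_b = np.linalg.norm(I - B)
    # Assumed combination: exponential falloff per term, with sigma1 and
    # sigma2 trading off the foreground/background color terms.
    return np.exp(-lin_err) * np.exp(-e_f / sigma1) * np.exp(-e_b / sigma2)
```

The best pair for a pixel is then simply the argmax of this confidence over all (m, n) sample pairs.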
when the neighborhood of the current unknown region pixel is subjected to smoothing operation, the opacity calculation after smoothing is carried out by adopting the following formula:
Figure FDA0002171747210000027
wherein:
Figure FDA0002171747210000028
Pi、Pjrepresenting the coordinates of two points I, j, Ii、IjRepresenting the colors of the i, j dots, Grai、GrajDenotes the gradient of two points i, j, σp、σcAnd σgFor adjusting the weight among the three.
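A minimal sketch of this smoothing step, assuming the smoothed opacity is the affinity-weighted mean of the raw estimates over the pixel's neighborhood, which is consistent with the variables the claim defines (the exact kernel lives in the patent's formula image):

```python
import numpy as np

def smooth_alpha(i, neighborhood, coords, colors, grads, alpha_raw,
                 sigma_p=3.0, sigma_c=10.0, sigma_g=5.0):
    # Weighted average of raw alpha estimates over the neighborhood of
    # pixel i; neighbor j is weighted by its spatial, color, and gradient
    # affinity to i (three Gaussian kernels with sigma_p, sigma_c, sigma_g).
    num = den = 0.0
    for j in neighborhood:
        w = (np.exp(-np.sum((coords[i] - coords[j]) ** 2) / sigma_p ** 2)
             * np.exp(-np.sum((colors[i] - colors[j]) ** 2) / sigma_c ** 2)
             * np.exp(-(grads[i] - grads[j]) ** 2 / sigma_g ** 2))
        num += w * alpha_raw[j]
        den += w
    return num / den if den > 0 else alpha_raw[i]
```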
2. The high-precision simple interactive matting method according to claim 1, characterized in that after extracting the foreground object from the original image, the method further comprises:
S1091, directly storing and outputting the extracted foreground object; or,
S1092, inputting a new background image, synthesizing the extracted foreground object with the new background image, and storing and outputting the synthesized image.
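The synthesis in S1092 is the standard compositing equation, output = αF + (1 − α)B, applied with the new background; a minimal sketch assuming 8-bit images and a per-pixel alpha map in [0, 1]:

```python
import numpy as np

def composite(foreground, alpha, new_background):
    # Blend the extracted foreground over the new background using the
    # per-pixel matting opacity; alpha is broadcast across the channels.
    a = alpha[..., None].astype(float)
    out = a * foreground.astype(float) + (1.0 - a) * new_background.astype(float)
    return out.astype(np.uint8)
```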
3. The high-precision simple interactive matting method according to claim 1, characterized in that the operation modes include a background difference mode and a manual marking mode, and the computing of the trimap according to the determined operation mode specifically includes:
If the operation mode is the background difference mode, obtain the foreground image $I_d$ through the difference formula $I_d = |I - I_{bg}|$, where $I$ is the image to be matted, containing both the foreground object and the background, and $I_{bg}$ is the background image containing no foreground object; perform an opening operation on the foreground image $I_d$ to obtain the corrected foreground image $I_{do}$; perform an erosion of size $r_e$ on the corrected foreground image $I_{do}$ to obtain the foreground region $F_g$ of the trimap, and a dilation of size $r_d$ on $I_{do}$ to obtain the background region $B_g$ of the trimap; the region between the foreground region and the background region is the unknown region, which yields the trimap $I_t$ of the image $I$ to be matted in the background difference mode;
If the operation mode is the manual marking mode, first obtain the user's operation command and select one point $P_{fg}$ in the foreground region and one point $P_{bg}$ in the background region of the image $I$ to be matted as initial growing points; then diffuse into the neighborhood using the following region-growing criterion:

$$g_{ne}(j) = \begin{cases} 1, & \left\|I_i - I_j\right\| < T_v \ \text{and} \ \left|Gra_i - Gra_j\right| < T_g \\ 0, & \text{otherwise} \end{cases}$$

where $i$ is the current pixel, $j$ is a neighborhood pixel of $i$, $I_i$ and $I_j$ denote the color values at points $i$ and $j$, $Gra_i$ and $Gra_j$ denote the gradient values at points $i$ and $j$, and $T_v$ and $T_g$ are the color threshold and gradient threshold for region growing; $g_{ne}(j) = 1$ indicates that pixel $j$ is in the same region as pixel $i$, and $g_{ne}(j) = 0$ indicates that pixel $j$ and pixel $i$ are in different regions. After the region growing finishes, a rough foreground/background map $I_{grow}$ is obtained; an erosion of size $r_e$ is then applied to $I_{grow}$ to obtain the foreground region $F_g$ of the trimap, and a dilation of size $r_d$ to obtain the background region $B_g$ of the trimap; the region between the foreground region and the background region is the unknown region, which yields the trimap $I_t$ of the image $I$ to be matted in the manual marking mode;
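The region-growing criterion maps naturally onto a breadth-first flood fill; a minimal sketch assuming precomputed per-pixel color and gradient arrays (the threshold values are illustrative):

```python
from collections import deque
import numpy as np

def region_grow(seed, colors, grads, T_v=20.0, T_g=10.0):
    # BFS flood fill from the seed point (y, x): a 4-neighbor j joins the
    # region when both its color distance and its gradient difference to
    # the current pixel i stay below the thresholds T_v and T_g.
    h, w = grads.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if (np.linalg.norm(colors[ny, nx] - colors[y, x]) < T_v
                        and abs(grads[ny, nx] - grads[y, x]) < T_g):
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown
```

Running this once from $P_{fg}$ and once from $P_{bg}$ gives the rough map $I_{grow}$, which the same erosion/dilation pair as in the background-difference branch then turns into the trimap.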
4. The high-precision simple interactive matting method according to claim 1, characterized in that around the first sample point of the foreground sample point set and the first sample point of the background sample point set, respectively, the 4 points that are closest in spatial distance and whose pixel value difference is greater than $T_v/2$ are taken, so that the foreground sample point set and the background sample point set each contain 5 sample points.
5. A storage device in which a plurality of instructions are stored, characterized in that the instructions are adapted to be loaded by a processor to perform the high-precision simple interactive matting method according to any one of claims 1 to 4.
6. A terminal, characterized by comprising:
a processor adapted to implement instructions; and
a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to perform the high-precision simple interactive matting method according to any one of claims 1 to 4.
CN201710791442.6A 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal Active CN107516319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710791442.6A CN107516319B (en) 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710791442.6A CN107516319B (en) 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal

Publications (2)

Publication Number Publication Date
CN107516319A CN107516319A (en) 2017-12-26
CN107516319B true CN107516319B (en) 2020-03-10

Family

ID=60725074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710791442.6A Active CN107516319B (en) 2017-09-05 2017-09-05 High-precision simple interactive matting method, storage device and terminal

Country Status (1)

Country Link
CN (1) CN107516319B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447064B (en) * 2018-02-28 2022-12-13 苏宁易购集团股份有限公司 Picture processing method and device
CN108961279A (en) * 2018-06-28 2018-12-07 Oppo(重庆)智能科技有限公司 Image processing method, device and mobile terminal
CN109389611A (en) * 2018-08-29 2019-02-26 稿定(厦门)科技有限公司 The stingy drawing method of interactive mode, medium and computer equipment
WO2020062898A1 (en) * 2018-09-26 2020-04-02 惠州学院 Video foreground target extraction method and apparatus
CN111435282A (en) * 2019-01-14 2020-07-21 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN109829925B (en) * 2019-01-23 2020-12-25 清华大学深圳研究生院 Method for extracting clean foreground in matting task and model training method
CN110047034A (en) * 2019-03-27 2019-07-23 北京大生在线科技有限公司 Stingy figure under online education scene changes background method, client and system
CN110111342B (en) * 2019-04-30 2021-06-29 贵州民族大学 Optimized selection method and device for matting algorithm
CN110097560A (en) * 2019-04-30 2019-08-06 上海艾麒信息科技有限公司 Scratch drawing method and device
CN110400323B (en) * 2019-07-30 2020-11-24 上海艾麒信息科技股份有限公司 Automatic cutout system, method and device
CN110503704B (en) * 2019-08-27 2023-07-21 北京迈格威科技有限公司 Method and device for constructing three-dimensional graph and electronic equipment
CN110717925B (en) * 2019-09-18 2022-05-06 贵州民族大学 Foreground mask extraction method and device, computer equipment and storage medium
CN111275804B (en) * 2020-01-17 2022-09-16 腾讯科技(深圳)有限公司 Image illumination removing method and device, storage medium and computer equipment
CN112149592A (en) * 2020-09-28 2020-12-29 上海万面智能科技有限公司 Image processing method and device and computer equipment
CN112330692B (en) * 2020-11-11 2022-06-28 安徽文香科技有限公司 Matting method, matting device, matting equipment and storage medium
CN112598694B (en) * 2020-12-31 2022-04-08 抖动科技(深圳)有限公司 Video image processing method, electronic device and storage medium
CN113487630B (en) * 2021-07-14 2022-03-22 辽宁向日葵教育科技有限公司 Matting method, device, equipment and storage medium based on material analysis technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473780A (en) * 2013-09-22 2013-12-25 广州市幸福网络技术有限公司 Portrait background cutout method
CN104036517A (en) * 2014-07-01 2014-09-10 成都品果科技有限公司 Image matting method based on gradient sampling
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 A kind of stingy drawing method based on manifold learning
CN106952270A (en) * 2017-03-01 2017-07-14 湖南大学 A kind of quickly stingy drawing method of uniform background image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7636128B2 (en) * 2005-07-15 2009-12-22 Microsoft Corporation Poisson matting for images
US7420590B2 (en) * 2005-09-29 2008-09-02 Mitsubishi Electric Research Laboratories, Inc. Video matting using camera arrays
US10089740B2 (en) * 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
KR101624801B1 (en) * 2014-10-15 2016-05-26 포항공과대학교 산학협력단 Matting method for extracting object of foreground and apparatus for performing the matting method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473780A (en) * 2013-09-22 2013-12-25 广州市幸福网络技术有限公司 Portrait background cutout method
CN104036517A (en) * 2014-07-01 2014-09-10 成都品果科技有限公司 Image matting method based on gradient sampling
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 A kind of stingy drawing method based on manifold learning
CN106952270A (en) * 2017-03-01 2017-07-14 湖南大学 A kind of quickly stingy drawing method of uniform background image

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A novel interactive matting system based on texture information; Wei Sun et al.; IEEE; 2014-04-10; entire document *
Automatic Trimap and Alpha-Matte Generation For Digital Image Matting; Sweta Singh et al.; IEEE; 2013-12-31; entire document *
Automatic Trimap Generation for Image Matting; Vikas Gupta et al.; arXiv; 2017-07-04; entire document *
High Resolution Matting via Interactive Trimap Segmentation; Christoph Rhemann et al.; IEEE; 2008-12-31; entire document *
Research on a fast video matting algorithm based on background difference; Yu Ming et al.; Journal of Hebei University of Technology (河北工业大学学报); 2013-02-28; Vol. 42, No. 1; entire document *

Also Published As

Publication number Publication date
CN107516319A (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN107516319B (en) High-precision simple interactive matting method, storage device and terminal
CN109919869B (en) Image enhancement method and device and storage medium
Li et al. Low-light image and video enhancement using deep learning: A survey
CN107452010B (en) Automatic cutout algorithm and device
US10509954B2 (en) Method and system of image segmentation refinement for image processing
CN110637297B (en) Convolution engine, data processing method and electronic equipment
CN106899781B (en) Image processing method and electronic equipment
CN110574026A (en) Configurable convolution engine for interleaving channel data
CN110574025A (en) Convolution engine for merging interleaved channel data
CN113508416B (en) Image fusion processing module
KR20200014842A (en) Image illumination methods, devices, electronic devices and storage media
US11308655B2 (en) Image synthesis method and apparatus
CN106664351A (en) Method and system of lens shading color correction using block matching
CN109064505B (en) Depth estimation method based on sliding window tensor extraction
SE534551C2 (en) Digital image manipulation including identification of a target area in a target image and seamless replacement of image information from a source image
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN112241933A (en) Face image processing method and device, storage medium and electronic equipment
CN108965647B (en) Foreground image obtaining method and device
CN111724317A (en) Method for constructing Raw domain video denoising supervision data set
CN108961299B (en) Foreground image obtaining method and device
US11812154B2 (en) Method, apparatus and system for video processing
CN109064525A (en) A kind of picture format conversion method, device, equipment and storage medium
CN111489322A (en) Method and device for adding sky filter to static picture
CN114372990A (en) Image synthesis method, device, equipment and storage medium for thoroughly scratching and removing green screen
WO2019200785A1 (en) Fast hand tracking method, device, terminal, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant