CN109829925A - Method for extracting a clean foreground in matting tasks, and model training method - Google Patents

Method for extracting a clean foreground in matting tasks, and model training method (Download PDF)

Info

Publication number
CN109829925A
CN109829925A (application CN201910064557.4A)
Authority
CN
China
Prior art keywords
foreground
image
color
background
opacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910064557.4A
Other languages
Chinese (zh)
Other versions
CN109829925B (en
Inventor
郭振华
刘冀洋
杨芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201910064557.4A priority Critical patent/CN109829925B/en
Publication of CN109829925A publication Critical patent/CN109829925A/en
Application granted
Publication of CN109829925B publication Critical patent/CN109829925B/en
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a method for extracting a clean foreground in matting tasks, together with a model training method. The model training method comprises: A1. collecting the opacity and color data of foregrounds; A2. combining foregrounds with backgrounds to generate composite images; A3. according to the foreground opacity labels of a composite image, marking fully opaque foreground regions in white, fully transparent regions in black, and semi-transparent regions in gray, then dilating the semi-transparent regions by a morphological dilation operation to form an unknown region, finally generating a foreground-label trimap; A4. training a convolutional neural network: the composite image and its corresponding foreground-label trimap are input, the network outputs a foreground opacity estimate and a background color estimate, the outputs are compared with the ground-truth labels of the composite image, and the parameters of the convolutional neural network model are continually adjusted according to the comparison, yielding a trained convolutional neural network model.

Description

Method for extracting a clean foreground in matting tasks, and model training method
Technical field
The present invention relates to the field of image processing, and in particular to a method for extracting a clean foreground in matting tasks and a model training method.
Background technique
Matting extracts the foreground object from an original image. It is commonly applied to background replacement in image processing, for example ID-photo processing and commercial advertising and poster design, where it plays an important role; generalized to background replacement in video, it also has many applications.
For a scene image I, the color F of a foreground object with opacity attribute α is generally considered to be superimposed and blended with a background B according to the opacity, expressed as:
I = αF + (1 - α)B
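As an illustration only (not part of the patent), the compositing relation above can be sketched in numpy; the array shapes and values here are hypothetical:

```python
import numpy as np

def composite(alpha, fg, bg):
    """Blend foreground over background: I = alpha*F + (1 - alpha)*B."""
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 scene: a half-transparent pure-red foreground over a white background.
alpha = np.full((2, 2, 1), 0.5)
fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0   # red foreground color F
bg = np.ones((2, 2, 3))                      # white background B
img = composite(alpha, fg, bg)
print(img[0, 0])   # [1.  0.5 0.5]
```

The half-transparent red pixel comes out half red, half white, exactly as the blending formula predicts.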
Earlier matting methods fall into two common classes: chroma-key matting and opacity (alpha) matting.
Chroma-key matting shoots the picture against a background of a single specific color and separates the foreground color from the background color according to the background color, thereby extracting the foreground. Such methods are now mostly used for matting video frames, but they are limited by the need for a plain background: the shooting conditions for the image to be processed are demanding, and the application scenarios are restricted.
Opacity matting estimates the opacity of the foreground of interest. Such methods generally cope with more complex backgrounds, but they tend to ignore the color information of the foreground; when the foreground is semi-transparent, using only the opacity and the scene color information of the scene image I, the extracted foreground often retains background color and texture.
Non-deep-learning opacity matting methods produce a rough color estimate while computing the estimated foreground opacity, but this estimate is usually too coarse to extract a clean foreground color. Deep-learning methods, meanwhile, have shown strong performance only in color-separation scenes such as image inpainting and reflection separation; when applied to opacity matting they are generally used to estimate the opacity of the foreground object, entirely without any color estimation.
Summary of the invention
To solve the above problems, the present invention proposes a method for extracting a clean foreground in matting tasks and a model training method, able to extract a clean foreground from complex scenes.
The model training method for extracting a clean foreground in matting tasks provided by the invention comprises the steps of:
A1. collecting the opacity and color data of multiple foregrounds;
A2. combining the multiple foregrounds with multiple backgrounds to generate composite images;
A3. according to the foreground opacity labels of a composite image, marking the fully opaque regions of the foreground in white, the fully transparent regions in black, and the semi-transparent regions in gray; then, by a morphological dilation operation, dilating the semi-transparent regions to form an unknown region, finally generating a foreground-label trimap;
A4. training a convolutional neural network: inputting the composite image of step A2 and the corresponding foreground-label trimap of step A3, outputting a foreground opacity estimate and a background color estimate, comparing these outputs with the ground-truth labels of the composite image, and continually adjusting the parameters of the convolutional neural network model according to the comparison, to obtain a trained convolutional neural network model.
Preferably, step A1 collects data as follows: A11. shooting four pictures: the white background, the foreground against the white background, the black background, and the foreground against the black background; A12. from the four pictures of step A11, computing initial estimates of the opacity and color of the foreground; A13. applying noise suppression and color-enhancement correction to the initial estimates of step A12 to obtain the opacity and color data of the foreground.
Preferably, in step A2, generating composite images from multiple foregrounds and multiple backgrounds includes: applying random rotation and/or flipping to the multiple foregrounds to enlarge the foreground material, then combining them with multiple backgrounds to generate composite images.
Preferably, in step A2, the composite images are described and stored as matching relations between the augmented foreground material and the backgrounds.
Preferably, during the training of step A4, training samples are dynamically augmented and the number of iterations is increased to improve convolutional neural network performance.
Further preferably, dynamically augmenting training samples includes: synchronized random cropping of the composite image and its corresponding foreground-label trimap, resized to the uniform input size of the convolutional neural network training, plus one or more of the following: randomly dilating the unknown region of the foreground-label trimap; randomly adjusting the color, brightness and/or contrast of the background in the composite image; randomly adjusting the color and brightness of the foreground in the composite image.
Preferably, the convolutional neural network model is a fully convolutional network: the input composite image and foreground-label trimap pass through multiple convolution, activation and pooling (downsampling) layers to obtain feature maps, and the network has one decoder for the foreground opacity estimate and one for the background color estimate, each computing on the feature maps through convolution, activation and deconvolution (upsampling) layers to produce, at the input image size, the foreground opacity estimate and the background color estimate; the two decoders exchange information at each scale of the feature space and fuse information with the encoder features at the corresponding scales.
Preferably, step A4 trains on samples in batches; the weights of the convolutional neural network are updated after each batch, using gradient descent with a fixed learning rate.
The present invention also provides a method for extracting a clean foreground in matting tasks, characterized by comprising the following steps: B1. obtaining a scene image containing the foreground to be extracted; B2. manually marking the fully opaque regions of the foreground to be extracted in white and the fully transparent regions in black, with the remaining unknown region in gray, generating a foreground-label trimap; B3. inputting the scene image of step B1 and the corresponding foreground-label trimap of step B2 into the convolutional neural network model trained as above, and outputting the opacity estimate and the background color estimate of the foreground to be extracted; B4. from the opacity estimate and background color estimate of the foreground to be extracted and the scene image, computing the clean foreground according to the foreground-background compositing formula.
Beneficial effects of the invention: the opacity and color data of foregrounds are collected in advance; foregrounds and backgrounds are then combined to generate composite images while, from the ground-truth labels of each composite image, the foreground is marked in three colors to generate a foreground-label trimap; the composite images and foreground-label trimaps then serve as training samples for training a convolutional neural network model to obtain a trained model; with the trained convolutional neural network model, the opacity estimate of the foreground and the color estimate of the background can be obtained, from which the clean foreground can further be computed. This method can extract a clean foreground from complex scenes, especially for highly transparent foregrounds.
Detailed description of the invention
Fig. 1 is a flow diagram of the model training method for extracting a clean foreground in matting tasks in an embodiment of the present invention.
Fig. 2 is a flow diagram of collecting the opacity and color data of a foreground in an embodiment of the present invention.
Fig. 3 is a flow chart of the batch training procedure of the convolutional neural network model in an embodiment of the present invention.
Fig. 4 is a schematic diagram of extracting a clean foreground in a matting task using the trained convolutional neural network model in an embodiment of the present invention.
Specific embodiment
The invention is further described in detail below with reference to embodiments and the accompanying drawings. It should be emphasized that the following description is only exemplary and is not intended to limit the scope of the invention or its applications.
This embodiment provides a method for extracting a clean foreground in matting tasks, comprising a model training method and, based on the trained model, a method for extracting a clean foreground from an actual composite image.
The model training method for extracting a clean foreground in matting tasks is shown in Fig. 1 and comprises:
A1. collecting the opacity and color data of multiple foregrounds.
The opacity and color data of each foreground are mainly obtained by shooting four pictures with a camera, followed by computation and manual enhancement; the specific flow is shown in Fig. 2 and comprises:
A10. setting up the shooting environment and fixing the camera.
A11. shooting four pictures: the white background, the foreground against the white background, the black background, and the foreground against the black background.
The shooting order of the four pictures is shown in Fig. 2: first photograph the white background, then place the foreground object in front of the white background and photograph it, then switch the white background to the black background and photograph the foreground, and finally remove the foreground object and photograph the black background.
Choosing black and white, two low-saturation colors, as backgrounds reduces the color error of the foreground object in the shots caused by ambient-light reflection, which improves the color accuracy of the finally computed foreground; in addition, the high contrast between black and white reduces the error in the computed color and opacity of the foreground.
It should be understood that one may also photograph the black background first and then proceed in the order black background with foreground, white background with foreground, white background.
A12. from the four pictures of step A11, computing initial estimates of the opacity and color of the foreground.
From the four photographs obtained above (white background, foreground on white background, foreground on black background, black background), the initial estimates of the opacity and color of the foreground are obtained through the compositing relations:
I1 = B1
I2 = αF + (1 - α)B1
I3 = αF + (1 - α)B2
I4 = B2
where I1, I2, I3, I4 correspond to the four pictures shot in A11 (the white background, the foreground with white background, the foreground with black background, the black background); B1 denotes the white background, B2 the black background, F the foreground object color, and α the foreground opacity. Subtracting the two foreground shots gives α = 1 - (I2 - I3) / (I1 - I4), and the premultiplied foreground color follows as αF = I2 - (1 - α)I1.
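A minimal numpy sketch of this four-shot recovery, under the assumption of ideal noise-free shots; the function name and test values are illustrative, not from the patent:

```python
import numpy as np

def four_shot_matting(i1, i2, i3, i4, eps=1e-6):
    """Initial foreground opacity and premultiplied color from four shots.

    i1: white background alone, i2: foreground on white, i3: foreground on
    black, i4: black background alone. Solves the compositing relations:
        alpha   = 1 - (I2 - I3) / (I1 - I4)   (per channel, then averaged)
        alpha*F = I2 - (1 - alpha) * I1
    """
    ratio = (i2 - i3) / np.maximum(i1 - i4, eps)
    alpha = np.clip(1.0 - ratio.mean(axis=-1, keepdims=True), 0.0, 1.0)
    alpha_f = i2 - (1.0 - alpha) * i1
    return alpha, alpha_f

# Synthetic noise-free shots of a gray foreground with F = 0.4, alpha = 0.5.
i1 = np.ones((2, 2, 3))           # white background shot
i4 = np.zeros((2, 2, 3))          # black background shot
i2 = 0.5 * 0.4 + (1 - 0.5) * i1   # foreground over white
i3 = 0.5 * 0.4 + (1 - 0.5) * i4   # foreground over black
alpha, alpha_f = four_shot_matting(i1, i2, i3, i4)
print(round(float(alpha[0, 0, 0]), 6), round(float(alpha_f[0, 0, 0]), 6))
# 0.5 0.2
```

On synthetic inputs the recovered opacity is exactly 0.5 and the premultiplied color 0.5 * 0.4 = 0.2; real photographs would still need the noise suppression and color correction of step A13.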
A13. applying noise suppression and color-enhancement correction to the initial estimates of step A12 to obtain the opacity and color data of the foreground.
Because the background color is switched during shooting in A11, the illumination on the foreground object varies somewhat in the actual shooting environment, and the camera sensor contributes read noise; the foreground opacity and color data computed from the system of equations in A12 therefore carry some error and noise, and additional color-enhancement correction and noise removal are needed.
A2. combining the multiple foregrounds with multiple backgrounds to generate composite images, each carrying ground-truth labels for the opacity and color of its foreground and the color of its background.
When the foreground material for training is scarce, the multiple foregrounds can be augmented by random rotation and/or flipping, and then combined at random with multiple backgrounds to generate composite images.
To save storage space and to allow the dynamic augmentation of training samples during subsequent training, the composite images are described as matching relations between the augmented foreground material and the backgrounds, rather than storing the many combined images directly. More specifically, in this step each composite image is described by one group of ground-truth labels: foreground color, foreground opacity and background color.
A3. according to the foreground opacity labels of the composite image, marking the fully opaque regions of the foreground in white, the fully transparent regions in black, and the semi-transparent regions in gray; then, by a morphological dilation operation, dilating the semi-transparent regions to form an unknown region, finally generating the foreground-label trimap.
From the foreground opacity labels of the composite image of step A2, the fully opaque regions, where the opacity label value is 1, are marked in white; the fully transparent regions, where the opacity label value is 0, are marked in black; the semi-transparent regions, where the opacity label is greater than 0 and less than 1, are marked in gray. This step is completed automatically by a program from the opacity labels.
Then, by a morphological dilation operation, the semi-transparent regions above are dilated into the unknown region of interest, yielding a foreground-label trimap approximating the effect of manual marking of the scene.
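A rough sketch of this automatic trimap generation, using a plain-numpy 3x3 binary dilation in place of whatever morphology implementation the patent's program uses; all names and sizes are illustrative:

```python
import numpy as np

def make_trimap(alpha, dilate_iters=2):
    """Trimap from an opacity label: 255 = fully opaque (white), 0 = fully
    transparent (black), 128 = unknown (dilated semi-transparent region)."""
    unknown = (alpha > 0.0) & (alpha < 1.0)
    h, w = alpha.shape
    for _ in range(dilate_iters):          # 3x3 binary dilation via shifts
        p = np.pad(unknown, 1)
        grown = np.zeros_like(unknown)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                grown |= p[dy:dy + h, dx:dx + w]
        unknown = grown
    trimap = np.where(alpha >= 1.0, 255, 0).astype(np.uint8)
    trimap[unknown] = 128
    return trimap

alpha = np.zeros((7, 7))
alpha[2:5, 2:5] = 1.0
alpha[3, 3] = 0.5                          # one semi-transparent pixel
tm = make_trimap(alpha, dilate_iters=1)
print(tm[3, 3], tm[0, 0], tm[2, 2])        # 128 0 128
```

After one dilation the single semi-transparent pixel has grown into a 3x3 gray unknown region that overrides the white opaque marks around it.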
A4. training the convolutional neural network: inputting the composite image of step A2 and the foreground-label trimap of step A3, outputting the opacity estimate of the foreground and the color estimate of the background, comparing these outputs with the ground-truth labels of the composite image and continually adjusting the parameters of the convolutional neural network model according to the comparison, to obtain the trained convolutional neural network model.
The training process is as follows: build the convolutional neural network; input the composite image in which foreground and background are merged, together with the foreground-label trimap; output the foreground opacity estimate and the background color estimate within the unknown region of the foreground-label trimap.
The convolutional neural network is built as follows: a fully convolutional form is used; the network has one encoder which, from the input composite image of merged foreground and background and the corresponding foreground-label trimap, obtains feature maps through multiple convolution, activation and pooling (downsampling) layers. In addition, the network has one decoder for the background color estimate and one for the foreground opacity estimate, each computing on the feature maps through convolution, activation and deconvolution (upsampling) layers, to obtain prediction maps of the foreground opacity and background color at the input image size. The two decoder networks exchange information at each scale of the feature space and fuse information with the encoder features at the corresponding scales.
Training uses supervised learning: the single-channel opacity map and the three-channel background color map output by the network are trained, over the unknown region of the foreground-label trimap, against the corresponding label values of the foreground opacity map and background color map.
The convolutional neural network is trained on samples in batches; the network weights are updated after each batch of images, using gradient descent with a fixed learning rate. During training, performance is improved by two means: dynamically augmenting the training samples and moderately increasing the number of iterations. The detailed flow of each batch of training is shown in Fig. 3.
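The per-batch update rule (plain gradient descent at a fixed learning rate) amounts to the following sketch, shown here on a toy scalar loss rather than an actual network; all names are illustrative:

```python
import numpy as np

def sgd_step(weights, grads, lr=1e-2):
    """One per-batch update: plain gradient descent at a fixed learning rate."""
    return {name: w - lr * grads[name] for name, w in weights.items()}

# Toy check on a scalar loss L(w) = (w - 3)^2 with gradient 2*(w - 3):
w = {"w": np.array(0.0)}
for _ in range(2000):
    grads = {"w": 2.0 * (w["w"] - 3.0)}
    w = sgd_step(w, grads)
print(round(float(w["w"]), 3))   # 3.0
```

Because the learning rate is fixed, convergence speed depends entirely on its choice; the patent relies on more iterations rather than a learning-rate schedule.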
Dynamically augmenting the training samples refers to the following: to better promote the robustness of the convolutional neural network, and at the same time to save hardware resources such as disk and memory during training, the material is randomly cropped and rescaled during the training of the convolutional neural network; color and brightness are then enhanced, and the area of the unknown region of the foreground-label trimap is randomly dilated.
The cropping operation is completed synchronously on the original background of the material sample, the original foreground color, the original foreground opacity and the corresponding foreground-label trimap; rescaling follows the crop. Rescaling means converting the cropped patch to the input size of the network being trained, to realize batched training at a fixed input size. Then one or more of the following random adjustments are applied before recombining: (1) random adjustment of foreground color and brightness; (2) random adjustment of background color and brightness; (3) random adjustment of background contrast; (4) random dilation of the area of the unknown region of the foreground-label trimap.
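The synchronized crop plus jitter can be sketched as follows; the crop size, jitter range and array contents are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, trimap, crop=64):
    """Synchronized random crop of the composite image and its trimap, plus
    a random brightness jitter on the image (one dynamic-augmentation step)."""
    h, w = trimap.shape
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    img_c = image[y:y + crop, x:x + crop].copy()
    tri_c = trimap[y:y + crop, x:x + crop].copy()
    img_c = np.clip(img_c * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
    return img_c, tri_c

image = rng.random((128, 128, 3))           # fake composite image
trimap = rng.integers(0, 3, (128, 128))     # fake 3-level trimap
img_c, tri_c = augment(image, trimap)
print(img_c.shape, tri_c.shape)   # (64, 64, 3) (64, 64)
```

The same crop window is applied to both tensors so image pixels and trimap labels stay aligned, which is the essential property of "synchronized" cropping.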
Through continued training as above, the trained convolutional neural network model is obtained.
Using the trained convolutional neural network model, a clean foreground can be extracted from an actual scene image; the detailed process is as follows:
B1. obtaining the scene image containing the foreground to be extracted.
B2. manually marking the fully opaque regions of the foreground to be extracted in white and the fully transparent regions in black, with the remaining unknown region in gray, generating the foreground-label trimap.
B3. inputting the scene image of step B1 and the corresponding foreground-label trimap of step B2 into the convolutional neural network model trained as above, and outputting the opacity estimate and the background color estimate of the foreground to be extracted.
B4. from the opacity estimate and background color estimate of the foreground to be extracted and the scene image, computing the clean foreground according to the foreground-background compositing formula.
The opacity estimate and background color estimate output in step B3 do not directly yield the clean extracted foreground; a further computation is still needed. Specifically: the background color estimated within the unknown region of the foreground-label trimap, the opacity of the foreground and the original scene image are combined according to the foreground-background compositing relation to compute the clean foreground.
A schematic of extracting a clean foreground in a matting task using the trained convolutional neural network model is shown in Fig. 4. In Fig. 4, for convenient display of the network outputs, in the foreground opacity estimate the regions that the foreground-label trimap determines to be background are set to 0 and the parts determined to be foreground are set to 1; in the background color estimate, the regions that the trimap determines to be background are set to the color values of the background region of the input composite image, while the regions determined to be foreground in the trimap are set to 0.
Considering the application of superimposing the extracted foreground on a new background, the extracted foreground is represented as the product of the foreground color and its opacity. The computation of the extracted foreground is carried out only in the region where the estimated foreground opacity is non-zero; in terms of the estimated opacity α, the estimated background B and the original input image I, the clean foreground αF may be expressed as:
αF = I - (1 - α)B
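A small numpy sketch of this final recovery step; the inputs stand in for the network's estimates and are synthetic:

```python
import numpy as np

def clean_foreground(image, alpha, bg_est):
    """Clean (premultiplied) foreground: alpha*F = I - (1 - alpha)*B,
    computed only where the estimated opacity is non-zero."""
    alpha_f = image - (1.0 - alpha) * bg_est
    return np.where(alpha > 0, np.clip(alpha_f, 0.0, 1.0), 0.0)

# Synthetic inputs: half-transparent pure red over a known white background.
alpha = np.full((1, 1, 1), 0.5)
bg = np.ones((1, 1, 3))
img = 0.5 * np.array([1.0, 0.0, 0.0]) + (1 - 0.5) * bg   # observed composite
af = clean_foreground(img, alpha, bg)
print(af[0, 0])   # [0.5 0.  0. ]
```

The recovered value is the premultiplied color αF, which is exactly the quantity needed to composite the extracted foreground over a new background via I' = αF + (1 - α)B'.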
To extract a clean foreground, both the transparency information and the color information of the foreground must be known. Although the prior art can obtain the opacity information of a foreground by deep learning, the color information of the foreground cannot be obtained directly by deep learning. In this embodiment, the opacity of the foreground and the color of the background are estimated by the convolutional neural network model; the extraction formula then yields the color of the foreground, or directly the product of the foreground color and opacity (i.e. the clean foreground information). For training the convolutional neural network model, the opacity and color information of foregrounds are first computed from the four-picture shooting method; the foregrounds are then combined with different backgrounds to generate composite images while the foregrounds in the composite images are marked to obtain foreground-label trimaps; the composite images and foreground-label trimaps are input together into the convolutional neural network model for training. The neural network model trained by the method of the invention can, under complex scenes, more accurately estimate the background color in the scene and the opacity of the foreground; the extraction formula then removes background color and texture, achieving better results in extracting a clean foreground in matting tasks.
The above is a further detailed description of the invention in connection with specific/preferred embodiments, and it cannot be concluded that the specific implementation of the invention is limited to these descriptions. For those of ordinary skill in the art to which the invention belongs, replacements or modifications may be made to the described embodiments without departing from the inventive concept, and such substitutions or variants shall all be regarded as falling within the protection scope of the invention.

Claims (9)

1. A model training method for extracting a clean foreground in matting tasks, characterized by comprising the steps of:
A1. collecting the opacity and color data of multiple foregrounds;
A2. combining the multiple foregrounds with multiple backgrounds to generate composite images;
A3. according to the foreground opacity labels of the composite image, marking the fully opaque regions of the foreground in white, the fully transparent regions in black, and the semi-transparent regions in gray; then, by a morphological dilation operation, dilating the semi-transparent regions to form an unknown region, finally generating a foreground-label trimap;
A4. training a convolutional neural network: inputting the composite image of step A2 and the corresponding foreground-label trimap of step A3, outputting the opacity estimate of the foreground and the color estimate of the background, comparing these outputs with the ground-truth labels of the composite image and continually adjusting the parameters of the convolutional neural network model according to the comparison, to obtain a trained convolutional neural network model.
2. The model training method of claim 1, characterized in that step A1 collects data as follows:
A11. shooting four pictures: the white background, the foreground against the white background, the black background, and the foreground against the black background;
A12. from the four pictures of step A11, computing initial estimates of the opacity and color of the foreground;
A13. applying noise suppression and color-enhancement correction to the initial estimates of step A12 to obtain the opacity and color data of the foreground.
3. The model training method of claim 1, characterized in that, in step A2, generating composite images from multiple foregrounds and multiple backgrounds includes:
applying random rotation and/or flipping to the multiple foregrounds to enlarge the foreground material, then combining them with multiple backgrounds to generate composite images.
4. The model training method of claim 3, characterized in that, in step A2, the composite images are described and stored as matching relations between the augmented foreground material and the backgrounds.
5. The model training method of claim 1, characterized in that, during the training of step A4, training samples are dynamically augmented and the number of iterations is increased to improve convolutional neural network performance.
6. The model training method of claim 5, characterized in that dynamically augmenting training samples includes: synchronized random cropping of the composite image and its corresponding foreground-label trimap, resized to the uniform input size of the convolutional neural network training, plus one or more of the following steps:
randomly dilating the unknown region of the foreground-label trimap;
randomly adjusting the color, brightness and/or contrast of the background in the composite image;
randomly adjusting the color and brightness of the foreground in the composite image.
7. The model training method of claim 1, characterized in that the convolutional neural network model is a fully convolutional network: the input composite image and foreground-label trimap pass through multiple convolution, activation and pooling (downsampling) layers to obtain feature maps; the network has one decoder for the foreground opacity estimate and one for the background color estimate, each computing on the feature maps through convolution, activation and deconvolution (upsampling) layers to obtain the foreground opacity estimate and the background color estimate at the input image size; the two decoders exchange information at each scale of the feature space and fuse information with the encoder features at the corresponding scales.
8. The model training method of claim 1, characterized in that step A4 trains on samples in batches; the weights of the convolutional neural network are updated after each batch of training, using gradient descent with a fixed learning rate.
9. A method for extracting a clean foreground in an image matting task, comprising the following steps:
B1. obtaining a scene image containing the foreground to be extracted;
B2. manually marking the fully opaque region of the foreground to be extracted in the scene image in white and the fully transparent region in black, with the remaining unknown region marked in grey, to generate a foreground-label trimap;
B3. inputting the scene image of step B1 and the corresponding foreground-label trimap of step B2 into a convolutional neural network model trained as in any one of claims 1-8, and outputting an opacity estimate of the foreground to be extracted and a color estimate of the background;
B4. computing the clean foreground from the opacity estimate of the foreground to be extracted, the color estimate of the background, and the scene image, according to the foreground-background compositing formula.
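Step B4 rests on the standard compositing equation I = alpha * F + (1 - alpha) * B, solved for the clean foreground F = (I - (1 - alpha) * B) / alpha. A minimal NumPy sketch, where the epsilon guard and the clipping are added safeguards rather than part of the patent:

```python
import numpy as np

def recover_clean_foreground(image, alpha, background, eps=1e-6):
    """Solve I = alpha*F + (1-alpha)*B for the clean foreground F, given
    the network's opacity (alpha) and background color estimates. Where
    alpha ~ 0 the foreground is undefined, so it is set to zero there.
    All inputs are floats in [0, 1]; alpha has shape (H, W)."""
    image = image.astype(np.float64)
    background = background.astype(np.float64)
    a = alpha.astype(np.float64)[..., None]     # broadcast over channels
    fg = (image - (1.0 - a) * background) / np.maximum(a, eps)
    fg = np.clip(fg, 0.0, 1.0)                  # keep colors in range
    fg[np.squeeze(a, -1) < eps] = 0.0           # undefined where alpha ~ 0
    return fg
```

For a pixel with I = 0.5, B = 0.2 and alpha = 0.5, the formula gives F = (0.5 - 0.5 * 0.2) / 0.5 = 0.8, recovering the original foreground color without the background bleeding that direct copying of I would carry.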
CN201910064557.4A 2019-01-23 2019-01-23 Method for extracting clean foreground in matting task and model training method Expired - Fee Related CN109829925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910064557.4A CN109829925B (en) 2019-01-23 2019-01-23 Method for extracting clean foreground in matting task and model training method


Publications (2)

Publication Number Publication Date
CN109829925A true CN109829925A (en) 2019-05-31
CN109829925B CN109829925B (en) 2020-12-25

Family

ID=66862265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910064557.4A Expired - Fee Related CN109829925B (en) 2019-01-23 2019-01-23 Method for extracting clean foreground in matting task and model training method

Country Status (1)

Country Link
CN (1) CN109829925B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8175384B1 (en) * 2008-03-17 2012-05-08 Adobe Systems Incorporated Method and apparatus for discriminative alpha matting
US20160203587A1 (en) * 2015-01-14 2016-07-14 Thomson Licensing Method and apparatus for color correction in an alpha matting process
CN107516319A (en) * 2017-09-05 2017-12-26 中北大学 High-accuracy simple interactive image matting method, storage device and terminal
CN108460770A (en) * 2016-12-13 2018-08-28 华为技术有限公司 Image matting method and device
CN108961279A (en) * 2018-06-28 2018-12-07 Oppo(重庆)智能科技有限公司 Image processing method and device, and mobile terminal
CN108961303A (en) * 2018-07-23 2018-12-07 北京旷视科技有限公司 Image processing method and device, electronic equipment, and computer-readable medium
US20200020108A1 (en) * 2018-07-13 2020-01-16 Adobe Inc. Automatic Trimap Generation and Image Segmentation
CN110751655A (en) * 2019-09-16 2020-02-04 南京工程学院 Automatic matting method based on semantic segmentation and saliency analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NING XU et al.: "Deep Image Matting", arXiv:1703.03872v3 *
QUAN CHEN et al.: "Semantic Human Matting", arXiv:1809.01354v2 *
SHEN Yang: "A Survey of Interactive Foreground Matting Techniques", Journal of Computer-Aided Design & Computer Graphics *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322440A (en) * 2019-07-08 2019-10-11 东北大学 Method for augmenting cell microscopy image data
CN110544218A (en) * 2019-09-03 2019-12-06 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN110544218B (en) * 2019-09-03 2024-02-13 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111311485A (en) * 2020-03-17 2020-06-19 Oppo广东移动通信有限公司 Image processing method and related device
CN111311485B (en) * 2020-03-17 2023-07-04 Oppo广东移动通信有限公司 Image processing method and related device
CN113194270A (en) * 2021-04-28 2021-07-30 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113194270B (en) * 2021-04-28 2022-08-05 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
WO2022227689A1 (en) * 2021-04-28 2022-11-03 北京达佳互联信息技术有限公司 Video processing method and apparatus

Also Published As

Publication number Publication date
CN109829925B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109829925A (en) A kind of method and model training method for extracting clean prospect in scratching figure task
CN105894484B (en) A kind of HDR algorithm for reconstructing normalized based on histogram with super-pixel segmentation
CN109064423B (en) Intelligent image repairing method for generating antagonistic loss based on asymmetric circulation
CN112734650B (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
CN110570377A (en) group normalization-based rapid image style migration method
CN107798661B (en) Self-adaptive image enhancement method
CN110443763B (en) Convolutional neural network-based image shadow removing method
CN108805839A (en) Combined estimator image defogging method based on convolutional neural networks
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN103955918A (en) Full-automatic fine image matting device and method
CN109584170A (en) Underwater image restoration method based on convolutional neural networks
CN113222875B (en) Image harmonious synthesis method based on color constancy
CN109829868B (en) Lightweight deep learning model image defogging method, electronic equipment and medium
WO2023066173A1 (en) Image processing method and apparatus, and storage medium and electronic device
CN110223251A (en) Suitable for manually with the convolutional neural networks underwater image restoration method of lamp
CN112508812A (en) Image color cast correction method, model training method, device and equipment
CN113658057A (en) Swin transform low-light-level image enhancement method
CN114862698B (en) Channel-guided real overexposure image correction method and device
CN115641391A (en) Infrared image colorizing method based on dense residual error and double-flow attention
CN114219976A (en) Image processing method, image processing device, electronic equipment, storage medium and computer product
CN113284061A (en) Underwater image enhancement method based on gradient network
CN111553856A (en) Image defogging method based on depth estimation assistance
CN112991236B (en) Image enhancement method and device based on template
CN109671044B (en) A kind of more exposure image fusion methods decomposed based on variable image
CN109300170A (en) Portrait photo shadow transmission method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201225

Termination date: 20220123