CN110222722A - Interactive image stylization processing method, system, computing device and storage medium - Google Patents

Interactive image stylization processing method, system, computing device and storage medium Download PDF

Info

Publication number
CN110222722A
CN110222722A
Authority
CN
China
Prior art keywords
image
network
training
sub
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910396504.2A
Other languages
Chinese (zh)
Inventor
江世杰
梁凌宇
耿家锴
高帆
郭晟尧
黄晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN201910396504.2A priority Critical patent/CN110222722A/en
Publication of CN110222722A publication Critical patent/CN110222722A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an interactive image stylization processing method, system, computing device and storage medium. The method comprises: obtaining a plurality of first training images and a plurality of second training images; constructing a matting network and performing adaptive segmentation on the target and background of the first training images to obtain first target images; constructing a stylization neural network, inputting the first target images and the second training images into the stylization neural network, and training the stylization neural network; according to a user's interactive input, performing adaptive segmentation on the target and background of an image to be processed using the matting network to obtain a second target image; and performing stylization on the second target image using the trained stylization neural network to obtain the final stylized image. According to the needs of users, the present invention can interactively stylize a specified region of an image, simulating the drawing styles of different art forms, enriching the visual presentation of the image and enhancing its appeal.

Description

Interactive image stylization processing method, system, computing device and storage medium
Technical field
The present invention relates to an interactive image stylization processing method, system, computing device and storage medium, belonging to the field of image processing and rendering.
Background technique
In recent years, artistic stylization of images has been a major research topic of non-photorealistic rendering (NPR) in computer graphics. Using the computer as a tool, NPR simulates the drawing styles of different art forms with algorithms, enriching the visual presentation of an image and attracting viewers' attention. Since the 1990s, non-photorealistic rendering has developed rapidly as an independent branch of computer graphics. Its scope is no longer limited to computer-aided artistic painting, but also extends to industries such as film, education, entertainment and animation.
Stylization of images is increasingly popular, and there are many image processing applications on the market, among which Photoshop has the most users and the strongest professional features. Through some of the filters built into Photoshop (such as watercolor, poster edges, splash and texture), users can apply the corresponding stylization to an image. With the emergence of Android and iOS, mobile image processing applications have also become popular. Applications such as 360Camera, Baidu Motu and Meitu XiuXiu have likewise added various image stylization effects, such as HDR, retro and sketch.
However, the image stylization filters of existing image processing software and related research mostly stylize the whole image, and the results are often unsatisfactory. As a consequence, users cannot independently select the part of the image to be rendered, the rendering effect frequently falls short of expectations, and a series of post-processing steps is required. In addition, current image stylization algorithms commonly suffer from low efficiency and poor robustness.
Summary of the invention
In view of this, the present invention provides an interactive image stylization processing method, system, computing device and storage medium. According to the needs of users, it can interactively stylize a specified region of an image, simulating the drawing styles of different art forms, enriching the visual presentation of the image and enhancing its appeal, with practical value in fields such as multimedia and art education.
The first object of the present invention is to provide an interactive image stylization processing method.
The second object of the present invention is to provide an interactive image stylization processing system.
The third object of the present invention is to provide a computing device.
The fourth object of the present invention is to provide a storage medium.
The first object of the present invention is achieved through the following technical scheme:
An interactive image stylization processing method, the method comprising:
obtaining a plurality of first training images and a plurality of second training images; wherein the first training images are images containing a target and a background, and the second training images are style images;
constructing a matting network, and performing adaptive segmentation on the target and background of the first training images to obtain first target images;
constructing a stylization neural network, inputting the first target images and the second training images into the stylization neural network, and training the stylization neural network;
according to a user's interactive input, performing adaptive segmentation on the target and background of an image to be processed using the matting network to obtain a second target image;
performing stylization on the second target image using the trained stylization neural network to obtain the final stylized image.
Further, constructing a matting network and performing adaptive segmentation on the target and background of the first training images to obtain first target images specifically comprises:
in the RGB color space, modeling the target and the background of the first training image separately with a full-covariance Gaussian mixture model of K Gaussian components;
segmenting the target and background of the first training image with an iterative energy-minimization algorithm; wherein the energy-minimization algorithm is initialized by the K-means algorithm: the pixels belonging to the target and to the background are each clustered into K classes, and the mean and covariance of each class are estimated from the RGB values of the pixels belonging to the target and background respectively;
building a graph over the target and background of the first training image, processing the weighted graph with a max-flow algorithm, and segmenting it with a min-cut algorithm;
iteratively optimizing the Gaussian mixture model and the segmentation result to obtain the first target image.
Further, the Gibbs energy of the first training image is:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)
wherein:
α is the label of a pixel (0: background, 1: target, 2: possible background, 3: possible target); k = 1, ..., K denotes the class assigned by the K-means clustering; θ denotes the parameters of the Gaussian mixture model; z is the image data of the first training image;
U is the region term of the energy function, representing the penalty for classifying a pixel of the first training image as target or background:
U(α, k, θ, z) = Σ_n D(α_n, k_n, θ, z_n)
where, taking the negative logarithm of the Gaussian mixture model, the probability that a pixel belongs to the target or the background gives:
D(α_n, k_n, θ, z_n) = −log π(α_n, k_n) + (1/2) log det Σ(α_n, k_n) + (1/2) [z_n − μ(α_n, k_n)]^T Σ(α_n, k_n)^{-1} [z_n − μ(α_n, k_n)]
wherein π is the weight with which a single Gaussian component contributes to the probability, μ is the mean vector of each Gaussian component, and Σ is its covariance matrix;
V is the boundary term, representing the penalty for discontinuity between neighboring pixels m and n; it measures the similarity of neighboring pixels m and n in RGB space, as follows:
V(α, z) = γ Σ_{(m,n)∈C} [α_m ≠ α_n] exp(−β ||z_m − z_n||²)
wherein β = 1 / (2·⟨(z_m − z_n)²⟩) is determined by the contrast of the first training image, γ is a constant, and C is the set of pairs of neighboring pixels.
Further, after iteratively optimizing the Gaussian mixture model and the segmentation result, the method further comprises: smoothing the segmentation boundary using border matting.
Further, constructing a stylization neural network, inputting the first target images and the second training images into the stylization neural network, and training the stylization neural network specifically comprises:
establishing a generation network; wherein the generation network consists of an encoder sub-network, an AdaIN sub-network and a decoder sub-network, and the encoder sub-network is built from the first several layers of a pre-trained VGG19 network;
inputting the first target image and the second training image into the encoder sub-network to obtain the output of the encoder sub-network; taking the output of the encoder sub-network as the input of the AdaIN sub-network to obtain the output of the AdaIN sub-network; taking the output of the AdaIN sub-network as the input of the decoder sub-network, and outputting the stylized image from the decoder sub-network;
taking the encoder sub-network as the loss network, and computing the loss from the stylized image output by the decoder sub-network, the output of the AdaIN sub-network and the second training image;
using the back-propagation algorithm with a layer-wise fine-tuning optimization scheme, computing and propagating gradients from the last layer, layer by layer, to update all parameters.
Further, inputting the first target image and the second training image into the encoder sub-network, obtaining the output of the encoder sub-network, taking the output of the encoder sub-network as the input of the AdaIN sub-network, obtaining the output of the AdaIN sub-network, taking the output of the AdaIN sub-network as the input of the decoder sub-network, and outputting the stylized image from the decoder sub-network specifically comprises:
inputting the first target image and the second training image into the encoder sub-network, encoding the first target image and the second training image in feature space, and outputting a first feature map corresponding to the first target image and a second feature map corresponding to the second training image;
inputting the first feature map and the second feature map output by the encoder sub-network into the AdaIN sub-network, and outputting a target feature map;
inputting the target feature map output by the AdaIN sub-network into the decoder sub-network, transforming it back to image space through the decoder sub-network, and outputting the stylized image.
Further, the loss consists of a content loss and a style loss, as follows:
L = Lc + λLs
wherein Lc is the content loss, Ls is the style loss, and λ is a weight;
the content loss Lc is the Euclidean distance between the output of the AdaIN sub-network and the re-encoded stylized image output by the decoder sub-network, as follows:
Lc = ||f(g(t)) − t||₂
wherein t is the output of the AdaIN sub-network and g(t) is the stylized image output by the decoder sub-network;
the style loss is as follows:
Ls = Σ_{i=1}^{L} ( ||μ(φ_i(g(t))) − μ(φ_i(s))||₂ + ||σ(φ_i(g(t))) − σ(φ_i(s))||₂ )
wherein σ is the standard deviation, μ is the mean, φ_i is one layer of the pre-trained VGG19 network, and s is the image data of the second training image.
The second object of the present invention is achieved through the following technical scheme:
An interactive image stylization processing system, the system comprising:
an image acquisition module, for obtaining a plurality of first training images and a plurality of second training images; wherein the first training images are images containing a target and a background, and the second training images are style images;
a matting network construction module, for constructing a matting network and performing adaptive segmentation on the target and background of the first training images to obtain first target images;
a stylization neural network construction module, for constructing a stylization neural network, inputting the first target images and the second training images into the stylization neural network, and training the stylization neural network;
a matting module, for performing adaptive segmentation on the target and background of an image to be processed using the matting network, according to a user's interactive input, to obtain a second target image;
a stylization processing module, for performing stylization on the second target image using the trained stylization neural network to obtain the final stylized image.
The third object of the present invention is achieved through the following technical scheme:
A computing device, comprising a processor and a memory for storing a program executable by the processor; when the processor executes the program stored in the memory, the above interactive image stylization processing method is realized.
The fourth object of the present invention is achieved through the following technical scheme:
A storage medium storing a program; when the program is executed by a processor, the above interactive image stylization processing method is realized.
Compared with the prior art, the present invention has the following beneficial effects:
By constructing a matting network and a stylization neural network, the present invention can, according to a user's interactive input, perform adaptive segmentation on the target and background of an image to be processed using the matting network to obtain the target image of the user-specified region, and then stylize that target image with the stylization neural network, thereby simulating the drawing styles of different art forms, enriching the visual presentation of the image and enhancing its appeal, with practical value in fields such as multimedia and art education.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the structures shown in these drawings without creative effort.
Fig. 1 is the flowchart of the interactive image stylization processing method of Embodiment 1 of the present invention.
Fig. 2 is the flowchart of constructing the matting network in Embodiment 1 of the present invention.
Fig. 3 is the flowchart of constructing and training the stylization neural network in Embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of the image to be processed in Embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of the user's interactive input on the image to be processed in Embodiment 1 of the present invention.
Fig. 6 is a schematic diagram of the image output by the matting network in Embodiment 1 of the present invention.
Fig. 7 is a schematic diagram of the reference style image in Embodiment 1 of the present invention.
Fig. 8 is a schematic diagram of the image output by the stylization neural network in Embodiment 1 of the present invention.
Fig. 9 is the structural block diagram of the interactive image stylization processing system of Embodiment 2 of the present invention.
Fig. 10 is the structural block diagram of the matting network construction module of Embodiment 2 of the present invention.
Fig. 11 is the structural block diagram of the stylization neural network construction module of Embodiment 2 of the present invention.
Fig. 12 is the structural block diagram of the computing device of Embodiment 3 of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment 1:
As shown in Fig. 1, this embodiment provides an interactive image stylization processing method, which comprises the following steps:
S101, obtaining a plurality of first training images and a plurality of second training images.
In this step, the first training images are images containing a target (foreground) and a background, and the second training images are style images; in this embodiment the target is a human face.
The first training images and the second training images can be obtained by acquisition, for example by shooting images containing a target and a background, as well as style images, with a camera; they can also be retrieved from a database, for example by storing images containing a target and a background, as well as style images, in a database in advance and looking them up.
S102, constructing a matting network, and performing adaptive segmentation on the target and background of the first training image to obtain the first target image.
In this step, the adaptive segmentation of the target and background of the first training image uses the GrabCut algorithm: the target of the first training image is extracted by GrabCut to obtain the first target image.
As shown in Fig. 2, step S102 specifically comprises:
S1021, in the RGB color space, modeling the target and the background of the first training image separately with a full-covariance Gaussian mixture model (GMM) of K Gaussian components (here K = 5).
The Gibbs energy of the first training image is:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)
wherein:
α is the label of a pixel (0: background, 1: target, 2: possible background, 3: possible target); k = 1, ..., K denotes the class assigned by the K-means clustering; θ denotes the parameters of the Gaussian mixture model; z is the image data of the first training image;
U is the region term of the energy function, representing the penalty for classifying a pixel of the first training image as target or background:
U(α, k, θ, z) = Σ_n D(α_n, k_n, θ, z_n)
where, taking the negative logarithm of the Gaussian mixture model, the probability that a pixel belongs to the target or the background gives:
D(α_n, k_n, θ, z_n) = −log π(α_n, k_n) + (1/2) log det Σ(α_n, k_n) + (1/2) [z_n − μ(α_n, k_n)]^T Σ(α_n, k_n)^{-1} [z_n − μ(α_n, k_n)]
wherein π is the weight with which a single Gaussian component contributes to the probability, μ is the mean vector of each Gaussian component (a three-element vector, because there are three RGB channels), and Σ is its covariance matrix (a 3×3 matrix, because there are three RGB channels);
V is the boundary term, representing the penalty for discontinuity between neighboring pixels m and n; it measures the similarity of neighboring pixels m and n in RGB space, as follows:
V(α, z) = γ Σ_{(m,n)∈C} [α_m ≠ α_n] exp(−β ||z_m − z_n||²)
wherein β = 1 / (2·⟨(z_m − z_n)²⟩) is determined by the contrast of the first training image, so that V works properly whether the contrast is high or low; γ is a constant, taken empirically as 50; C is the set of pairs of neighboring pixels.
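As an illustration, the contrast factor β and the boundary term V described above can be computed as follows. This is a simplified NumPy sketch: for brevity it only considers horizontally adjacent pixel pairs, whereas GrabCut proper uses a full 8-neighbourhood; function names are illustrative, not from the patent.

```python
import numpy as np

def beta_from_image(z):
    # beta = 1 / (2 * <||z_m - z_n||^2>), the expectation taken over
    # neighbouring pixel pairs (horizontal pairs only, a simplification).
    sq = np.sum((z[:, 1:] - z[:, :-1]) ** 2, axis=-1)
    return 1.0 / (2.0 * sq.mean())

def boundary_term(z, alpha, gamma=50.0):
    # V(alpha, z) = gamma * sum over neighbouring pairs with different
    # labels of exp(-beta * ||z_m - z_n||^2)
    beta = beta_from_image(z)
    sq = np.sum((z[:, 1:] - z[:, :-1]) ** 2, axis=-1)
    differs = (alpha[:, 1:] != alpha[:, :-1]).astype(float)
    return gamma * float(np.sum(differs * np.exp(-beta * sq)))

rng = np.random.default_rng(0)
z = rng.random((4, 5, 3))               # tiny 4x5 RGB image
alpha = (z[..., 0] > 0.5).astype(int)   # toy target/background labels
V = boundary_term(z, alpha)
```

Because β adapts to the image's own contrast, the exponential penalty stays meaningful for both high- and low-contrast images, as the text notes.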
S1022, segmenting the target and background of the first training image with an iterative energy-minimization algorithm.
In this step, the energy-minimization algorithm is initialized by the K-means algorithm: the pixels belonging to the target and to the background are each clustered into K classes, and the mean and covariance of each class are estimated from the RGB values of the pixels belonging to the target and background respectively.
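The K-means initialization described in this step can be sketched in NumPy. This is a simplified, hypothetical implementation covering one region's mixture (real GrabCut fits separate mixtures for target and background); function names are illustrative.

```python
import numpy as np

def kmeans_labels(pixels, K=5, iters=10, seed=0):
    # Cluster pixel RGB values into K classes (the K Gaussian components).
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=K, replace=False)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(K):
            members = pixels[labels == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return labels

def estimate_gmm(pixels, labels, K=5):
    # Mean and full covariance of each class, estimated from the RGB
    # values of its member pixels; weight = fraction of pixels in class.
    means, covs, weights = [], [], []
    for k in range(K):
        members = pixels[labels == k]
        means.append(members.mean(axis=0))
        covs.append(np.cov(members.T) if len(members) > 1 else np.eye(3))
        weights.append(len(members) / len(pixels))
    return np.array(means), np.array(covs), np.array(weights)

rng = np.random.default_rng(1)
fg = rng.random((300, 3))     # RGB values of pixels labelled "target"
labels = kmeans_labels(fg, K=5)
means, covs, weights = estimate_gmm(fg, labels, K=5)
```

The same procedure is run on the background pixels to obtain the background mixture before the iterative minimization starts.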
S1023, building a graph over the target and background of the first training image, processing the weighted graph with the max-flow algorithm, and segmenting it with the min-cut algorithm.
In this step, the weighted graph is processed by the max-flow algorithm, and the segmentation is obtained with the min-cut algorithm as the minimum of the energy, as in the following formula:
min_{α_n : n ∈ T_U} min_k E(α, k, θ, z)
S1024, iteratively optimizing the Gaussian mixture model and the segmentation result to obtain the first target image.
Steps S1021–S1023 are repeated until convergence; each iteration alternately optimizes the Gaussian mixture model and the segmentation result. After the iterative optimization, post-processing such as smoothing the segmentation boundary with border matting can also be applied, so as to obtain a first target image with better quality.
S103, constructing a stylization neural network, inputting the first target image and the second training image into the stylization neural network, and training the stylization neural network.
In this step, the stylization neural network is trained on the TensorFlow platform, based on arbitrary-style fast transfer (Arbitrary Style Transfer in Real-time). The first target image is stylized in combination with the second training image, realizing arbitrary image stylization; the experimental results were further improved by tuning hyperparameters and training the model several times.
The stylization neural network is mainly divided into two parts: a generation network and a loss network. The generation network is a feed-forward network, later used as the style transfer network; the loss network provides the constraints during training.
As shown in Fig. 3, step S103 specifically comprises:
S1031, establishing the generation network.
In this step, the generation network consists of an encoder sub-network, an AdaIN sub-network and a decoder sub-network; the encoder sub-network uses the first several layers (up to relu4_1) of a pre-trained VGG19 network.
S1032, inputting the first target image and the second training image into the encoder sub-network to obtain the output of the encoder sub-network; taking the output of the encoder sub-network as the input of the AdaIN sub-network to obtain the output of the AdaIN sub-network; taking the output of the AdaIN sub-network as the input of the decoder sub-network, and outputting the stylized image from the decoder sub-network.
In this step, the specific processing of the encoder sub-network, the AdaIN sub-network and the decoder sub-network is as follows:
1) Encoder sub-network: the encoder sub-network has an encoder, denoted f. The first target image is the content image, denoted c, and the second training image is the style image, denoted s. The content image c and the style image s are input into the encoder sub-network and encoded in feature space, outputting the first feature map f(c) corresponding to the content image c and the second feature map f(s) corresponding to the style image s.
2) AdaIN sub-network: the first feature map f(c) and the second feature map f(s) output by the encoder sub-network are input into the AdaIN sub-network, which outputs the target feature map t. Specifically, the content feature map is normalized: the per-channel mean and variance of the content feature map are aligned to match the per-channel mean and variance of the style feature map:
t = AdaIN(f(c), f(s))
wherein
AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y)
wherein σ is the standard deviation and μ is the mean.
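The AdaIN operation above is straightforward to sketch in NumPy. This assumes feature maps laid out as (channels, height, width), with statistics taken per channel over spatial positions; it is a minimal sketch, not the patent's actual implementation.

```python
import numpy as np

def adain(fc, fs, eps=1e-5):
    # t = sigma(fs) * (fc - mu(fc)) / sigma(fc) + mu(fs)
    mu_c = fc.mean(axis=(1, 2), keepdims=True)
    sd_c = fc.std(axis=(1, 2), keepdims=True) + eps   # eps avoids /0
    mu_s = fs.mean(axis=(1, 2), keepdims=True)
    sd_s = fs.std(axis=(1, 2), keepdims=True) + eps
    return sd_s * (fc - mu_c) / sd_c + mu_s

rng = np.random.default_rng(0)
fc = rng.normal(0.0, 1.0, (4, 8, 8))   # "content" feature map
fs = rng.normal(3.0, 2.0, (4, 8, 8))   # "style" feature map
t = adain(fc, fs)
```

After the operation, each channel of t carries the style features' mean and standard deviation while preserving the spatial structure of the content features.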
3) Decoder sub-network: the decoder sub-network has a decoder, denoted g. The target feature map t output by the AdaIN sub-network is input into the decoder sub-network and transformed back to image space by the decoder g, outputting the stylized image T(c, s):
T(c, s) = g(t)
The decoder sub-network generally uses a network structure symmetric to the encoder sub-network; its weight parameters need to be trained. At the start the parameters can be initialized randomly, and they are then updated continuously by gradient descent so that the overall loss function decreases and the network gradually converges.
S1033, taking the encoder sub-network as the loss network, and computing the loss from the stylized image output by the decoder sub-network, the output of the AdaIN sub-network, and the style image.
Specifically, the loss L is computed from the stylized image g(t), the target feature map t and the style image s. The loss L consists of a content loss and a style loss, as follows:
L = Lc + λLs
wherein Lc is the content loss, Ls is the style loss, and λ is a weight; the weight chosen during training is 2.0;
the content loss Lc is the Euclidean distance between the target feature map t and the re-encoded stylized image g(t), as follows:
Lc = ||f(g(t)) − t||₂
the style loss Ls is as follows:
Ls = Σ_{i=1}^{L} ( ||μ(φ_i(g(t))) − μ(φ_i(s))||₂ + ||σ(φ_i(g(t))) − σ(φ_i(s))||₂ )
wherein σ is the standard deviation, μ is the mean, and φ_i is one layer of the pre-trained VGG19 network; the layers relu1_1, relu2_1, relu3_1 and relu4_1 are selected with equal weight.
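The content and style losses can be illustrated with a toy NumPy sketch. The random arrays stand in for real VGG19 feature maps (which are not computed here); function names are illustrative, not from the patent.

```python
import numpy as np

def mu_sigma(x):
    # per-channel mean and std over spatial positions, x is (C, H, W)
    return x.mean(axis=(1, 2)), x.std(axis=(1, 2))

def content_loss(f_gt, t):
    # Lc = ||f(g(t)) - t||_2
    return float(np.linalg.norm(f_gt - t))

def style_loss(phi_out, phi_style):
    # Ls = sum_i ||mu(phi_i(g(t))) - mu(phi_i(s))||_2
    #          + ||sigma(phi_i(g(t))) - sigma(phi_i(s))||_2
    total = 0.0
    for a, b in zip(phi_out, phi_style):
        mu_a, sd_a = mu_sigma(a)
        mu_b, sd_b = mu_sigma(b)
        total += float(np.linalg.norm(mu_a - mu_b) + np.linalg.norm(sd_a - sd_b))
    return total

rng = np.random.default_rng(0)
t = rng.random((4, 8, 8))                       # AdaIN output (toy)
f_gt = t + 0.1 * rng.normal(size=(4, 8, 8))     # re-encoded g(t) (toy)
phi_out = [rng.random((4, 8, 8)) for _ in range(4)]  # 4 VGG layers (toy)
phi_sty = [rng.random((4, 8, 8)) for _ in range(4)]
lam = 2.0
L = content_loss(f_gt, t) + lam * style_loss(phi_out, phi_sty)
```

Matching only per-layer means and standard deviations (rather than full Gram matrices) is what makes the AdaIN-style loss cheap to evaluate over the four selected relu layers.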
S1034, using the back-propagation algorithm with a layer-wise fine-tuning optimization scheme: gradients are computed and propagated from the last layer, passed layer by layer, and all parameters are updated.
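Layer-by-layer gradient propagation and parameter updating can be illustrated on a toy two-layer linear network trained with plain SGD; this is a didactic sketch, not the patent's actual network or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((32, 3))
Y = X @ np.array([[1.0], [-2.0], [0.5]])   # target linear mapping
W1 = rng.normal(0, 0.1, (3, 4))            # layer 1 parameters
W2 = rng.normal(0, 0.1, (4, 1))            # layer 2 parameters
lr = 0.1
losses = []
for _ in range(200):
    H = X @ W1                 # forward pass, layer 1
    P = H @ W2                 # forward pass, layer 2
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    gP = 2 * err / len(X)      # gradient at the last layer's output
    gW2 = H.T @ gP             # gradient w.r.t. layer-2 parameters
    gH = gP @ W2.T             # gradient passed back to layer 1
    gW1 = X.T @ gH             # gradient w.r.t. layer-1 parameters
    W2 -= lr * gW2             # every layer's parameters are updated
    W1 -= lr * gW1
```

The gradient is first computed at the last layer and then passed backwards through each earlier layer, exactly the flow the step describes; in the patent's setting the same chain runs through the decoder (the encoder being a fixed pre-trained VGG19).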
The above steps S101–S103 constitute the training stage, i.e. the construction stage of the matting network and the stylization neural network; the following steps S104–S105 constitute the application (testing) stage. It will be appreciated that after steps S101–S103 are completed on one computing device (e.g. a computer), the application (testing) stage of steps S104–S105 can be entered on that same computing device; alternatively, the matting network and stylization neural network built on this computing device can be shared with other computing devices, on which the application stage of steps S104–S105 is then entered.
S104, according to the user's interactive input, performing adaptive segmentation on the target and background of the image to be processed using the matting network to obtain the second target image.
The realization of this step is similar to step S102 above; the main difference lies in step S1022. The user can directly draw a box around the target to obtain an initial trimap T: all pixels outside the box are taken as background pixels T_B, and the pixels inside the box T_U are all taken as pixels that "may be target". The user can then refine the box-selected target with a brush, so as to adaptively segment the target and background.
For each pixel n in T_B, the label of pixel n is initialized as α_n = 0, i.e. a background pixel;
for each pixel n in T_U, the label of pixel n is initialized as α_n = 3, i.e. a "possible target" pixel.
The Gaussian mixture models of the target and background are estimated from these pixels. Then, with the K-means algorithm, the pixels belonging to the target and to the background are each clustered into K classes, i.e. the K Gaussian components of the Gaussian mixture model.
According to the obtained initial trimap T, a Gaussian component of the mixture model is assigned to each pixel; the component with the highest probability is the k_n-th Gaussian component of pixel n:
k_n = argmin_k D_n(α_n, k, θ, z_n)
Using the clustering of pixels into Gaussian components done in step S1021 as the pixel sample set, the mean and covariance of each component are estimated from the RGB values of its pixel samples, and the weight of each Gaussian component is determined by the ratio of the number of pixels belonging to that component to the total number of pixels.
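Assigning each pixel to its most probable Gaussian component and deriving the component weights can be sketched as follows; D is the region term defined in step S1021, and the toy parameters here are hypothetical.

```python
import numpy as np

def neg_log_likelihood(z, mu, cov, pi):
    # D = -log(pi) + 1/2 log det(Sigma) + 1/2 (z-mu)^T Sigma^-1 (z-mu)
    d = z - mu
    return float(-np.log(pi) + 0.5 * np.log(np.linalg.det(cov))
                 + 0.5 * d @ np.linalg.inv(cov) @ d)

def assign_components(pixels, mus, covs, pis):
    # k_n = argmin_k D_n(...): the component with the highest probability
    # (lowest negative log-likelihood) for each pixel.
    D = np.array([[neg_log_likelihood(z, mu, cov, pi)
                   for mu, cov, pi in zip(mus, covs, pis)]
                  for z in pixels])
    return D.argmin(axis=1)

rng = np.random.default_rng(0)
K = 3
mus = rng.random((K, 3))                      # toy component means
covs = np.stack([np.eye(3) * 0.01] * K)       # toy covariances
pis = np.full(K, 1.0 / K)                     # equal initial weights
pixels = rng.random((50, 3))
kn = assign_components(pixels, mus, covs, pis)
# weight of a component = (#pixels assigned to it) / (total #pixels)
weights = np.bincount(kn, minlength=K) / len(pixels)
```

With the assignments in hand, each component's mean and covariance are re-estimated from its own pixel samples, which is exactly the per-iteration GMM update of the interactive GrabCut loop.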
In this step, the image to be processed is shown in Fig. 4, the user's interactive input on the image to be processed is shown in Fig. 5, and the image output by the matting network is shown in Fig. 6.
S105, performing stylization on the second target image using the trained stylization neural network to obtain the final stylized image.
Specifically, the second target image and a reference style image chosen from the second training images are input into the trained stylization neural network, which outputs the final stylized image; the final stylized image has the stylization effect of the reference style image.
In this step, the reference style image is shown in Fig. 7, and the image output by the stylization neural network is shown in Fig. 8.
The interactive image stylization processing method of this embodiment can adaptively extract the target (foreground) according to the shape of the target face and the particularities of the illumination, so that stylization can be applied to a specified region of the target face. At the same time, the method simplifies the manual operations of face image fusion, improving the efficiency and ease of use of face fusion editing tools.
Those skilled in the art will understand that all or part of the steps of the method of the above embodiment can be completed by a program instructing the relevant hardware, and the corresponding program may be stored in a computer-readable storage medium.
It should be noted that although the method operations of the above embodiment are described in a particular order in the accompanying drawings, this does not require or imply that these operations must be executed in that particular order, or that all of the illustrated operations must be executed, to achieve the desired result. On the contrary, the described steps may be executed in a different order; additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step, and/or one step may be decomposed into multiple steps.
Embodiment 2:
As shown in Fig. 9, this embodiment provides an interactive image stylization processing system, which includes an image acquisition module 901, a matting-network construction module 902, a stylization-neural-network construction module 903, a matting module 904, and a stylization processing module 905. The specific functions of the modules are as follows:
The image acquisition module 901 is configured to obtain multiple first training images and multiple second training images, wherein the first training images are images containing a target and a background, and the second training images are style images.
The matting-network construction module 902 is configured to construct a matting network that adaptively segments the target and background of the first training images to obtain first target images.
The stylization-neural-network construction module 903 is configured to construct a stylization neural network, input the first target images and the second training images into the stylization neural network, and train the stylization neural network.
The matting module 904 is configured to adaptively segment, according to the user's interactive input, the target and background of the image to be processed using the matting network, to obtain a second target image.
The stylization processing module 905 is configured to stylize the second target image using the trained stylization neural network to obtain the final stylized image.
As shown in Fig. 10, the matting-network construction module 902 includes:
A modeling unit 9021, configured to model the target and the background of the first training image separately in RGB color space, each with a full-covariance Gaussian mixture model of K Gaussian components.
A first segmentation unit 9022, configured to segment the target and background of the first training image using an iterative energy-minimization algorithm, wherein the energy-minimization algorithm is initialized by the K-means algorithm: the pixels belonging to the target and to the background are clustered into K classes respectively, and the means and covariances of the K classes are estimated from the RGB values of the pixels belonging to the target and the background.
A second segmentation unit 9023, configured to build a graph from the target and background of the first training image, process the weighted graph with a max-flow algorithm, and perform segmentation with a min-cut algorithm.
An iteration unit 9024, configured to iteratively optimize the Gaussian mixture models and the segmentation result to obtain the first target image.
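The alternation performed by units 9022-9024 can be sketched as follows (NumPy; a simplified illustration only: single Gaussians stand in for the K-component mixtures, and a per-pixel likelihood comparison stands in for the graph-cut step):

```python
import numpy as np

def gauss_nll(x, mean, var):
    """Negative log-likelihood under a diagonal-covariance Gaussian."""
    return 0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

def iterate_segmentation(pixels, init_fg_mask, n_iter=5):
    """Alternate between refitting foreground/background color models
    and reassigning pixels to the more likely model."""
    mask = init_fg_mask.copy()
    for _ in range(n_iter):
        fg, bg = pixels[mask], pixels[~mask]
        fg_mean, fg_var = fg.mean(0), fg.var(0) + 1e-6
        bg_mean, bg_var = bg.mean(0), bg.var(0) + 1e-6
        # reassign: lower negative log-likelihood wins (graph cut omitted)
        mask = gauss_nll(pixels, fg_mean, fg_var) < gauss_nll(pixels, bg_mean, bg_var)
    return mask

# toy data: 3 dark pixels, 3 bright pixels, with a noisy initialization
px = np.array([[20., 20, 20], [25, 25, 25], [30, 30, 30],
               [220, 220, 220], [225, 225, 225], [230, 230, 230]])
init = np.array([True, True, False, False, False, True])
seg = iterate_segmentation(px, init)
```

In the real pipeline each reassignment step would be the min-cut of the weighted graph, which also accounts for the boundary term between neighboring pixels.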
As shown in Fig. 11, the stylization-neural-network construction module 903 includes:
An establishing unit 9031, configured to establish a generation network, wherein the generation network consists of an encoder sub-network, an AdaIN sub-network, and a decoder sub-network, and the encoder sub-network is built from the first several layers of a pre-trained VGG19 network.
A processing unit 9032, configured to input the first target image and the second training image into the encoder sub-network to obtain the output of the encoder sub-network, take the output of the encoder sub-network as the input of the AdaIN sub-network to obtain the output of the AdaIN sub-network, and take the output of the AdaIN sub-network as the input of the decoder sub-network, which outputs a stylized image.
A loss computation unit 9033, configured to use the encoder sub-network as the loss-computation network and compute the loss from the stylized image output by the decoder sub-network, the output of the AdaIN sub-network, and the second training image.
An optimization unit 9034, configured to apply the back-propagation algorithm with a layer-wise fine-tuning strategy: gradients are computed starting from the last layer and propagated backward layer by layer, updating all parameters.
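The AdaIN sub-network used by units 9031-9033 re-normalizes each channel of the content feature map to the style feature map's channel-wise mean and standard deviation; a NumPy sketch (channel-first feature shapes and values are illustrative):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: shift each channel of the content
    feature map (C, H, W) to the style feature map's mean and std."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True) + eps
    return s_sd * (content - c_mu) / c_sd + s_mu

c = np.random.default_rng(0).normal(5.0, 2.0, size=(4, 8, 8))   # content features
s = np.random.default_rng(1).normal(-1.0, 0.5, size=(4, 8, 8))  # style features
t = adain(c, s)
```

After the transform, each channel of `t` carries the style features' statistics while preserving the spatial structure of the content features; the decoder then maps `t` back to image space.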
For the specific implementation of each module in this embodiment, reference may be made to Embodiment 1 above, which is not repeated here. It should be noted that the apparatus provided in this embodiment is only illustrated by the above division of functional modules; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure may be divided into different functional modules to complete all or part of the functions described above.
Embodiment 3:
This embodiment provides a computing device, which may be a computer, as shown in Fig. 12. It includes a processor 1202, a memory, an input device 1203, a display 1204, and a network interface 1205 connected through a system bus 1201. The processor provides computing and control capability. The memory includes a non-volatile storage medium 1206 and an internal memory 1207; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. When the processor 1202 executes the computer program stored in the memory, the interactive image stylization processing method of Embodiment 1 above is implemented, as follows:
Obtain multiple first training images and multiple second training images, wherein the first training images are images containing a target and a background, and the second training images are style images;
Construct a matting network, and adaptively segment the target and background of the first training images to obtain first target images;
Construct a stylization neural network, input the first target images and the second training images into the stylization neural network, and train the stylization neural network;
According to the user's interactive input, adaptively segment the target and background of the image to be processed using the matting network to obtain a second target image;
Stylize the second target image using the trained stylization neural network to obtain the final stylized image.
Embodiment 4:
This embodiment provides a storage medium, which is a computer-readable storage medium storing a computer program; when the program is executed by a processor, the interactive image stylization processing method of Embodiment 1 above is implemented, as follows:
Obtain multiple first training images and multiple second training images, wherein the first training images are images containing a target and a background, and the second training images are style images;
Construct a matting network, and adaptively segment the target and background of the first training images to obtain first target images;
Construct a stylization neural network, input the first target images and the second training images into the stylization neural network, and train the stylization neural network;
According to the user's interactive input, adaptively segment the target and background of the image to be processed using the matting network to obtain a second target image;
Stylize the second target image using the trained stylization neural network to obtain the final stylized image.
The storage medium described in this embodiment may be a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), a USB flash drive, a portable hard disk, or other media.
In conclusion the present invention scratches figure network and stylized neural network by constructing, it can be defeated according to the interaction of user Enter, carries out adaptivenon-uniform sampling using the target and background of stingy figure network handles processing image, obtain the target that user specifies region Image recycles stylized neural network to carry out stylized processing to the target image, to simulate different art forms Style is drawn, enhances the form of expression of visual information in image, promotes the attraction of image, is led in multimedia, art teaching etc. Domain has practical value.
The above are only preferred embodiments of the present patent, but the protection scope of the patent is not limited thereto. Any equivalent substitution or modification made, within the scope disclosed by the present patent, by a person skilled in the art according to the technical solution and inventive concept of the patent falls within the protection scope of the patent.

Claims (10)

1. An interactive image stylization processing method, characterized in that the method comprises:
Obtaining multiple first training images and multiple second training images, wherein the first training images are images containing a target and a background, and the second training images are style images;
Constructing a matting network, and adaptively segmenting the target and background of the first training images to obtain first target images;
Constructing a stylization neural network, inputting the first target images and the second training images into the stylization neural network, and training the stylization neural network;
According to a user's interactive input, adaptively segmenting the target and background of an image to be processed using the matting network to obtain a second target image;
Stylizing the second target image using the trained stylization neural network to obtain a final stylized image.
2. The interactive image stylization processing method according to claim 1, characterized in that constructing a matting network and adaptively segmenting the target and background of the first training image to obtain a first target image specifically comprises:
In RGB color, using the full covariance mixed Gauss model of K Gaussian component respectively to the first training image Target and background is modeled;
The target and background of the first training image is partitioned into using iteration Energy minimization;Wherein, the energy minimizes Algorithm is initialized by K-means algorithm, respectively by belong to target and background pixel cluster be K class, the mean value of K class and Covariance is estimated to obtain by belonging to the rgb value of the pixel of target and background;
According to the target and background of the first training image, a figure is established, weight figure is handled by maximum-flow algorithm, and with minimum Algorithm is cut to be split;
Iteration optimization mixed Gauss model and segmentation result obtain first object image.
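The max-flow/min-cut step above can be illustrated with a compact Edmonds-Karp implementation on a toy two-pixel graph (the node roles and edge weights are illustrative, not from the patent):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on an adjacency-matrix capacity graph.
    By the max-flow/min-cut theorem, the value equals the min-cut weight."""
    n = len(cap)
    flow = 0
    residual = [row[:] for row in cap]
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # find the bottleneck capacity along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# nodes: 0 = source (target model), 1 and 2 = pixels, 3 = sink (background model)
cap = [[0, 9, 2, 0],
       [0, 0, 3, 4],
       [0, 3, 0, 8],
       [0, 0, 0, 0]]
cut = max_flow(cap, 0, 3)  # min-cut weight of this toy graph
```

In the segmentation setting, terminal edge weights come from the region term U and inter-pixel edge weights from the boundary term V; the min cut then separates target-labeled from background-labeled pixels.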
3. The interactive image stylization processing method according to claim 2, characterized in that the Gibbs energy of the first training image is as follows:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)
Wherein:
α is the label of a pixel: 0 denotes background, 1 denotes target, 2 denotes possible background, and 3 denotes possible target; k = 1, ..., K denotes the class from K-means clustering; θ denotes the weights of the Gaussian components; z is the image data of the first training image;
U is the region term of the energy function, indicating the penalty, over the image data of the first training image, for classifying a pixel as target or background:

U(α, k, θ, z) = Σn D(αn, kn, θ, zn)
According to the Gaussian mixture model, taking the negative logarithm gives the penalty for a pixel belonging to the target or the background:

D(αn, kn, θ, zn) = −log π(αn, kn) + (1/2) log det Σ(αn, kn) + (1/2) [zn − μ(αn, kn)]ᵀ Σ(αn, kn)⁻¹ [zn − μ(αn, kn)]
Wherein π is the weight with which a single Gaussian component contributes to the probability, μ is the mean vector of each Gaussian component, and Σ is its covariance matrix;
V is the boundary energy term, indicating the penalty for discontinuity between neighboring pixels m and n; it measures the similarity of neighboring pixels m and n in RGB space, as follows:

V(α, z) = γ Σ(m,n)∈C [αn ≠ αm] exp(−β ‖zm − zn‖²)
Wherein β = (2⟨(zm − zn)²⟩)⁻¹ is determined by the contrast of the first training image, γ is a constant, and C is the set of pairs of neighboring pixels.
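The two energy terms of claim 3 can be evaluated directly from their definitions; a NumPy sketch with illustrative parameters (a single Gaussian component stands in for the mixture, and the neighborhood C is simplified to horizontal pixel pairs):

```python
import numpy as np

def data_term(z, pi, mu, cov):
    """Per-pixel region penalty D: -log(pi) + 0.5*log|Sigma|
    + 0.5*(z-mu)^T Sigma^{-1} (z-mu), the negative log Gaussian weight."""
    d = z - mu
    return (-np.log(pi)
            + 0.5 * np.log(np.linalg.det(cov))
            + 0.5 * d @ np.linalg.inv(cov) @ d)

def boundary_term(img, alpha, gamma=50.0):
    """Boundary penalty V: gamma * sum, over adjacent pairs with different
    labels, of exp(-beta * ||z_m - z_n||^2), beta = (2<(z_m - z_n)^2>)^-1."""
    diff = img[:, 1:] - img[:, :-1]            # horizontal neighbor pairs
    sq = np.sum(diff ** 2, axis=-1)
    beta = 1.0 / (2.0 * sq.mean() + 1e-12)
    cut = alpha[:, 1:] != alpha[:, :-1]        # only label changes are penalized
    return gamma * np.sum(np.exp(-beta * sq)[cut])

# U: a pixel near the component mean is penalized less than a distant one
mu, cov = np.full(3, 100.0), 25.0 * np.eye(3)
near = data_term(np.array([102.0, 99.0, 101.0]), 0.5, mu, cov)
far = data_term(np.array([10.0, 200.0, 30.0]), 0.5, mu, cov)

# V: a cut aligned with a strong edge costs less than one in a flat region
img = np.zeros((2, 4, 3))
img[:, 2:] = 255.0
v_edge = boundary_term(img, np.array([[1, 1, 0, 0], [1, 1, 0, 0]]))
v_flat = boundary_term(img, np.array([[1, 0, 0, 0], [1, 0, 0, 0]]))
```

The comparisons show why energy minimization prefers segmentations whose boundaries follow strong image edges and whose pixels fit their assigned color model.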
4. The interactive image stylization processing method according to any one of claims 2-3, characterized in that after iteratively optimizing the Gaussian mixture models and the segmentation result, the method further comprises: smoothing the boundary of the segmentation using border matting.
5. The interactive image stylization processing method according to claim 1, characterized in that constructing a stylization neural network, inputting the first target image and the second training image into the stylization neural network, and training the stylization neural network specifically comprises:
Establishing a generation network, wherein the generation network consists of an encoder sub-network, an AdaIN sub-network, and a decoder sub-network, and the encoder sub-network is built from the first several layers of a pre-trained VGG19 network;
Inputting the first target image and the second training image into the encoder sub-network to obtain the output of the encoder sub-network, taking the output of the encoder sub-network as the input of the AdaIN sub-network to obtain the output of the AdaIN sub-network, and taking the output of the AdaIN sub-network as the input of the decoder sub-network, which outputs a stylized image;
Using the encoder sub-network as the loss-computation network, and computing the loss from the stylized image output by the decoder sub-network, the output of the AdaIN sub-network, and the second training image;
Using the back-propagation algorithm with a layer-wise fine-tuning strategy: computing gradients starting from the last layer, propagating them backward layer by layer, and updating all parameters.
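The back-propagation scheme of the last step — gradients formed at the last layer and passed backward to update all parameters — can be sketched on a toy two-layer linear network (NumPy; the dimensions, data, and learning rate are illustrative, not the patent's training setup):

```python
import numpy as np

def train_step(x, y, W1, W2, lr=0.01):
    """One back-propagation step through a 2-layer linear net:
    the gradient is formed at the last layer first, then propagated back."""
    h = x @ W1                       # first layer
    out = h @ W2                     # last layer
    err = out - y                    # loss = 0.5 * ||out - y||^2
    gW2 = np.outer(h, err)           # gradient at the last layer
    gh = err @ W2.T                  # gradient passed backward through W2
    gW1 = np.outer(x, gh)            # gradient at the earlier layer
    return W1 - lr * gW1, W2 - lr * gW2, 0.5 * np.sum(err ** 2)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x, y = np.array([1.0, 0.5, -0.5]), np.array([0.2, -0.1])
losses = []
for _ in range(200):
    W1, W2, loss = train_step(x, y, W1, W2)
    losses.append(loss)
```

All parameters receive updates in each step, but the gradient always originates at the output layer, mirroring the layer-by-layer propagation described above.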
6. The interactive image stylization processing method according to claim 5, characterized in that inputting the first target image and the second training image into the encoder sub-network to obtain the output of the encoder sub-network, taking the output of the encoder sub-network as the input of the AdaIN sub-network to obtain the output of the AdaIN sub-network, and taking the output of the AdaIN sub-network as the input of the decoder sub-network to output a stylized image specifically comprises:
Inputting the first target image and the second training image into the encoder sub-network, encoding the first target image and the second training image in feature space, and outputting a first feature map corresponding to the first target image and a second feature map corresponding to the second training image;
Inputting the first feature map and the second feature map output by the encoder sub-network into the AdaIN sub-network, which outputs a target feature map;
Inputting the target feature map output by the AdaIN sub-network into the decoder sub-network, which transforms it back to image space and outputs the stylized image.
7. The interactive image stylization processing method according to claim 5, characterized in that the loss consists of a content loss and a style loss, as follows:
L = Lc + λLs
Wherein Lc is the content loss, Ls is the style loss, and λ is a weight;
The content loss is the Euclidean distance between the output of the AdaIN sub-network and the stylized image output by the decoder sub-network, as follows:
Lc = ‖f(g(t)) − t‖₂
Wherein Lc is the content loss, t is the output of the AdaIN sub-network, g(t) is the stylized image output by the decoder sub-network, and f denotes the encoder sub-network used as the loss network;
The style loss is as follows:

Ls = Σi ( ‖μ(φi(g(t))) − μ(φi(s))‖₂ + ‖σ(φi(g(t))) − σ(φi(s))‖₂ )
Wherein σ denotes the standard deviation, μ denotes the mean, φi denotes one layer of the pre-trained VGG19 network, and s is the image data of the second training image.
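Putting claim 7 together: with a hypothetical single "layer" φ represented directly by channel-wise statistics, the total loss L = Lc + λ·Ls can be sketched as follows (NumPy; the feature maps, the λ value, and the single-layer simplification are illustrative):

```python
import numpy as np

def stats(x):
    """Channel-wise mean and std of a (C, H, W) feature map."""
    return x.mean(axis=(1, 2)), x.std(axis=(1, 2))

def total_loss(f_gt, t, phi_gt, phi_s, lam=2.0):
    """L = Lc + lam * Ls, with Lc = ||f(g(t)) - t||_2 and Ls matching
    the mean/std of one pretend VGG layer between output and style."""
    lc = np.linalg.norm(f_gt - t)
    mu_g, sd_g = stats(phi_gt)
    mu_s, sd_s = stats(phi_s)
    ls = np.linalg.norm(mu_g - mu_s) + np.linalg.norm(sd_g - sd_s)
    return lc + lam * ls

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 8, 8))          # AdaIN output (content target)
f_gt = t + 0.1                          # pretend encoder re-encoding of g(t)
phi_s = rng.normal(size=(4, 8, 8))      # style features at the pretend layer
loss_same = total_loss(f_gt, t, phi_s, phi_s)        # matching styles: Ls = 0
loss_diff = total_loss(f_gt, t, phi_s + 1.0, phi_s)  # channel means shifted by 1
```

When the output's layer statistics match the style image's, only the content term remains; shifting every channel mean by 1 adds λ·‖(1,1,1,1)‖₂ = 4 to the loss.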
8. An interactive image stylization processing system, characterized in that the system comprises:
An image acquisition module, configured to obtain multiple first training images and multiple second training images, wherein the first training images are images containing a target and a background, and the second training images are style images;
A matting-network construction module, configured to construct a matting network that adaptively segments the target and background of the first training images to obtain first target images;
A stylization-neural-network construction module, configured to construct a stylization neural network, input the first target images and the second training images into the stylization neural network, and train the stylization neural network;
A matting module, configured to adaptively segment, according to a user's interactive input, the target and background of an image to be processed using the matting network, to obtain a second target image;
A stylization processing module, configured to stylize the second target image using the trained stylization neural network to obtain a final stylized image.
9. A computing device, comprising a processor and a memory for storing a program executable by the processor, characterized in that when the processor executes the program stored in the memory, the interactive image stylization processing method according to any one of claims 1-7 is implemented.
10. A storage medium storing a program, characterized in that when the program is executed by a processor, the interactive image stylization processing method according to any one of claims 1-7 is implemented.
CN201910396504.2A 2019-05-14 2019-05-14 Interactive image stylization processing method and system, computing device, and storage medium Pending CN110222722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910396504.2A CN110222722A (en) 2019-05-14 2019-05-14 Interactive image stylization processing method and system, computing device, and storage medium


Publications (1)

Publication Number Publication Date
CN110222722A true CN110222722A (en) 2019-09-10

Family

ID=67820970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910396504.2A Pending CN110222722A (en) Interactive image stylization processing method and system, computing device, and storage medium

Country Status (1)

Country Link
CN (1) CN110222722A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820990A (en) * 2015-05-15 2015-08-05 北京理工大学 Interactive-type image-cutting system
CN106548208A (en) * 2016-10-28 2017-03-29 杭州慕锐科技有限公司 A kind of quick, intelligent stylizing method of photograph image
CN107463622A (en) * 2017-07-06 2017-12-12 西南交通大学 A kind of automatic Symbolic method for keeping landmark shape facility
CN107945204A (en) * 2017-10-27 2018-04-20 西安电子科技大学 A kind of Pixel-level portrait based on generation confrontation network scratches drawing method
CN108629747A (en) * 2018-04-25 2018-10-09 腾讯科技(深圳)有限公司 Image enchancing method, device, electronic equipment and storage medium
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN109325903A (en) * 2017-07-31 2019-02-12 北京大学 The method and device that image stylization is rebuild
CN109697690A (en) * 2018-11-01 2019-04-30 北京达佳互联信息技术有限公司 Image Style Transfer method and system
CN109712068A (en) * 2018-12-21 2019-05-03 云南大学 Image Style Transfer and analogy method for cucurbit pyrography


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
LIANG, Lingyu et al.: "Facial Skin Beautification Using Adaptive Region-Aware Masks", IEEE Transactions on Cybernetics *
HUANG, Xun et al.: "Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization", 2017 IEEE International Conference on Computer Vision (ICCV) *
DING, Hong et al.: "Object extraction algorithm based on fast-converging GrabCut", Computer Engineering and Design *
WU, Kailin: "Computer-generated low-polygon style portraits", China Masters' Theses Full-text Database, Information Science and Technology *
LUAN, Yixin: "Image stylization based on deep learning", China Masters' Theses Full-text Database, Information Science and Technology *
LIANG, Lingyu: "Research on adaptive beautification and rendering of face images", China Doctoral Dissertations Full-text Database, Information Science and Technology *
CHEN, Ziwei: "Research and implementation of image transformation based on deep feature interpolation", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689478B (en) * 2019-09-25 2023-12-01 北京字节跳动网络技术有限公司 Image stylization processing method and device, electronic equipment and readable medium
CN110689478A (en) * 2019-09-25 2020-01-14 北京字节跳动网络技术有限公司 Image stylization processing method and device, electronic equipment and readable medium
CN112561779B (en) * 2019-09-26 2023-09-29 北京字节跳动网络技术有限公司 Image stylization processing method, device, equipment and storage medium
CN112561779A (en) * 2019-09-26 2021-03-26 北京字节跳动网络技术有限公司 Image stylization processing method, device, equipment and storage medium
CN112561778A (en) * 2019-09-26 2021-03-26 北京字节跳动网络技术有限公司 Image stylization processing method, device, equipment and storage medium
WO2021109876A1 (en) * 2019-12-02 2021-06-10 Oppo广东移动通信有限公司 Image processing method, apparatus and device, and storage medium
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111127309B (en) * 2019-12-12 2023-08-11 杭州格像科技有限公司 Portrait style migration model training method, portrait style migration method and device
CN111340905A (en) * 2020-02-13 2020-06-26 北京百度网讯科技有限公司 Image stylization method, apparatus, device, and medium
CN111340905B (en) * 2020-02-13 2023-08-04 北京百度网讯科技有限公司 Image stylization method, device, equipment and medium
CN113542759B (en) * 2020-04-15 2024-05-10 辉达公司 Generating an antagonistic neural network assisted video reconstruction
CN113542759A (en) * 2020-04-15 2021-10-22 辉达公司 Generating antagonistic neural network assisted video reconstruction
CN113763232A (en) * 2020-08-10 2021-12-07 北京沃东天骏信息技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN111951154B (en) * 2020-08-14 2023-11-21 中国工商银行股份有限公司 Picture generation method and device containing background and medium
CN111951154A (en) * 2020-08-14 2020-11-17 中国工商银行股份有限公司 Method and device for generating picture containing background and medium
CN112102461A (en) * 2020-11-03 2020-12-18 北京智源人工智能研究院 Face rendering method and device, electronic equipment and storage medium
CN112102461B (en) * 2020-11-03 2021-04-09 北京智源人工智能研究院 Face rendering method and device, electronic equipment and storage medium
CN112465064A (en) * 2020-12-14 2021-03-09 合肥工业大学 Image identification method, device and equipment based on deep course learning
CN113160033A (en) * 2020-12-28 2021-07-23 武汉纺织大学 Garment style migration system and method
CN112766079B (en) * 2020-12-31 2023-05-26 北京航空航天大学 Unsupervised image-to-image translation method based on content style separation
CN112734769B (en) * 2020-12-31 2022-11-04 山东大学 Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
CN112766079A (en) * 2020-12-31 2021-05-07 北京航空航天大学 Unsupervised image-to-image translation method based on content style separation
CN112734769A (en) * 2020-12-31 2021-04-30 山东大学 Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
CN113240599A (en) * 2021-05-10 2021-08-10 Oppo广东移动通信有限公司 Image toning method and device, computer-readable storage medium and electronic equipment
CN113469876A (en) * 2021-07-28 2021-10-01 北京达佳互联信息技术有限公司 Image style migration model training method, image processing method, device and equipment
CN113469876B (en) * 2021-07-28 2024-01-09 北京达佳互联信息技术有限公司 Image style migration model training method, image processing method, device and equipment
CN113763233A (en) * 2021-08-04 2021-12-07 深圳盈天下视觉科技有限公司 Image processing method, server and photographing device

Similar Documents

Publication Publication Date Title
CN110222722A (en) Interactive image stylization processing method and system, computing device, and storage medium
Li et al. A closed-form solution to photorealistic image stylization
CN110378985B (en) Animation drawing auxiliary creation method based on GAN
CN109816009A (en) Multi-tag image classification method, device and equipment based on picture scroll product
CN107845072B (en) Image generating method, device, storage medium and terminal device
CN107993238A (en) A kind of head-and-shoulder area image partition method and device based on attention model
CN109902798A (en) The training method and device of deep neural network
CN110443239A (en) The recognition methods of character image and its device
CN102184221A (en) Real-time video abstract generation method based on user preferences
CN106778852A (en) A kind of picture material recognition methods for correcting erroneous judgement
CN113255813B (en) Multi-style image generation method based on feature fusion
CN110097616B (en) Combined drawing method and device, terminal equipment and readable storage medium
CN111178312B (en) Face expression recognition method based on multi-task feature learning network
CN114663685B (en) Pedestrian re-recognition model training method, device and equipment
CN113408537B (en) Remote sensing image domain adaptive semantic segmentation method
CN108875693A (en) A kind of image processing method, device, electronic equipment and its storage medium
CN110782448A (en) Rendered image evaluation method and device
CN110516734A (en) A kind of image matching method, device, equipment and storage medium
CN109978074A (en) Image aesthetic feeling and emotion joint classification method and system based on depth multi-task learning
CN114758180A (en) Knowledge distillation-based light flower recognition method
CN113838158B (en) Image and video reconstruction method and device, terminal equipment and storage medium
CN113554653A (en) Semantic segmentation method for long-tail distribution of point cloud data based on mutual information calibration
CN115018729A (en) White box image enhancement method for content
CN115082800A (en) Image segmentation method
US11734389B2 (en) Method for generating human-computer interactive abstract image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190910