CN109147003A - Method, device, and storage medium for coloring a line draft image - Google Patents
Method, device, and storage medium for coloring a line draft image
- Publication number
- CN109147003A (application CN201810866829.8A)
- Authority
- CN
- China
- Prior art keywords
- line draft
- image
- coloring
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
Present disclose provides method, equipment and storage mediums that a kind of pair of line manuscript base picture is painted.Wherein, this method comprises: obtaining First Line manuscript base picture and the first reference picture, wherein first reference picture includes image information referenced when painting to the First Line manuscript base picture;And using the model based on machine learning training, it is based on first reference picture, colouring processing is carried out to the First Line manuscript base picture.The disclosure solves the technical issues of difference of coloring effect present in the existing technology painted to line original text and colouring low efficiency.
Description
Technical field
This disclosure relates to field of image processing, method, the equipment painted in particular to a kind of pair of line manuscript base picture
And storage medium.
Background art
In comics and animation production, the artist usually first completes a line draft and then colors it to produce the color draft. Traditionally, coloring is done by hand, region by region, with the help of tools such as Adobe Photoshop. Producing a comic or animation therefore carries a high labor cost, and the long production time also limits how quickly the work can be released.
With the continued development of information technology, techniques have appeared that use artificial intelligence to color line drafts automatically. Automatic coloring is far more efficient than manual coloring and considerably cheaper, requiring only a certain amount of computing resources. PaintsChainer is one such prior technique: it uses a color draft image of a specific style as a reference when coloring a new line draft. However, due to limitations of its method, its results differ greatly from manual coloring and are unsatisfactory; the method is also inefficient, making it hard to commercialize, especially for animation production (each second of animation requires more than ten color drafts).
The prior art PaintsChainer has the following shortcomings:
1) Poor coloring quality. In color drafts produced by PaintsChainer, the boundaries between different regions are blurry; many regions that should be rendered in different colors end up with essentially the same color. Some generated color drafts also exhibit "ripple" artifacts. These are symptoms of overfitting in the artificial neural network model that PaintsChainer uses, caused by the model having too many parameters and too few training samples.
2) Low coloring efficiency. Because the PaintsChainer model has a very large number of parameters, it is slow in practical use.
No effective solution to the above problems has yet been proposed.
Summary of the invention
The embodiments of the present disclosure provide a method, device, and storage medium for coloring a line draft image, so as to at least solve the technical problems of poor coloring quality and low coloring efficiency in existing techniques for coloring line drafts.
According to one aspect of the embodiments of the present disclosure, a method for coloring a line draft image is provided, comprising: obtaining a first line draft image and a first reference image, where the first reference image contains image information to be referenced when coloring the first line draft image; and using a model trained by machine learning to apply coloring to the first line draft image based on the first reference image.
According to another aspect of the embodiments of the present disclosure, a device for coloring a line draft image is provided, comprising: a processor; and a memory connected to the processor and providing the processor with instructions for the following processing steps: obtaining a first line draft image and a first reference image, where the first reference image contains image information to be referenced when coloring the first line draft image; and using a model trained by machine learning to apply coloring to the first line draft image based on the first reference image.
According to yet another aspect of the embodiments of the present disclosure, a device for coloring a line draft image is provided, comprising: an image acquisition module for obtaining a first line draft image and a first reference image, where the first reference image contains image information to be referenced when coloring the first line draft image; and a coloring module for applying coloring to the first line draft image based on the first reference image, using a model trained by machine learning.
In the embodiments of the present disclosure, coloring a line draft image with a machine-learning-trained model guided by a reference image achieves the technical effect of improving both coloring quality and coloring efficiency, thereby solving the technical problems of poor coloring quality and low coloring efficiency in existing line-draft coloring techniques.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the disclosure and form a part of this application. The illustrative embodiments and their descriptions explain the disclosure and do not unduly limit it. In the drawings:
Fig. 1 shows the hardware structure of a computer terminal for implementing the method of coloring a line draft image according to the first aspect of Embodiment 1;
Fig. 2 is a flow diagram of the method of coloring a line draft image according to Embodiment 1;
Fig. 3 is a further flow diagram within the method of coloring a line draft image according to Embodiment 1;
Fig. 4 shows a schematic diagram of the model for coloring a line draft image according to Embodiment 1;
Fig. 5 lists the convolutional layer structure of the model's line draft feature extraction module;
Fig. 6 lists the convolutional layer structure of the model's reference image feature extraction module;
Fig. 7 lists the convolutional layer structure of the model's color draft feature encoding module;
Fig. 8 lists the convolutional layer structure of the model's color draft generation module;
Fig. 9 shows a flow diagram of the operations for training the model described in Embodiment 1;
Fig. 10 shows a flow diagram of the operations for optimizing the model described in Embodiment 1;
Fig. 11 shows a flow diagram of the operations for evaluating the model described in Embodiment 1;
Fig. 12 shows a schematic diagram of the device for coloring a line draft image described in Embodiment 2; and
Fig. 13 shows a schematic diagram of the device for coloring a line draft image described in Embodiment 3.
Detailed description
To help those skilled in the art better understand the present disclosure, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the disclosure, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the scope of protection of the disclosure.
It should be noted that the terms "first", "second", etc. in the specification, claims, and drawings of the disclosure are used to distinguish similar objects and do not describe a particular order or sequence. Data so labeled are interchangeable where appropriate, so that the embodiments of the disclosure described here can be practiced in orders other than those illustrated or described. In addition, the terms "comprise" and "have", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to that process, method, product, or device.
Embodiment 1
According to the embodiments of the present disclosure, an embodiment of a method for coloring a line draft image is also provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, the steps shown or described may in some cases be performed in a different order.
The method embodiment provided in Embodiment 1 of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows the hardware structure of a computer terminal (or mobile device) for implementing the method of coloring a line draft image. As shown in Fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; a processor 102 may include, but is not limited to, a processing unit such as a microcontroller (MCU) or a field-programmable gate array (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions. It may also include a display, an input/output (I/O) interface, a universal serial bus (USB) port (which may be included as one of the I/O interface ports), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the structure shown in Fig. 1 is only illustrative and does not limit the structure of the above electronic device. For example, the computer terminal 10 may include more or fewer components than shown in Fig. 1, or a configuration different from that shown in Fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits are generally referred to herein as "data processing circuits". A data processing circuit may be embodied wholly or partly as software, hardware, firmware, or any other combination thereof. In addition, a data processing circuit may be a single independent processing module, or be integrated wholly or partly into any of the other elements of the computer terminal 10 (or mobile device). As referred to in the embodiments of this application, the data processing circuit acts as a kind of processor control (for example, selection of a variable-resistance terminal path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the method for coloring a line draft image in the embodiments of the present disclosure. By running the software programs and modules stored in the memory 104, the processor 102 performs various functional applications and data processing, that is, implements the above method for coloring a line draft image. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102; such remote memory may be connected to the computer terminal 10 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC), which can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a radio frequency (RF) module used to communicate with the internet wirelessly.
The display may be, for example, a touch-screen liquid crystal display (LCD) that allows the user to interact with the user interface of the computer terminal 10 (or mobile device).
In the above running environment, this application provides the method of coloring a line draft image shown in Fig. 2. Fig. 2 is a flowchart of the method of coloring a line draft image according to the first aspect of Embodiment 1. Referring to Fig. 2, the method comprises:
S201: obtaining a first line draft image and a first reference image; and
S202: using a model trained by machine learning to apply coloring to the first line draft image based on the first reference image.
Here, the first reference image contains image information to be referenced when coloring the first line draft image.
As described in the background art, the prior art PaintsChainer mainly suffers from overfitting caused by its artificial neural network model having too many parameters and too few training samples, which in turn leads to poor coloring quality and low coloring efficiency.
To solve this technical problem, the method for coloring a line draft image provided by the first aspect of this embodiment obtains a first line draft image and a first reference image, and uses a model trained by machine learning to apply coloring to the first line draft image based on the first reference image.
Because the first reference image contains information to be referenced when coloring the first line draft image, such as region colors and region divisions, the disclosure can exploit the information provided by the first reference image while coloring the first line draft image. In this way, the disclosure improves the ability to resolve the regions of the line draft during coloring, enhancing the quality of the generated color draft. Meanwhile, by training the model with machine learning, collecting a large number of training samples, and training the model over many iterations, overfitting is ultimately avoided.
The first reference image contains image information to be referenced when coloring the first line draft image, including, for example but not limited to, the region colors and region divisions of the reference image.
Optionally, as shown in Fig. 3, the operation of coloring the first line draft image based on the first reference image, using the model trained by machine learning, comprises:
Step S2011: using a first convolution model comprising multiple convolutional layers, extracting line draft image features from the first line draft image;
Step S2012: using a second convolution model comprising multiple convolutional layers, extracting reference image features from the first reference image;
Step S2013: using a third convolution model comprising multiple convolutional layers, generating encoded color draft image features based on the line draft image features and the reference image features; and
Step S2014: using a fourth convolution model comprising multiple convolutional layers, generating a first color draft image, as the colored line draft image, based on the color draft image features.
Fig. 4 shows a schematic diagram of the model trained by machine learning according to the first aspect of Embodiment 1. Referring to Fig. 4, the model comprises a line draft feature extraction module 401 (corresponding to the first convolution model), a reference image feature extraction module 402 (corresponding to the second convolution model), a color draft feature encoding module 403 (corresponding to the third convolution model), and a color draft generation module 404 (corresponding to the fourth convolution model).
The line draft feature extraction module 401 receives a line draft image and extracts line draft image features from it. These features capture, in a relatively low-dimensional form, information such as the structure and region divisions of the line draft image.
The reference image feature extraction module 402 receives a reference image and extracts reference image features from it. These features contain information such as the region colors and color-gamut divisions of the reference image.
The color draft feature encoding module 403 receives the line draft image features and the reference image features and, based on them, outputs a three-dimensional matrix as the encoded color draft image features. These features contain the region divisions and region colors that the output color draft image should have.
The color draft generation module 404 receives the color draft image features and, using the convolution operations of its convolutional layers, produces the colored line draft image.
By using multiple convolution models to extract the corresponding image features from the first line draft image and the first reference image, then generating the encoded color draft image features from those features, the disclosure applies an encode-then-decode principle to generate the first color draft image. This improves the ability to resolve the regions of the line draft and thus enhances the quality of the generated color draft.
Optionally, the first convolution model (that is, the line draft feature extraction module 401) comprises 10 convolutional layers and 5 downsampling layers, with one downsampling layer placed between every two convolutional layers. The specific network layers of this model are listed in Fig. 5. For example: the module is designed as a convolutional neural network consisting of 10 convolutional layers and 5 downsampling layers, where each convolutional layer uses the ReLU function as its activation function. When a line draft image is fed into this module, it outputs a matrix of dimension 8 × 8 × 512 as the line draft image features, which capture, in a relatively low-dimensional form, information such as the structure and region divisions of the line draft image.
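As an illustration only, the 10-convolution / 5-downsample extractor described above might be sketched as follows. The channel widths, kernel sizes, and use of max pooling are assumptions for this sketch; the patent's actual layer configuration is given in Fig. 5.

```python
import torch
import torch.nn as nn

def sketch_line_draft_extractor():
    # Hypothetical reconstruction: 10 convolutional layers with a
    # downsampling layer after every two, ReLU activations throughout.
    # Channel widths are illustrative guesses, not taken from the patent.
    chans = [1, 32, 32, 64, 64, 128, 128, 256, 256, 512, 512]
    layers = []
    for i in range(10):
        layers += [nn.Conv2d(chans[i], chans[i + 1], 3, padding=1), nn.ReLU()]
        if i % 2 == 1:
            layers.append(nn.MaxPool2d(2))  # one of the 5 downsampling layers
    return nn.Sequential(*layers)

extractor = sketch_line_draft_extractor()
line_draft = torch.zeros(1, 1, 256, 256)  # a 256x256 grayscale line draft
features = extractor(line_draft)
print(features.shape)  # torch.Size([1, 512, 8, 8])
```

Five factor-of-two downsamplings reduce a 256 × 256 input to 8 × 8, which is consistent with the 8 × 8 × 512 feature dimension stated above.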
Optionally, the second convolution model (that is, the reference image feature extraction module 402) comprises the convolutional-layer portion used for feature extraction in the VGG19 convolutional neural network model. The specific network layers of this model are listed in Fig. 6. For example: the reference image expressing the coloring style, provided by the user, is passed through the 16 convolutional layers and 5 downsampling layers of the VGG19 model, which output a matrix of dimension 8 × 8 × 512 as the reference image features, containing the region colors and region divisions of the reference image.
Optionally, the third convolution model (that is, the color draft feature encoding module 403) comprises 4 convolutional layers and 3 upsampling layers, with two convolutional layers placed between every two upsampling layers. The specific network layers of this model are listed in Fig. 7. For example: the line draft image features and the reference image features are input, and their dimensions are adjusted so that they can be merged by concatenation. Specifically, taking the channel dimension as the axis, the 8 × 8 × 512 line draft image features and the 8 × 8 × 512 reference image features are concatenated into an 8 × 8 × 1024 three-dimensional matrix. After this fusion, the 4 convolutional layers and 3 upsampling layers perform the encoding, where each convolutional layer uses the ReLU function as its activation function, and a three-dimensional matrix is output as the encoded color draft image features, containing the region divisions and region colors that the color draft image should have.
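The channel-axis splice can be shown in a few lines. This is a NumPy sketch of the concatenation step alone; the real model operates on batched tensors inside the network.

```python
import numpy as np

line_feat = np.zeros((8, 8, 512))  # line draft image features
ref_feat = np.ones((8, 8, 512))    # reference image features

# Concatenate along the channel axis, as the encoding module does
# before its 4 convolutional and 3 upsampling layers.
fused = np.concatenate([line_feat, ref_feat], axis=-1)
print(fused.shape)  # (8, 8, 1024)
```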
Optionally, the fourth convolution model (that is, the color draft generation module 404) comprises 8 convolutional layers and 2 upsampling layers, with two convolutional layers placed between every two upsampling layers. The specific network layers of this model are listed in Fig. 8. For example: the encoded color draft image features are input, and the 8 convolutional layers and 2 upsampling layers generate the color draft image, where each convolutional layer uses the ReLU function as its activation function. Finally, the network output is resized (reduced or enlarged) according to the size of the line draft image, yielding the colored image.
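A minimal sketch of such a decoder follows. The channel widths and upsample placement are assumptions for illustration; Fig. 8 of the patent gives the real configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sketch_color_draft_generator():
    # Hypothetical decoder: 8 convolutional layers and 2 upsampling layers,
    # with two convolutions between consecutive upsamples, ReLU throughout.
    # Channel widths are illustrative guesses, not taken from the patent.
    chans = [1024, 512, 512, 256, 256, 128, 64, 32, 3]
    layers = []
    for i in range(8):
        layers += [nn.Conv2d(chans[i], chans[i + 1], 3, padding=1), nn.ReLU()]
        if i in (1, 3):
            layers.append(nn.Upsample(scale_factor=2))  # one of the 2 upsampling layers
    return nn.Sequential(*layers)

decoder = sketch_color_draft_generator()
encoded = torch.zeros(1, 1024, 8, 8)           # encoded color draft features
draft = decoder(encoded)                       # 8x8 -> 32x32 after two upsamples
draft = F.interpolate(draft, size=(256, 256))  # final resize to the line draft's size
print(draft.shape)  # torch.Size([1, 3, 256, 256])
```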
Optionally, the model trained by machine learning can be trained through the following operations. Fig. 9 is a flowchart of training the model:
S901: obtaining a second color draft image and a second reference image;
S902: generating a second line draft image based on the obtained second color draft image;
S903: using the model trained by machine learning to apply coloring to the second line draft image based on the second reference image;
S904: comparing the second color draft image with the colored second line draft image; and
S905: using the comparison result to optimize the model trained by machine learning.
The disclosure trains the model through the above five steps. For example, a line draft generation module may be set up in advance that uses an edge detection algorithm to extract an edge image from the input color draft image and output it as the corresponding line draft image. As another example, following the first aspect of this embodiment, the parameters of all network layers are first initialized; for the reference image feature extraction module, model parameters pre-trained on ImageNet are used as the initial parameters, while the other network layer parameters are initialized with normally distributed random numbers. The training samples, each comprising a color draft image and a reference image (that is, the second color draft image and second reference image described above), are then divided into groups and fed batch by batch into the model used in this application, which generates color draft images (that is, the colored second line draft images); each second color draft image is compared with the corresponding colored second line draft image. The next batch of training samples is then input, and after many such iterations the trained model is obtained. The comparison results are used to optimize the model trained by machine learning.
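A minimal stand-in for the line draft generation step (S902) is a gradient-magnitude edge detector. The patent only says "an edge detection algorithm", so the specific operator and threshold below are assumptions for illustration.

```python
import numpy as np

def color_to_line_draft(color_img, threshold=0.1):
    # Hypothetical line draft generator: collapse the color draft to
    # grayscale, take the gradient magnitude, and threshold it so that
    # region boundaries become "lines" (1) on a blank background (0).
    gray = color_img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy)
    return (edges > threshold).astype(np.float32)

img = np.zeros((64, 64, 3))
img[:, 32:, :] = 1.0  # a color draft with one sharp vertical region boundary
line = color_to_line_draft(img)
print(line.shape, int(line.sum()) > 0)  # (64, 64) True
```

Real systems would likely use a stronger detector (e.g. Canny or XDoG), but the shape of the pipeline is the same: color draft in, binary line draft out.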
Optionally, the operation of comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture includes calculating the L1 space distance between the second color manuscript base picture and the colouring-processed second line manuscript base picture, and the operation of optimizing the model based on machine learning training is as shown in the flow chart of Figure 10:
S1001: using the L1 space distance as the loss function, calculate the gradient of the loss function; and
S1002: based on the gradient, optimize the model based on machine learning training according to the stochastic gradient descent principle.
After the color manuscript base picture is generated by the above modules, the present embodiment compares the second color manuscript base picture with the colouring-processed second line manuscript base picture by calculating the L1 space distance between them, and optimizes the model based on machine learning training. For example: after the color manuscript base picture is generated, the L1 space distance between the generated color manuscript base picture and the original input color manuscript base picture is calculated as the loss function, the gradient of the loss function is computed, and the network parameters are optimized according to the stochastic gradient descent principle (Stochastic Gradient Descent, SGD).
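The L1 loss and stochastic gradient descent update described above can be sketched as follows. This is a toy illustration operating directly on pixel arrays rather than on the network parameters of the patent, and the learning rate is an assumed value:

```python
import numpy as np

def l1_loss(generated, target):
    """Mean L1 (absolute) distance between generated and target color manuscripts."""
    return np.abs(generated - target).mean()

def l1_gradient(generated, target):
    """Gradient of the mean L1 loss with respect to the generated values."""
    return np.sign(generated - target) / generated.size

def sgd_step(params, grad, lr=1.0):
    """One stochastic gradient descent update: move against the gradient."""
    return params - lr * grad

target = np.ones((4, 4))       # stands in for the true color manuscript
generated = np.zeros((4, 4))   # stands in for the network output
for _ in range(100):
    generated = sgd_step(generated, l1_gradient(generated, target))
```

Repeated updates drive the L1 distance toward zero, which is exactly the loop of steps S1001 and S1002 applied batch after batch.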
Optionally, the optimization includes performing at least one of the following on the model of machine learning training: parameter binarization, model inference optimization, and pruning. For example, but not limited to, such optimization can reduce the model parameters by 50% and increase the speed by a factor of 10.
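Parameter binarization, one of the optimizations named above, can be sketched as follows. This is a minimal illustration of a common scheme (weights replaced by their sign times a per-tensor scale); the patent does not detail its exact binarization method, so this scheme is an assumption:

```python
import numpy as np

def binarize_weights(w):
    """Replace weights by sign(w) scaled by the mean absolute value.

    After binarization each weight needs only 1 bit plus one shared float
    scale, instead of 32 bits per weight -- this is how binarization
    shrinks model size and speeds up inference.
    """
    alpha = np.abs(w).mean()          # per-tensor scaling factor
    return alpha * np.sign(w), alpha

w = np.array([0.5, -1.5, 2.0, -1.0])
wb, alpha = binarize_weights(w)
```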
Optionally, the model based on machine learning training is assessed through the following operations; Figure 11 is a flow chart of assessing the model based on machine learning training.
S1101: obtain a third line manuscript base picture, a third reference picture, and a true color manuscript base picture corresponding to the third line manuscript base picture;
S1102: based on the third line manuscript base picture and the third reference picture, generate a third color manuscript base picture using the model based on machine learning training;
S1103: calculate a first similarity between the third color manuscript base picture and the true color manuscript base picture, and a second similarity between the third color manuscript base picture and the third reference picture;
S1104: calculate the sum of the first similarity and the second similarity; and
S1105: using the calculated sum of similarities, assess whether the third color manuscript base picture is coloured according to the third reference picture on the premise that the content remains unchanged.
For example: in the evaluation setting, a line manuscript base picture and a reference picture are input, and the network processes them to output a color manuscript base picture, completing the colouring task. Several image quality assessment algorithms are adopted to evaluate the output color manuscript base picture. Using these algorithms, the similarity between the generated color manuscript base picture and the true color manuscript base picture, as well as its similarity to the reference picture, can be calculated; the sum of the two similarities is used to assess whether the generated color manuscript base picture is coloured according to the reference picture while keeping the content unchanged, thereby ensuring that the network outputs color manuscript base pictures of acceptable quality.
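The two-term evaluation score described above can be sketched as follows. The `similarity` function here is a hypothetical stand-in (one minus mean absolute difference) rather than the SSIM/FSIM-style image quality metrics actually used:

```python
import numpy as np

def similarity(a, b):
    """Stand-in similarity in [0, 1]: 1 minus the mean absolute difference.

    In the patent, a real image quality metric (SSIM, FSIM, MS-SSIM)
    would be used here instead.
    """
    return 1.0 - np.abs(a - b).mean()

def evaluate(generated, true_color, reference):
    """Score a generated color manuscript: similarity to the ground-truth
    color manuscript (content preserved) plus similarity to the reference
    picture (colouring followed). Higher is better; the maximum is 2.0."""
    return similarity(generated, true_color) + similarity(generated, reference)

truth = np.full((4, 4), 0.8)
ref = np.full((4, 4), 0.6)
score_good = evaluate(np.full((4, 4), 0.7), truth, ref)  # close to both
score_bad = evaluate(np.zeros((4, 4)), truth, ref)       # far from both
```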
Optionally, the first similarity is any one of the following: SSIM (Structural Similarity), FSIM (Feature Similarity), and MS-SSIM (Multi-Scale extension of SSIM, multi-scale structural similarity).
In addition, the second similarity is any one of the following: SSIM (Structural Similarity), FSIM (Feature Similarity), and MS-SSIM (Multi-Scale extension of SSIM, multi-scale structural similarity).
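For reference, the structural similarity (SSIM) named above is conventionally defined over two image windows $x$ and $y$ as:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

where $\mu_x, \mu_y$ are the window means, $\sigma_x^2, \sigma_y^2$ the variances, $\sigma_{xy}$ the covariance, and $C_1, C_2$ small constants that stabilize the division; MS-SSIM evaluates this quantity at multiple image scales and combines the results.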
The method described in the first aspect of the present embodiment utilizes the technical principle of "decoding and re-encoding" through an artificial neural network structure, improving the resolution capability for line manuscript regions and thereby enhancing the effect of the generated color manuscript. Optimization techniques from the artificial intelligence field, such as model parameter binarization, model inference optimization, and model pruning, greatly reduce the parameter count of the artificial neural network and improve colouring efficiency. Meanwhile, a large number of training samples are collected and the model is trained repeatedly, finally avoiding the "over-fitting" problem. This solves the existing technical problems of poor line manuscript colouring effect and low colouring efficiency.
In addition, according to the second aspect of the present embodiment, with reference to Fig. 1, a storage medium 102 is provided. The storage medium includes a stored program, wherein when the program runs, the device where the storage medium is located is controlled to execute any one of the methods described above.
It should be noted that, for the sake of simple description, the various method embodiments described above are stated as a series of action combinations; however, those skilled in the art should understand that the disclosure is not limited by the described action sequence, because according to the disclosure, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the disclosure.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, and naturally also by hardware, although in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the disclosure, or the part contributing to the existing technology, can be embodied in the form of a software product. The software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the disclosure.
Embodiment 2
According to the embodiment of the present disclosure, a line manuscript automatic colouring device 1200 is further provided. The device corresponds to the method described in the first aspect of Embodiment 1. As shown in Figure 12, the device includes: a processor 1210; and a memory 1220, connected to the processor, for providing the processor with instructions for the following processing steps:
obtaining a first line manuscript base picture and a first reference picture, wherein the first reference picture includes image information referenced when colouring the first line manuscript base picture; and
using the model based on machine learning training and based on the first reference picture, performing colouring processing on the first line manuscript base picture.
Optionally, the operation of performing colouring processing on the first line manuscript base picture using the model based on machine learning training and based on the first reference picture comprises: extracting line manuscript image features from the first line manuscript base picture using a first convolution model including multiple convolutional layers; extracting reference picture features from the first reference picture using a second convolution model including multiple convolutional layers; generating encoded color manuscript image features based on the line manuscript image features and the reference picture features using a third convolution model including multiple convolutional layers, wherein the color manuscript image features include the region division of the color manuscript base picture and the regional color information; and generating a first color manuscript base picture, as the coloured line manuscript base picture, based on the color manuscript image features using a fourth convolution model including multiple convolutional layers.
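The four-model pipeline described above can be sketched as a dataflow. This is purely schematic: the model names and the string-valued stand-ins are hypothetical, and only the ordering of the four convolution models is taken from the text:

```python
def colourize(line_draft, reference, models):
    """Schematic dataflow of the four convolution models."""
    f_line = models["first"](line_draft)      # line manuscript image features
    f_ref = models["second"](reference)       # reference picture features
    encoded = models["third"](f_line, f_ref)  # encoded color manuscript features
    return models["fourth"](encoded)          # decoded first color manuscript

# Toy stand-ins: features are strings so the dataflow stays visible.
models = {
    "first": lambda x: f"feat({x})",
    "second": lambda x: f"feat({x})",
    "third": lambda a, b: f"enc({a},{b})",
    "fourth": lambda e: f"dec({e})",
}
out = colourize("line", "ref", models)
```

The structure mirrors the "decoding and re-encoding" principle: two feature extractors feed an encoder whose output is decoded into the final color manuscript base picture.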
Optionally, the first convolution model includes 10 convolutional layers and 5 down-sampling layers, wherein one down-sampling layer is provided after every two of the convolutional layers.
Optionally, the second convolution model adopts the structure of the convolutional-layer part used for feature extraction in the VGG19 convolutional neural network model.
Optionally, the third convolution model includes 4 convolutional layers and 3 up-sampling layers, wherein two of the convolutional layers are provided between every two of the up-sampling layers.
Optionally, the fourth convolution model includes 8 convolutional layers and 2 up-sampling layers, wherein two of the convolutional layers are provided between every two of the up-sampling layers.
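The stated layer counts for the first convolution model can be laid out as follows. This is a schematic using plain lists rather than a deep-learning framework; kernel sizes and channel widths are not given in this passage, and a downsampling factor of 2 per down-sampling layer is an assumption:

```python
def first_conv_model_layout(input_size=256):
    """Lay out 10 convolutional layers with one down-sampling layer after
    every two, tracking the spatial resolution (assuming each
    down-sampling layer halves it)."""
    layers, size = [], input_size
    for i in range(10):
        layers.append(("conv", size))
        if i % 2 == 1:            # after every two convolutional layers...
            size //= 2            # ...one down-sampling layer
            layers.append(("downsample", size))
    return layers

layout = first_conv_model_layout(256)
conv_count = sum(1 for kind, _ in layout if kind == "conv")
down_count = sum(1 for kind, _ in layout if kind == "downsample")
```

Under these assumptions a 256x256 line manuscript base picture is reduced to an 8x8 feature map after the fifth down-sampling layer.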
Optionally, the model based on machine learning training is further trained through the following operations: obtaining a second color manuscript base picture and a second reference picture; generating a second line manuscript base picture based on the acquired second color manuscript base picture; based on the second line manuscript base picture and the second reference picture, performing colouring processing on the second line manuscript base picture using the model based on machine learning training; comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture; and optimizing the model based on machine learning training using the result of the comparison.
Optionally, the operation of comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture includes calculating the L1 space distance between the second color manuscript base picture and the colouring-processed second line manuscript base picture, and the operation of optimizing the model based on machine learning training comprises: using the L1 space distance as the loss function, calculating the gradient of the loss function; and based on the gradient, optimizing the model based on machine learning training according to the stochastic gradient descent principle.
Optionally, the optimization includes performing at least one of the following on the model of machine learning training: parameter binarization, model inference optimization, and pruning.
Optionally, the model based on machine learning training is assessed through the following operations: obtaining a third line manuscript base picture, a third reference picture, and a true color manuscript base picture corresponding to the third line manuscript base picture; based on the third line manuscript base picture and the third reference picture, generating a third color manuscript base picture using the model based on machine learning training; calculating a first similarity between the third color manuscript base picture and the true color manuscript base picture and a second similarity between the third color manuscript base picture and the third reference picture; calculating the sum of the first similarity and the second similarity; and using the calculated sum of similarities, assessing whether the third color manuscript base picture is coloured according to the third reference picture on the premise that the content remains unchanged.
Optionally, the first similarity is any one of the following: SSIM, FSIM, and MS-SSIM.
Optionally, the second similarity is any one of the following: SSIM, FSIM, and MS-SSIM.
In the process of colouring a new line manuscript, the device of the present embodiment obtains a first line manuscript base picture and a first reference picture, and performs colouring processing on the first line manuscript base picture using the model based on machine learning training and based on the first reference picture. The technical principle of "decoding and re-encoding" improves the resolution capability for line manuscript regions, thereby enhancing the effect of the generated color manuscript. Meanwhile, by using a machine-learning-trained model, a large number of training samples are collected and the model is trained repeatedly, finally avoiding the "over-fitting" problem.
Embodiment 3
According to the embodiment of the present disclosure, a kind of line original text automatic colouring equipment 1300 is additionally provided.The equipment is and embodiment 1
The corresponding equipment of method described in one aspect.With reference to shown in Figure 13, which includes: image collection module 1310, for obtaining
First Line manuscript base picture and the first reference picture are taken, wherein first reference picture includes to carry out to the First Line manuscript base picture
Referenced image information when colouring;And colouring module 1320, for being based on institute using the model based on machine learning training
The first reference picture is stated, colouring processing is carried out to the First Line manuscript base picture.
Optionally, the colouring module 1320, which uses the model based on machine learning training, includes: a first submodule for extracting line manuscript image features from the first line manuscript base picture using a first convolution model including multiple convolutional layers; a second submodule for extracting reference picture features from the first reference picture using a second convolution model including multiple convolutional layers; a third submodule for generating encoded color manuscript image features based on the line manuscript image features and the reference picture features using a third convolution model including multiple convolutional layers, wherein the color manuscript image features include the region division of the color manuscript base picture and the regional color information; and a fourth submodule for generating a first color manuscript base picture, as the coloured line manuscript base picture, based on the color manuscript image features using a fourth convolution model including multiple convolutional layers.
Optionally, the device further includes a training module that trains the model based on machine learning training through the following submodules: a fifth submodule for obtaining a second color manuscript base picture and a second reference picture; a sixth submodule for generating a second line manuscript base picture based on the acquired second color manuscript base picture; a seventh submodule for performing colouring processing on the second line manuscript base picture, based on the second line manuscript base picture and the second reference picture, using the model based on machine learning training; an eighth submodule for comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture; and a ninth submodule for optimizing the model based on machine learning training using the result of the comparison.
Optionally, the eighth submodule includes a first unit for calculating the L1 space distance between the second color manuscript base picture and the colouring-processed second line manuscript base picture, and the ninth submodule comprises: a second unit for using the L1 space distance as the loss function and calculating the gradient of the loss function; and a third unit for optimizing the model based on machine learning training according to the stochastic gradient descent principle, based on the gradient.
Optionally, the third unit includes a first subunit for performing at least one of the following on the model of machine learning training: parameter binarization, model inference optimization, and pruning.
Optionally, the device further includes an evaluation module that assesses the model based on machine learning training through the following submodules: a tenth submodule for obtaining a third line manuscript base picture, a third reference picture, and a true color manuscript base picture corresponding to the third line manuscript base picture; an eleventh submodule for generating a third color manuscript base picture, based on the third line manuscript base picture and the third reference picture, using the model based on machine learning training; a twelfth submodule for calculating a first similarity between the third color manuscript base picture and the true color manuscript base picture and a second similarity between the third color manuscript base picture and the third reference picture; a thirteenth submodule for calculating the sum of the first similarity and the second similarity; and a fourteenth submodule for assessing, using the calculated sum of similarities, whether the third color manuscript base picture is coloured according to the third reference picture on the premise that the content remains unchanged.
In the process of colouring a new line manuscript, the device of the present embodiment obtains a first line manuscript base picture and a first reference picture, and performs colouring processing on the first line manuscript base picture using the model based on machine learning training and based on the first reference picture. The technical principle of "decoding and re-encoding" improves the resolution capability for line manuscript regions, thereby enhancing the effect of the generated color manuscript. Meanwhile, by using a machine-learning-trained model, a large number of training samples are collected and the model is trained repeatedly, finally avoiding the "over-fitting" problem.
The serial numbers of the above embodiments of the disclosure are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the disclosure, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely exemplary; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the disclosure, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a mobile hard disk, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the disclosure. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the disclosure, and these improvements and modifications should also be regarded as falling within the protection scope of the disclosure.
Claims (10)
1. A method for colouring a line manuscript base picture, characterized by comprising:
obtaining a first line manuscript base picture and a first reference picture, wherein the first reference picture includes image information referenced when colouring the first line manuscript base picture; and
using a model based on machine learning training and based on the first reference picture, performing colouring processing on the first line manuscript base picture.
2. The method according to claim 1, characterized in that the operation of performing colouring processing on the first line manuscript base picture using the model based on machine learning training and based on the first reference picture comprises:
extracting line manuscript image features from the first line manuscript base picture using a first convolution model including multiple convolutional layers;
extracting reference picture features from the first reference picture using a second convolution model including multiple convolutional layers;
generating encoded color manuscript image features based on the line manuscript image features and the reference picture features using a third convolution model including multiple convolutional layers, wherein the color manuscript image features include the region division of the color manuscript base picture and the regional color information; and
generating a first color manuscript base picture, as the coloured line manuscript base picture, based on the color manuscript image features using a fourth convolution model including multiple convolutional layers.
3. The method according to claim 2, characterized in that it further comprises training the model based on machine learning training through the following operations:
obtaining a second color manuscript base picture and a second reference picture;
generating a second line manuscript base picture based on the acquired second color manuscript base picture;
based on the second line manuscript base picture and the second reference picture, performing colouring processing on the second line manuscript base picture using the model based on machine learning training;
comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture; and
optimizing the model based on machine learning training using the result of the comparison.
4. The method according to claim 3, characterized in that the operation of comparing the second color manuscript base picture with the colouring-processed second line manuscript base picture includes calculating the L1 space distance between the second color manuscript base picture and the colouring-processed second line manuscript base picture, and the operation of optimizing the model based on machine learning training comprises:
using the L1 space distance as a loss function, calculating the gradient of the loss function; and
based on the gradient, optimizing the model based on machine learning training according to the stochastic gradient descent principle.
5. The method according to claim 4, characterized in that the optimization includes performing at least one of the following on the model of machine learning training: parameter binarization, model inference optimization, and pruning.
6. The method according to claim 1, characterized in that the model based on machine learning training is assessed through the following operations:
obtaining a third line manuscript base picture, a third reference picture, and a true color manuscript base picture corresponding to the third line manuscript base picture;
based on the third line manuscript base picture and the third reference picture, generating a third color manuscript base picture using the model based on machine learning training;
calculating a first similarity between the third color manuscript base picture and the true color manuscript base picture and a second similarity between the third color manuscript base picture and the third reference picture;
calculating the sum of the first similarity and the second similarity; and
using the calculated sum of similarities, assessing whether the third color manuscript base picture is coloured according to the third reference picture on the premise that the content remains unchanged.
7. The method according to claim 6, characterized in that the first similarity is any one of the following: structural similarity, feature similarity, and multi-scale structural similarity.
8. A storage medium, characterized in that the storage medium includes a stored program, wherein when the program runs, a processor executes the method according to any one of claims 1 to 7.
9. A device (1200) for colouring a line manuscript base picture, characterized by comprising:
a processor (1210); and
a memory (1220), connected to the processor, for providing the processor with instructions for the following processing steps:
obtaining a first line manuscript base picture and a first reference picture, wherein the first reference picture includes image information referenced when colouring the first line manuscript base picture; and
using a model based on machine learning training and based on the first reference picture, performing colouring processing on the first line manuscript base picture.
10. A device (1300) for colouring a line manuscript base picture, characterized by comprising:
an image acquisition module (1310) for obtaining a first line manuscript base picture and a first reference picture, wherein the first reference picture includes image information referenced when colouring the first line manuscript base picture; and
a colouring module (1320) for performing colouring processing on the first line manuscript base picture using a model based on machine learning training and based on the first reference picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810866829.8A CN109147003A (en) | 2018-08-01 | 2018-08-01 | Method, equipment and the storage medium painted to line manuscript base picture |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109147003A true CN109147003A (en) | 2019-01-04 |
Family
ID=64798735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810866829.8A Pending CN109147003A (en) | 2018-08-01 | 2018-08-01 | Method, equipment and the storage medium painted to line manuscript base picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109147003A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993202A (en) * | 2019-02-15 | 2019-07-09 | 广东智媒云图科技股份有限公司 | A kind of line chirotype shape similarity judgment method, electronic equipment and storage medium |
CN110223359A (en) * | 2019-05-27 | 2019-09-10 | 浙江大学 | It is a kind of that color model and its construction method and application on the dual-stage polygamy colo(u)r streak original text of network are fought based on generation |
CN110503701A (en) * | 2019-08-29 | 2019-11-26 | 广东工业大学 | A kind of painting methods and device of caricature manual draw |
CN111080746A (en) * | 2019-12-10 | 2020-04-28 | 中国科学院计算技术研究所 | Image processing method, image processing device, electronic equipment and storage medium |
CN111553961A (en) * | 2020-04-27 | 2020-08-18 | 北京奇艺世纪科技有限公司 | Line draft corresponding color chart acquisition method and device, storage medium and electronic device |
CN112837396A (en) * | 2021-01-29 | 2021-05-25 | 深圳市天耀创想网络科技有限公司 | Line draft generation method and device based on machine learning |
CN113888560A (en) * | 2021-09-29 | 2022-01-04 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing image |
CN114299184A (en) * | 2021-12-30 | 2022-04-08 | 青海师范大学 | Hidden building colored drawing line manuscript graph coloring method and device based on semantic matching |
CN115937338A (en) * | 2022-04-25 | 2023-04-07 | 北京字跳网络技术有限公司 | Image processing method, apparatus, device and medium |
CN115953597A (en) * | 2022-04-25 | 2023-04-11 | 北京字跳网络技术有限公司 | Image processing method, apparatus, device and medium |
WO2023207779A1 (en) * | 2022-04-25 | 2023-11-02 | 北京字跳网络技术有限公司 | Image processing method and apparatus, device, and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663806A (en) * | 2012-03-02 | 2012-09-12 | 西安交通大学 | Artistic-vision-based cartoon stylized rendering method of image |
CN104200504A (en) * | 2014-08-18 | 2014-12-10 | 江苏诺亚动漫制作有限公司 | Paperless animation production method |
CN106780682A (en) * | 2017-01-05 | 2017-05-31 | 杭州玉鸟科技有限公司 | A kind of caricature preparation method and device |
US20170308656A1 (en) * | 2016-03-10 | 2017-10-26 | Siemens Healthcare Gmbh | Content-based medical image rendering based on machine learning |
CN107330956A (en) * | 2017-07-03 | 2017-11-07 | 广东工业大学 | A kind of unsupervised painting methods of caricature manual draw and device |
- 2018-08-01: CN 201810866829.8A filed; published as CN109147003A (en); status: Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663806A (en) * | 2012-03-02 | 2012-09-12 | 西安交通大学 | Artistic-vision-based cartoon stylized rendering method of image |
CN104200504A (en) * | 2014-08-18 | 2014-12-10 | 江苏诺亚动漫制作有限公司 | Paperless animation production method |
US20170308656A1 (en) * | 2016-03-10 | 2017-10-26 | Siemens Healthcare Gmbh | Content-based medical image rendering based on machine learning |
CN108701370A (en) * | 2016-03-10 | 2018-10-23 | 西门子保健有限责任公司 | The medical imaging based on content based on machine learning renders |
CN106780682A (en) * | 2017-01-05 | 2017-05-31 | 杭州玉鸟科技有限公司 | A kind of caricature preparation method and device |
CN107330956A (en) * | 2017-07-03 | 2017-11-07 | 广东工业大学 | A kind of unsupervised painting methods of caricature manual draw and device |
Non-Patent Citations (2)
Title |
---|
LVMIN ZHANG et al.: "Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN", arXiv * |
LU Qianwen et al.: "Simplification of comic draft images based on generative adversarial networks", Acta Automatica Sinica * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993202B (en) * | 2019-02-15 | 2023-08-22 | 广东智媒云图科技股份有限公司 | Line manuscript type graph similarity judging method, electronic equipment and storage medium |
CN109993202A (en) * | 2019-02-15 | 2019-07-09 | 广东智媒云图科技股份有限公司 | A kind of line manuscript type graph similarity judging method, electronic equipment and storage medium |
CN110223359A (en) * | 2019-05-27 | 2019-09-10 | 浙江大学 | Dual-stage multi-color-matching-line draft coloring model based on generative adversarial network, and construction method and application thereof |
CN110223359B (en) * | 2019-05-27 | 2020-11-17 | 浙江大学 | Dual-stage multi-color-matching-line draft coloring model based on generation countermeasure network and construction method and application thereof |
CN110503701A (en) * | 2019-08-29 | 2019-11-26 | 广东工业大学 | A kind of painting methods and device of caricature manual draw |
CN111080746A (en) * | 2019-12-10 | 2020-04-28 | 中国科学院计算技术研究所 | Image processing method, image processing device, electronic equipment and storage medium |
CN111080746B (en) * | 2019-12-10 | 2024-04-26 | 中国科学院计算技术研究所 | Image processing method, device, electronic equipment and storage medium |
CN111553961B (en) * | 2020-04-27 | 2023-09-08 | 北京奇艺世纪科技有限公司 | Method and device for acquiring line manuscript corresponding color map, storage medium and electronic device |
CN111553961A (en) * | 2020-04-27 | 2020-08-18 | 北京奇艺世纪科技有限公司 | Line draft corresponding color chart acquisition method and device, storage medium and electronic device |
CN112837396A (en) * | 2021-01-29 | 2021-05-25 | 深圳市天耀创想网络科技有限公司 | Line draft generation method and device based on machine learning |
CN112837396B (en) * | 2021-01-29 | 2024-05-07 | 深圳市天耀创想网络科技有限公司 | Line manuscript generation method and device based on machine learning |
CN113888560A (en) * | 2021-09-29 | 2022-01-04 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing image |
CN114299184A (en) * | 2021-12-30 | 2022-04-08 | 青海师范大学 | Hidden building colored drawing line manuscript graph coloring method and device based on semantic matching |
CN115937338A (en) * | 2022-04-25 | 2023-04-07 | 北京字跳网络技术有限公司 | Image processing method, apparatus, device and medium |
CN115953597A (en) * | 2022-04-25 | 2023-04-11 | 北京字跳网络技术有限公司 | Image processing method, apparatus, device and medium |
WO2023207779A1 (en) * | 2022-04-25 | 2023-11-02 | 北京字跳网络技术有限公司 | Image processing method and apparatus, device, and medium |
CN115937338B (en) * | 2022-04-25 | 2024-01-30 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and medium |
CN115953597B (en) * | 2022-04-25 | 2024-04-16 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109147003A (en) | Method, equipment and the storage medium painted to line manuscript base picture | |
CN109636886B (en) | Image processing method and device, storage medium and electronic device | |
CN107833183B (en) | Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network | |
CN107861938B (en) | POI (Point of interest) file generation method and device and electronic equipment | |
CN107729948A (en) | Image processing method and device, computer product and storage medium | |
CN107464217B (en) | Image processing method and device | |
CN108961245A (en) | Picture quality classification method based on binary channels depth parallel-convolution network | |
CN106485235A (en) | A kind of convolutional neural networks generation method, age recognition methods and relevant apparatus | |
CN109522874A (en) | Human motion recognition method, device, terminal device and storage medium | |
CN107291822A (en) | The problem of based on deep learning disaggregated model training method, sorting technique and device | |
CN106503236A (en) | Question classification method and device based on artificial intelligence | |
CN108090904A (en) | A kind of medical image example dividing method and device | |
CN106354701A (en) | Chinese character processing method and device | |
CN109902672A (en) | Image labeling method and device, storage medium, computer equipment | |
CN106295645B (en) | A kind of license plate character recognition method and device | |
CN109325513B (en) | Image classification network training method based on massive single-class images | |
CN109359527B (en) | Hair region extraction method and system based on neural network | |
CN112069883B (en) | Deep learning signal classification method integrating one-dimensional two-dimensional convolutional neural network | |
CN110276076A (en) | A kind of text mood analysis method, device and equipment | |
CN109544662A (en) | A kind of animation style line manuscript painting method and system based on SRUnet | |
CN107578367A (en) | A kind of generation method and device of stylized image | |
CN108596222A (en) | Image interfusion method based on deconvolution neural network | |
CN109978074A (en) | Image aesthetic feeling and emotion joint classification method and system based on depth multi-task learning | |
CN110287343A (en) | Picture Generation Method and device | |
CN115082800B (en) | Image segmentation method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190104 |