CN112991497A - Method, device, storage medium and terminal for coloring black-and-white cartoon video

Method, device, storage medium and terminal for coloring black-and-white cartoon video

Info

Publication number
CN112991497A
CN112991497A (application CN202110513081.5A; granted as CN112991497B)
Authority
CN
China
Prior art keywords
cartoon, picture, color, black, white
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110513081.5A
Other languages
Chinese (zh)
Other versions
CN112991497B (en)
Inventor
傅慧源
马华东
王宇航
张宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202110513081.5A priority Critical patent/CN112991497B/en
Publication of CN112991497A publication Critical patent/CN112991497A/en
Application granted granted Critical
Publication of CN112991497B publication Critical patent/CN112991497B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods


Abstract

The invention discloses a method, a device, a storage medium and a terminal for coloring black-and-white cartoon video, wherein the method comprises the following steps: splitting an original black-and-white cartoon video into a black-and-white cartoon picture set; constructing a first color cartoon sample illustration; converting a first black-and-white cartoon picture in the black-and-white cartoon picture set and the first color cartoon sample illustration into a Lab mode; inputting the converted first black-and-white cartoon picture and first color cartoon sample illustration into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture; repeating the process for each black-and-white cartoon picture in the black-and-white cartoon picture set to obtain a color picture corresponding to each black-and-white cartoon picture and form a color cartoon picture set; and combining the color cartoon picture set to generate a color cartoon video. The pre-trained cartoon picture coloring model comprises an encoder network, a color transfer network, a color coding network and a decoder network. The method and device improve the coloring efficiency of black-and-white cartoon video.

Description

Method, device, storage medium and terminal for coloring black-and-white cartoon video
Technical Field
The invention relates to the technical field of deep learning, in particular to a method, a device, a storage medium and a terminal for coloring black and white animation videos.
Background
Today, animation has become part of everyday entertainment, and thousands of animated works account for a large share of the audience ratings of television and online video platforms worldwide. However, producing animation is a complex and time-consuming process that requires many workers to collaborate at different stages. The lead artist draws the key-frame sketches that define the main character actions, while less experienced artists draw the in-between action sketches. Colorists then repeatedly tint all of the line art according to the character color charts designed in advance by the artists. This coloring process is a tedious, labor-intensive task with two drawbacks: it is technically demanding because it relies heavily on the skills of professionals, and an animation of about 1.5 hours requires the manual coloring of hundreds of thousands of frames, which is inefficient and very costly.
With the rise of deep learning, a series of methods have appeared for coloring animation images based on convolutional neural networks and generative adversarial networks. Their main principle is to collect a large number of color images, convert them to grayscale to serve as training data, manually mark color hints on the black-and-white line-art images, and train a neural network to learn a mapping from black-and-white to color, so that black-and-white line-art images can be colored. The disadvantages of this approach are as follows: the data sets are difficult to produce and few public data sets are currently available; and when an animation video is colored, the video is split into frames and each frame is colored separately, so the differences between frames are large and the result looks flickery and incoherent. For example, the same piece of clothing may be colored dark blue in one frame and dark gray in the next, making the visual experience unsatisfactory.
In summary, how to improve the efficiency and accuracy of coloring black-and-white animation video is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides a method and a device for coloring a black-and-white cartoon video, a storage medium and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a method for coloring a black-and-white animation video, where the method includes:
splitting an original black and white animation video into a black and white animation picture set;
constructing a first color cartoon sample illustration;
converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture;
repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode, inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and combining the color cartoon picture sets to generate a color cartoon video.
Optionally, constructing the first color cartoon sample illustration includes:
acquiring a first black-and-white cartoon picture from the black-and-white cartoon picture set;
receiving a coloring instruction aiming at the first black-and-white cartoon picture, and coloring the first black-and-white cartoon picture into a color picture based on the coloring instruction;
determining a color picture as a first color cartoon sample illustration;
alternatively,
acquiring a color picture with content similarity greater than a preset value with any black-white cartoon picture in a black-white cartoon picture set;
the color picture is determined as a first color animation sample instance.
Optionally, the step of inputting the converted first black-and-white cartoon picture and first color cartoon sample picture into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture includes:
inputting the L channel of the first black-and-white cartoon picture and the L channel of the first color cartoon sample picture into a color transfer network, and outputting a color transfer matrix;
inputting the first black and white cartoon picture into an encoder network for convolution operation, and outputting a coding characteristic picture;
inputting the color transfer matrix into the color coding network, and outputting color coding characteristics;
and inputting the coding characteristic picture and the color coding characteristic into the decoder network for characteristic stacking, and outputting a first color cartoon picture.
Optionally, inputting the L channel of the first black-and-white animation picture and the L channel of the first color animation sample picture into a color transfer network, and outputting a color transfer matrix, including:
the color transfer network performs a convolution operation on the first black-and-white cartoon picture and the first color cartoon sample illustration, and generates feature codes of the first black-and-white cartoon picture and the first color cartoon sample illustration;
the color transfer network calculates the cosine similarity of the feature codes of the first black-and-white cartoon picture and the first color cartoon sample case graph to generate a similarity matrix;
the color transfer network generates a color transfer matrix by integrating the ab channel of the first color cartoon pattern with the similarity matrix.
Optionally, the first color cartoon picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample picture.
Optionally, the generating a pre-trained cartoon picture coloring model according to the following steps includes:
adopting a convolutional neural network to establish an encoder network, a color coding network, a decoder network and a color transfer network;
connecting the encoder network, the decoder network, the color coding network and the color transmission network to generate a first cartoon picture coloring model;
collecting a plurality of color cartoon pictures;
converting each color picture in the plurality of color cartoon pictures into a black-and-white cartoon picture to generate a plurality of black-and-white cartoon pictures;
inputting a plurality of black-and-white cartoon pictures and a plurality of color cartoon pictures into a first cartoon picture coloring model for training;
when the number of training iterations reaches a preset number, generating a second cartoon picture coloring model;
selecting any black-and-white cartoon picture from the plurality of black-and-white cartoon pictures, inputting the selected black-and-white cartoon picture into a second cartoon picture coloring model, and outputting a target color picture;
calculating the difference percentage of the optical flow information between the color picture corresponding to any selected black-and-white cartoon picture and the target color picture;
and generating a pre-trained cartoon picture coloring model according to the difference percentage of the optical flow information.
Optionally, the generating a pre-trained cartoon picture coloring model according to the difference percentage of the optical flow information includes:
when the difference percentage of the optical flow information is larger than a preset value, adjusting parameters of the first cartoon picture coloring model;
continuing to execute the step of inputting the black-and-white cartoon pictures and the color cartoon pictures into the first cartoon picture coloring model for training until the difference percentage of the optical flow information is smaller than a preset value, and stopping training;
and generating a pre-trained cartoon picture coloring model.
In a second aspect, an embodiment of the present application provides an apparatus for coloring a black-and-white cartoon video, including:
the video splitting module is used for splitting the original black and white cartoon video into a black and white cartoon picture set;
the sample illustration constructing module is used for constructing a first color cartoon sample illustration;
the mode conversion module is used for converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
the picture output module is used for inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model and outputting a first color cartoon picture;
the color cartoon picture set building module is used for repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode and inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and the color cartoon video generation module is used for combining the color cartoon picture sets to generate a color cartoon video.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiment of the application, the device for coloring black-and-white cartoon video first splits the original black-and-white cartoon video into a black-and-white cartoon picture set and constructs a first color cartoon sample illustration. A first black-and-white cartoon picture in the set and the first color cartoon sample illustration are converted into the Lab mode, the converted pictures are input into a pre-trained cartoon picture coloring model, and a first color cartoon picture is output. Each remaining black-and-white cartoon picture in the set is then converted into the Lab mode in turn and input, together with the converted first color cartoon sample illustration, into the pre-trained cartoon picture coloring model, so that a color picture is obtained for every black-and-white cartoon picture and a color cartoon picture set is formed. Finally, the color cartoon picture set is combined to generate a color cartoon video. The first color cartoon picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample illustration. Because the black-and-white film is colored quickly by calculating this color transfer matrix between the sample illustration and the black-and-white images, the coloring efficiency of black-and-white cartoon video is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flowchart of a method for coloring a black-and-white animation video according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a method for training a cartoon picture coloring model according to an embodiment of the present disclosure;
FIG. 3 is a schematic block diagram of a process of rendering a cartoon picture according to an embodiment of the present application;
fig. 4 is a process diagram of a processing procedure of a color delivery network in an animation picture coloring model according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a process of coloring a black-and-white animation video according to an embodiment of the present application;
fig. 6 is a schematic diagram of an apparatus for coloring a black-and-white animation video according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of the above terms in the present invention can be understood in specific cases by those skilled in the art. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship of the associated objects, meaning that there may be three relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In the technical scheme provided by the application, since the embodiment of the application performs fast coloring on the black-and-white film by calculating the sample graph and the color transfer matrix of the black-and-white image, the coloring efficiency of the black-and-white animation video is improved, and the following adopts an exemplary embodiment for detailed description.
The method for coloring black-and-white animation video according to the embodiment of the present application will be described in detail with reference to fig. 1 to 5. The method may be implemented by a computer program running on a von Neumann-based device for coloring black-and-white animation video. The computer program may be integrated into an application or may run as a separate tool-like application. The device for coloring black-and-white animation video in the embodiment of the present application may be a user terminal, including but not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and the like. The user terminal may be called by different names in different networks, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent or user equipment, cellular telephone, cordless telephone, Personal Digital Assistant (PDA), terminal equipment in a 5G network or future evolved network, and the like.
Referring to fig. 1, a flow chart of a method for coloring a black-and-white animation video is provided according to an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the following steps:
s101, splitting an original black and white cartoon video into a black and white cartoon picture set;
the original black-and-white animation video is the black-and-white animation video to be colored and comprises thousands of consecutive black-and-white animation picture frames; a black-and-white animation picture may also be a binary image, that is, an image whose pixels are only black or white.
Generally, the original black-and-white animation video can be a black-and-white animation stored in the user terminal, and can also be a black-and-white animation movie downloaded from the cloud.
In the embodiment of the application, when the original black-and-white animation video needs to be colored, the original black-and-white animation video to be colored is determined, a folder is created on the local computer disk, each black-and-white animation picture in the original black-and-white animation video is read in chronological order using Python's OpenCV module, and the black-and-white animation pictures are then written into the folder on the computer disk in a picture format.
For example, when a user colors a locally stored black-and-white cartoon video, a folder is created on the local disk, each black-and-white frame of the video is read in chronological order with Python's OpenCV module, and finally each frame is saved into the folder on the local disk.
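A minimal sketch of this splitting step with Python's OpenCV module (the file names, the PNG format and the frame-number padding are assumptions of this example; the patent only requires reading the frames in time order and writing them to a folder):

import os
import cv2  # the OpenCV module referred to above

def split_video_to_frames(video_path, out_dir):
    """Read a black-and-white cartoon video frame by frame, in chronological
    order, and write each frame to out_dir as a numbered image file."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()  # frames are returned in time order
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "frame_%06d.png" % index), frame)
        index += 1
    capture.release()
    return index  # number of black-and-white cartoon pictures written

# Hypothetical usage; the actual paths are not given in the patent:
# split_video_to_frames("black_white_cartoon.mp4", "bw_frames")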
S102, constructing a first color cartoon sample illustration;
wherein the color cartoon sample illustration is a color image.
In one possible implementation manner, when constructing the color cartoon sample illustration, first obtaining a first black-and-white cartoon picture from a black-and-white cartoon picture set, then receiving a coloring instruction for the first black-and-white cartoon picture, coloring the first black-and-white cartoon picture into a color cartoon picture based on the coloring instruction, and finally determining the color cartoon picture as the first color cartoon sample illustration.
In another possible implementation manner, a color picture with content similarity greater than a preset value with any black-and-white cartoon picture in the black-and-white cartoon picture set is obtained first, and then the color picture is determined as the first color cartoon sample case.
In the embodiment of the application, a user manually colors a certain frame of the black-and-white video with professional software (such as Photoshop, which is used in this application) to serve as the sample illustration input to the animation picture coloring model, or selects, from an Internet animation library, a color animation image that is similar in content and meets the coloring expectation.
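The patent does not specify how this content similarity is computed; as one illustrative sketch (normalized cross-correlation on the luminance channel and the threshold value are assumptions of this example), a candidate color image could be compared with a black-and-white frame as follows:

import cv2
import numpy as np

def content_similarity(bw_frame, color_candidate):
    """Rough content similarity between a black-and-white cartoon picture and a
    candidate color image, computed on luminance only (illustrative choice)."""
    gray = cv2.cvtColor(color_candidate, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (bw_frame.shape[1], bw_frame.shape[0]))
    if bw_frame.ndim == 3:
        bw_frame = cv2.cvtColor(bw_frame, cv2.COLOR_BGR2GRAY)
    a = (bw_frame.astype(np.float32) - bw_frame.mean()) / (bw_frame.std() + 1e-6)
    b = (gray.astype(np.float32) - gray.mean()) / (gray.std() + 1e-6)
    return float((a * b).mean())  # normalized cross-correlation in [-1, 1]

# A candidate is accepted when the score exceeds a preset value
# (0.6 here is a hypothetical threshold):
# usable = content_similarity(frame, candidate) > 0.6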
S103, converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
wherein the Lab mode (the CIELAB color space) is an industry color standard in which the L channel carries luminance and the a and b channels carry chrominance.
Generally, the color standard of the original black-and-white image is the RGB color mode, and it is preferred to convert the RGB color mode of the original black-and-white image into the Lab color mode in this application.
In a possible implementation manner, when converting RGB into a Lab color mode, first dividing the first black and white animation picture and the first color animation sample picture into a plurality of blocks, respectively calculating average values of an R channel, a G channel, and a B channel in each block, and finally calculating a Lab value of each block according to the average values of the R channel, the G channel, and the B channel in each block.
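As a minimal sketch of this conversion (this example converts every pixel directly with OpenCV's built-in Lab conversion; the block-wise averaging variant described above would apply the same conversion to per-block RGB means):

import cv2

def to_lab_channels(image_bgr):
    """Convert a BGR image (OpenCV's default channel order) to the Lab mode and
    return the L channel and the ab channels separately: the L channel is what
    the networks consume, while the ab channels carry the chrominance used for
    color transfer."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0]
    ab = lab[:, :, 1:3]
    return L, ab

# The black-and-white cartoon picture contributes its L channel; the color
# cartoon sample illustration contributes both its L channel and its ab channels.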
S104, inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture;
the pre-trained cartoon picture coloring model is a mathematical model capable of coloring black and white cartoon pictures, and comprises an encoder network, a decoder network and a color transfer network.
Generally, the flow for generating the pre-trained animation picture coloring model is shown in fig. 2. First, an encoder network, a color coding network, a decoder network and a color transfer network are built with convolutional neural networks, and these networks are connected to form a first animation picture coloring model. A plurality of color animation pictures are then collected, and each of them is converted into a black-and-white animation picture to obtain a plurality of black-and-white animation pictures. The black-and-white animation pictures and the color animation pictures are input into the first animation picture coloring model for training, and when the number of training iterations reaches a preset number, a second animation picture coloring model is generated. Any black-and-white animation picture is then selected from the plurality of black-and-white animation pictures and input into the second animation picture coloring model, which outputs a target color picture. The difference percentage of the optical flow information between the color picture corresponding to the selected black-and-white animation picture and the target color picture is calculated, and the pre-trained animation picture coloring model is finally generated according to this difference percentage.
Specifically, when the pre-trained cartoon picture coloring model is generated according to the difference percentage of the optical flow information: if the difference percentage is larger than a preset value, the parameters of the first cartoon picture coloring model are adjusted, and the step of inputting the black-and-white cartoon pictures and the color cartoon pictures into the first cartoon picture coloring model for training is executed again; training stops once the difference percentage of the optical flow information is smaller than the preset value, and the pre-trained cartoon picture coloring model is then generated.
Preferably, before the black-and-white animation pictures and the color animation pictures are input into the color transfer network in the first animation picture coloring model, their sizes are adjusted to the input size expected by the current model.
During training of the animation picture coloring model, inter-frame coloring consistency is maintained by using optical flow information, so that the colored cartoon video does not exhibit dropped or erroneous frames when the model is applied.
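A minimal sketch of such an optical-flow-based consistency measure (dense Farneback flow and the percentage normalization are assumptions of this example; the patent only speaks of a "difference percentage of the optical flow information" between the ground-truth color picture and the target color picture):

import cv2
import numpy as np

def optical_flow_difference_percent(reference_color, predicted_color):
    """Estimate how much the model's output deviates from the ground-truth
    color picture in terms of apparent motion: compute dense optical flow
    between the two frames and express the mean displacement as a percentage
    of the image diagonal."""
    ref_gray = cv2.cvtColor(reference_color, cv2.COLOR_BGR2GRAY)
    pred_gray = cv2.cvtColor(predicted_color, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(ref_gray, pred_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    displacement = np.linalg.norm(flow, axis=2).mean()
    diagonal = float(np.hypot(ref_gray.shape[0], ref_gray.shape[1]))
    return 100.0 * displacement / diagonal

# Training sketch: keep adjusting the first coloring model while the percentage
# stays above a preset value (the threshold itself is not given in the patent).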
In one possible implementation, as shown in fig. 3, which is a schematic diagram of the processing performed by the pre-trained coloring model provided in the present application, the converted first black-and-white animation picture and first color animation sample illustration are processed as follows: the L channel of the first black-and-white animation picture and the L channel of the first color animation sample illustration are first input into the color transfer network, which outputs a color transfer matrix; the first black-and-white animation picture is input into the encoder network for convolution, which outputs an encoded feature picture; the color transfer matrix is input into the color coding network, which outputs color coding features; and finally the encoded feature picture and the color coding features are input into the decoder network for feature stacking, which outputs the first color animation picture.
Further, as shown in fig. 4, which is a schematic diagram of the processing flow of the color transfer network in the pre-trained coloring model provided in the present application, the color transfer network first performs a convolution operation on the first black-and-white animation picture and the first color animation sample illustration to generate feature codes for both. It then calculates the cosine similarity between the two feature codes to generate a similarity matrix, and finally integrates the ab channels of the first color animation sample illustration with the similarity matrix to generate the color transfer matrix.
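A compact sketch of this forward pass (a PyTorch-style assumption of this example: the layer widths, depths and the softmax weighting used to integrate the similarity matrix with the ab channels are not specified in the patent, which fixes only the four networks and the way their outputs are combined):

import torch
import torch.nn as nn
import torch.nn.functional as F

class CartoonColoringModel(nn.Module):
    """Encoder, color transfer, color coding and decoder networks wired together
    as described above. All layer sizes are illustrative assumptions."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(                       # convolves the L channel of the B/W picture
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.transfer_features = nn.Sequential(             # feature codes used by the color transfer step
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU())
        self.color_coding = nn.Sequential(                  # encodes the 2-channel color transfer matrix
            nn.Conv2d(2, feat_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                       # feature stacking -> predicted ab channels
            nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 2, 3, padding=1))

    def forward(self, bw_L, sample_L, sample_ab):
        # Color transfer network: feature codes, cosine similarity, ab integration.
        f_bw = self.transfer_features(bw_L)
        f_sm = self.transfer_features(sample_L)
        b, c, h, w = f_bw.shape
        f_bw = F.normalize(f_bw.reshape(b, c, h * w), dim=1)
        f_sm = F.normalize(f_sm.reshape(b, c, h * w), dim=1)
        similarity = torch.einsum('bci,bcj->bij', f_bw, f_sm)      # similarity matrix
        weights = torch.softmax(similarity, dim=2)
        ab_flat = sample_ab.reshape(b, 2, h * w)
        transfer = torch.einsum('bij,bcj->bci', weights, ab_flat)  # color transfer matrix
        transfer = transfer.reshape(b, 2, h, w)
        # Encoder, color coding and decoder with feature stacking.
        encoded = self.encoder(bw_L)
        color_features = self.color_coding(transfer)
        stacked = torch.cat([encoded, color_features], dim=1)
        return self.decoder(stacked)                               # predicted ab channels

# Hypothetical usage; real implementations usually build the similarity matrix on
# downsampled features, so small 64x64 inputs are used here:
# model = CartoonColoringModel()
# ab_pred = model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64), torch.rand(1, 2, 64, 64))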
S105, repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode, inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model, and obtaining a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
in general, a color picture of a black-and-white image can be obtained through steps S101 to S104.
In the embodiment of the present application, each black-and-white video includes thousands of black-and-white animation pictures; therefore, the processing of steps S103 to S104 is repeated in a loop for each black-and-white animation picture, so that the color pictures corresponding to all the black-and-white animation pictures form a color animation picture set.
And S106, combining the color cartoon picture sets to generate a color cartoon video.
The color picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample picture.
In a possible implementation manner, after a color animation picture set is obtained, the colored images are read by using an OpenCV module of Python and are combined into a color animation video.
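A minimal sketch of this merging step with Python's OpenCV module (the frame rate, codec and output file name are assumptions, since the patent does not specify them):

import glob
import cv2

def frames_to_video(frame_dir, out_path, fps=24.0):
    """Read the colored cartoon pictures in file-name order and write them into
    a single color cartoon video."""
    frame_paths = sorted(glob.glob(frame_dir + "/*.png"))
    first = cv2.imread(frame_paths[0])
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for path in frame_paths:
        writer.write(cv2.imread(path))
    writer.release()

# Hypothetical usage:
# frames_to_video("color_frames", "color_cartoon.mp4", fps=24.0)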
The invention makes use of the color information of the sample illustration: coloring accuracy is improved by transferring the colors of the sample illustration to the black-and-white video frames, and an optical flow loss constrains frame-to-frame consistency, which reduces color jumps between frames, greatly reduces labor cost, and improves both the coloring efficiency and the coloring quality.
For example, as shown in fig. 5, which is a schematic diagram of the black-and-white video coloring process of the present application, the process is as follows: the original black-and-white animation video is split into pictures to obtain a black-and-white animation picture set; the user constructs a color animation sample illustration; a black-and-white animation picture is taken from the black-and-white animation picture set; the sample illustration and the black-and-white animation picture are converted into the Lab color space; the coloring model is called, the color transfer matrix is calculated, and the selected black-and-white animation picture is colored; the black-and-white animation picture set is traversed in this way to obtain a color animation picture set; and finally the color animation video is synthesized.
In the embodiment of the application, the device for coloring black-and-white cartoon video first splits the original black-and-white cartoon video into a black-and-white cartoon picture set and constructs a first color cartoon sample illustration. A first black-and-white cartoon picture in the set and the first color cartoon sample illustration are converted into the Lab mode, the converted pictures are input into a pre-trained cartoon picture coloring model, and a first color cartoon picture is output. Each remaining black-and-white cartoon picture in the set is then converted into the Lab mode in turn and input, together with the converted first color cartoon sample illustration, into the pre-trained cartoon picture coloring model, so that a color picture is obtained for every black-and-white cartoon picture and a color cartoon picture set is formed. Finally, the color cartoon picture set is combined to generate a color cartoon video. The first color cartoon picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample illustration. Because the black-and-white film is colored quickly by calculating this color transfer matrix between the sample illustration and the black-and-white images, the coloring efficiency of black-and-white cartoon video is improved.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 6, a schematic structural diagram of a coloring apparatus for black-and-white animation video according to an exemplary embodiment of the present invention is shown. The coloring device for black and white cartoon video can be realized by software, hardware or a combination of the software and the hardware to be all or part of the terminal. The device 1 comprises a video splitting module 10, a sample graph constructing module 20, a mode converting module 30, a picture output module 40, a color cartoon picture set constructing module 50 and a color cartoon video generating module 60.
The video splitting module 10 is configured to split an original black and white animation video into a black and white animation picture set;
a sample map construction module 20 for constructing a first color cartoon sample map;
the mode conversion module 30 is configured to convert a first black-and-white animation picture and a first color animation sample picture in a black-and-white animation picture set into a Lab mode;
the picture output module 40 is used for inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model and outputting a first color cartoon picture;
the color cartoon picture set constructing module 50 is configured to repeatedly convert each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode and input the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and a color animation video generation module 60, configured to combine the color animation picture sets to generate a color animation video.
It should be noted that, when the apparatus for coloring a black-and-white animation video provided in the foregoing embodiment executes the method for coloring a black-and-white animation video, only the division of the functional modules is illustrated, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the embodiment of the device and the method for coloring the black-and-white animation video provided by the above embodiment belong to the same concept, and details of the implementation process are shown in the method embodiment and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, the device for coloring black-and-white cartoon video first splits the original black-and-white cartoon video into a black-and-white cartoon picture set and constructs a first color cartoon sample illustration. A first black-and-white cartoon picture in the set and the first color cartoon sample illustration are converted into the Lab mode, the converted pictures are input into a pre-trained cartoon picture coloring model, and a first color cartoon picture is output. Each remaining black-and-white cartoon picture in the set is then converted into the Lab mode in turn and input, together with the converted first color cartoon sample illustration, into the pre-trained cartoon picture coloring model, so that a color picture is obtained for every black-and-white cartoon picture and a color cartoon picture set is formed. Finally, the color cartoon picture set is combined to generate a color cartoon video. The first color cartoon picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample illustration. Because the black-and-white film is colored quickly by calculating this color transfer matrix between the sample illustration and the black-and-white images, the coloring efficiency of black-and-white cartoon video is improved.
The present invention also provides a computer readable medium having stored thereon program instructions, which when executed by a processor, implement the method for rendering black and white animation video provided by the above-mentioned method embodiments.
The present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of rendering a black and white cartoon video of the above-described method embodiments.
Please refer to fig. 7, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 7, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various components throughout the electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and invoking data stored in the memory 1005. Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 1001 but may instead be implemented by a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, a picture playing function, etc.), instructions for implementing the above-described method embodiments, and the like; and the data storage area may store the data referred to in the above method embodiments. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 7, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a coloring application program for black-and-white cartoon video.
In the terminal 1000 shown in fig. 7, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the processor 1001 may be configured to invoke a shading application for black and white animation video stored in the memory 1005, and specifically perform the following operations:
splitting an original black and white animation video into a black and white animation picture set;
constructing a first color cartoon sample illustration;
converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture;
repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode, inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and combining the color cartoon picture sets to generate a color cartoon video.
In one embodiment, the processor 1001, when executing the building of the first color cartoon sample instance graph, specifically performs the following operations:
acquiring a first black-and-white cartoon picture from the black-and-white cartoon picture set;
receiving a coloring instruction aiming at the first black-and-white cartoon picture, and coloring the first black-and-white cartoon picture into a color picture based on the coloring instruction;
determining a color picture as a first color cartoon sample illustration;
alternatively,
acquiring a color picture with content similarity greater than a preset value with any black-white cartoon picture in a black-white cartoon picture set;
the color picture is determined as a first color animation sample instance.
In an embodiment, when the processor 1001 inputs the converted first black-and-white cartoon picture and the first color cartoon sample picture into the pre-trained cartoon picture coloring model and outputs the first color cartoon picture, the following operations are specifically performed:
inputting the L channel of the first black-and-white cartoon picture and the L channel of the first color cartoon sample picture into a color transfer network, and outputting a color transfer matrix;
inputting the first black and white cartoon picture into an encoder network for convolution operation, and outputting a coding characteristic picture;
inputting the color transfer matrix into the color coding network, and outputting color coding characteristics; and inputting the coding characteristic picture and the color coding characteristic into the decoder network for characteristic stacking, and outputting a first color cartoon picture.
In one embodiment, when the processor 1001 inputs the L channel of the first black-and-white animation picture and the L channel of the first color animation sample map into the color transfer network and outputs the color transfer matrix, the following operations are specifically performed:
the color transfer network performs a convolution operation on the first black-and-white cartoon picture and the first color cartoon sample illustration, and generates feature codes of the first black-and-white cartoon picture and the first color cartoon sample illustration;
the color transfer network calculates the cosine similarity of the feature codes of the first black-and-white cartoon picture and the first color cartoon sample case graph to generate a similarity matrix;
the color transfer network generates a color transfer matrix by integrating the ab channel of the first color cartoon pattern with the similarity matrix.
In the embodiment of the application, the device for coloring black-and-white cartoon video first splits the original black-and-white cartoon video into a black-and-white cartoon picture set and constructs a first color cartoon sample illustration. A first black-and-white cartoon picture in the set and the first color cartoon sample illustration are converted into the Lab mode, the converted pictures are input into a pre-trained cartoon picture coloring model, and a first color cartoon picture is output. Each remaining black-and-white cartoon picture in the set is then converted into the Lab mode in turn and input, together with the converted first color cartoon sample illustration, into the pre-trained cartoon picture coloring model, so that a color picture is obtained for every black-and-white cartoon picture and a color cartoon picture set is formed. Finally, the color cartoon picture set is combined to generate a color cartoon video. The first color cartoon picture is colored by calculating a color transfer matrix between the first black-and-white cartoon picture and the first color cartoon sample illustration. Because the black-and-white film is colored quickly by calculating this color transfer matrix between the sample illustration and the black-and-white images, the coloring efficiency of black-and-white cartoon video is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program to instruct associated hardware, and the shading program for the black and white cartoon video may be stored in a computer readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application; the present application is therefore not limited thereto, and all equivalent variations and modifications fall within its scope.

Claims (10)

1. A method of rendering black and white animation video, the method comprising:
splitting an original black and white animation video into a black and white animation picture set;
constructing a first color cartoon sample illustration;
converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model, and outputting a first color cartoon picture; the pre-trained cartoon picture coloring model comprises an encoder network, a color transfer network, a color coding network and a decoder network;
repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode, inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and combining the color cartoon picture sets to generate a color cartoon video.
2. The method of claim 1, wherein constructing the first color cartoon pattern map comprises:
acquiring a first black-and-white cartoon picture from the black-and-white cartoon picture set;
receiving a coloring instruction aiming at the first black-and-white cartoon picture, and coloring the first black-and-white cartoon picture into a color picture based on the coloring instruction;
determining the color picture as a first color cartoon sample illustration;
alternatively,
acquiring a color picture with the content similarity of any black-white cartoon picture in the black-white cartoon picture set larger than a preset value;
and determining the color picture as a first color cartoon sample illustration.
3. The method according to claim 2, wherein the inputting the converted first black-and-white cartoon picture and first color cartoon sample picture into a pre-trained cartoon picture coloring model and outputting the first color cartoon picture comprises:
inputting the L channel of the first black-and-white cartoon picture and the L channel of the first color cartoon sample picture into the color transfer network, and outputting a color transfer matrix;
inputting the first black-and-white cartoon picture into the encoder network for convolution operation, and outputting a coding characteristic picture;
inputting the color transfer matrix into the color coding network, and outputting color coding characteristics;
and inputting the coding characteristic picture and the color coding characteristic into the decoder network for characteristic stacking, and outputting a first color cartoon picture.
4. The method according to claim 3, wherein inputting the L channel of the first black and white animation picture and the L channel of the first color animation sample picture into the color transfer network, and outputting a color transfer matrix comprises:
the color transfer network performs convolution operation on the first black-and-white cartoon picture and the first color cartoon sample illustration, and generates feature codes of the first black-and-white cartoon picture and the first color cartoon sample illustration;
the color transfer network calculates the cosine similarity of the feature codes of the first black-and-white cartoon picture and the first color cartoon sample case drawing to generate a similarity matrix;
and the color transfer network generates a color transfer matrix after integrating the ab channel of the first color cartoon sample graph and the similarity matrix.
5. The method of claim 1, wherein the first color animation picture is rendered by calculating a color transfer matrix between the first black and white animation picture and the first color animation sample graph.
6. The method of claim 1, wherein generating a pre-trained cartoon picture coloring model comprises:
adopting a convolutional neural network to establish an encoder network, a color coding network, a decoder network and a color transfer network;
connecting the encoder network, the decoder network, the color coding network and the color transmission network to generate a first cartoon picture coloring model;
collecting a plurality of color cartoon pictures;
converting each color picture in the plurality of color cartoon pictures into a black-and-white cartoon picture to generate a plurality of black-and-white cartoon pictures;
inputting the black-and-white cartoon pictures and the color cartoon pictures into the first cartoon picture coloring model for training;
when the iteration times of the training reach preset times, generating a second cartoon picture coloring model;
selecting any black-and-white cartoon picture from the plurality of black-and-white cartoon pictures, inputting the selected black-and-white cartoon picture into the second cartoon picture coloring model, and outputting a target color picture;
calculating the difference percentage of the optical flow information between the color picture corresponding to any selected black-and-white cartoon picture and the target color picture;
and generating a pre-trained cartoon picture coloring model according to the difference percentage of the optical flow information.
7. The method of claim 6, wherein generating a pre-trained cartoon picture coloring model from the percentage difference of the optical flow information comprises:
when the difference percentage of the optical flow information is larger than a preset value, adjusting the parameters of the first cartoon picture coloring model;
continuing to execute the step of inputting the black-and-white cartoon pictures and the color cartoon pictures into the first cartoon picture coloring model for training until the difference percentage of the optical flow information is smaller than a preset value, and stopping training;
and generating a pre-trained cartoon picture coloring model.
8. An apparatus for rendering black and white animation video, the apparatus comprising:
the video splitting module is used for splitting the original black and white cartoon video into a black and white cartoon picture set;
the sample illustration constructing module is used for constructing a first color cartoon sample illustration;
the mode conversion module is used for converting a first black-and-white cartoon picture and a first color cartoon sample picture in the black-and-white cartoon picture set into a Lab mode;
the picture output module is used for inputting the converted first black-and-white cartoon picture and the first color cartoon sample picture into a pre-trained cartoon picture coloring model and outputting a first color cartoon picture;
the color cartoon picture set building module is used for repeatedly converting each black-and-white cartoon picture in the black-and-white cartoon picture set into a Lab mode and inputting the Lab mode and the converted first color cartoon sample picture into a pre-trained cartoon picture coloring model to obtain a color picture corresponding to each black-and-white cartoon picture to form a color cartoon picture set;
and the color cartoon video generation module is used for combining the color cartoon picture sets to generate a color cartoon video.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1-7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-7.
CN202110513081.5A 2021-05-11 2021-05-11 Method, device, storage medium and terminal for coloring black-and-white cartoon video Active CN112991497B (en)

Priority Applications (1)

Application Number: CN202110513081.5A (published as CN112991497B) · Priority Date: 2021-05-11 · Filing Date: 2021-05-11 · Title: Method, device, storage medium and terminal for coloring black-and-white cartoon video

Applications Claiming Priority (1)

Application Number: CN202110513081.5A (published as CN112991497B) · Priority Date: 2021-05-11 · Filing Date: 2021-05-11 · Title: Method, device, storage medium and terminal for coloring black-and-white cartoon video

Publications (2)

Publication Number Publication Date
CN112991497A true CN112991497A (en) 2021-06-18
CN112991497B CN112991497B (en) 2021-10-19

Family

ID=76337491

Family Applications (1)

Application Number: CN202110513081.5A (Active; published as CN112991497B) · Title: Method, device, storage medium and terminal for coloring black-and-white cartoon video

Country Status (1)

Country Link
CN (1) CN112991497B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200077065A1 (en) * 2018-08-31 2020-03-05 Disney Enterprises Inc. Video Color Propagation
CN110288515A (en) * 2019-05-27 2019-09-27 宁波大学 The method and CNN coloring learner intelligently coloured to the microsctructural photograph of electron microscope shooting
CN110717953A (en) * 2019-09-25 2020-01-21 北京影谱科技股份有限公司 Black-white picture coloring method and system based on CNN-LSTM combined model
CN111476863A (en) * 2020-04-02 2020-07-31 北京奇艺世纪科技有限公司 Method and device for coloring black and white cartoon, electronic equipment and storage medium
CN112489164A (en) * 2020-12-07 2021-03-12 南京理工大学 Image coloring method based on improved depth separable convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BO ZHANG et al.: "Deep Exemplar-based Video Colorization", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744168A (en) * 2021-09-03 2021-12-03 武汉平行世界网络科技有限公司 Method and device for filling colors

Also Published As

Publication number Publication date
CN112991497B (en) 2021-10-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant