CN107730568B - Coloring method and device based on weight learning - Google Patents


Info

Publication number: CN107730568B
Authority: CN (China)
Prior art keywords: color, image, weight, pixel, gray level
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201711048927.2A
Other languages: Chinese (zh)
Other versions: CN107730568A
Inventors: 郑元杰, 宋双, 连剑, 刘弘, 魏本征
Current Assignee: Shandong Normal University (the listed assignees may be inaccurate)
Original Assignee: Shandong Normal University
Application filed by Shandong Normal University
Priority application: CN201711048927.2A
Published as: CN107730568A (application), CN107730568B (grant)
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a coloring method based on weight learning, comprising the following steps: selecting a number of gray-level images and their corresponding color images, and computing the characteristic differences between adjacent pixels in the gray-level images and in the corresponding color images as a training data set; training a weight learning model on this data set with a random forest algorithm; marking colors on a target gray-level image to be colored; extracting the characteristic differences between adjacent pixels of the target gray-level image and feeding them to the weight learning model to obtain the optimal weights; and carrying out color transfer according to the color marks and the optimal weights to obtain a color image corresponding to the target gray-level image. Because the weights are obtained by learning, the method captures the interrelation among pixels more accurately and achieves a better coloring effect.

Description

Coloring method and device based on weight learning
Technical Field
The invention relates to computer-aided image coloring, and in particular to an image coloring method based on weight learning.
Background
As a carrier of information, images faithfully reflect human visual perception, and color is essential to understanding an image and one of its most important attributes. Photography has moved from black-and-white to color, but early imaging technology could produce only black-and-white photographs and footage, so adding appropriate colors to these old photographs and images to make them more watchable has become a very important task.
The term coloring was first proposed by Wilson Markle in 1970 and was defined as the process of adding color to black-and-white or monochrome images and video by computer [1]. Coloring technology can restore, enhance, or change the colors of an image, improving its visual effect, allowing people to extract more accurate information from the image and understand its content more deeply, thereby increasing the image's practical value.
In recent years, coloring techniques have developed rapidly. Early on, images were colored by hand; this work was usually done by specialized personnel and was time-consuming. With the development of digital image processing technology, it became natural to delegate the task to computers, and the subject of digital image coloring emerged.
Current coloring methods fall roughly into two types: methods based on color marking and methods based on a reference image. Reference-image-based methods require no user interaction; color is migrated from the reference image instead. Color-marker-based methods require the user to draw color marks on the grayscale image and then use an algorithm to propagate the marked colors to regions of unknown color. The advantage of this approach is that users can mark different parts of the image as desired, so the colored image meets their color requirements. Among these, Levin et al. proposed an optimization algorithm that uses the similarity relationship (weight) between adjacent pixels. In such methods the weight represents the similarity of two adjacent pixels and is used during color transfer to decide how much color is propagated to the surroundings; the larger the weight, the greater the similarity. Many methods improve on this basis, some defining different weight functions, but the weight calculations in all of them are predefined, and it is not made explicit which weights achieve better results. How to improve the weights to perfect the coloring effect is therefore a technical problem urgently awaiting a solution from those skilled in the art.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a coloring method based on weight learning, which mainly comprises two key stages: a weight learning stage and a coloring stage. A weight learning model from gray images to color images is established, and the target image is colored according to the weights learned by the model. Because the weights are obtained by learning, the method captures the interrelation among pixels more accurately and achieves a better coloring effect.
In order to achieve the purpose, the invention adopts the following technical scheme:
a weight learning based shading method comprising the steps of:
step 1: selecting a plurality of gray level images and corresponding color images, and respectively calculating the characteristic difference between adjacent pixels of the gray level images and the corresponding color images to be used as a training data set;
step 2: training a weight learning model by using a random forest algorithm based on the training data set;
step 3: marking colors on a target gray-level image to be colored;
step 4: extracting the characteristic difference between adjacent pixels of the target gray-level image and using it as the input of the weight learning model to obtain the optimal weight;
step 5: carrying out color transfer according to the color marks and the optimal weight to obtain a color image corresponding to the target gray image.
Further, the characteristic difference between adjacent pixels of the gray image is:

F_rs = ||F_s − F_r||

where r is a certain pixel in the gray image, s is a neighborhood pixel of r, F = {f1, f2} denotes the feature vector composed of the luminance and gradient features, and f1 and f2 denote the luminance and gradient features, respectively.
Further, the characteristic difference between adjacent pixels of the color image is the color difference:

[formula rendered as an image in the original publication: D_rs, defined in terms of d_rs, the threshold var, and max(d_rs)]

wherein d_rs = ||L_s − L_r||² + ||a_s − a_r||² + ||b_s − b_r||²,

r is a certain pixel in the gray image, s is a neighborhood pixel of r, L denotes the luminance channel, a and b denote the two color components, d_rs denotes the distance between pixels s and r, var is a threshold, and max(d_rs) denotes the largest d_rs in the neighborhood.
Further, the weight learning model employs (F_rs, D_rs) as the training set.

Further, the weight learning model learns D_rs, and the optimal weights for pixels r and s are:

W_rs = exp(−D_rs), s ∈ N(r)
further, for the marked pixel r in the image, the weights of the pixel r and each pixel s in the neighborhood are normalized.
Further, in the color transfer process, the optimization function over the color values of adjacent pixels is:

J(C) = Σ_r ( C_r − Σ_{s∈N(r)} W_rs · C_s )²

wherein C denotes a UV component, C_r denotes the color of the central pixel, and C_s denotes the color of a peripheral pixel in the neighborhood.
According to a second object of the present invention, the present invention further provides a coloring apparatus based on weight learning, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the coloring method based on weight learning when executing the program.
According to a third object of the present invention, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-described weight-learning-based coloring method.
The invention has the following advantages:
(1) The invention obtains the weights by learning, based on the hypothesis that if the colors of adjacent pixels in a color image are close, the corresponding pixels in the gray-scale image have a large degree of similarity. That is, in a color image the correlation between adjacent pixel colors expresses the weight information more accurately. The color distance between adjacent pixels in color space is used as the ground-truth value of the weight during training, and a learning model from gray-image features to color-image weights is established. Given an arbitrary gray-scale target image, better weights can be learned through this model, yielding a satisfactory coloring result.
(2) The invention represents pixels not only by gray level but also by gradient features, adding more information to the calculation of inter-pixel relationships, and then establishes and learns the association from gray image to color image through this feature combination. Experiments show that combining the luminance and gradient features produces better coloring results.
(3) Compared with the results of the method proposed by Levin et al., the results of the present invention differ less from the original color image and show a better coloring effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of the weight learning based shading method of the present invention;
FIG. 2 shows example results of the invention: 2(a) and 2(d) are the marked images; 2(b) and 2(e) are the results obtained by the present method; 2(c) and 2(f) are the original images.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The general idea of the invention is as follows: to obtain a more accurate interrelation between pixels and a better coloring effect, the invention colors images through weight learning. Color information is taken into account when obtaining the weights: the ground-truth weights are calculated in the original color images, a model relating local neighborhoods in the gray image to the optimal weights in the color image is established by pre-training, the model is then used to predict accurate weight information for the target gray image, and the marked colors are propagated to unknown regions using these weights.
Example one
The embodiment discloses a coloring method based on weight learning, which comprises the following steps:
step 1: selecting a plurality of gray level images and corresponding color images, and respectively calculating the characteristic difference between adjacent pixels of the gray level images and the corresponding color images to be used as a training data set;
the training set includes two parts, one is the characteristic difference between pixels in the gray image, and the other is the color difference in the color space. Calculating the characteristic difference F on the prepared gray-scale image and color image respectivelyrsAnd color difference Drs,FrsAnd DrsSeparately vectorized, FrsIs a two-dimensional vector representing the luminance and gradient two-dimensional vectors of adjacent pixels, DrsIs a one-dimensional vector representing the color difference. L represents the magnitude of the number of pairs of relationships between adjacent pixels. Will (F)rs,Drs) And participating in training as a training set.
For the feature difference between adjacent pixels in the gray image:
the present embodiment adopts a combination of features of the gray scale and the gradient of an image to represent each pixel in the image, wherein F1 and F2 represent features of brightness and gradient, respectively, and F ═ F1 and F2 represent feature vectors formed by the two. First, for each of the grayscale imagesPixel, we extract the f1, f2 features to form two feature maps. For each pixel s, a 3 × 3 neighborhood is first taken, and then the feature difference F between r and each adjacent pixel s in the neighborhood is calculatedrs
Frs=||Fs-Fr||
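As an illustration of this step, a minimal Python/NumPy sketch computes the two feature maps and the differences F_rs for one pixel. The patent does not specify the gradient operator, so the central-difference `np.gradient` used here is an assumption:

```python
import numpy as np

def feature_maps(gray):
    """Per-pixel features: f1 = luminance, f2 = gradient magnitude.

    `gray` is a 2-D float array. The central-difference gradient is an
    assumption; the text only says a gradient feature is used.
    """
    gy, gx = np.gradient(gray)
    return np.stack([gray, np.hypot(gx, gy)], axis=-1)  # H x W x 2

def feature_differences(F, r):
    """F_rs = ||F_s - F_r|| for every neighbor s in the 3x3 window of r."""
    y, x = r
    diffs = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            sy, sx = y + dy, x + dx
            if 0 <= sy < F.shape[0] and 0 <= sx < F.shape[1]:
                diffs[(sy, sx)] = float(np.linalg.norm(F[sy, sx] - F[y, x]))
    return diffs

F = feature_maps(np.linspace(0.0, 1.0, 25).reshape(5, 5))
diffs = feature_differences(F, (2, 2))   # 8 neighbors for an interior pixel
```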
For feature differences between adjacent pixels of a color image:
the present embodiment uses the color difference as the characteristic difference of the color image. Lab color space is adopted in the calculation process, wherein L represents brightness information, and a and b respectively represent two color components. In the three-dimensional color space, each color is regarded as a point, and the similarity relationship between two colors is represented by its distance. For each pixel s in the color image, a 3 × 3 neighborhood is first taken with s as the center, and then the distance between r and each adjacent pixel s in the neighborhood is calculated. The distance between r and s is marked DrsAnd the weight between r and s is denoted as Wrs
drs=||Ls-Lr||2+||as-ar||2+||bs-br||2
Figure GDA0001507209950000051
Where var is a threshold, the formula may be adjusted empirically. max (d)rs) Representation to find the largest d in a neighborhoodrs
As can be seen from the above formulas, D_rs expresses the distance between the two pixels while the weight expresses their similarity: the larger the distance, the smaller the similarity, so distance and similarity are inversely related.
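A sketch of this computation in Python/NumPy. The exact definition of D_rs is rendered as an image in the original publication, so the normalization `d_rs / (var * max(d_rs))` below is only an assumption consistent with the surrounding text (var is a tunable threshold, max(d_rs) is the largest distance in the neighborhood):

```python
import numpy as np

def color_differences(lab, r, var=1.0):
    """D_rs for every neighbor s of r in a 3x3 window of a Lab image.

    d_rs is the squared Lab distance from the text.  The normalization
    D_rs = d_rs / (var * max_s d_rs) is an ASSUMPTION; the patent's exact
    formula appears only as an image.
    """
    y, x = r
    d = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            sy, sx = y + dy, x + dx
            if 0 <= sy < lab.shape[0] and 0 <= sx < lab.shape[1]:
                # d_rs = ||L_s-L_r||^2 + ||a_s-a_r||^2 + ||b_s-b_r||^2
                d[(sy, sx)] = float(np.sum((lab[sy, sx] - lab[y, x]) ** 2))
    m = max(d.values())
    if m > 0:
        d = {s: v / (var * m) for s, v in d.items()}
    return d

lab = np.arange(48, dtype=float).reshape(4, 4, 3)   # toy 4x4 Lab image
D = color_differences(lab, (1, 1))
```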
Step 2: training a weight learning model by using a random forest algorithm based on the training data set;
the weight learning scheme provided by the invention mainly utilizes the characteristic relation in the gray level image to learn the relation between colors in the color space. Random forest is used for constructing a forest in a random mode, and a plurality of forestsWhen a new sample is input, the constructed tree is used for decision making. For a random forest classifier, training data X [ X1, X2, …, XL are given]And its corresponding label Y (Y1, Y2, …, YL) (where L is the number of training data), the learning model M will be constructed using the training set (X, Y). Wherein F is extracted from the gray imagersD calculated in color image as training data XrsAs the mark Y. Will (F)rs,Drs) And (4) outputting a weight learning model as a program of a training set input random forest.
Step 3: marking colors on a target gray-level image to be colored;
in order to observe the coloring effect of the method, the embodiment converts an original color image into a gray image as a target gray image to be colored, then draws a colored line mark on the gray image by using a painting brush according to the distribution of colors in the original image, in this example, two images (child and field respectively from top to bottom) are used, the marked images are as shown in fig. 2, and the finally obtained result (fig. 2(b)) is compared with the original image (fig. 2(c)) to verify the effectiveness of the method.
Step 4: extracting the characteristic difference between adjacent pixels of the target gray-level image and using it as the input of the weight learning model to obtain the optimal weight;
learning weights for the target image: the weight learning model establishes mapping between the gray level image and the color image, and the optimal weight can be learned by inputting any image with unknown color.
In this example, as shown in FIG. 2, the two images (child and field) are processed. Gray-level and gradient features are extracted from each marked image, and the feature differences between pixels are computed and stored as two-dimensional vectors. These vectors are fed into the random forest model, which outputs a one-dimensional vector D_rs. This vector is the learned distance relationship, from which the optimal weight is computed by the following formula:

W_rs = exp(−D_rs), s ∈ N(r)
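The exponential mapping above, combined with the per-neighborhood normalization applied in step 5 below, can be sketched as follows (plain Python; the string keys stand in for neighbor pixel coordinates):

```python
import numpy as np

def weights_from_distances(D):
    """W_rs = exp(-D_rs) for each s in N(r), then normalized to sum to 1."""
    W = {s: float(np.exp(-d)) for s, d in D.items()}
    total = sum(W.values())
    return {s: w / total for s, w in W.items()}

# Smaller learned distance -> larger weight -> more color transferred from s.
W = weights_from_distances({"s1": 0.0, "s2": 1.0, "s3": 2.0})
```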
Step 5: carrying out color transfer according to the color marks and the optimal weight to obtain a color image corresponding to the target gray image.
For a marked pixel r in the image, a 3 × 3 neighborhood centered on r is taken. The weights between the central pixel r and each pixel s in the neighborhood are known; they are normalized so that

Σ_{s∈N(r)} W_rs = 1
Color transfer is then realized with the optimization-based coloring method proposed by Levin et al. The propagation is carried out in the YUV color space, where Y is known; the UV values of the unknown color regions are recovered, and finally the YUV image is converted to the RGB space.
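A minimal conversion sketch for the final YUV-to-RGB step. The patent does not specify which YUV variant is used, so the BT.601 analog coefficients below are an assumption:

```python
import numpy as np

# BT.601 analog RGB <-> YUV matrix (one common convention; an assumption).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)   # exact inverse guarantees a round trip

def rgb_to_yuv(img):               # img: H x W x 3, floats in [0, 1]
    return img @ RGB2YUV.T

def yuv_to_rgb(img):
    return img @ YUV2RGB.T

img = np.array([[[0.2, 0.5, 0.8], [1.0, 0.0, 0.3]]])   # 1 x 2 RGB image
back = yuv_to_rgb(rgb_to_yuv(img))
```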
The optimization-based coloring method restores the color of unknown regions by minimizing the difference between the color value of the central pixel and the weighted sum of the colors of its adjacent pixels. The optimization function is:

J(C) = Σ_r ( C_r − Σ_{s∈N(r)} W_rs · C_s )²

The method works in the YUV color space, where Y denotes luminance and U and V denote the two chrominance components; C denotes a UV component, C_r the color of the central pixel, and C_s the color of a peripheral pixel in the neighborhood. The final result is shown in FIG. 2(b).
Besides the optimization-based coloring method of Levin et al. mentioned above, other color transfer methods known in the art may be employed: for example, the color propagation proposed by Horiuchi, which minimizes the color difference between a seed pixel and its 4-neighborhood pixels, or methods that take minimizing the color difference between pixel neighborhoods as the optimization target; the adaptive-edge-detection-based coloring method proposed by Huang; or the partial-differential-equation approach of Sapiro, which colors using the luminance gradient.
Example two
An object of the present embodiment is to provide a computing device.
A weight-learning-based coloring apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
step 1: selecting a plurality of gray level images and corresponding color images, and respectively calculating the characteristic difference between adjacent pixels of the gray level images and the corresponding color images to be used as a training data set;
step 2: training a weight learning model by using a random forest algorithm based on the training data set;
step 3: marking colors on a target gray-level image to be colored;
step 4: extracting the characteristic difference between adjacent pixels of the target gray-level image and using it as the input of the weight learning model to obtain the optimal weight;
step 5: carrying out color transfer according to the color marks and the optimal weight to obtain a color image corresponding to the target gray image.
EXAMPLE III
An object of the present embodiment is to provide a computer-readable storage medium.
A computer-readable storage medium having stored thereon a computer program for gray-level image coloring, the program, when executed by a processor, performing the steps of:
step 1: selecting a plurality of gray level images and corresponding color images, and respectively calculating the characteristic difference between adjacent pixels of the gray level images and the corresponding color images to be used as a training data set;
step 2: training a weight learning model by using a random forest algorithm based on the training data set;
step 3: marking colors on a target gray-level image to be colored;
step 4: extracting the characteristic difference between adjacent pixels of the target gray-level image and using it as the input of the weight learning model to obtain the optimal weight;
step 5: carrying out color transfer according to the color marks and the optimal weight to obtain a color image corresponding to the target gray image.
The steps involved in the apparatuses of the above second and third embodiments correspond to the first embodiment of the method, and the detailed description thereof can be found in the relevant description of the first embodiment. The term "computer-readable storage medium" should be taken to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor and that cause the processor to perform any of the methods of the present invention.
The invention establishes a learning model from gray level image characteristics to color image weights. And the color transfer is carried out based on the weight obtained by the model, and the experiment proves that the coloring effect is better.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented using general purpose computer means, or alternatively, they can be implemented using program code that is executable by computing means, such that they are stored in memory means for execution by the computing means, or they are separately fabricated into individual integrated circuit modules, or multiple modules or steps of them are fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (5)

1. A method of coloring based on weight learning, comprising the steps of:
step 1: selecting a plurality of gray level images and corresponding color images, and respectively calculating the characteristic difference between adjacent pixels of the gray level images and the corresponding color images to be used as a training data set;
step 2: training a weight learning model by using a random forest algorithm based on the training data set;
step 3: marking colors on a target gray-level image to be colored;
step 4: extracting the characteristic difference between adjacent pixels of the target gray-level image and using it as the input of the weight learning model to obtain the optimal weight;
step 5: carrying out color transfer according to the color marks and the optimal weight to obtain a color image corresponding to the target gray-level image;
the characteristic difference between adjacent pixels of the gray image is as follows:
Frs=||Fs-Fr||
wherein r is a certain pixel in the gray image, s is a neighborhood pixel of r, F ═ { F1, F2} represents a feature vector composed of luminance and gradient features, and F1 and F2 represent luminance and gradient features, respectively;
the characteristic difference between adjacent pixels of the color image is the color difference:
Figure FDA0002697660670000011
wherein d isrs=||Ls-Lr||2+||as-ar||2+||bs-br||2
r is a certain pixel in the gray image, s is a neighborhood pixel of r, L represents brightness information, a and b represent two color components respectively, drsDenotes the distance of pixels s and r, var is the threshold, max (d)rs) Representation to find the largest d in a neighborhoodrs
The weight learning model adopts (F)rs,Drs) As a training set;
learning by the weight learning model to obtain DrsThe optimal weights for pixels r and s are:
Figure FDA0002697660670000012
2. a weight learning based rendering method as claimed in claim 1 wherein for a pixel r that has been marked in an image, the weights of the pixel r and each pixel s in the neighborhood are normalized.
3. The weight-learning-based coloring method of claim 2, wherein the optimization function over the color values of adjacent pixels in the color transfer process is:

J(C) = Σ_r ( C_r − Σ_{s∈N(r)} W_rs · C_s )²

wherein C denotes a UV component, C_r denotes the color of the central pixel, and C_s denotes the color of a peripheral pixel in the neighborhood.
4. A weight-learning-based coloring apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the weight-learning-based coloring method according to any one of claims 1 to 3 when executing the program.
5. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the weight-learning-based coloring method according to any one of claims 1 to 3.
CN201711048927.2A (priority date 2017-10-31, filing date 2017-10-31): Coloring method and device based on weight learning, granted as CN107730568B (Active)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711048927.2A CN107730568B (en) 2017-10-31 2017-10-31 Coloring method and device based on weight learning


Publications (2)

CN107730568A (application): published 2018-02-23
CN107730568B (grant): published 2021-01-08

Family

ID=61202029



Citations (3)

* Cited by examiner, † Cited by third party

CN103839079A (浙江师范大学; priority 2014-03-18, published 2014-06-04): Similar image colorization algorithm based on classification learning *
CN104376529A (深圳北航新兴产业技术研究院; priority 2014-11-25, published 2015-02-25): Gray level image colorization system and method based on GLCM *
CN104851074A (温州大学; priority 2015-03-26, published 2015-08-19): Feature similarity-based non-local neighborhood gray level image colorization method *


Non-Patent Citations (2)

Anat Levin et al., "Colorization using Optimization," ACM Transactions on Graphics, June 2004, pp. 1-6. (Cited by examiner)
朱黎博 et al., "基于色彩传递与扩展的图像着色算法" [An image coloring algorithm based on color transfer and extension], 中国图象图形学报 (Journal of Image and Graphics), vol. 15, no. 2, February 2010, pp. 200-205. (Cited by examiner)

Also Published As

CN107730568A: published 2018-02-23


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant