CN108921932A - Method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks - Google Patents


Info

Publication number
CN108921932A
CN108921932A
Authority
CN
China
Prior art keywords: color, convolutional neural networks, image, black
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201810691393.3A
Other languages
Chinese (zh)
Other versions
CN108921932B (en)
Inventor
陈国栋
田影
潘冠慈
茌良召
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN201810691393.3A
Publication of CN108921932A
Application granted
Publication of CN108921932B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/80 Shading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks


Abstract

The present invention relates to a method for generating diverse, plausible colorizations of black-and-white portrait pictures in real time based on convolutional neural networks, comprising the following steps: Step S1: select the training dataset; Step S2: construct the local hints network; Step S3: construct the global hints network; Step S4: construct the main colorization network; Step S5: generate the coloring interface. With the present invention, the user only needs to add a few control points to obtain multiple colorization effects for an image within a short time.

Description

Method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks
Technical field
The present invention relates to the field of image processing, and in particular to a method for generating diverse, plausible colorizations of black-and-white portrait pictures in real time based on convolutional neural networks.
Background art
In the early days of photography the technology was limited, and people could only take black-and-white portrait photographs as keepsakes. Even thirty or forty years ago, color photographs were still rare, and people mainly recorded themselves and their relatives in black and white. Some of the older pictures in our lives are highly memorable; precious black-and-white portraits in particular have inestimable sentimental value. Because color photography was not yet widespread at the time, some collectors specially asked painters to hand-color black-and-white pictures, but such colors fade easily over time. Later, with the continual progress of science and technology and the popularization of computers, colorizing black-and-white photographs became relatively easy. However, coloring software requires corresponding skills to learn and is troublesome to operate, so people have a strong desire for a simple coloring technique.
In computer graphics, there are two broad families of image colorization methods: user-guided edit propagation and data-driven automatic colorization. The first, proposed and popularized by Levin et al., has the user draw color strokes on the image, after which an optimization process produces a color image that matches the user's scribbles. It can achieve good results, but usually requires tedious interaction, because every differently colored image region must be explicitly indicated by the user, and selecting accurate natural colors is also difficult. The second family is data-driven colorization, which colors a grayscale picture in one of two ways: (1) by matching it against exemplar color portrait images in a database and non-parametrically "stealing" colors from those photos, an idea similar to "Image Analogies"; (2) by learning a parametric mapping from grayscale to color from large-scale portrait image data, as in "Learning Large-Scale Automatic Image Colorization". However, such colorization results can contain incorrect colors. "DeepProp: Extracting Deep Features from a Single Image for Edit Propagation" applies deep learning, using a deep neural network to automatically extract high-level person features from low-level ones. It takes low-level visual patches and spatial pixel coordinates as the network input and automatically extracts features suited to user-specified strokes from a single image. A deep neural network serves as a classifier that estimates stroke probabilities from features extracted over the whole image, expressing how likely each pixel is to belong to each stroke. Although the whole process is automatic, the colorization results are rather monotonous.
In summary, in the prior art, black-and-white portrait images are usually colored with drawing software, but the user must learn the software's operating steps, which is troublesome. Some colorization methods use deep network techniques, but their coloring quality is poor and their results lack diversity. With the portrait colorization method based on convolutional neural networks proposed here, the user only needs to add a black-and-white picture to the coloring interface to obtain a colorization result in real time, and by adding control points to the image, diverse, plausible colorization results can be achieved. In general, black-and-white image colorization algorithms based on deep networks are complex and prone to coloring errors, while colorization based on drawing software is laborious and time-consuming.
Summary of the invention
In view of this, the purpose of the present invention is to propose a method for generating diverse, plausible colorizations of black-and-white portrait pictures in real time based on convolutional neural networks, in which the user only needs to add a few control points to achieve multiple colorization effects within a short time.
The present invention is realized with the following scheme: a method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks, specifically comprising the following steps:
Step S1: select the training dataset;
Step S2: construct the local hints network;
Step S3: construct the global hints network;
Step S4: construct the main colorization network;
Step S5: generate the coloring interface.
Further, step S1 is specifically: the training images of the convolutional neural network come from the LFW (Labeled Faces in the Wild) dataset and the ImageNet dataset, and the CIE Lab color space is used; positive samples are face pictures from the LFW dataset, and negative samples are background pictures (e.g., trees, flowers, furniture) selected from the ImageNet dataset.
Preferably, choosing a suitable color space is an essential step for image colorization; the present invention compares the effects of the RGB, YUV and CIE Lab color spaces. In the RGB case, the channels correspond to red, green and blue, and the invention trains directly on RGB values. For the test, however, the RGB image is converted to YUV and the input grayscale image is replaced with the Y channel of the image; this ensures that the output images of all models have the same brightness. In the YUV and CIE Lab cases, the invention outputs chrominance and uses the brightness of the input image to create the final color image. For all color spaces, the values are normalized to the [0, 1] range of the output layer's sigmoid transfer function. The results show that, compared with RGB and YUV, CIE Lab gives the most reasonable behavior, so the present invention adopts the CIE Lab color space.
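The CIE Lab conversion settled on above can be sketched as follows. This is a minimal numpy implementation of the standard sRGB-to-Lab formulas (D65 white point), not code from the patent; libraries such as scikit-image provide an equivalent `rgb2lab`.

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB values in [0, 1] -> CIE Lab (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma curve.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16           # lightness, roughly [0, 100]
    a = 500 * (f[..., 0] - f[..., 1])  # green-red chroma
    b = 200 * (f[..., 1] - f[..., 2])  # blue-yellow chroma
    return np.stack([L, a, b], axis=-1)

print(rgb_to_lab([1.0, 1.0, 1.0]))  # white: L close to 100, a and b close to 0
```

At training time the L channel (or grayscale image) is the network input and the a, b channels are the target, matching steps S21 and S22 below.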
Further, in step S2 the local hints network uses sparse added points as input; it is used to describe the input, simulate added points and define the interface. It specifically comprises the following steps:
Step S21: given a color image for training, convert the image to grayscale and to the CIE Lab color space;
Step S22: take the grayscale image as the input of the convolutional neural network model, and take the a and b components of the CIE Lab color space as the target output of the model;
Step S23: concatenate the sparse added points with the input grayscale image;
Step S24: evaluate the loss function ℓ_δ at each pixel, and sum the losses over all pixels to obtain the loss function L of the whole image:
L(θ) = Σ_{h,w} ℓ_δ( F(X, U; θ)_{h,w}, Y_{h,w} )
where X is the input grayscale image, U is the user tensor, θ are the parameters of the convolutional neural network function F, Y is the target output, h and w correspond to the height and width of the input grayscale image, and q is the image category. The training objective of the convolutional neural network is to minimize the loss function L. The local hints network starts from a conservative colorization, allowing the desired colors to be injected from the input side rather than starting from a more vibrant setting that is prone to artifacts; mistakes can also be repaired, since only a few clicks are needed to quickly fix an incorrect coloring.
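The per-pixel loss ℓ_δ summed over the image can be sketched as a Huber (smooth-L1) loss, a common choice for this kind of regression objective; the δ value and the tensor shapes below are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def huber(pred, target, delta=1.0):
    """Per-element Huber loss: quadratic for small errors, linear for large."""
    diff = np.abs(pred - target)
    quad = 0.5 * diff ** 2
    lin = delta * (diff - 0.5 * delta)
    return np.where(diff <= delta, quad, lin)

def image_loss(pred_ab, target_ab, delta=1.0):
    # Sum the per-pixel losses over height, width and the a/b channels,
    # giving the scalar loss L minimized during training.
    return huber(pred_ab, target_ab, delta).sum()

pred = np.zeros((2, 2, 2))                      # predicted ab, a 2x2 toy image
target = np.array([[[0.5, 0.0], [0.0, 0.0]],
                   [[0.0, 0.0], [0.0, 3.0]]])   # ground-truth ab
print(image_loss(pred, target))  # 0.5*0.5**2 + 1.0*(3.0 - 0.5) = 2.625
```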
Step S25: selecting reasonable colors is an essential step for realistic coloring. For each pixel, the predicted output color probability distribution is
Ẑ ∈ R^(h×w×Q)
where R is the set of real numbers, h is the image height, w is the image width, and Q is the number of quantized color bins. Using the parameters of the CIE Lab color space, the ab space is divided into bins of size 10 × 10, and Q = 313 bins remain within the gamut.
Step S26: for each pixel, use a cross-entropy loss function to measure the distance between the predicted portrait image distribution and the ground-truth color distribution Z, and sum the distances of all pixel predictions.
Further, the ground-truth color distribution Z is encoded from the ground-truth color Y with a soft encoding scheme: each true ab color value is expressed as a convex combination of its 10 nearest bin centers, weighted by a Gaussian kernel with σ = 5.
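The soft encoding step can be sketched as follows: the k = 10 nearest bin centers receive Gaussian weights (σ = 5) that are normalized into a convex combination. The bin grid below is an illustrative stand-in for the 313 in-gamut bins, not the patent's exact layout.

```python
import numpy as np

def soft_encode(ab, centers, k=10, sigma=5.0):
    """Soft-encode one ground-truth ab value over quantized color bins."""
    d = np.linalg.norm(centers - ab, axis=1)   # distance to every bin center
    nearest = np.argsort(d)[:k]                # indices of the k nearest bins
    w = np.exp(-d[nearest] ** 2 / (2 * sigma ** 2))
    z = np.zeros(len(centers))
    z[nearest] = w / w.sum()                   # normalize: weights sum to 1
    return z

# Illustrative 10-spaced grid over the ab plane (a full system would keep
# only the bins inside the sRGB gamut, giving Q = 313).
grid = np.arange(-110, 120, 10)
centers = np.array([(a, b) for a in grid for b in grid], dtype=float)

z = soft_encode(np.array([12.0, -33.0]), centers)
print(z.sum(), (z > 0).sum())  # 1.0 and 10 non-zero entries
```

This soft target Z is what the cross-entropy loss of step S26 compares against the predicted distribution Ẑ.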
Further, step S2 also includes: predict the color distribution at each sparse pixel, and make recommendations to the user conditioned on the grayscale image and the user's points. The task of predicting the color distribution is clearly related to the main branch: features from multiple levels of the main branch are connected, and a two-layer classifier is learned on top, using a hypercolumn approach.
Further, in order to provide discrete color suggestions, the softmax distribution at the queried pixel is softened so that it is less peaked, and weighted k-means clustering (K = 9) is performed to find the modes of the distribution. For example, the system will generally recommend reasonable colors according to the person, clothing material and scene type; for objects that can take different colors, the system provides a wide range of suggestions. Once a suggested color is selected, the system generates the colorization result in real time. As the positions of the added points in the portrait image change, the color suggestions are updated continuously.
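Turning a per-pixel distribution into K = 9 discrete suggestions can be sketched as below: soften the softmax with a temperature T > 1, then run a small weighted k-means over the bin centers with the softened probabilities as weights. The bin centers, logits and temperature are synthetic illustrations, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def soften(logits, T=2.0):
    """Temperature-softened softmax; higher T gives a flatter distribution."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_kmeans(points, weights, k=9, iters=20):
    """k-means over `points` where each point carries a probability weight."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2), axis=1)
        for j in range(k):
            m = labels == j
            if weights[m].sum() > 0:           # weighted centroid update
                centers[j] = np.average(points[m], axis=0, weights=weights[m])
    return centers

centers_ab = rng.uniform(-110, 110, size=(313, 2))  # stand-in bin centers
logits = rng.normal(size=313)                       # stand-in network output
p = soften(logits, T=2.0)
suggestions = weighted_kmeans(centers_ab, p, k=9)
print(suggestions.shape)  # nine suggested ab colors
```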
Further, in step S3, the input of the global hints network carries no spatial information, and the information is selectively integrated into the middle of the main colorization network. The input of the global hints network is processed by 4 conv-relu layers, each with a 1 × 1 kernel and 512 channels. The user provides global statistics, described by a color histogram and the average image saturation. The color histogram is computed by resizing the color Y to quarter resolution with bilinear interpolation, encoding each pixel in the quantized ab space, and averaging over space. The saturation is computed by converting the ground-truth image to the HSV color space and spatially averaging its S channel. During training, the ground-truth color distribution and the ground-truth saturation are shown to the network at random.
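The two global statistics can be sketched as follows; the quarter-resolution bilinear resize is omitted for brevity, and the bin grid and toy inputs are illustrative assumptions rather than the patent's exact choices.

```python
import numpy as np

def ab_histogram(ab, centers):
    """Nearest-bin encoding of each pixel's ab value, averaged over space."""
    flat = ab.reshape(-1, 2)
    d = np.linalg.norm(flat[:, None] - centers[None], axis=2)
    hist = np.bincount(np.argmin(d, axis=1), minlength=len(centers))
    return hist / hist.sum()

def mean_saturation(rgb):
    """HSV saturation S = (max - min) / max per pixel, averaged over space."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    s = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)
    return s.mean()

centers = np.array([(a, b) for a in range(-100, 110, 10)
                           for b in range(-100, 110, 10)], dtype=float)
ab = np.zeros((8, 8, 2))                    # a neutral (gray) toy chroma image
print(ab_histogram(ab, centers).sum())      # histogram is normalized to 1
print(mean_saturation(np.ones((8, 8, 3))))  # pure white has zero saturation
```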
Further, in step S4, the main branch of the main colorization network uses a U-Net architecture; the colorization network consists of 10 convolution blocks, conv1-10. In conv1-4, the feature tensor of each block is spatially halved step by step while the feature dimensionality doubles, and each block contains 2-3 conv-relu pairs. In the second half, conv7-10, the spatial resolution is restored and the feature dimensionality is halved. In conv5-6, dilated convolutions with factor 2 are used. Changes in spatial resolution are realized by subsampling or upsampling operations, and every convolution uses a 3 × 3 kernel.
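The feature-map schedule implied by that description can be traced with a short sketch: conv1-4 halve the resolution and double the channels, conv5-6 keep the resolution with dilation 2, and conv7-10 upsample and halve the channels. The starting size (256) and channel count (64) are illustrative assumptions, not values stated in the text.

```python
def unet_schedule(size=256, channels=64):
    """Return (block name, spatial size, channels, dilation) for conv1-10."""
    shapes = []
    for block in range(1, 11):
        if 1 <= block <= 4:              # downsampling half of the U-Net
            shapes.append((f"conv{block}", size, channels, 1))
            size //= 2
            channels *= 2
        elif block in (5, 6):            # dilated bottleneck, resolution kept
            shapes.append((f"conv{block}", size, channels, 2))
        else:                            # conv7-10: upsample, halve channels
            size *= 2
            channels //= 2
            shapes.append((f"conv{block}", size, channels, 1))
    return shapes

for name, s, c, d in unet_schedule():
    print(name, f"{s}x{s}", c, "dilation", d)
```

Note how the schedule is symmetric: conv10 returns to the input resolution and channel count of conv1, which is what makes the conv2-to-conv8 and conv3-to-conv9 shortcut connections described next shape-compatible.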
Further, step S4 also includes: adding symmetric shortcut connections to help the network recover spatial information. For example, the conv2 and conv3 blocks are connected to the conv8 and conv9 blocks respectively, which also gives convenient access to important low-level information; for example, the brightness value constrains the range of the ab gamut.
For layers conv1-8, the present invention fine-tunes from pre-trained weights. The added conv9 and conv10 layers and the shortcut connections are all trained from scratch. A final conv layer with a 1 × 1 kernel maps between conv10 and the output color. Since the ab gamut is bounded, the invention adds a final tanh (hyperbolic tangent) layer at the output.
Further, step S5 is specifically: use the added points input to the constructed local hints network and the predicted color distribution of the image to define the functions of the coloring interface, and use the global hints network to add its result into the main colorization network for output. Through the fusion of the local hints network and the global hints network, diversity of the portrait image colors is achieved.
Preferably, because the region controlled by an added point is uncertain, artifacts may appear during coloring, but the system can correct erroneous coloring through human-computer interaction.
The method of the invention needs neither pre-processing nor post-processing; everything is learned end to end. One advantage of the end-to-end learning framework is that it easily adapts to different types of user input, and portrait images of any pixel size are suitable.
The present invention describes a simple network architecture and produces a colorization diagram. Through end-to-end learning, two variants are trained: the local hints network and the global hints network. Their results are added into the main colorization network to generate color effects for black-and-white portrait images in real time. The invention offers a palette of suggested colors, and the user changes colors by adding control points to the picture, achieving diverse, plausible coloring effects. Erroneous coloring can be corrected through human-computer interaction. The operation is convenient and fast, simply and efficiently realizing diverse colorizations of black-and-white portrait pictures and enhancing the visual effect of the image.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention uses convolutional neural networks to color black-and-white portrait images; the user only needs to add a black-and-white photo of a person to the interface to generate a colorization result in real time, and diverse, plausible coloring effects can be achieved by adding points.
2. Through the fusion of the local hints network and the global hints network, the present invention can suggest colors for added points, and erroneous coloring can be corrected through human-computer interaction.
3. Verified by user testing, the present invention ensures simple and efficient operation and greatly improves the color quality of pictures, helping users colorize the precious black-and-white portrait photographs kept by their families and restoring the meaning and value of the original shots.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the present invention.
Fig. 2 is the colorization network architecture diagram of an embodiment of the present invention.
Fig. 3 is a schematic diagram of the coloring interface of an embodiment of the present invention.
Detailed description of the embodiments
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which this application belongs.
It should be noted that the terminology used herein is merely for describing specific embodiments and is not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
As shown in Fig. 1, this embodiment provides a method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks, specifically comprising the following steps:
Step S1: select the training dataset;
Step S2: construct the local hints network;
Step S3: construct the global hints network;
Step S4: construct the main colorization network;
Step S5: generate the coloring interface.
In this embodiment, step S1 is specifically: the training images of the convolutional neural network come from the LFW (Labeled Faces in the Wild) dataset and the ImageNet dataset, and the CIE Lab color space is used; positive samples are face pictures from the LFW dataset, and negative samples are background pictures (e.g., trees, flowers, furniture) selected from the ImageNet dataset. Preferably, a suitable color space is essential for image colorization; after comparing RGB, YUV and CIE Lab as described in the summary above, this embodiment adopts the CIE Lab color space.
In this embodiment, the local hints network corresponds to the lower branch in Fig. 2. In step S2, the local hints network takes sparse added points as input and is used to describe the input, simulate added points and define the interface. Steps S21-S26 proceed exactly as described in the summary above: the training color image is converted to grayscale and CIE Lab; the grayscale image is the network input and the a, b components are the target output; the sparse added points are concatenated with the input grayscale image; the per-pixel loss ℓ_δ is summed into the image loss L, which the training minimizes; the per-pixel color probability distribution is predicted over the Q = 313 quantized ab bins; and a cross-entropy loss measures the distance to the ground-truth color distribution Z, summed over all pixels.
In this embodiment, the ground-truth color distribution Z is soft-encoded from the ground-truth color Y, each true ab value being expressed as a convex combination of its 10 nearest bin centers weighted by a Gaussian kernel with σ = 5. The color distribution at each sparse pixel is predicted, conditioned on the grayscale image and the user's points, by connecting features from multiple levels of the main branch and learning a two-layer classifier on top with a hypercolumn approach. To provide discrete color suggestions, the softmax distribution at the queried pixel is softened so that it is less peaked, and weighted k-means clustering (K = 9) finds the modes of the distribution; the system recommends reasonable colors according to the person, clothing material and scene type, generates the colorization result in real time once a suggested color is selected, and continuously updates the suggestions as the positions of the added points in the portrait image change.
In this embodiment, the global hints network is the upper branch shown in Fig. 2. In step S3, its input carries no spatial information, and the information is selectively integrated into the middle of the main colorization network; the input is processed by 4 conv-relu layers, each with a 1 × 1 kernel and 512 channels. The user provides global statistics, described by a color histogram and the average image saturation: the color histogram is computed by resizing the color Y to quarter resolution with bilinear interpolation, encoding each pixel in the quantized ab space and averaging over space, while the saturation is computed by converting the ground-truth image to the HSV color space and spatially averaging its S channel. During training, the ground-truth color distribution and the ground-truth saturation are shown to the network at random.
In this embodiment, in step S4, the main branch of the main colorization network uses the U-Net architecture of ten convolution blocks conv1-10 described above: conv1-4 progressively halve the spatial resolution and double the feature dimensionality (each block containing 2-3 conv-relu pairs), conv5-6 use dilated convolutions with factor 2, and conv7-10 restore the spatial resolution and halve the feature dimensionality, every convolution using a 3 × 3 kernel. Symmetric shortcut connections (conv2 to conv8, conv3 to conv9) help the network recover spatial information and give convenient access to important low-level information, such as the brightness value that constrains the range of the ab gamut. Layers conv1-8 are fine-tuned from pre-trained weights, while the added conv9 and conv10 layers and the shortcut connections are trained from scratch; a final 1 × 1 conv layer maps conv10 to the output color, followed by a tanh layer because the ab gamut is bounded.
In this embodiment, step S5 is specifically: the added points input to the constructed local hints network and the predicted color distribution of the image define the functions of the coloring interface, and the result of the global hints network is added into the main colorization network for output; the fusion of the local and global hints networks achieves diversity of the portrait image colors. Preferably, because the region controlled by an added point is uncertain, artifacts may appear during coloring, but the system can correct erroneous coloring through human-computer interaction. The method of this embodiment needs neither pre-processing nor post-processing; everything is learned end to end. One advantage of the end-to-end learning framework is that it easily adapts to different types of user input, and portrait images of any pixel size are suitable. Fig. 3 is a schematic diagram of the coloring interface of this embodiment.
The foregoing are merely preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the present patent shall be covered by the present invention.

Claims (10)

1. A method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks, characterized by comprising the following steps:
Step S1: select the training dataset;
Step S2: construct the local hints network;
Step S3: construct the global hints network;
Step S4: construct the main colorization network;
Step S5: generate the coloring interface.
2. The method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks according to claim 1, characterized in that step S1 is specifically: the training images of the convolutional neural network use the LFW dataset and the ImageNet dataset, and the CIE Lab color space is used; positive samples are face pictures from the LFW dataset, and negative samples are backgrounds selected from the ImageNet dataset.
3. The method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks according to claim 1, characterized in that in step S2 the local hints network uses sparse added points as input, for describing the input, simulating added points and defining the interface, and specifically comprises the following steps:
Step S21: given a color image for training, convert the image to grayscale and to the CIE Lab color space;
Step S22: take the grayscale image as the input of the convolutional neural network model, and take the a and b components of the CIE Lab color space as the target output of the model;
Step S23: concatenate the sparse added points with the input grayscale image;
Step S24: evaluate the loss function ℓ_δ at each pixel, and sum the losses over all pixels to obtain the loss function L of the whole image,
where X is the input grayscale image, U is the user tensor, θ are the parameters of the convolutional neural network function F, Y is the target output, h and w correspond to the height and width of the input grayscale image, and q is the image category;
Step S25: for each pixel, predict the output color probability distribution,
where R is the set of real numbers, h is the image height, w is the image width, and Q is the number of quantized color bins; using the parameters of the CIE Lab color space, the ab space is divided into bins of size 10 × 10;
Step S26: for each pixel, use a cross-entropy loss function to measure the distance between the predicted portrait image distribution and the ground-truth color distribution Z, and sum the distance results of all pixel predictions.
4. The method for real-time generation of diverse, plausible colorizations of black-and-white portrait pictures based on convolutional neural networks according to claim 1, characterized in that the ground-truth color distribution Z is encoded from the ground-truth color Y with a soft encoding scheme: each true ab color value is expressed as a convex combination of its 10 nearest bin centers, weighted by a Gaussian kernel.
5. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 1, characterized in that step S2 further comprises: predicting the color distribution at each sparse pixel, and making recommendations to the user conditioned on the grayscale image and the user points.
6. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 4, characterized in that, in order to provide discrete color suggestions, the softmax distribution at the queried pixel is softened to make it less peaked, and weighted k-means clustering (K = 9) is performed to find the modes of the distribution.
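The suggestion step of this claim can be sketched as follows (a NumPy illustration under stated assumptions: the temperature T = 2 and the k-means initialization/iteration details are assumptions, while the softening and the K = 9 weighted clustering follow the claim):

```python
import numpy as np

def soften(p, T=2.0):
    """Flatten a softmax distribution by raising it to the power 1/T (T > 1
    makes it less peaked); T = 2 is an assumed temperature."""
    q = p ** (1.0 / T)
    return q / q.sum()

def weighted_kmeans(points, weights, K=9, iters=20, seed=0):
    """Weighted k-means (K = 9 as in the claim) over ab color samples, with
    per-sample weights taken from the softened distribution; the returned
    cluster centers serve as the discrete color suggestions."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), K, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)                # nearest center per point
        for k in range(K):
            m = assign == k
            if weights[m].sum() > 0:             # keep old center if cluster empty
                centers[k] = np.average(points[m], axis=0, weights=weights[m])
    return centers
```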
7. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 1, characterized in that in step S3 the global hints network takes as input global information with no spatial structure, and this information is integrated into the middle of the main colorization network; the input of the global hints network is processed by 4 conv-relu layers, each with kernel size 1 × 1 and 512 channels; the user provides global statistics, described by a color histogram and the average image saturation; the color histogram is computed by resizing the color Y to quarter resolution using bilinear interpolation, encoding each pixel in the quantized ab space, and averaging spatially; the saturation is computed by converting the ground-truth image to the HSV color space and spatially averaging the S channel; during training, the network is randomly shown the ground-truth color distribution and the ground-truth saturation.
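The global statistics of this claim can be illustrated as follows (a NumPy sketch: 4 × 4 block averaging stands in for the bilinear quarter-resolution resize, and the ab range and 10 × 10 quantization grid are assumptions carried over from claim 3):

```python
import numpy as np

def quarter_resize(img):
    """Downscale an (h, w, c) array to quarter resolution; 4 x 4 block
    averaging stands in here for the bilinear interpolation of the claim."""
    h, w = img.shape[:2]
    return img[:h - h % 4, :w - w % 4].reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3))

def global_hints(ab, rgb, grid=10, ab_min=-110.0, ab_max=110.0):
    """Global statistics for the global hints network: a spatially averaged
    histogram over the quantized ab space plus the mean HSV saturation."""
    small = quarter_resize(ab)                          # quarter-resolution ab
    step = (ab_max - ab_min) / grid
    idx = np.clip(((small - ab_min) // step).astype(int), 0, grid - 1)
    flat = idx[..., 0] * grid + idx[..., 1]
    hist = np.bincount(flat.ravel(), minlength=grid * grid).astype(float)
    hist /= hist.sum()                                  # spatial average
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)           # HSV saturation S = (max-min)/max
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    return hist, sat.mean()
```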
8. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 1, characterized in that in step S4 the main branch of the main colorization network uses a U-Net architecture; the colorization network consists of 10 convolutional blocks, conv1-10; in conv1-4, the feature tensor of each block is progressively halved spatially while the feature dimension is doubled, and each block contains 2-3 conv-relu pairs; in conv7-10 of the second half, the spatial resolution is recovered and the feature dimension is halved; in conv5-6, dilated convolutions with a factor of 2 are used; the changes in spatial resolution are achieved by subsampling or upsampling operations, and each convolution uses a 3 × 3 kernel.
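The resolution/feature schedule described in this claim can be tabulated programmatically (a sketch under the stated halving/doubling rules; the base feature width of 64 and the exact downscale factors are assumptions, and conv5-6 keep conv4's shape because dilation does not change resolution):

```python
# Tabulate the per-block (downscale factor, feature width) schedule implied by
# claim 8. The base width of 64 is an assumption; the halving/doubling pattern
# and the dilated conv5-6 blocks follow the claim.
def unet_schedule(base=64):
    sched = []
    for i in range(4):                    # conv1-4: space halves, features double
        sched.append((f"conv{i + 1}", 2 ** i, base * 2 ** i))
    for i in range(4, 6):                 # conv5-6: dilation 2, shape unchanged
        sched.append((f"conv{i + 1}", 8, base * 8))
    for j in range(4):                    # conv7-10: resolution recovered, features halve
        sched.append((f"conv{j + 7}", max(2 ** (2 - j), 1), base * 8 // 2 ** (j + 1)))
    return sched
```

Under these assumptions the schedule runs from conv1 with 64 features, through conv4-6 at 1/8 resolution with 512 features, back to conv10 at full resolution with 32 features.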
9. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 8, characterized in that step S4 further comprises: adding symmetric shortcut connections to help the network recover spatial information.
10. The method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network according to claim 1, characterized in that step S5 specifically is: using the constructed local hints network to input added points and predict the color distribution of the image, thereby defining the functions of the colorization interface; and using the global hints network to add its result into the main colorization network for output.
CN201810691393.3A 2018-06-28 2018-06-28 Method for generating multiple reasonable colorings of black and white figure pictures based on convolutional neural network Active CN108921932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810691393.3A CN108921932B (en) 2018-06-28 2018-06-28 Method for generating multiple reasonable colorings of black and white figure pictures based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810691393.3A CN108921932B (en) 2018-06-28 2018-06-28 Method for generating multiple reasonable colorings of black and white figure pictures based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN108921932A true CN108921932A (en) 2018-11-30
CN108921932B CN108921932B (en) 2022-09-23

Family

ID=64421798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810691393.3A Active CN108921932B (en) 2018-06-28 2018-06-28 Method for generating multiple reasonable colorings of black and white figure pictures based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN108921932B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101547363A * 2009-04-03 2009-09-30 Beihang University Spatially correlated panoramic data recovery method
CN103116895A * 2013-03-06 2013-05-22 Tsinghua University Gesture tracking computation method and device based on a three-dimensional model
US20170039436A1 * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN107730568A * 2017-10-31 2018-02-23 Shandong Normal University Colorization method and device based on weight learning
CN107833183A * 2017-11-29 2018-03-23 Anhui University of Technology Method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA PINGPING: "Region-adaptive color transfer algorithm between images and its performance evaluation", Wanfang Data Dissertation Database *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020151148A1 * 2019-01-23 2020-07-30 Ping An Technology (Shenzhen) Co., Ltd. Neural network-based black-and-white photograph color restoration method, apparatus, and storage medium
CN110288515A * 2019-05-27 2019-09-27 Ningbo University Method for intelligently colorizing microstructural photographs taken with an electron microscope, and CNN colorization learner
CN110533740A * 2019-07-31 2019-12-03 Chengdu Kuangshi Jinzhi Technology Co., Ltd. Image rendering method, device, system and storage medium
CN110717953A * 2019-09-25 2020-01-21 Beijing Moviebook Technology Co., Ltd. Method and system for colorizing black-and-white pictures based on a combined CNN-LSTM model
CN110717953B (en) * 2019-09-25 2024-03-01 Beijing Moviebook Technology Co., Ltd. Coloring method and system for black-and-white pictures based on CNN-LSTM combination model
CN115082703A * 2022-07-19 2022-09-20 Shenzhen University Concept-associated color extraction method, device, computer device and storage medium
CN115082703B (en) * 2022-07-19 2022-11-11 Shenzhen University Concept-associated color extraction method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN108921932B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN108921932A Method for generating multiple reasonable colorings of black-and-white figure pictures in real time based on a convolutional neural network
CN108830912B Interactive grayscale image colorization method based on adversarial learning of depth features
CN106127702B Image dehazing method based on deep learning
US8508546B2 Image mask generation
CN107833183A Method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network
CN111625608B Method and system for generating an electronic map from remote sensing images based on a GAN model
CN109886121A Occlusion-robust face keypoint localization method
Žeger et al. Grayscale image colorization methods: Overview and evaluation
CN109064396A Single-image super-resolution reconstruction method based on a deep component learning network
CN108681991A High-dynamic-range inverse tone mapping method and system based on a generative adversarial network
CN106204779A Class attendance checking method based on a multi-face data collection strategy and deep learning
CN109325989A License plate image generation method, device, equipment and medium
CN110175986A Stereo image visual saliency detection method based on convolutional neural networks
CN108846869B Automatic clothes color matching method based on natural image colors
CN112991371B Automatic image colorization method and system based on a color overflow constraint
CN110163801A Image super-resolution and colorization method, system and electronic device
CN109920012A Image colorization system and method based on convolutional neural networks
CN110263813A Saliency detection method based on fusion of residual networks and depth information
CN110197517A SAR image colorization method based on multi-domain cycle-consistent generative adversarial networks
CN105096286A Method and device for fusing remote sensing images
CN110322530A Interactive image colorization based on a deep residual network
CN111738113A Road extraction method for high-resolution remote sensing images based on a dual attention mechanism and semantic constraints
CN109920018A Neural-network-based black-and-white photograph color restoration method, device and storage medium
CN110120034A Image quality evaluation method related to visual perception
CN116543227A Remote sensing image scene classification method based on graph convolutional networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant