WO2020151148A1 - 基于神经网络的黑白照片色彩恢复方法、装置及存储介质 (Neural network-based black-and-white photo color restoration method, device and storage medium)


Info

Publication number
WO2020151148A1
Authority
WO
WIPO (PCT)
Prior art keywords: color, image, neural network, black, white
Application number
PCT/CN2019/088627
Other languages: English (en), French (fr)
Inventors: 曹靖康, 王义文, 王健宗
Original Assignee: 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Application filed by 平安科技(深圳)有限公司
Publication of WO2020151148A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a neural network-based method, device and computer-readable storage medium for black-and-white photo color restoration.
  • Black-and-white photos have a special meaning. They can reflect a certain sense of the times, but they cannot fully show the real scene of that moment. Color restoration of black-and-white photos can therefore evoke people's deeper memories and record more complete historical information.
  • Existing technology restores black-and-white photos by applying optimization algorithms through continuous iteration; the iteration is too slow, and the resulting color photos are not satisfactory.
  • This application provides a method, device and computer-readable storage medium for color restoration of black and white photos based on neural networks, and its main purpose is to provide a solution for color restoration of black and white photos.
  • The neural network-based black-and-white photo color restoration method provided by this application includes:
  • inputting the black-and-white image on which color restoration is to be performed, obtaining the L component of the black-and-white image, inputting the L component into the trained convolutional neural network model to generate the corresponding a and b components, and finally combining the L, a and b components to generate a color image corresponding to the black-and-white image.
  • the present application also provides a neural network-based black-and-white photo color restoration device.
  • The device includes a memory and a processor, and the memory stores a neural network-based black-and-white photo color restoration program that can run on the processor; when the neural network-based black-and-white photo color restoration program is executed by the processor, the steps of the method described above are implemented.
  • The present application also provides a computer-readable storage medium on which a neural network-based black-and-white photo color restoration program is stored; the program can be executed by one or more processors to implement the steps of the neural network-based black-and-white photo color restoration method described above.
  • The multi-layer structure of a convolutional neural network can automatically extract deep features of the input data, and different levels of the network learn features at different levels, greatly improving the accuracy of image processing. Further, through local receptive fields and weight sharing, the convolutional neural network preserves the associated information within the image while greatly reducing the number of required parameters; pooling further reduces the number of network parameters and improves the robustness of the model, and the model can continue to be extended in depth by adding hidden layers to process images more efficiently. Therefore, the neural network-based black-and-white photo color restoration method, device and computer-readable storage medium proposed in this application can effectively realize color restoration of black-and-white photos.
  • FIG. 1 is a schematic flowchart of a method for color restoration of black and white photos based on a neural network according to an embodiment of the application;
  • FIG. 2 is a schematic diagram of the internal structure of a black and white photo color restoration device based on a neural network provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of modules of a neural network-based black-and-white photo color restoration program in a neural network-based black-and-white photo color restoration device provided by an embodiment of the application.
  • This application provides a method for color restoration of black and white photos based on neural network.
  • Referring to FIG. 1, which is a schematic flowchart of a method for color restoration of black-and-white photos based on a neural network according to an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • In this embodiment, the neural network-based black-and-white photo color restoration method includes:
  • S10: Obtain a color image from the network, and convert the color image from an RGB color mode to a Lab color mode.
  • the color image generally refers to an image in an RGB color mode composed of R, G, and B components for each pixel.
  • RGB color mode is a color standard in the industry, which is obtained by changing the three color channels of red (R), green (G), and blue (B) and superimposing them with each other.
  • RGB is the color representing the three channels of red, green, and blue. This standard includes almost all colors that human vision can perceive, and it is currently one of the most widely used color systems.
  • The Lab color mode is composed of three elements: lightness (L) and the related color components a and b.
  • L represents Luminosity, which is equivalent to brightness
  • a represents the range from red to green
  • b represents the range from blue to yellow.
  • The RGB color mode cannot be directly converted into the Lab color mode. Therefore, the preferred embodiment of the present application first converts the RGB color mode into the XYZ color mode and then into the Lab color mode, namely RGB → XYZ → Lab. The conversion of a color image from RGB color mode to Lab color mode described in this application is therefore divided into two parts:
  • Part 1: Convert the color image from RGB color mode to XYZ color mode by [X, Y, Z]ᵀ = M · [r, g, b]ᵀ, where M is a 3x3 conversion matrix and the value ranges of r, g, b (the normalized R, G, B values) are all [0, 1).
  • Part 2: Convert the color image from XYZ color mode to Lab color mode by L = 116·f(Y1) − 16, a = 500·(f(X1) − f(Y1)), b = 200·(f(Y1) − f(Z1)), where f(x) is a correction function similar to the Gamma function, and X1, Y1 and Z1 are the XYZ values after linear normalization, that is, their value ranges are all [0, 1). The value range of the function f(x) is also [0, 1), like that of its independent variable.
  • After the conversion, the value range of L is [0, 100), and those of a and b are approximately [-169, +169) and [-160, +160).
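The two-part RGB → XYZ → Lab conversion described above can be sketched in Python. This is a minimal sketch: the patent's exact matrix M, correction function and normalization constants are not reproduced in the text, so the widely used sRGB/D65 CIE values are assumed.

```python
import numpy as np

# Assumed sRGB -> XYZ matrix (CIE, D65 white point); the patent's exact M is not shown.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def f(t):
    """Gamma-like correction function used in the XYZ -> Lab step (standard CIE form)."""
    delta = 6 / 29
    return np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)

def rgb_to_lab(rgb):
    """rgb: array of shape (..., 3) with r, g, b values in [0, 1)."""
    xyz = rgb @ M.T                          # Part 1: RGB -> XYZ
    white = np.array([0.9505, 1.0, 1.089])   # assumed D65 reference white
    x1, y1, z1 = np.moveaxis(xyz / white, -1, 0)  # linear normalization to [0, 1)
    L = 116 * f(y1) - 16                     # Part 2: XYZ -> Lab
    a = 500 * (f(x1) - f(y1))
    b = 200 * (f(y1) - f(z1))
    return np.stack([L, a, b], axis=-1)
```

With these constants, pure white maps to L = 100 with a = b = 0, and pure black maps to L = 0, matching the L range [0, 100) stated above.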
  • S20: Use an edge detection algorithm and a threshold segmentation method on the Lab-mode color image to locate objects in the image and segment the foreground objects.
  • The edge detection algorithm is the Canny edge detection algorithm. This application adopts the Canny edge detection method for positioning, which includes the following steps:
  • 1) Smooth the image I[x, y] with a Gaussian filter: S[x, y] = H[x, y] * I[x, y], where * represents convolution, H[x, y] = exp(−(x² + y²)/(2σ²))/(2πσ²), σ is a smoothness parameter (the larger σ is, the wider the frequency band of the Gaussian filter and the stronger the smoothing), and x and y are the pixel coordinates.
  • 2) Compute the partial derivatives P[x, y] and Q[x, y] of the smoothed image; the amplitude and direction can then be calculated by the coordinate conversion formula from rectangular coordinates to polar coordinates: M[x, y] = √(P[x, y]² + Q[x, y]²), θ[x, y] = arctan(Q[x, y]/P[x, y]).
  • 3) M[x, y] reflects the edge amplitude of the image and θ[x, y] reflects the direction of the edge; non-maximum suppression keeps only the points where M[x, y] attains a local maximum along the direction angle θ[x, y].
  • This application then uses two thresholds T1 and T2 (T1 < T2) to obtain two threshold edge images N1[i, j] and N2[i, j].
  • Because N2[i, j] is obtained with the higher threshold, it contains few false edges but may be discontinuous. The dual-threshold method connects these discontinuous edges in N2[i, j] into a complete contour: when a discontinuity point of an edge is reached, connectable edges are sought in the neighborhood of N1[i, j], until all the discontinuities in N2[i, j] are connected.
  • the present application detects the edges of all objects in the image according to the above-mentioned edge detection algorithm, thereby positioning the objects in the image.
  • the objects in the image include foreground objects and background objects.
  • this application adopts the threshold segmentation method to segment the foreground object.
  • the basic idea of the threshold segmentation method is to set a threshold T and traverse each pixel in the image. When the gray value of the pixel is greater than T, it is determined that the pixel belongs to the foreground object. If the value is less than or equal to T, it is determined that the pixel is a background object.
  • The convolutional neural network model in the preferred embodiment of this application is a feed-forward neural network whose artificial neurons can respond to surrounding units within part of their coverage area. Its basic structure includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to the local receptive field of the previous layer, and the local feature is extracted; once the local feature is extracted, its positional relationship to other features is also determined. The second is the feature mapping layer: each computing layer of the network is composed of multiple feature maps, each feature map is a plane, and the weights of all neurons on the plane are equal.
  • the feature mapping structure uses a sigmoid function with a small influencing function core as the activation function of the convolutional network, so that the feature mapping has displacement invariance. In addition, since neurons on a mapping surface share weights, the number of free parameters of the network is reduced.
  • Each convolutional layer in the convolutional neural network is followed by a calculation layer for local averaging and secondary feature extraction; this unique twofold feature extraction structure reduces the feature resolution.
  • the convolutional neural network model has the following structure:
  • Input layer: the only data input port of the entire convolutional neural network, mainly used to define different types of data input;
  • Convolutional layer: convolves the data fed into it and outputs the convolved feature map;
  • Pooling layer: performs down-sampling operations on the incoming data in the spatial dimensions, so that the length and width of the input feature map become half of the original;
  • Fully connected layer: like an ordinary neural network, each neuron is connected to all input neurons, and the result is then passed through the activation function;
  • Output layer: also called the classification layer; the classification score of each category is calculated in the final output.
  • the convolutional neural network of the present application obtains image features through a convolutional layer.
  • The lower-level convolutional layers are divided into two parts that share parameters: one part is used to predict image pixel values, and the other is used to predict the categories of objects in the image. Because category information is highly abstract, the convolution is performed on a fixed-size image, and the global features are merged with the middle-layer features. At this point the feature map contains relatively rich information: each pixel carries both its own and its neighborhood's information, as well as global category information, which makes the final prediction more accurate.
  • The input layer receives the input image, which sequentially enters a 7×7 convolutional layer and a 3×3 maximum pooling layer, and then enters 4 convolution modules.
  • Each convolution module starts with a building block with linear projection, followed by a different number of building blocks with identity mapping, and the pixel values of the predicted image and the category of the predicted image are finally output at the softmax layer.
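The spatial dimensions through the stack described above can be traced with a small sketch. The strides are assumptions (stride 2 for the 7×7 convolution and the pooling layer, and one halving per later convolution module, as in typical ResNet-style networks); the patent text does not state them explicitly.

```python
def conv_out(n, f, p, s):
    """Output side length of an f x f filter with padding p and stride s on an n x n input."""
    return (n + 2 * p - f) // s + 1

n = 224                              # assumed input resolution
n = conv_out(n, f=7, p=3, s=2)       # 7x7 convolutional layer -> 112
n = conv_out(n, f=3, p=1, s=2)       # 3x3 maximum pooling layer -> 56
sizes = [n]
for _ in range(3):                   # assume each later convolution module halves the map
    n = conv_out(n, f=3, p=1, s=2)
    sizes.append(n)
```

With stride 1 the same function reduces to the (n+2p−f+1) formula used later in the text for the convolution operation.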
  • the object category refers to the category of objects contained in the image, such as people, animals, plants, vehicles, and so on.
  • This application inputs the L component in the image of the Lab color mode into the convolutional neural network model, so as to train the convolutional neural network model for color prediction.
  • the method for training the convolutional neural network model is as follows:
  • Step a Determine an input and output vector, where the input vector is the L component of the image, and the output vector is the prediction of the object category and color in the image;
  • Step b Perform a convolution operation on the L component.
  • the convolution operation refers to the inner product operation of the image and the filter matrix.
  • this application needs to pad the image on the border to increase the size of the matrix.
  • This application sets a set of filters ⁇ filter 0 , filter 1 ⁇ in the convolutional layer of the convolutional neural network model, which are respectively applied to the image color channel and the category channel to generate a set of features.
  • the scale of each filter is d*h, where d is the dimension of the image and h is the size of the window.
  • After padding, the size of the picture is (n+2p)×(n+2p); the filter size f is unchanged, and the output picture size (with stride 1) is (n+2p−f+1)×(n+2p−f+1).
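The padding and output-size relation above can be checked directly with a stride-1 convolution written as the inner product of the filter with every window (illustrative sizes; not values from the patent):

```python
import numpy as np

n, f, p = 6, 3, 1                      # 6x6 image, 3x3 filter, padding 1
img = np.arange(n * n, dtype=float).reshape(n, n)
padded = np.pad(img, p)                # size becomes (n + 2p) x (n + 2p)
out_size = n + 2 * p - f + 1           # the formula from the text (stride 1)

# inner product of the filter with every window: a direct stride-1 convolution
filt = np.ones((f, f))
out = np.array([[np.sum(padded[i:i + f, j:j + f] * filt)
                 for j in range(out_size)]
                for i in range(out_size)])
```

Here padding by p = 1 keeps the output the same size as the input, which is why the borders are padded before convolving.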
  • Step c Construct a loss function that evaluates the difference between the predicted value output by the network model and the true value.
  • The loss function L(Y, Ŷ) is used to evaluate the difference between the predicted value Ŷ output by the network model and the true value Y. It is a non-negative real-valued function; the smaller the loss value, the better the performance of the network model.
  • The loss function used in this application combines the two parts as Loss = L_color + α·L_class, where the loss function L_color of the color part adopts the Frobenius norm, the loss function L_class of the category part adopts cross entropy (Cross Entropy), and α is the weighting factor.
  • The Frobenius norm is a matrix norm, defined as the square root of the sum of the squares of the absolute values of the elements of matrix A, namely ‖A‖_F = √(Σᵢ Σⱼ |aᵢⱼ|²). Cross entropy is mainly used to measure the difference between two probability distributions.
  • Let p represent the distribution of the real labels and q the label distribution predicted by the trained model; the cross-entropy loss function can measure the similarity of p and q, and its formula is H(p, q) = −Σₓ p(x)·log q(x).
  • (x⁽ⁱ⁾, y⁽ⁱ⁾) represents the i-th group of data and its corresponding category label, where x⁽ⁱ⁾ is a p+1-dimensional vector and y⁽ⁱ⁾ takes one of the numbers 1, 2, ..., k representing the category label (assuming there are k image types in total).
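Assuming the overall loss is a weighted sum of the two parts described above (the exact formula image is not reproduced in the text, so the combination and the default value of the weighting factor are assumptions), the two terms can be sketched as:

```python
import numpy as np

def frobenius_loss(y_true, y_pred):
    """Color part: squared Frobenius norm of the difference of the color-channel matrices."""
    return np.sum(np.abs(y_true - y_pred) ** 2)

def cross_entropy(p, q, eps=1e-12):
    """Category part: H(p, q) = -sum_x p(x) * log q(x) for distributions p and q."""
    return -np.sum(p * np.log(q + eps))

def total_loss(color_true, color_pred, label_true, label_pred, alpha=1.0):
    """Weighted combination of the two parts; alpha is the weighting factor (value assumed)."""
    return frobenius_loss(color_true, color_pred) + alpha * cross_entropy(label_true, label_pred)
```

When the predicted label distribution matches the real one exactly, the cross-entropy term goes to its minimum, and a perfect color prediction drives the Frobenius term to zero.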
  • Step d Use the Softmax function to output the classification label of the object category.
  • Softmax is a generalization of logistic regression: logistic regression handles binary classification problems, and its generalization, Softmax regression, handles multi-class classification problems. For each input image on which color restoration is to be performed, the most similar category result can be obtained through this activation function.
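A numerically stable softmax, as used to output the classification label, might look like the following (a standard sketch, not the patent's exact implementation):

```python
import numpy as np

def softmax(scores):
    """Turn raw class scores into a probability distribution over the k categories."""
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / np.sum(exp)

def predict_label(scores):
    """The classification label is the category with the highest softmax probability."""
    return int(np.argmax(softmax(scores)))
```

Subtracting the maximum score before exponentiating leaves the probabilities unchanged but avoids overflow for large scores.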
  • the application also provides a black and white photo color restoration device based on neural network.
  • Referring to FIG. 2, which is a schematic diagram of the internal structure of a neural-network-based black-and-white photo color restoration device provided by an embodiment of this application.
  • the black-and-white photo color restoration device 1 based on a neural network may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer.
  • the neural network-based black and white photo color restoration device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the neural network-based black and white photo color restoration device 1, for example, the hard disk of the neural network-based black and white photo color restoration device 1.
  • In other embodiments, the memory 11 may also be an external storage device of the neural network-based black-and-white photo color restoration device 1, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash card equipped on the device 1.
  • the storage 11 may also include both an internal storage unit of the black and white photo color restoration device 1 based on a neural network and an external storage device.
  • The memory 11 can be used not only to store the application software installed in the neural network-based black-and-white photo color restoration device 1 and various types of data, such as the code of the neural network-based black-and-white photo color restoration program 01, but also to temporarily store data that has been output or is to be output.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip, and is used to run the program code or process the data stored in the memory 11, for example to execute the neural network-based black-and-white photo color restoration program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the device 1 and other electronic devices.
  • the device 1 may further include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the neural network-based black and white photo color restoration device 1 and to display a visualized user interface.
  • FIG. 2 only shows a neural network-based black-and-white photo color restoration device 1 with components 11-14 and a neural network-based black-and-white photo color restoration program 01; FIG. 2 does not constitute a limitation on the device 1, which may include fewer or more components than shown, or combine certain components, or use a different component arrangement.
  • the memory 11 stores the black and white photo color restoration program 01; the processor 12 executes the neural network-based black and white photo color restoration program 01 stored in the memory 11 to implement the following steps:
  • Step 1: Obtain a color image from the network, and convert the color image from an RGB color mode to a Lab color mode.
  • Step 2: Use an edge detection algorithm and a threshold segmentation method on the Lab-mode color image to locate objects in the image and segment the foreground objects.
  • Step 3: Construct a convolutional neural network (Convolutional Neural Networks, CNN) model that combines global priors with local image features.
  • Step 4: Using the Lab-mode image and the determined convolutional neural network model structure, train the convolutional neural network model to predict the category and color of objects in the image.
  • Step 5: Input the black-and-white image on which color restoration is to be performed, obtain the L component of the black-and-white image, input the L component into the trained convolutional neural network model to generate the corresponding a and b components, and finally combine the L, a and b components to generate a color image corresponding to the black-and-white image.
  • The implementation details of Steps 1 to 4 are the same as those of the method embodiment described above and are not repeated here.
  • the neural network-based black-and-white photo color restoration program can also be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete this application.
  • the module referred to in this application is a series of computer program instruction segments capable of completing specific functions, used to describe the execution process of the neural network-based black-and-white photo color restoration program in the neural network-based black-and-white photo color restoration device.
  • the black-and-white photo color restoration program 01 can be divided into an image acquisition and processing module 10, an image recognition module 20, a model construction module 30, a model training module 40, and an image color restoration module 50.
  • the image acquisition and processing module 10 is used to acquire a color image from the network and convert the color image from an RGB color mode to a Lab color mode.
  • converting the color image from the RGB color mode to the Lab color mode includes converting the color image from the RGB color mode to the XYZ color mode and converting the color image from the XYZ color to the Lab color mode, wherein:
  • the method for converting a color image from RGB color mode to XYZ color mode is as follows:
  • M is a 3x3 matrix:
  • the conversion of a color image from XYZ color to Lab color mode includes:
  • X1, Y1, and Z1 are the X, Y, and Z values after linear normalization, respectively.
  • the image recognition module 20 is used to: use an edge detection algorithm and a threshold segmentation method to locate objects in the image and segment foreground objects on the color image of the Lab color mode.
  • the edge detection algorithm includes the Canny edge detection algorithm
  • the positioning of the objects contained in the image includes:
  • the double threshold method is used to detect and connect the edges of the objects contained in the color image, and the positioning of the objects contained in the image is completed.
  • the threshold segmentation method includes setting a threshold T and traversing each pixel in the color image.
  • When the gray value of a pixel is greater than T, it is determined that the pixel belongs to the foreground object; when the gray value of a pixel is less than or equal to T, it is determined that the pixel belongs to the background object.
  • the model construction module 30 is used to construct a convolutional neural network model combining global priors and local image feature structures.
  • the model training module 40 is configured to use the color image of the Lab color mode and the determined convolutional neural network model structure to train the convolutional neural network model to predict the category and color of objects in the image.
  • the method for training the convolutional neural network model is as follows:
  • the image color restoration module 50 is used to: input the black-and-white image on which color restoration is to be performed, obtain the L component of the black-and-white image, and input the L component into the trained convolutional neural network model to generate the corresponding ab components. Finally, the three components L, a, and b are combined to generate a color image corresponding to the black-and-white image.
  • The functions and operation steps implemented when the image acquisition and processing module 10, the image recognition module 20, the model construction module 30, the model training module 40, and the image color restoration module 50 are executed are substantially the same as those in the foregoing embodiments, and are not repeated here.
  • an embodiment of the present application also proposes a computer-readable storage medium that stores a neural network-based black-and-white photo color restoration program, and the neural network-based black-and-white photo color restoration program can be executed by one or more processors to achieve the following operations:
  • Input the black-and-white image on which color restoration is to be performed, obtain the L component of the black-and-white image, and input the L component into the trained convolutional neural network model to generate the corresponding ab components; finally, the components L, a, and b are combined to generate a color image corresponding to the black-and-white image.
  • the specific implementation of the computer-readable storage medium of the present application is basically the same as the foregoing embodiments of the neural network-based black-and-white photo color restoration device and method, and will not be repeated here.

Abstract

本申请涉及人工智能技术,公开了一种黑白照片色彩恢复方法,包括:获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;构建结合全局先验和局部图像特征结构的卷积神经网络模型;利用所述Lab色彩模式的彩色图像以及所述卷积神经网络模型结构,训练卷积神经网络模型;输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。本申请还提出一种黑白照片色彩恢复装置以及一种计算机可读存储介质。本申请能够对黑白照片进行色彩恢复。

Description

基于神经网络的黑白照片色彩恢复方法、装置及存储介质
本申请要求于2019年1月23日提交中国专利局,申请号为201910063673.4、发明名称为“基于神经网络的黑白照片色彩恢复方法、装置及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种基于神经网络的黑白照片色彩恢复方法、装置及计算机可读存储介质。
背景技术
黑白照片作为早期摄影的产物,有着特殊的意义,黑白照片能够反映一定的时代感,但是无法完全展现当时的真实场景。所以对黑白照片进行色彩的恢复,能够唤起人们更深刻的记忆,还能够记录更完整的历史信息。而如今技术均是使用优化算法,通过不断迭代对黑白照片进行恢复,这样进行迭代效率过慢并且形成的彩色照片并不能让人满意。
发明内容
本申请提供一种基于神经网络的黑白照片色彩恢复方法、装置及计算机可读存储介质,其主要目的在于提供一种对黑白照片进行色彩恢复的方案。
为实现上述目的,本申请提供的一种基于神经网络的黑白照片色彩恢复方法,包括:
从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;
利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;
构建结合全局先验和局部图像特征结构的卷积神经网络模型;
利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测;
输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
此外,为实现上述目的,本申请还提供一种基于神经网络的黑白照片色彩恢复装置,该装置包括存储器和处理器,所述存储器中存储有可在所述处理器上运行的基于神经网络的黑白照片色彩恢复程序,所述基于神经网络的黑白照片色彩恢复程序被所述处理器执行时实现如下步骤:
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有基于神经网络的黑白照片色彩恢复程序,所述基于神经网络的黑白照片色彩恢复程序可被一个或者多个处理器执行,以实现如上所述的基于神经网络的黑白照片色彩恢复方法的步骤。
卷积神经网络的多层网络结构能自动提取输入数据的深层特征,不同层次的网络可以学习到不同层次的特征,从而大大提高对图像处理的准确率,进一步地,卷积神经网络通过局部感知和全局共享,保留了图像间的关联信息,并且大大减少了所需参数的数量,通过池化技术,进一步缩减网络参数数量,提高模型的鲁棒性,可以让模型持续地扩展深度,继续增加隐层,从而更高效地对图像进行处理,因此,本申请提出的基于神经网络的黑白照片色彩恢复方法、装置及计算机可读存储介质可以很好的实现黑白照片的色彩的恢复。
附图说明
图1为本申请一实施例提供的基于神经网络的黑白照片色彩恢复方法的流程示意图;
图2为本申请一实施例提供的基于神经网络的黑白照片色彩恢复装置的内部结构示意图;
图3为本申请一实施例提供的基于神经网络的黑白照片色彩恢复装置中基于神经网络的黑白照片色彩恢复程序的模块示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种基于神经网络的黑白照片色彩恢复方法。参照图1所示,为本申请一实施例提供的基于神经网络的黑白照片色彩恢复方法的流程示意图。该方法可以由一个装置执行,该装置可以由软件和/或硬件实现。
在本实施例中,基于神经网络的黑白照片色彩恢复方法包括:
S10、从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式。
所述彩色图像通常是指每个像素由R、G、B分量构成的RGB色彩模式的图像。
所述RGB色彩模式是工业界的一种颜色标准,是通过对红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的,RGB即是代表红、绿、蓝三个通道的颜色,这个标准几乎包括了人类视力所能感知的所有颜色,是目前运用最广的颜色系统之一。
所述Lab色彩模式是由照度(L)和有关色彩的a,b三个要素组成。L表示照度(Luminosity)，相当于亮度，a表示从红色至绿色的范围，b表示从蓝色至黄色的范围。
通常,RGB色彩模式无法直接转换成Lab色彩模式,因此,本申请较佳实施例首先将所述RGB色彩模式转换成XYZ色彩模式再转换成Lab色彩模式,即:RGB——XYZ——Lab。因此本申请所述将彩色图像从RGB色彩模式转化为Lab色彩模式步骤分为两部分:
一、将彩色图像从RGB色彩模式转化为XYZ色彩模式,方法如下:
R、G、B取值范围均为[0,255],XYZ的转换公式如下:
[X,Y,Z]=[M]*[R,G,B],
其中M为一个3x3矩阵:
Figure PCTCN2019088627-appb-000001
R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b)。
其中r、g、b为原始的色彩分量,g(x)是Gamma校正函数:
当x<0.018时,g(x)=4.5318*x,
当x>=0.018时，g(x)=1.099*x^0.45-0.099，
所述r、g、b以及R、G、B的取值范围则均为[0,1)。
计算完成后,XYZ的取值范围则有所变化,分别是:[0,0.9506),[0,1),[0,1.0890)。
二、将彩色图像从XYZ色彩转化为Lab色彩模式,方法如下:
L=116*f(Y1)-16,
a=500*(f(X1)-f(Y1)),
b=200*(f(Y1)-f(Z1)),
其中f(x)是一个类似Gamma函数的校正函数:
当x>0.008856时,f(x)=x^(1/3),
当x<=0.008856时,f(x)=(7.787*x)+(16/116),
X1、Y1、Z1分别是线性归一化之后的XYZ值,即它们的取值范围都是[0,1)。此外,函数f(x)的值域也和自变量一样都是[0,1)。
计算完成后，L的取值范围为[0,100)，而a和b则约为[-169,+169)和[-160,+160)。
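The two-step RGB→XYZ→Lab conversion described above can be sketched in NumPy as follows. Two assumptions are made, since parts of the original are given only as embedded images: the standard sRGB/D65 RGB→XYZ matrix is substituted for M (its row sums 0.9505, 1.0000, 1.0890 match the XYZ value ranges stated in the text), and "linear normalization" is taken to mean dividing X, Y, Z by those white-point maxima.

```python
import numpy as np

# Assumed matrix M (standard sRGB/D65); the application shows M only as an image.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def gamma(x):
    # g(x) from the text: 4.5318*x for x < 0.018, else 1.099*x^0.45 - 0.099
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.018, 4.5318 * x, 1.099 * np.power(x, 0.45) - 0.099)

def f(t):
    # f(x) from the text: t^(1/3) for t > 0.008856, else 7.787*t + 16/116
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(r, g, b):
    """r, g, b are the original color components in [0, 1)."""
    X, Y, Z = M @ gamma([r, g, b])
    # linear normalization of X, Y, Z to [0, 1) — assumed to divide by the
    # white-point maxima, i.e. the row sums of M
    X1, Y1, Z1 = X / M[0].sum(), Y / M[1].sum(), Z / M[2].sum()
    L = 116 * f(Y1) - 16
    a = 500 * (f(X1) - f(Y1))
    b_ = 200 * (f(Y1) - f(Z1))
    return float(L), float(a), float(b_)
```

As a sanity check, a pure white input (r = g = b = 1.0) should map to L ≈ 100 with a ≈ b ≈ 0, and pure black to L ≈ 0.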
S20、利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割。
所述边缘检测的基本思想认为边缘点是图像中像素灰度有阶跃变化或者屋顶变化的那些像素点,即灰度导数较大或极大的地方。本申请较佳实施例中,所述边缘检测算法为Canny边缘检测算法。本申请采用了Canny边缘检测的方法进行定位,包括下述步骤:
Ⅰ、用高斯滤波器对彩色图像进行平滑滤波。
假设f(x,y)是原始图像,G(x,y)是平滑后的图像,则有:
H(x,y)=exp[-(x^2+y^2)/(2σ^2)]，
G(x,y)=f(x,y)*H(x,y),
其中,*代表卷积,σ是个平滑程度参数,σ越大,高斯滤波器的频带就越宽,平滑程度就越好,x,y为像素坐标。
Ⅱ、用一阶偏导的有限差分计算梯度的幅值和方向。
所述幅值和方向可用直角坐标到极坐标的坐标转化公式来计算:
Figure PCTCN2019088627-appb-000002
θ[x,y]=arctan(Gx(x,y)/Gy(x,y))，
其中,M[x,y]反映了图像的边缘幅值,θ[x,y]反映了边缘的方向,使得M[x,y]取得局部最大值的方向角θ[x,y],就反映了边缘的方向。
Ⅲ、将非局部极大值点的幅度置为零从而得到细化的边缘。
Ⅳ、用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
本申请使用两个阈值T1和T2(T1<T2)，从而得到两个阈值边缘图像N1[i,j]和N2[i,j]。双阈值法要在N2[i,j]中把这些间断的边缘连接成完整的轮廓，因此当到达边缘的间断点时，就在N1[i,j]的邻域内寻找可以连接的边缘，直到N2[i,j]中的所有间断点连接起来为止。
本申请根据上述的边缘检测算法检测出图像中的所有物体的边缘,从而对图像中的物体进行了定位。
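The localization steps Ⅰ–Ⅳ above can be sketched in NumPy as follows. This is a simplified illustration rather than a full Canny implementation: non-maximum suppression (step Ⅲ) and the contour-linking part of the double-threshold step are omitted, and the direction formula follows the text's arctan(Gx/Gy).

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0, size=5):
    # Step I: H(x,y) = exp[-(x^2+y^2)/(2*sigma^2)], G = f * H (convolution)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    H = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    H /= H.sum()
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + size, j:j + size] * H)
    return out

def gradient(img):
    # Step II: first-order finite differences, then magnitude and direction
    gx = np.zeros_like(img); gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy = np.zeros_like(img); gy[:-1, :] = img[1:, :] - img[:-1, :]
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gx, gy)   # follows the text's arctan(Gx/Gy)
    return mag, theta

def double_threshold(mag, T1, T2):
    # Step IV (simplified): weak edge map N1 (>= T1) and strong map N2 (>= T2),
    # T1 < T2; linking broken contours via N1 neighborhoods is omitted here.
    return mag >= T1, mag >= T2

img = np.zeros((16, 16)); img[:, 8:] = 1.0        # vertical step edge
mag, theta = gradient(gaussian_smooth(img))
```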
应该了解,图像中的物体包括前景物体以及背景物体,通常由于前景物体的灰度与背景物体的灰度有明显的差异,因此本申请采用阈值分割法进行前景物体分割。
所述阈值分割法的基本思路是通过设置一个阈值T,并遍历图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
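The threshold-segmentation rule above (gray value greater than T is foreground, otherwise background) reduces to a one-line mask; a minimal NumPy sketch:

```python
import numpy as np

def segment_foreground(gray, T):
    """Foreground mask per the rule above: pixels with gray value > T are
    foreground; pixels with gray value <= T are background."""
    return np.asarray(gray) > T

gray = np.array([[ 12, 240],
                 [130,  99]])
mask = segment_foreground(gray, 100)   # True marks foreground pixels
```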
S30、构建结合全局先验和局部图像特征结构的卷积神经网络(Convolutional Neural Networks,CNN)模型。
本申请较佳实施例中的所述卷积神经网络模型是一种前馈神经网络,它的人工神经元可以响应一部分覆盖范围内的周围单元,其基本结构包括两层,其一为特征提取层,每个神经元的输入与前一层的局部接受域相连,并提取该局部的特征。一旦该局部特征被提取后,它与其它特征间的位置关系也随之确定下来;其二是特征映射层,网络的每个计算层由多个特征映射组成,每个特征映射是一个平面,平面上所有神经元的权值相等。特征映射结构采用影响函数核小的sigmoid函数作为卷积网络的激活函数,使得特征映射具有位移不变性。此外,由于一个映射面上的神经元共享权值,因而减少了网络自由参数的个数。卷积神经网络中的每一个卷积层都紧跟着一个用来求局部平均与二次提取的计算层,这种特有的两次特征提取结构减小了特征分辨率。
本申请较佳实施例中,所述卷积神经网络模型具有如下结构:
输入层:输入层是整个卷积神经网络唯一的数据输入口,主要用于定义不同类型的数据输入;
卷积层:对输入卷积层的数据进行卷积操作,输出卷积后的特征图;
下采样层(Pooling层):Pooling层对传入数据在空间维度上进行下采样操作,使得输入的特征图的长和宽变为原来的一半;
全连接层:全连接层和普通神经网络一样,每个神经元都与输入的所有神经元相互连接,然后经过激活函数进行计算;
输出层:输出层也被称为分类层,在最后输出时会计算每一类别的分类分值。
本申请的卷积神经网络通过卷积层获取图像特征,低层的卷积层分为共享参数的两部分,一部分用于预测图像像素值,一部分用于预测图像中的物体类别。由于类别信息抽象程度较高,所以卷积处理的是固定尺寸图像,将全局特征和中间层特征进行融合,此时的特征图包含了比较丰富的信息,每一个像素既包含了本身和邻域的信息,也包含的全局的类别信息,对于最终的预测更为准确。
在本申请实施例中,输入层为输入的图像,该图像依次进入一个7*7的卷积层,3*3的最大值池化层,随后进入4个卷积模块。每个卷积模块从具有线性投影的构建块开始,随后是具有本体映射的不同数量的构建块,最后在softmax层输出预测图像的像素值和预测图像的类别。
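The front of the network described above — a 7*7 convolution followed by 3*3 max pooling — can be sketched in plain NumPy to show how the feature-map sizes evolve. This is a single-channel toy forward pass only, not the four residual-style modules or the softmax head; the stride-2 pooling here (which roughly halves the spatial size, as the pooling-layer description states) is an assumption.

```python
import numpy as np

def conv2d(x, w, pad):
    # single-channel convolution, stride 1, `pad` pixels on each border
    f = w.shape[0]
    xp = np.pad(x, pad)
    H, W = xp.shape[0] - f + 1, xp.shape[1] - f + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + f, j:j + f] * w)
    return out

def maxpool(x, win, stride):
    # max pooling with window `win` and the given stride
    H = (x.shape[0] - win) // stride + 1
    W = (x.shape[1] - win) // stride + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = x[i * stride:i * stride + win,
                          j * stride:j * stride + win].max()
    return out

x = np.random.rand(32, 32)                    # toy single-channel input
y = conv2d(x, np.random.rand(7, 7), pad=3)    # 7*7 conv, "same" padding
z = maxpool(y, win=3, stride=2)               # 3*3 max pooling, stride 2
```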
S40、利用所述Lab色彩模式的图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像内物体类别和颜色的预测。
其中,所述物体类别是指图像中包含的物体的类别,如人物、动物、植物、车辆等。
本申请将所述Lab色彩模式的图像中的L分量输入到所述卷积神经网络模型中,从而训练所述卷积神经网络模型进行颜色的预测。
本申请较佳实施例中,所述训练卷积神经网络模型的方法如下:
步骤a:确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
步骤b：对所述L分量进行卷积操作。本申请较佳实施例中，所述卷积操作是指对图像和滤波矩阵做内积的操作。可选地，在进行卷积操作前，本申请需要对图像在边界上进行填充(Padding)，以增加矩阵的大小。本申请在卷积神经网络模型的卷积层设置了1组过滤器{filter0,filter1}，分别应用在图像颜色通道和类别通道上来生成1组特征。每个过滤器的规模是d*h，其中，d是图像的维数，h是窗口的大小。设每个方向扩展像素点数量为p，则填充后图片的大小为(n+2p)*(n+2p)，若滤波器大小保持不变，则输出图片大小为(n+2p-f+1)*(n+2p-f+1)。
步骤c:构建评价网络模型输出的预测值与真实值之间的差异的损失函数。在神经网络中,损失函数用来评价网络模型输出的预测值
Figure PCTCN2019088627-appb-000003
与真实值Y之间的差异。这里用
Figure PCTCN2019088627-appb-000004
来表示损失函数，它是一个非负实数函数，损失值越小，网络模型的性能越好。本申请所采用的损失函数为：
Figure PCTCN2019088627-appb-000005
其中，颜色部分的损失函数采用Frobenius范数，类别部分的损失函数采用交叉熵(Cross Entropy)，α为权重因子。Frobenius范数是一种矩阵范数，定义为矩阵A各项元素的绝对值平方的总和的平方根，即
Figure PCTCN2019088627-appb-000006
Figure PCTCN2019088627-appb-000007
交叉熵主要用于度量两个概率分布间的差异性信息,在神经网络中,假设p表示真实标记的分布,q则为训练后的模型的预测标记分布,交叉熵损失函数可以衡量p与q的相似性,其公式为
Figure PCTCN2019088627-appb-000008
其中,假设一共有m组已知样本,(x (i),y (i))表示第i组数据及其对应的类别标记。
Figure PCTCN2019088627-appb-000009
为p+1维向量,y (i)则表示取1,2…k中的一个表示类别标号的一个数(假设共有k类图像类型)。
步骤d:用Softmax函数输出物体类别的分类标签。Softmax是对逻辑回归的推广,逻辑回归用于处理二分类问题,其推广的Softmax回归则用于处理多分类问题。根据所输入需要执行色彩恢复的图像不同,通过该激活函数获得相似度最高的结果。
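Steps c and d can be sketched as follows: a minimal NumPy version of the Frobenius norm for the pixel-color term, cross entropy for the category term, the weight α, and the softmax activation. The exact combination in the application's loss formula is given only as an image, so the simple weighted sum below is an assumption.

```python
import numpy as np

def frobenius(A):
    # ||A||_F: square root of the sum of squared absolute values of A's entries
    return np.sqrt(np.sum(np.abs(A) ** 2))

def cross_entropy(p, q, eps=1e-12):
    # measures the difference between the true distribution p and the
    # predicted distribution q; eps guards against log(0)
    return -np.sum(np.asarray(p) * np.log(np.asarray(q) + eps))

def softmax(z):
    # generalization of logistic regression to k classes
    e = np.exp(z - np.max(z))        # shift for numerical stability
    return e / e.sum()

def total_loss(color_pred, color_true, class_pred, class_true, alpha=1.0):
    # assumed form: pixel-color term + alpha * category term
    return (frobenius(color_pred - color_true)
            + alpha * cross_entropy(class_true, class_pred))
```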
S50、输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
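The restoration step above amounts to the following pipeline sketch. `predict_ab` is a hypothetical stand-in for the trained CNN (it returns zero chroma here), included only to make the channel bookkeeping — L in, ab out, then combine into a three-channel Lab image — concrete.

```python
import numpy as np

def predict_ab(L):
    # hypothetical placeholder for the trained CNN: maps the (H, W) L channel
    # to an (H, W, 2) ab prediction; returns zero chroma in this sketch
    return np.zeros(L.shape + (2,))

def restore_color(L):
    ab = predict_ab(L)                               # generate ab components
    lab = np.concatenate([L[..., None], ab], -1)     # combine L, a, b -> Lab image
    return lab

lab = restore_color(np.full((4, 4), 50.0))           # mid-gray black-and-white input
```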
本申请还提供一种基于神经网络的黑白照片色彩恢复装置。参照图2所示,为本申请一实施例提供的基于神经网络的黑白照片色彩恢复装置的内部结构示意图。
在本实施例中,基于神经网络的黑白照片色彩恢复装置1可以是PC(Personal Computer,个人电脑),也可以是智能手机、平板电脑、便携计算机等终端设备。该基于神经网络的黑白照片色彩恢复装置1至少包括存储器11、处理器12,通信总线13,以及网络接口14。
其中，存储器11至少包括一种类型的可读存储介质，所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器（例如，SD或DX存储器等）、磁性存储器、磁盘、光盘等。存储器11在一些实施例中可以是基于神经网络的黑白照片色彩恢复装置1的内部存储单元，例如该基于神经网络的黑白照片色彩恢复装置1的硬盘。存储器11在另一些实施例中也可以是基于神经网络的黑白照片色彩恢复装置1的外部存储设备，例如基于神经网络的黑白照片色彩恢复装置1上配备的插接式硬盘，智能存储卡（Smart Media Card，SMC），安全数字（Secure Digital，SD）卡，闪存卡（Flash Card）等。进一步地，存储器11还可以既包括基于神经网络的黑白照片色彩恢复装置1的内部存储单元也包括外部存储设备。存储器11不仅可以用于存储安装于基于神经网络的黑白照片色彩恢复装置1的应用软件及各类数据，例如基于神经网络的黑白照片色彩恢复程序01的代码等，还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行基于神经网络的黑白照片色彩恢复程序01等。
通信总线13用于实现这些组件之间的连接通信。
网络接口14可选的可以包括标准的有线接口、无线接口(如WI-FI接口),通常用于在该装置1与其他电子设备之间建立通信连接。
可选地,该装置1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在基于神经网络的黑白照片色彩恢复装置1中处理的信息以及用于显示可视化的用户界面。
图2仅示出了具有组件11-14以及基于神经网络的黑白照片色彩恢复程序01的基于神经网络的黑白照片色彩恢复装置1,本领域技术人员可以理解的是,图1示出的结构并不构成对基于神经网络的黑白照片色彩恢复装置1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
在图2所示的装置1实施例中,存储器11中存储有黑白照片色彩恢复程序01;处理器12执行存储器11中存储的基于神经网络的黑白照片色彩恢复程序01时实现如下步骤:
步骤一、从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式。
所述彩色图像通常是指每个像素由R、G、B分量构成的RGB色彩模式的图像。
所述RGB色彩模式是工业界的一种颜色标准,是通过对红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的,RGB即是代表红、绿、蓝三个通道的颜色,这个标准几乎包括了人类视力所能感知的所有颜色,是目前运用最广的颜色系统之一。
所述Lab色彩模式是由照度(L)和有关色彩的a,b三个要素组成。L表示照度(Luminosity),相当于亮度,a表示从红色至绿色的范围,b表示从蓝色至黄色的范围。
通常，RGB色彩模式无法直接转换成Lab色彩模式，因此，本申请较佳实施例首先将所述RGB色彩模式转换成XYZ色彩模式再转换成Lab色彩模式，即：RGB——XYZ——Lab。因此本申请所述将彩色图像从RGB色彩模式转化为Lab色彩模式步骤分为两部分：
第一部分:将彩色图像从RGB色彩模式转化为XYZ色彩模式,方法如下:
R、G、B取值范围均为[0,255],XYZ的转换公式如下:
[X,Y,Z]=[M]*[R,G,B],
其中M为一个3x3矩阵:
Figure PCTCN2019088627-appb-000010
R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b)。
其中r、g、b为原始的色彩分量,g(x)是Gamma校正函数:
当x<0.018时,g(x)=4.5318*x,
当x>=0.018时，g(x)=1.099*x^0.45-0.099，
所述r、g、b以及R、G、B的取值范围则均为[0,1)。
计算完成后,XYZ的取值范围则有所变化,分别是:[0,0.9506),[0,1),[0,1.0890)。
第二部分:将彩色图像从XYZ色彩转化为Lab色彩模式,方法如下:
L=116*f(Y1)-16,
a=500*(f(X1)-f(Y1)),
b=200*(f(Y1)-f(Z1)),
其中f(x)是一个类似Gamma函数的校正函数:
当x>0.008856时,f(x)=x^(1/3),
当x<=0.008856时,f(x)=(7.787*x)+(16/116),
X1、Y1、Z1分别是线性归一化之后的XYZ值,即它们的取值范围都是[0,1)。此外,函数f(x)的值域也和自变量一样都是[0,1)。
计算完成后，L的取值范围为[0,100)，而a和b则约为[-169,+169)和[-160,+160)。
步骤二、利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割。
所述边缘检测的基本思想认为边缘点是图像中像素灰度有阶跃变化或者屋顶变化的那些像素点,即灰度导数较大或极大的地方。本申请较佳实施例中,所述边缘检测算法为Canny边缘检测算法。本申请采用了Canny边缘检测的方法进行定位,包括下述步骤:
Ⅰ、用高斯滤波器对彩色图像进行平滑滤波。
假设f(x,y)是原始图像,G(x,y)是平滑后的图像,则有:
H(x,y)=exp[-(x^2+y^2)/(2σ^2)]，
G(x,y)=f(x,y)*H(x,y),
其中,*代表卷积,σ是个平滑程度参数,σ越大,高斯滤波器的频带就越宽,平滑程度就越好,x,y为像素坐标。
Ⅱ、用一阶偏导的有限差分计算梯度的幅值和方向。
所述幅值和方向可用直角坐标到极坐标的坐标转化公式来计算:
Figure PCTCN2019088627-appb-000011
θ[x,y]=arctan(Gx(x,y)/Gy(x,y))，
其中,M[x,y]反映了图像的边缘幅值,θ[x,y]反映了边缘的方向,使得M[x,y]取得局部最大值的方向角θ[x,y],就反映了边缘的方向。
Ⅲ、将非局部极大值点的幅度置为零从而得到细化的边缘。
Ⅳ、用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
本申请使用两个阈值T1和T2(T1<T2)，从而得到两个阈值边缘图像N1[i,j]和N2[i,j]。双阈值法要在N2[i,j]中把这些间断的边缘连接成完整的轮廓，因此当到达边缘的间断点时，就在N1[i,j]的邻域内寻找可以连接的边缘，直到N2[i,j]中的所有间断点连接起来为止。
本申请根据上述的边缘检测算法检测出图像中的所有物体的边缘,从而对图像中的物体进行了定位。
应该了解,图像中的物体包括前景物体以及背景物体,通常由于前景物体的灰度与背景物体的灰度有明显的差异,因此本申请采用阈值分割法进行前景物体分割。
所述阈值分割法的基本思路是通过设置一个阈值T,并遍历图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
步骤三、构建结合全局先验和局部图像特征结构的卷积神经网络(Convolutional Neural Networks,CNN)模型。
本申请较佳实施例中的所述卷积神经网络模型是一种前馈神经网络,它的人工神经元可以响应一部分覆盖范围内的周围单元,其基本结构包括两层,其一为特征提取层,每个神经元的输入与前一层的局部接受域相连,并提取该局部的特征。一旦该局部特征被提取后,它与其它特征间的位置关系也随之确定下来;其二是特征映射层,网络的每个计算层由多个特征映射组成,每个特征映射是一个平面,平面上所有神经元的权值相等。特征映射结构采用影响函数核小的sigmoid函数作为卷积网络的激活函数,使得特征映射具有位移不变性。此外,由于一个映射面上的神经元共享权值,因而减少了网络自由参数的个数。卷积神经网络中的每一个卷积层都紧跟着一个用来求局部平均与二次提取的计算层,这种特有的两次特征提取结构减小了特征分辨率。
本申请较佳实施例中,所述卷积神经网络模型具有如下结构:
输入层:输入层是整个卷积神经网络唯一的数据输入口,主要用于定义不同类型的数据输入;
卷积层:对输入卷积层的数据进行卷积操作,输出卷积后的特征图;
下采样层(Pooling层):Pooling层对传入数据在空间维度上进行下采样操作,使得输入的特征图的长和宽变为原来的一半;
全连接层:全连接层和普通神经网络一样,每个神经元都与输入的所有神经元相互连接,然后经过激活函数进行计算;
输出层:输出层也被称为分类层,在最后输出时会计算每一类别的分类分值。
本申请的卷积神经网络通过卷积层获取图像特征,低层的卷积层分为共享参数的两部分,一部分用于预测图像像素值,一部分用于预测图像内物体的类别。由于类别信息抽象程度较高,所以卷积处理的是固定尺寸图像,将全局特征和中间层特征进行融合,此时的特征图包含了比较丰富的信息,每一个像素既包含了本身和邻域的信息,也包含的全局的类别信息,对于最终的预测更为准确。
在本申请实施例中,输入层为输入的图像,该图像依次进入一个7*7的卷积层,3*3的最大值池化层,随后进入4个卷积模块。每个卷积模块从具有线性投影的构建块开始,随后是具有本体映射的不同数量的构建块,最后在softmax层输出预测图像的像素值和预测图像的类别。
步骤四、利用所述Lab色彩模式的图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像内物体类别和颜色的预测。
其中,所述物体类别是指图像中包含的物体的类别,如人物、动物、植物、车辆等。
本申请将所述Lab色彩模式的图像中的L分量输入到所述卷积神经网络模型中,从而训练所述卷积神经网络模型进行颜色的预测。
本申请较佳实施例中,所述训练卷积神经网络模型的方法如下:
步骤a:确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测。
步骤b：对所述L分量进行卷积操作。本申请较佳实施例中，所述卷积操作是指对图像和滤波矩阵做内积的操作。可选地，在进行卷积操作前，本申请需要对图像在边界上进行填充(Padding)，以增加矩阵的大小。本申请在卷积神经网络模型的卷积层设置了1组过滤器{filter0,filter1}，分别应用在图像颜色通道和类别通道上来生成1组特征。每个过滤器的规模是d*h，其中，d是图像的维数，h是窗口的大小。设每个方向扩展像素点数量为p，则填充后图片的大小为(n+2p)*(n+2p)，若滤波器大小保持不变，则输出图片大小为(n+2p-f+1)*(n+2p-f+1)。
步骤c:构建评价网络模型输出的预测值与真实值之间的差异的损失函数。在神经网络中,损失函数用来评价网络模型输出的预测值
Figure PCTCN2019088627-appb-000012
与真实值Y之间的差异。这里用
Figure PCTCN2019088627-appb-000013
来表示损失函数，它是一个非负实数函数，损失值越小，网络模型的性能越好。本申请所采用的损失函数为：
Figure PCTCN2019088627-appb-000014
其中，像素颜色部分的损失函数采用Frobenius范数，类别部分的损失函数采用交叉熵(Cross Entropy)，α为权重因子。Frobenius范数是一种矩阵范数，定义为矩阵A各项元素的绝对值平方的总和的平方根，即
Figure PCTCN2019088627-appb-000015
交叉熵主要用于度量两个概率分布间的差异性信息,在神经网络中,假设p表示真实标记的分布,q则为训练后的模型的预测标记分布,交叉熵损失函数可以衡量p与q的相似性,其公式为
Figure PCTCN2019088627-appb-000016
其中,假设一共有m组已知样本,(x (i),y (i))表示第i组数据及其对应的类别标记。
Figure PCTCN2019088627-appb-000017
为p+1维向量,y (i)则表示取1,2…k中的一个表示类别标号的一个数(假设共有k类图像类型)。
步骤d:用Softmax函数输出物体类别的分类标签。Softmax是对逻辑回归的推广,逻辑回归用于处理二分类问题,其推广的Softmax回归则用于处理多分类问题。根据所输入需要执行色彩恢复的图像不同,通过该激活函数获得相似度最高的结果。
步骤五、输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
可选地,在其他实施例中,基于神经网络的黑白照片色彩恢复程序还可以被分割为一个或者多个模块,一个或者多个模块被存储于存储器11中,并由一个或多个处理器(本实施例为处理器12)所执行以完成本申请,本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段,用于描述基于神经网络的黑白照片色彩恢复程序在基于神经网络的黑白照片色彩恢复装置中的执行过程。
例如,参照图3所示,为本申请基于神经网络的黑白照片色彩恢复装置一实施例中的黑白照片色彩恢复程序的程序模块示意图,该实施例中,所述黑白照片色彩恢复程序01可以被分割为图像获取及处理模块10、图像识别模块20、模型构建模块30、模型训练模块40和图像色彩恢复模块50,示例性地:
图像获取及处理模块10用于:从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式。
可选地,将所述彩色图像从RGB色彩模式转化为Lab色彩模式包括将彩色图像从RGB色彩模式转化为XYZ色彩模式以及将彩色图像从XYZ色彩转化为Lab色彩模式,其中:
所述将彩色图像从RGB色彩模式转化为XYZ色彩模式方法如下:
[X,Y,Z]=[M]*[R,G,B],
其中,M为一个3x3矩阵:
Figure PCTCN2019088627-appb-000018
R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b),而r、g、b为原始的色彩分量,g(x)是Gamma校正函数,
当x<0.018时,g(x)=4.5318*x,
当x>=0.018时，g(x)=1.099*x^0.45-0.099；
所述将彩色图像从XYZ色彩转化为Lab色彩模式包括:
L=116*f(Y1)-16,
a=500*(f(X1)-f(Y1)),
b=200*(f(Y1)-f(Z1)),
其中f(x)是一个类似Gamma函数的校正函数，
当x>0.008856时,f(x)=x^(1/3),
当x<=0.008856时,f(x)=(7.787*x)+(16/116),
X1、Y1、Z1分别是线性归一化之后的X、Y、Z值。
图像识别模块20用于:利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割。
可选地,所述边缘检测算法包括Canny边缘检测算法,以及所述图像内所包含的物体的定位包括:
用高斯滤波器对所述彩色图像进行平滑滤波;
用一阶偏导的有限差分计算所述彩色图像的梯度的幅值和方向;
将非局部极大值点的幅度置为零,以得到细化的边缘;及
用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
可选地,所述阈值分割法包括设置一个阈值T,并遍历所述彩色图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
模型构建模块30用于:构建结合全局先验和局部图像特征结构的卷积神经网络模型。
模型训练模块40用于:利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测。
可选地,所述训练卷积神经网络模型的方法如下:
确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
对所述L分量进行卷积操作;
构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
用Softmax函数输出物体类别的分类标签。
图像色彩恢复模块50用于:输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
上述图像获取及处理模块10、图像识别模块20、模型构建模块30、模型训练模块40和图像色彩恢复模块50等程序模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质上存储有基于神经网络的黑白照片色彩恢复程序,所述基于神经网络的黑白照片色彩恢复程序可被一个或多个处理器执行,以实现如下操作:
从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;
利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;
构建结合全局先验和局部图像特征结构的卷积神经网络模型;
利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测;
输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
本申请计算机可读存储介质具体实施方式与上述基于神经网络的黑白照片色彩恢复装置和方法各实施例基本相同,在此不作累述。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种基于神经网络的黑白照片色彩恢复方法,其特征在于,所述方法包括:
    从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;
    利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;
    构建结合全局先验和局部图像特征结构的卷积神经网络模型;
    利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测;
    输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
  2. 如权利要求1所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,将所述彩色图像从RGB色彩模式转化为Lab色彩模式包括将彩色图像从RGB色彩模式转化为XYZ色彩模式以及将彩色图像从XYZ色彩转化为Lab色彩模式,其中:
    所述将彩色图像从RGB色彩模式转化为XYZ色彩模式方法如下:
    [X,Y,Z]=[M]*[R,G,B],
    其中,M为一个3x3矩阵:
    Figure PCTCN2019088627-appb-100001
    R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b),而r、g、b为原始的色彩分量,g(x)是Gamma校正函数,
    当x<0.018时,g(x)=4.5318*x,
    当x>=0.018时，g(x)=1.099*x^0.45-0.099；
    所述将彩色图像从XYZ色彩转化为Lab色彩模式包括:
    L=116*f(Y1)-16,
    a=500*(f(X1)-f(Y1)),
    b=200*(f(Y1)-f(Z1)),
    其中f(x)是一个类似Gamma函数的校正函数，
    当x>0.008856时,f(x)=x^(1/3),
    当x<=0.008856时,f(x)=(7.787*x)+(16/116),
    X1、Y1、Z1分别是线性归一化之后的X、Y、Z值。
  3. 如权利要求1所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,所述边缘检测算法包括Canny边缘检测算法,以及所述图像内所包含的物体的定位包括:
    用高斯滤波器对所述彩色图像进行平滑滤波;
    用一阶偏导的有限差分计算所述彩色图像的梯度的幅值和方向;
    将非局部极大值点的幅度置为零,以得到细化的边缘;及
    用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
  4. 如权利要求1至3中任意一项所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,所述阈值分割法包括设置一个阈值T,并遍历所述彩色图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
  5. 如权利要求1所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  6. 如权利要求2或3所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  7. 如权利要求4所述的基于神经网络的黑白照片色彩恢复方法,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  8. 一种基于神经网络的黑白照片色彩恢复装置,其特征在于,所述装置包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的基于神经网络的黑白照片色彩恢复程序,所述基于神经网络的黑白照片色彩恢复程序被所述处理器执行时实现如下步骤:
    从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;
    利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;
    构建结合全局先验和局部图像特征结构的卷积神经网络模型;
    利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测;
    输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
  9. 如权利要求8所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,将所述彩色图像从RGB色彩模式转化为Lab色彩模式包括将彩色图像从RGB色彩模式转化为XYZ色彩模式以及将彩色图像从XYZ色彩转化为Lab色彩模式,其中:
    所述将彩色图像从RGB色彩模式转化为XYZ色彩模式方法如下:
    [X,Y,Z]=[M]*[R,G,B],
    其中,M为一个3x3矩阵:
    Figure PCTCN2019088627-appb-100002
    R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b),而r、g、b为原始的色彩分量,g(x)是Gamma校正函数,
    当x<0.018时,g(x)=4.5318*x,
    当x>=0.018时，g(x)=1.099*x^0.45-0.099；
    所述将彩色图像从XYZ色彩转化为Lab色彩模式包括:
    L=116*f(Y1)-16,
    a=500*(f(X1)-f(Y1)),
    b=200*(f(Y1)-f(Z1)),
    其中f(x)是一个类似Gamma函数的校正函数，
    当x>0.008856时,f(x)=x^(1/3),
    当x<=0.008856时,f(x)=(7.787*x)+(16/116),
    X1、Y1、Z1分别是线性归一化之后的X、Y、Z值。
  10. 如权利要求8所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,所述边缘检测算法包括Canny边缘检测算法,以及所述图像内所包含的物体的定位包括:
    用高斯滤波器对所述彩色图像进行平滑滤波;
    用一阶偏导的有限差分计算所述彩色图像的梯度的幅值和方向;
    将非局部极大值点的幅度置为零,以得到细化的边缘;及
    用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
  11. 如权利要求8或9或10所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,所述阈值分割法包括设置一个阈值T,并遍历所述彩色图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
  12. 如权利要求8所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  13. 如权利要求9或10所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  14. 如权利要求11所述的基于神经网络的黑白照片色彩恢复装置,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有基于神经网络的黑白照片色彩恢复程序,所述基于神经网络的黑白照片色彩恢复程序可被一个或者多个处理器执行,以实现如权利要求1至5中任一项所述的基于神经网络的黑白照片色彩恢复方法的步骤。
    如下步骤:
    从网络中获取彩色图像,并将所述彩色图像从RGB色彩模式转化为Lab色彩模式;
    利用边缘检测算法和阈值分割法对Lab色彩模式的彩色图像进行图像内物体的定位和前景物体的分割;
    构建结合全局先验和局部图像特征结构的卷积神经网络模型;
    利用所述Lab色彩模式的彩色图像以及上述确定的卷积神经网络模型结构,训练卷积神经网络模型进行图像中物体类别和颜色的预测;
    输入需要执行色彩恢复的黑白图像,获取所述黑白图像中的L分量,并将所述L分量输入训练好的卷积神经网络模型中,生成对应的ab分量,最后将L、a、b三个分量结合产生所述黑白图像对应的彩色图像。
  16. 如权利要求15所述的计算机可读存储介质，其特征在于，将所述彩色图像从RGB色彩模式转化为Lab色彩模式包括将彩色图像从RGB色彩模式转化为XYZ色彩模式以及将彩色图像从XYZ色彩转化为Lab色彩模式，其中：
    所述将彩色图像从RGB色彩模式转化为XYZ色彩模式方法如下:
    [X,Y,Z]=[M]*[R,G,B],
    其中,M为一个3x3矩阵:
    Figure PCTCN2019088627-appb-100003
    R、G、B是经过Gamma校正的色彩分量:R=g(r),G=g(g),B=g(b),而r、g、b为原始的色彩分量,g(x)是Gamma校正函数,
    当x<0.018时,g(x)=4.5318*x,
    当x>=0.018时，g(x)=1.099*x^0.45-0.099；
    所述将彩色图像从XYZ色彩转化为Lab色彩模式包括:
    L=116*f(Y1)-16,
    a=500*(f(X1)-f(Y1)),
    b=200*(f(Y1)-f(Z1)),
    其中f(x)是一个类似Gamma函数的校正函数，
    当x>0.008856时,f(x)=x^(1/3),
    当x<=0.008856时,f(x)=(7.787*x)+(16/116),
    X1、Y1、Z1分别是线性归一化之后的X、Y、Z值。
  17. 如权利要求15所述的计算机可读存储介质,其特征在于,所述边缘检测算法包括Canny边缘检测算法,以及所述图像内所包含的物体的定位包括:
    用高斯滤波器对所述彩色图像进行平滑滤波;
    用一阶偏导的有限差分计算所述彩色图像的梯度的幅值和方向;
    将非局部极大值点的幅度置为零,以得到细化的边缘;及
    用双阈值法检测和连接所述彩色图像中的所包含的物体的边缘,完成所述图像内所包含的物体的定位。
  18. 如权利要求15或16或17所述的计算机可读存储介质,其特征在于,所述阈值分割法包括设置一个阈值T,并遍历所述彩色图像中的每个像素点,当像素点的灰度值大于T时,判断该像素点属于前景物体,当像素点的灰度值小于或者等于T,判断该像素点属于背景物体。
  19. 如权利要求15或16或17所述的计算机可读存储介质,其特征在于,所述卷积神经网络模型的训练方法如下:
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
  20. 如权利要求18所述的计算机可读存储介质，其特征在于，所述卷积神经网络模型的训练方法如下：
    确定输入输出向量,其中,所述输入向量为图像的L分量,输出向量为对图像内物体类别和颜色的预测;
    对所述L分量进行卷积操作;
    构建评价网络模型输出的预测值与真实值之间的差异的损失函数;及
    用Softmax函数输出物体类别的分类标签。
PCT/CN2019/088627 2019-01-23 2019-05-27 基于神经网络的黑白照片色彩恢复方法、装置及存储介质 WO2020151148A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910063673.4 2019-01-23
CN201910063673.4A CN109920018A (zh) 2019-01-23 2019-01-23 基于神经网络的黑白照片色彩恢复方法、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2020151148A1 true WO2020151148A1 (zh) 2020-07-30

Family

ID=66960503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088627 WO2020151148A1 (zh) 2019-01-23 2019-05-27 基于神经网络的黑白照片色彩恢复方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN109920018A (zh)
WO (1) WO2020151148A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675462B (zh) * 2019-09-17 2023-06-16 天津大学 一种基于卷积神经网络的灰度图像彩色化方法
CN111292255B (zh) * 2020-01-10 2023-01-17 电子科技大学 一种基于rgb图像的填充与修正技术
CN111311695B (zh) * 2020-02-12 2022-11-25 东南大学 一种基于卷积神经网络的清水混凝土表面色差分析方法
CN111476863B (zh) * 2020-04-02 2024-03-12 北京奇艺世纪科技有限公司 一种黑白漫画上色的方法、装置、电子设备及存储介质
CN112884866B (zh) * 2021-01-08 2023-06-06 北京奇艺世纪科技有限公司 一种黑白视频的上色方法、装置、设备及存储介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460770A (zh) * 2016-12-13 2018-08-28 华为技术有限公司 抠图方法及装置
CN108921932A (zh) * 2018-06-28 2018-11-30 福州大学 一种基于卷积神经网络的黑白人物图片实时生成多种合理着色的方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559719B (zh) * 2013-11-20 2016-05-04 电子科技大学 一种交互式图像分割方法
CN104574405B (zh) * 2015-01-15 2018-11-13 北京天航华创科技股份有限公司 一种基于Lab空间的彩色图像阈值分割方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460770A (zh) * 2016-12-13 2018-08-28 华为技术有限公司 抠图方法及装置
CN108921932A (zh) * 2018-06-28 2018-11-30 福州大学 一种基于卷积神经网络的黑白人物图片实时生成多种合理着色的方法

Also Published As

Publication number Publication date
CN109920018A (zh) 2019-06-21

Similar Documents

Publication Publication Date Title
WO2020151148A1 (zh) 基于神经网络的黑白照片色彩恢复方法、装置及存储介质
CN110648375B (zh) 基于参考信息的图像彩色化
CN109325398B (zh) 一种基于迁移学习的人脸属性分析方法
CN108765278B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
CN108388882B (zh) 基于全局-局部rgb-d多模态的手势识别方法
Zhang et al. Deep hierarchical guidance and regularization learning for end-to-end depth estimation
CN105354248A (zh) 基于灰度的分布式图像底层特征识别方法及系统
CN107066916B (zh) 基于反卷积神经网络的场景语义分割方法
CN104484658A (zh) 一种基于多通道卷积神经网络的人脸性别识别方法及装置
CN111989689A (zh) 用于识别图像内目标的方法和用于执行该方法的移动装置
WO2021115242A1 (zh) 一种超分辨率图像处理方法以及相关装置
CN109871845B (zh) 证件图像提取方法及终端设备
CN111476710B (zh) 基于移动平台的视频换脸方法及系统
EP2863362B1 (en) Method and apparatus for scene segmentation from focal stack images
CN112906550B (zh) 一种基于分水岭变换的静态手势识别方法
CN110852311A (zh) 一种三维人手关键点定位方法及装置
CN111768415A (zh) 一种无量化池化的图像实例分割方法
CN111259710B (zh) 采用停车位框线、端点的停车位结构检测模型训练方法
CN112651406A (zh) 一种深度感知和多模态自动融合的rgb-d显著性目标检测方法
CN109064431B (zh) 一种图片亮度调节方法、设备及其存储介质
CN106960188B (zh) 天气图像分类方法及装置
CN116597267B (zh) 图像识别方法、装置、计算机设备和存储介质
US10991085B2 (en) Classifying panoramic images
CN115661482B (zh) 一种基于联合注意力的rgb-t显著目标检测方法
WO2023174063A1 (zh) 背景替换的方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911829

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19911829

Country of ref document: EP

Kind code of ref document: A1