CN112270397A - Color space conversion method based on deep neural network - Google Patents

Color space conversion method based on deep neural network Download PDF

Info

Publication number
CN112270397A
CN112270397A (application CN202011157124.2A); granted publication CN112270397B
Authority
CN
China
Prior art keywords
layer
neural network
color space
training
dbn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011157124.2A
Other languages
Chinese (zh)
Other versions
CN112270397B (en
Inventor
苏泽斌
杨金锴
李鹏飞
景军锋
张缓缓
Current Assignee
Xi'an Polytechnic University
Original Assignee
Xi'an Polytechnic University
Priority date
Filing date
Publication date
Application filed by Xi'an Polytechnic University
Priority to CN202011157124.2A
Publication of CN112270397A
Application granted
Publication of CN112270397B
Legal status: Active; anticipated expiration tracked

Classifications

    • G06N 3/006 — Computing arrangements based on biological models: artificial life based on simulated virtual individual or collective life forms, e.g. particle swarm optimisation [PSO]
    • G06N 3/045 — Neural networks: architecture, combinations of networks
    • G06N 3/06 — Neural networks: physical realisation, i.e. hardware implementation
    • G06N 3/084 — Neural networks: learning methods, backpropagation, e.g. using gradient descent
    • G06T 7/90 — Image analysis: determination of colour characteristics


Abstract

The invention discloses a color space conversion method based on a deep neural network, implemented according to the following steps: step 1, making training samples and test samples, wherein the training samples are used to build the neural network model and the test samples are used to check the conversion precision of the trained model, and establishing a deep belief network (DBN) model; step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm; step 3, inputting the training samples into the optimized network for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding a color space conversion model from L*a*b* to CMYK; and step 4, inputting the test samples into the conversion model to perform color conversion, calculating the conversion error, and checking the precision of the model to complete the color space conversion. This solves the problem of low conversion precision in prior-art L*a*b*-to-CMYK color space conversion models.

Description

Color space conversion method based on deep neural network
Technical Field
The invention belongs to the technical field of image processing, and relates to a color space conversion method based on a deep neural network.
Background
Color management of digital printing machines can be divided into three steps — device calibration, characterization, and color space conversion — of which color space conversion is an important part. The CMYK color space is the color standard used in digital printing; it describes the relationship between the ink amounts of the four colors cyan (C), magenta (M), yellow (Y), and black (K) in a digital printing product. The L*a*b* color space is device-independent, can serve as a connecting color space between different devices, and is widely used in color evaluation of digital printing machines. In L*a*b*, the L* value represents lightness; positive a* values lie in the red gamut and negative values in the green gamut; positive b* values lie in the yellow gamut and negative values in the blue gamut. Different devices describe the color space of an image differently and their color gamuts differ considerably, so color differences arise between a digital printing product and the sample image. Establishing a high-precision conversion relationship between the L*a*b* and CMYK color spaces can therefore greatly improve the quality of digital printing products.
Neural network techniques have received great attention in color management and color space conversion. Traditional color space conversion methods use shallow neural networks such as BPNN, GRNN, and ELM; constrained by their shallow structure, these networks easily fall into local optima on complex problems, and their precision is hard to improve further. The deep belief network (DBN) is an unsupervised learning method that can extract features from large amounts of data; it has wide adaptability and strong mapping capability, making it suitable for building a color space conversion model. However, the parameters of a DBN often must be determined manually through experience and repeated adjustment, which greatly limits the network's practicality. Particle swarm optimization (PSO) can optimize the parameters of the DBN algorithm and hand the optimal parameters to the DBN network, improving the conversion precision of the DBN.
Disclosure of Invention
The invention aims to provide a color space conversion method based on a deep neural network that solves the problem of low conversion precision of prior-art L*a*b*-to-CMYK color space conversion models.
The technical scheme adopted by the invention is a color space conversion method based on a deep neural network, implemented according to the following steps:
step 1, making training samples and test samples, wherein the training samples are used to build the neural network model and the test samples are used to check the conversion precision of the trained model; establishing a deep belief network model, initializing the parameters between the input layer, hidden layers, and output layer of the DBN, taking the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value of the neural network;
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
step 3, inputting the training samples into the network optimized in step 2 for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding a color space conversion model from L*a*b* to CMYK;
and step 4, inputting the test samples into the conversion model to perform color conversion, calculating the conversion error, and checking the precision of the model to complete the color space conversion.
The invention is also characterized in that:
Establishing the deep belief network model in step 1 — initializing the parameters between the input layer, hidden layers, and output layer of the DBN, with the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value — is specifically implemented as follows: establish a deep belief network model, of which the restricted Boltzmann machine (RBM) is the main component; the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning. First, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, passing data feature information layer by layer to initialize the network parameters; then, fine-tuning is performed from top to bottom with a BP neural network algorithm, starting from the initial weights obtained in pre-training. This supervised training lets the model reach the optimal solution, thereby determining the structure of the whole DBN network.
The unsupervised training of the RBM in step 1 is specifically implemented as follows:
In an RBM, v_1, v_2, … denote the visible-layer units, h_1, h_2, … denote the hidden-layer units, and w_ij is the weight of the connection between each pair of neurons. An energy function is introduced to define the total energy of the system and to compute the joint distribution probability:
$$E(v,h\mid\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$
In the above formula, θ = {a, b, w_ij}, where w_ij is the connection weight between the visible and hidden layers, V is the number of units in the visible layer, H is the number of units in the hidden layer, a_i is the bias of the visible layer, and b_j is the bias of the hidden layer. From the defined system energy, the joint distribution of the visible and hidden units is defined as follows:
$$P(v,h\mid\theta) = \frac{e^{-E(v,h\mid\theta)}}{Z} \qquad (2)$$
$$Z = \sum_{v',h'} e^{-E(v',h'\mid\theta)} \qquad (3)$$
In the above equation, Z is a normalization factor ensuring that the joint probability lies in the range [0, 1]. The marginal distribution of the visible layer is then:
$$P(v\mid\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h\mid\theta)} \qquad (4)$$
Since the neurons within each layer of the RBM are mutually independent, the activation probability of each unit can be obtained from the following relations, where f is the sigmoid function:
$$p(v_i = 1\mid h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\mid v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6)$$
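The activation probabilities of Eqs. (5) and (6) can be sketched in a few lines of NumPy. This is a hypothetical illustration, not code from the patent: the layer sizes, the random weights, and the normalized L*a*b* input are made-up values, and f is taken to be the sigmoid function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activation_prob(v, W, b):
    # Eq. (6): p(h_j = 1 | v) = f(b_j + sum_i v_i * w_ij)
    return sigmoid(b + v @ W)

def visible_activation_prob(h, W, a):
    # Eq. (5): p(v_i = 1 | h) = f(a_i + sum_j h_j * w_ij)
    return sigmoid(a + h @ W.T)

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(3, 8))  # 3 visible units (L*, a*, b*) x 8 hidden units
a = np.zeros(3)                        # visible-layer biases a_i
b = np.zeros(8)                        # hidden-layer biases b_j
v = np.array([0.5, 0.2, 0.8])          # one normalized L*a*b* sample (illustrative)
p_h = hidden_activation_prob(v, W, b)  # activation probability of each hidden unit
```

Because f is a sigmoid, every entry of `p_h` is a valid probability strictly between 0 and 1.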
the training process of the DBN network in step 1 is specifically implemented as follows:
step 1.1, setting the node states: F_i denotes the state of node i and F_j the state of a node j connected to node i; the weight matrix is W. Randomly select a training sample and input its data to the visible layer of the first RBM, then update the state F_j of each node of the first hidden layer according to formula (7), where σ ∈ [0, 1] is a random threshold:

$$F_j = \begin{cases} 1, & f\big(b_j + \sum_i F_i w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$
step 1.2, using the hidden-node states F_j determined in step 1.1, updating the states of the visible nodes of the first RBM according to formula (8), recorded as F'_i:

$$F'_i = \begin{cases} 1, & f\big(a_i + \sum_j F_j w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
step 1.3, taking the hidden-layer node states of the first RBM obtained in the previous step as the input of the second RBM, and updating the states of all nodes of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculating the network weight update according to formula (9) and updating the weight matrix of the network until the change of the weight matrix is sufficiently small or the set maximum number of training iterations is reached, at which point the DBN training ends:

$$\Delta w_{ij} = \eta\big(\langle F_i F_j \rangle - \langle F'_i F'_j \rangle\big) \qquad (9)$$
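Steps 1.1–1.4 amount to one step of contrastive divergence (CD-1). Below is a minimal sketch under the assumptions that the units are sigmoid-activated and the random threshold σ is drawn uniformly per node; all array sizes and the sample input are illustrative, not values from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, eta, rng):
    """One contrastive-divergence step mirroring steps 1.1-1.4:
    sample hidden states from v0 (Eq. 7), reconstruct the visible
    layer (Eq. 8), and update the weights by Eq. (9)."""
    p_h0 = sigmoid(b + v0 @ W)                          # hidden activation probabilities
    h0 = (p_h0 > rng.random(p_h0.shape)).astype(float)  # F_j, Eq. (7)
    p_v1 = sigmoid(a + h0 @ W.T)
    v1 = (p_v1 > rng.random(p_v1.shape)).astype(float)  # F'_i, Eq. (8)
    p_h1 = sigmoid(b + v1 @ W)                          # F'_j probabilities
    dW = eta * (np.outer(v0, h0) - np.outer(v1, p_h1))  # Eq. (9)
    return W + dW

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.1, size=(3, 8))   # 3 visible x 8 hidden (illustrative)
a, b = np.zeros(3), np.zeros(8)
v0 = np.array([1.0, 0.0, 1.0])          # one binary training vector
W_new = cd1_update(v0, W, a, b, eta=0.1, rng=rng)
```

In practice this update is repeated over the training set until the weight change is small enough or the maximum iteration count is reached, as step 1.4 describes.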
the step 2 is implemented according to the following steps:
step 2.1, preprocessing the data set and initializing the parameters of the DBN neural network, thereby determining the dimension of each particle;
step 2.2, initializing the parameters of the particle swarm: the DBN has 4 hidden layers with m_1, m_2, m_3, and m_4 neurons respectively, and a learning rate η ∈ [0, 1); each particle in the swarm is therefore encoded as the vector X = (m_1, m_2, m_3, m_4, η);
step 2.3, calculating the fitness function value of each particle with formula (10) to obtain the individual best P_best and the swarm best G_best:

$$\text{fitness} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\big(a_{ij} - b_{ij}\big)^{2} \qquad (10)$$
Wherein N is the total number of samples; m is the dimension of the particle; a isij、bijRespectively a predicted value and an actual value of jth dimensional data of the ith sample;
step 2.4, comparing each particle's fitness value with P_best: if the fitness improves on the individual best, updating P_best accordingly, otherwise keeping it unchanged; the global best G_best is obtained by the same procedure;
step 2.5, updating the velocity and position of each particle; if the maximum number of iterations has been reached, ending the iteration and outputting the final parameters, otherwise continuing to search for the optimal particle position.
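Steps 2.1–2.5 follow the standard PSO velocity/position update. A generic sketch is given below; the inertia and acceleration coefficients `w`, `c1`, `c2`, the particle count, and the sphere test function are illustrative choices, not values from the patent.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over `dim` dimensions; returns the global best G_best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()                                      # P_best positions
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()            # G_best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v                                         # step 2.5: move particles
        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val                         # step 2.4: update P_best
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()        # update G_best
    return gbest

best = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
```

For the patent's use case, `fitness` would be the conversion error of a DBN configured from the particle's components, and `dim` the number of hyperparameters being tuned.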
Step 3 is specifically implemented according to the following steps:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN for training to obtain a stable neural network model;
and step 3.2, taking the 3 component values of the training samples' L*a*b* color space as the neural network input and the 4 component values of the CMYK color space as the neural network output, and training the PSO-DBN network.
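Step 3.1's "splitting the optimal solution into the parameters of the DBN" can be pictured as decoding the best particle X = (m_1, m_2, m_3, m_4, η) back into hidden-layer sizes and a learning rate. The helper below is hypothetical, not code from the patent; rounding to positive integers is an assumption about how continuous particle components map to layer sizes.

```python
def split_particle(x):
    """Decode a PSO particle (m1, m2, m3, m4, eta) into DBN hyperparameters:
    hidden-layer sizes are rounded to positive integers, eta stays a float."""
    *sizes, eta = x
    layer_sizes = [max(1, int(round(s))) for s in sizes]
    return layer_sizes, eta

# Illustrative best particle found by PSO
layers, eta = split_particle((128.4, 64.2, 32.7, 16.1, 0.05))
```

The decoded `layers` then fix the four hidden-layer widths of the DBN, and `eta` is used as the learning rate in Eq. (9).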
The invention has the beneficial effects that the disclosed color space conversion method based on a deep neural network solves the low conversion precision of prior-art methods. Addressing the problems that traditional color space conversion methods have low precision and that neural network algorithms easily fall into local minima, optimizing the deep belief network with the particle swarm algorithm overcomes the low conversion precision of traditional shallow color conversion methods while giving the conversion model high stability.
Drawings
FIG. 1 is a schematic flow chart of a deep neural network-based color space conversion method according to the present invention;
FIG. 2 is a diagram of a deep confidence network structure in a deep neural network-based color space conversion method according to the present invention;
FIG. 3 is a flow chart of the particle swarm algorithm optimizing the deep belief network in the color space conversion method based on the deep neural network of the present invention;
FIG. 4 is a flow chart of the L*a*b*-to-CMYK color space conversion model in the color space conversion method based on the deep neural network of the present invention;
FIG. 5 is a statistical diagram of the conversion color difference for verifying the designed color space conversion method in an embodiment of the color space conversion method based on the deep neural network according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a color space conversion method based on a deep neural network, which is implemented according to the following steps as shown in figure 1:
step 1, making training samples and test samples, wherein the training samples are used to build the neural network model and the test samples are used to check the conversion precision of the trained model; establishing a deep belief network model, initializing the parameters between the input layer, hidden layers, and output layer of the DBN, taking the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value of the neural network;
Establishing the deep belief network model in step 1 — initializing the parameters between the input layer, hidden layers, and output layer of the DBN, with the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value — is specifically implemented as follows: establish a deep belief network model, as shown in FIG. 2; the restricted Boltzmann machine (RBM) is the main component of the DBN, and the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning. First, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, passing data feature information layer by layer to initialize the network parameters; then, fine-tuning is performed from top to bottom with a BP neural network algorithm, starting from the initial weights obtained in pre-training. This supervised training lets the model reach the optimal solution, thereby determining the structure of the whole DBN network.
The unsupervised training of the RBM in step 1 is specifically implemented as follows:
In an RBM, v_1, v_2, … denote the visible-layer units, h_1, h_2, … denote the hidden-layer units, and w_ij is the weight of the connection between each pair of neurons. An energy function is introduced to define the total energy of the system and to compute the joint distribution probability:
$$E(v,h\mid\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$
In the above formula, θ = {a, b, w_ij}, where w_ij is the connection weight between the visible and hidden layers, V is the number of units in the visible layer, H is the number of units in the hidden layer, a_i is the bias of the visible layer, and b_j is the bias of the hidden layer. From the defined system energy, the joint distribution of the visible and hidden units is defined as follows:
$$P(v,h\mid\theta) = \frac{e^{-E(v,h\mid\theta)}}{Z} \qquad (2)$$
$$Z = \sum_{v',h'} e^{-E(v',h'\mid\theta)} \qquad (3)$$
In the above equation, Z is a normalization factor ensuring that the joint probability lies in the range [0, 1]. The marginal distribution of the visible layer is then:
$$P(v\mid\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h\mid\theta)} \qquad (4)$$
Since the neurons within each layer of the RBM are mutually independent, the activation probability of each unit can be obtained from the following relations, where f is the sigmoid function:
$$p(v_i = 1\mid h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\mid v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6)$$
the training process of the DBN network in step 1 is specifically implemented as follows:
step 1.1, setting the node states: F_i denotes the state of node i and F_j the state of a node j connected to node i; the weight matrix is W. Randomly select a training sample and input its data to the visible layer of the first RBM, then update the state F_j of each node of the first hidden layer according to formula (7), where σ ∈ [0, 1] is a random threshold:

$$F_j = \begin{cases} 1, & f\big(b_j + \sum_i F_i w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$
step 1.2, using the hidden-node states F_j determined in step 1.1, updating the states of the visible nodes of the first RBM according to formula (8), recorded as F'_i:

$$F'_i = \begin{cases} 1, & f\big(a_i + \sum_j F_j w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
step 1.3, taking the hidden-layer node states of the first RBM obtained in the previous step as the input of the second RBM, and updating the states of all nodes of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculating the network weight update according to formula (9) and updating the weight matrix of the network until the change of the weight matrix is sufficiently small or the set maximum number of training iterations is reached, at which point the DBN training ends:

$$\Delta w_{ij} = \eta\big(\langle F_i F_j \rangle - \langle F'_i F'_j \rangle\big) \qquad (9)$$
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
as shown in fig. 3, step 2 is specifically implemented according to the following steps:
step 2.1, preprocessing the data set and initializing the parameters of the DBN neural network, thereby determining the dimension of each particle;
step 2.2, initializing the parameters of the particle swarm: the DBN has 4 hidden layers with m_1, m_2, m_3, and m_4 neurons respectively, and a learning rate η ∈ [0, 1); each particle in the swarm is therefore encoded as the vector X = (m_1, m_2, m_3, m_4, η);
step 2.3, calculating the fitness function value of each particle with formula (10) to obtain the individual best P_best and the swarm best G_best:

$$\text{fitness} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\big(a_{ij} - b_{ij}\big)^{2} \qquad (10)$$
Wherein N is the total number of samples; m is the dimension of the particle; a isij、bijRespectively a predicted value and an actual value of jth dimensional data of the ith sample;
step 2.4, comparing each particle's fitness value with P_best: if the fitness improves on the individual best, updating P_best accordingly, otherwise keeping it unchanged; the global best G_best is obtained by the same procedure;
step 2.5, updating the velocity and position of each particle; if the maximum number of iterations has been reached, ending the iteration and outputting the final parameters, otherwise continuing to search for the optimal particle position.
Step 3, inputting the training samples into the step 3 for training, and then performing reverse fine adjustment by using a BP neural network to obtain a stable PSO-DBN model and obtain a color space conversion model from LaBbto CMYK;
as shown in fig. 4, step 3 is specifically implemented according to the following steps:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN for training to obtain a stable neural network model;
and step 3.2, taking the 3 component values of the training samples' L*a*b* color space as the neural network input and the 4 component values of the CMYK color space as the neural network output, and training the PSO-DBN network.
And step 4, inputting the test samples into the conversion model to perform color conversion, calculating the conversion error, and checking the precision of the model to complete the color space conversion.
Examples
The running platform of this example is Windows 10, and the simulation environment is MATLAB R2016a. The PANTONE TCX color card is used as the sample data set of this experiment: all 2310 color patches of the card are numbered, and 800 random numbers in the range 1 to 2310 are generated with MATLAB software; the 800 color patches corresponding to these random numbers serve as the training samples. The L*a*b* values are used as inputs and the corresponding CMYK values as outputs to train the network and create a nonlinear mapping. Then another 50 color patches are randomly selected from the remaining 1510 patches as test samples.
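The sampling scheme of the embodiment — 800 training patches drawn from 2310, then 50 test patches from the remaining 1510 — can be reproduced in a few lines. The seed below is arbitrary, and Python's `random.sample` stands in for the MATLAB random-number generation used in the original experiment.

```python
import random

random.seed(42)                                  # arbitrary seed for reproducibility
all_ids = list(range(1, 2311))                   # 2310 color patches, numbered 1..2310
train_ids = random.sample(all_ids, 800)          # 800 training patches
remaining = [i for i in all_ids if i not in set(train_ids)]
test_ids = random.sample(remaining, 50)          # 50 test patches from the rest
```

By construction the training and test sets are disjoint, matching the embodiment's split of 800 training patches and 50 test patches.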
The 50 test samples are input into the color space conversion model to obtain 50 CMYK predicted values; the predicted values are compared with the actual CMYK values, and the average conversion error of each of the four components C, M, Y, and K is calculated. As shown in FIG. 5, the accuracy of the L*a*b*-to-CMYK color space conversion is high.
The invention relates to a color space conversion method based on a deep neural network that converts the L*a*b* color space into the CMYK color space. L*a*b* and CMYK correspond respectively to the input and output values of the deep belief network. Parameters of the deep belief network, such as the connection weights, are optimized with a particle swarm algorithm, improving the performance of the deep belief network. The working process is as follows: establish the data set; determine the deep belief neural network structure and initialize its parameters; optimize the weights of the deep belief neural network with the particle swarm; and finally obtain a stable neural network model. This realizes color space conversion from any L*a*b* value to CMYK in digital printing. The method improves the color space conversion precision and has high conversion efficiency.

Claims (6)

1. A color space conversion method based on a deep neural network is characterized by comprising the following steps:
step 1, making training samples and test samples, wherein the training samples are used to build the neural network model and the test samples are used to check the conversion precision of the trained model; establishing a deep belief network model, initializing the parameters between the input layer, hidden layers, and output layer of the DBN, taking the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value of the neural network;
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
step 3, inputting the training samples into the network optimized in step 2 for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding a color space conversion model from L*a*b* to CMYK;
and step 4, inputting the test samples into the conversion model to perform color conversion, calculating the conversion error, and checking the precision of the model to complete the color space conversion.
2. The method according to claim 1, wherein establishing the deep belief network model in step 1 initializes the parameters between the input layer, hidden layers, and output layer of the DBN, with the L*a*b* color space as the input value of the neural network and the CMYK color space as the output value, implemented as follows: establish a deep belief network model, of which the restricted Boltzmann machine (RBM) is the main component; the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning; first, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, passing data feature information layer by layer to initialize the network parameters; then, fine-tuning is performed from top to bottom with a BP neural network algorithm, starting from the initial weights obtained in pre-training; this supervised training lets the model reach the optimal solution, thereby determining the structure of the whole DBN network.
3. The deep neural network-based color space transformation method according to claim 2, wherein the unsupervised training of the RBM in step 1 is specifically performed as follows:
In an RBM, v_1, v_2, … denote the visible-layer units, h_1, h_2, … denote the hidden-layer units, and w_ij is the weight of the connection between each pair of neurons. An energy function is introduced to define the total energy of the system and to compute the joint distribution probability:
$$E(v,h\mid\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$
In the above formula, θ = {a, b, w_ij}, where w_ij is the connection weight between the visible and hidden layers, V is the number of units in the visible layer, H is the number of units in the hidden layer, a_i is the bias of the visible layer, and b_j is the bias of the hidden layer. From the defined system energy, the joint distribution of the visible and hidden units is defined as follows:
$$P(v,h\mid\theta) = \frac{e^{-E(v,h\mid\theta)}}{Z} \qquad (2)$$
$$Z = \sum_{v',h'} e^{-E(v',h'\mid\theta)} \qquad (3)$$
In the above equation, Z is a normalization factor ensuring that the joint probability lies in the range [0, 1]. The marginal distribution of the visible layer is then:
$$P(v\mid\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h\mid\theta)} \qquad (4)$$
Since the neurons within each layer of the RBM are mutually independent, the activation probability of each unit can be obtained from the following relations, where f is the sigmoid function:
$$p(v_i = 1\mid h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\mid v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6)$$
4. the method according to claim 3, wherein the DBN network training process in step 1 is implemented as follows:
step 1.1, setting the node states: F_i denotes the state of node i and F_j the state of a node j connected to node i; the weight matrix is W. Randomly select a training sample and input its data to the visible layer of the first RBM, then update the state F_j of each node of the first hidden layer according to formula (7), where σ ∈ [0, 1] is a random threshold:

$$F_j = \begin{cases} 1, & f\big(b_j + \sum_i F_i w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$
step 1.2, using the hidden-node states F_j determined in step 1.1, updating the states of the visible nodes of the first RBM according to formula (8), recorded as F'_i:

$$F'_i = \begin{cases} 1, & f\big(a_i + \sum_j F_j w_{ij}\big) > \sigma \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
step 1.3, taking the hidden-layer node states of the first RBM obtained in the previous step as the input of the second RBM, and updating the states of all nodes of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculating the network weight update according to formula (9) and updating the weight matrix of the network until the change of the weight matrix is sufficiently small or the set maximum number of training iterations is reached, at which point the DBN training ends:

$$\Delta w_{ij} = \eta\big(\langle F_i F_j \rangle - \langle F'_i F'_j \rangle\big) \qquad (9)$$
5. the color space conversion method based on the deep neural network as claimed in claim 4, wherein the step 2 is implemented by the following steps:
step 2.1, preprocessing the data set and initializing the parameters of the DBN neural network, thereby determining the dimension of each particle;
step 2.2, initializing the parameters of the particle swarm: the DBN has 4 hidden layers with m_1, m_2, m_3, and m_4 neurons respectively, and a learning rate η ∈ [0, 1); each particle in the swarm is therefore encoded as the vector X = (m_1, m_2, m_3, m_4, η);
step 2.3, calculating the fitness function value of each particle with formula (10) to obtain the individual best P_best and the swarm best G_best:

$$\text{fitness} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\big(a_{ij} - b_{ij}\big)^{2} \qquad (10)$$
Wherein N is the total number of samples; m is the dimension of the particle; a isij、bijRespectively a predicted value and an actual value of jth dimensional data of the ith sample;
step 2.4, comparing each particle's fitness value with P_best: if the fitness improves on the individual best, updating P_best accordingly, otherwise keeping it unchanged; the global best G_best is obtained by the same procedure;
step 2.5, updating the velocity and position of each particle; if the maximum number of iterations has been reached, ending the iteration and outputting the final parameters, otherwise continuing to search for the optimal particle position.
6. The color space conversion method based on the deep neural network as claimed in claim 5, wherein the step 3 is implemented according to the following steps:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN for training to obtain a stable neural network model;
and step 3.2, taking the 3 component values of the training samples' L*a*b* color space as the neural network input and the 4 component values of the CMYK color space as the neural network output, and training the PSO-DBN network.
CN202011157124.2A 2020-10-26 2020-10-26 Color space conversion method based on deep neural network Active CN112270397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011157124.2A CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011157124.2A CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Publications (2)

Publication Number Publication Date
CN112270397A true CN112270397A (en) 2021-01-26
CN112270397B CN112270397B (en) 2024-02-20

Family

ID=74342437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011157124.2A Active CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Country Status (1)

Country Link
CN (1) CN112270397B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113119447A (en) * 2021-03-19 2021-07-16 西安理工大学 Method for color space conversion of color 3D printing
CN113409206A (en) * 2021-06-11 2021-09-17 西安工程大学 High-precision digital printing color space conversion method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110428A (en) * 2009-12-23 2011-06-29 新奥特(北京)视频技术有限公司 Method and device for converting color space from CMYK to RGB
CN102111626A (en) * 2009-12-23 2011-06-29 新奥特(北京)视频技术有限公司 Conversion method and device from red-green-blue (RGB) color space to cyan-magenta-yellow-black (CMYK) color space
CN103383743A (en) * 2013-07-16 2013-11-06 南京信息工程大学 Chrominance space transformation method
CN103729678A (en) * 2013-12-12 2014-04-16 中国科学院信息工程研究所 Navy detection method and system based on improved DBN model
CN103729695A (en) * 2014-01-06 2014-04-16 国家电网公司 Short-term power load forecasting method based on particle swarm and BP neural network
WO2019101720A1 (en) * 2017-11-22 2019-05-31 Connaught Electronics Ltd. Methods for scene classification of an image in a driving support system
KR101993752B1 (en) * 2018-02-27 2019-06-27 연세대학교 산학협력단 Method and Apparatus for Matching Colors Using Neural Network
CN110475043A (en) * 2019-07-31 2019-11-19 西安工程大学 A kind of conversion method of CMYK to Lab color space


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Leihong: "Research on Soft Computing Methods in Optimization and Control", pages 110-111 *
LI Zhengming et al.: "Short-term photovoltaic power output forecasting based on PSO-DBN neural network", Power System Protection and Control, vol. 48, no. 8, 16 April 2020 (2020-04-16), pages 2 *


Also Published As

Publication number Publication date
CN112270397B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN110020682B (en) Attention mechanism relation comparison network model method based on small sample learning
CN110516596B (en) Octave convolution-based spatial spectrum attention hyperspectral image classification method
CN112507793B (en) Ultra-short term photovoltaic power prediction method
CN108564006B (en) Polarized SAR terrain classification method based on self-learning convolutional neural network
CN107194336B (en) Polarized SAR image classification method based on semi-supervised depth distance measurement network
CN110475043B (en) Method for converting CMYK to Lab color space
CN114897837A (en) Power inspection image defect detection method based on federal learning and self-adaptive difference
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN111160553B (en) Novel field self-adaptive learning method
CN112270397A (en) Color space conversion method based on deep neural network
CN111340076B (en) Zero sample identification method for unknown mode of radar target of new system
CN109816714B (en) Point cloud object type identification method based on three-dimensional convolutional neural network
CN114429219A (en) Long-tail heterogeneous data-oriented federal learning method
CN112085738A (en) Image segmentation method based on generation countermeasure network
CN113011487B (en) Open set image classification method based on joint learning and knowledge migration
CN111222545B (en) Image classification method based on linear programming incremental learning
CN111832404A (en) Small sample remote sensing ground feature classification method and system based on feature generation network
CN116933141B (en) Multispectral laser radar point cloud classification method based on multicore graph learning
CN115206455B (en) Deep neural network-based rare earth element component content prediction method and system
CN113486929B (en) Rock slice image identification method based on residual shrinkage module and attention mechanism
CN110543656A (en) LED fluorescent powder glue coating thickness prediction method based on deep learning
CN113537325B (en) Deep learning method for image classification based on extracted high-low layer feature logic
CN112686310B (en) Anchor frame-based prior frame design method in target detection algorithm
CN110691319B (en) Method for realizing high-precision indoor positioning of heterogeneous equipment in self-adaption mode in use field
CN114611621A (en) Cooperative clustering method based on attention hypergraph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant