CN115131453B - Color filling model training, color filling method and device and electronic equipment - Google Patents
Color filling model training, color filling method and device, and electronic equipment
- Publication number: CN115131453B
- Application number: CN202210546217.7A
- Authority: CN (China)
- Prior art keywords: target, image, filling, color, generator
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T11/001 — 2D image generation: texturing; colouring; generation of texture or colour
- G06T15/00 — 3D image rendering
- G06T2207/20081 — Image analysis/enhancement, special algorithmic details: training; learning
- G06T2207/20084 — Image analysis/enhancement, special algorithmic details: artificial neural networks [ANN]
- Y02P90/30 — Climate change mitigation technologies in the production or processing of goods: computing systems specially adapted for manufacturing
Abstract
The disclosure provides a color filling model training method, a color filling method and apparatus, and an electronic device, relating to the field of computer technology and in particular to cloud computing. The training method includes: determining training samples corresponding to the color filling model; constructing a generator and a discriminator corresponding to the color filling model; determining a target discriminator corresponding to the current discrimination parameters of the discriminator; based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, and taking identical discrimination results from the target discriminator as the training target, training to obtain a target generator corresponding to target generation parameters; based on the target generator, taking the discriminator's discrimination result on the filled image generated by the target generator as the training target, training to obtain a target discriminator corresponding to target discrimination parameters; and if the target generator and the target discriminator meet the convergence condition of the color filling model, determining the target generator and the target discriminator as the training result of the color filling model.
Description
Technical Field
The present disclosure relates to the field of computer technology, in particular to cloud computing, and specifically to a color filling model training method, a color filling method and apparatus, and an electronic device.
Background
In recent years, growing demand for image coloring, image restoration, and similar tasks has made color filling of images a common application of computer image processing. To fill an image accurately, the related art generally uses a neural network model; however, current neural network models produce poor color filling results on images, and the filling accuracy is not high.
Disclosure of Invention
The present disclosure provides a color filling model training method, a color filling method and apparatus, and an electronic device for achieving high-accuracy color filling of gray-scale images.
According to a first aspect of the present disclosure, there is provided a color filling model training method, including:
determining training samples corresponding to the color filling model; the training samples include: a color image with its corresponding true-value label, and a gray-scale image with its corresponding false-value label; the gray-scale image is obtained by converting the color image;
constructing a generator and a discriminator corresponding to the color filling model;
determining a target discriminator corresponding to the current discrimination parameters of the discriminator;
based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, and taking identical discrimination results from the target discriminator as the training target, training to obtain a target generator corresponding to target generation parameters;
based on the target generator, taking the discriminator's discrimination result on the filled image generated by the target generator as the training target, training to obtain a target discriminator corresponding to target discrimination parameters;
and if the target generator and the target discriminator meet the convergence condition of the color filling model, determining the target generator and the target discriminator as the training result of the color filling model.
According to a second aspect of the present disclosure, there is provided a color filling method, including:
in response to an image filling request initiated for an image to be filled, acquiring a color filling model comprising a target generator and a target discriminator; wherein the color filling model is obtained by training with the color filling model training method of the first aspect;
inputting the image to be filled into the target generator of the color filling model to obtain a target filled image produced by the target generator color-filling the image to be filled;
and inputting the target filled image into the target discriminator of the color filling model to obtain the target discriminator's discrimination result on the target filled image.
According to a third aspect of the present disclosure, there is provided a color filling model training apparatus, including:
a sample determining unit, configured to determine training samples corresponding to the color filling model; the training samples include: a color image with its corresponding true-value label, and a gray-scale image with its corresponding false-value label; the gray-scale image is obtained by converting the color image;
a model building unit, configured to construct the generator and the discriminator corresponding to the color filling model;
a discriminator determining unit, configured to determine a target discriminator corresponding to the current discrimination parameters of the discriminator;
a first training unit, configured to train, based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, with identical discrimination results from the target discriminator as the training target, to obtain a target generator corresponding to target generation parameters;
a second training unit, configured to train, based on the target generator, with the discriminator's discrimination result on the filled image generated by the target generator as the training target, to obtain a target discriminator corresponding to target discrimination parameters;
and a result determining unit, configured to determine the target generator and the target discriminator as the training result of the color filling model if they meet the convergence condition of the color filling model.
According to a fourth aspect of the present disclosure, there is provided a color filling apparatus, including:
a request responding unit, configured to acquire, in response to an image filling request initiated for an image to be filled, a color filling model comprising a target generator and a target discriminator; wherein the color filling model is obtained by training with the color filling model training method of the first aspect;
a target generating unit, configured to input the image to be filled into the target generator of the color filling model to obtain a target filled image produced by the target generator color-filling the image to be filled;
and a target discriminating unit, configured to input the target filled image into the target discriminator of the color filling model to obtain the target discriminator's discrimination result on the target filled image.
According to a fifth aspect of the present disclosure, there is provided an electronic device, including: at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first or second aspect and their various possible implementations.
The technique according to the present disclosure solves the problem of low color filling accuracy. Training samples of the color filling model are determined; the training samples may include a color image with its true-value label and a gray-scale image with its false-value label, where the gray-scale image may be obtained by gray-scale conversion of the color image. A generator and a discriminator corresponding to the color filling model are constructed, and a target discriminator corresponding to the discriminator's current discrimination parameters is determined. With the target discriminator, the generator can be trained to obtain a target generator corresponding to target generation parameters: during training, the generator color-fills the gray-scale image to obtain a filled image, which together with the color image enables accurate training of the generator. Then, with the trained target generator, the discriminator is trained to obtain a target discriminator corresponding to target discrimination parameters, taking the discriminator's discrimination result on the filled image as the training target, so that the discriminator is also trained accurately. This training of generator and discriminator yields an accurate color filling model, so that the image to be filled can be color-filled accurately, improving both the efficiency and the accuracy of color filling.
It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a system architecture diagram of a color filling system provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of one embodiment of a color-filling model training method provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a color-filling model training method provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow chart of yet another embodiment of a color-filling model training method provided in accordance with an embodiment of the present disclosure;
FIG. 5 is a flow chart of yet another embodiment of a color-filling model training method provided in accordance with an embodiment of the present disclosure;
FIG. 6 is a flow chart of one embodiment of a color filling method provided in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of one embodiment of a color filling model training apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of one embodiment of a color filling apparatus provided in accordance with an embodiment of the present disclosure;
fig. 9 is a block diagram of an electronic device used to implement a color-filling model training method or a color-filling method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The technical solution of the present disclosure can be applied to color filling of images. By constructing a generator and a discriminator and using them for adversarial training of the color filling model, an accurate target generator and target discriminator are obtained, so that the target generator can perform accurate color filling on images, improving both the efficiency and the accuracy of color filling.
In the related art, a neural network model is generally used to color-fill a gray-scale image; for example, a convolutional model is trained to map features of the gray-scale image to a color-filled output image. However, existing image coloring schemes are prone to producing washed-out colors and poor coloring quality.
To solve these technical problems, the present disclosure provides a color filling model training method, a color filling method and apparatus, and an electronic device, applied to the cloud computing field within computer technology, so as to improve color filling accuracy and obtain filled images of better filling quality.
The embodiments of the present disclosure consider a color filling model constructed from a generator and a discriminator. Through adversarial training between the two, the generator produces color images on which the discriminator's discrimination result matches its result on the original color image, which drives the generator toward accurate image filling and improves the precision and effectiveness of its colorization.
In an embodiment of the present disclosure, training samples of the color filling model are determined; the training samples may include a color image with its true-value label and a gray-scale image with its false-value label, where the gray-scale image may be obtained by gray-scale conversion of the color image. A generator and a discriminator corresponding to the color filling model are constructed, and a target discriminator corresponding to the discriminator's current discrimination parameters is determined. With the target discriminator, the generator can be trained to obtain a target generator corresponding to target generation parameters: during training, the generator color-fills the gray-scale image to obtain a filled image, which together with the color image enables accurate training of the generator. Then, with the trained target generator, the discriminator is trained to obtain a target discriminator corresponding to target discrimination parameters, taking the discriminator's discrimination result on the filled image as the training target, so that the discriminator is also trained accurately. This training of generator and discriminator yields an accurate color filling model, so that the image to be filled can be color-filled accurately, improving both the efficiency and the accuracy of color filling.
The technical scheme of the present disclosure will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, a system architecture diagram of a color filling system is provided in an embodiment of the present disclosure. The system may include: an electronic device 1 configured with the color filling method provided by the present disclosure and a cloud server 2 configured with the color filling model training method; a wired or wireless communication connection can be established between the electronic device 1 and the cloud server 2.
The cloud server 2 may train the generator and the discriminator of the color filling model based on the color filling model training method of the present disclosure to obtain an accurate color filling model. The electronic device 1 may detect an image filling request initiated by a user for an image to be filled, acquire the color filling model comprising the target generator and the target discriminator in response to the request, and perform color filling and discrimination on the image to be filled through the color filling model to obtain a target filled image and a discrimination result. In some embodiments, the electronic device 1 may output the target filled image and/or the discrimination result to the user. Through the trained color filling model, the image to be filled can be filled quickly and accurately.
As shown in fig. 2, a flowchart of an embodiment of a color filling model training method provided in an embodiment of the present disclosure. The method may be configured in a color filling model training apparatus, which may be located in an electronic device, and may include the following steps:
201: determining training samples corresponding to the color filling model; the training samples include: a color image with its corresponding true-value label, and a gray-scale image with its corresponding false-value label; the gray-scale image is obtained by converting the color image.
The training samples may include color images, which correspond to true-value labels, and gray-scale images, which correspond to false-value labels. There may be multiple training samples, and the gray-scale image of each sample can be obtained by gray-scale conversion of its color image. In the actual training process, the color image of a training sample can be fed directly to the discriminator as an original image, while the gray-scale image is first color-filled by the generator to obtain a filled image, which is then discriminated by the discriminator.
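As a concrete illustration of this sample layout, the sketch below pairs each color image with a true-value label and its gray-scale conversion with a false-value label. PyTorch/torchvision and the helper name `build_sample` are assumptions made for illustration, not part of the patent:

```python
import torch
from torchvision import transforms

to_gray = transforms.Grayscale(num_output_channels=1)

def build_sample(color_image: torch.Tensor):
    """color_image: a (3, H, W) tensor in [0, 1]."""
    gray_image = to_gray(color_image)   # gray-scale image obtained by converting the color image
    real_label = torch.ones(1)          # true-value label for the color image
    fake_label = torch.zeros(1)         # false-value label attached to the gray image's filled result
    return (color_image, real_label), (gray_image, fake_label)
```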
202: constructing a generator and a discriminator corresponding to the color filling model.
The generator and discriminator of the color filling model may be constructed through a conditional generative adversarial network (conditional GAN) algorithm. The generator may be used to color-fill the gray-scale image; the discriminator may judge the coloring of the filled gray-scale image to obtain an accurate discrimination result.
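A minimal sketch of such a generator/discriminator pair follows. The layer choices are illustrative assumptions, since the patent specifies the conditional GAN framework but not a concrete architecture:

```python
import torch.nn as nn

class Generator(nn.Module):
    """Fills a 1-channel gray-scale image with color, producing a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, gray):
        return self.net(gray)

class Discriminator(nn.Module):
    """Judges whether a 3-channel image is a real color image (probability output)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)
```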
203: a target discriminator corresponding to the current discrimination parameters of the discriminator is determined.
During the first training round, the parameters of the discriminator constructed for the color filling model can be initialized to obtain the target discriminator.
In rounds after the first, the target discriminator corresponding to the target discrimination parameters obtained at the end of the previous round can be used.
204: based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, and taking identical discrimination results from the target discriminator as the training target, training to obtain a target generator corresponding to target generation parameters.
The discriminator is not changed while the generator is trained. The generator performs image color filling on the gray-scale image to obtain the filled image.
205: based on the target generator, taking the discriminator's discrimination result on the filled image generated by the target generator as the training target, training to obtain a target discriminator corresponding to target discrimination parameters.
The generator is not changed while the discriminator is trained. The discriminator judges the color filling quality of the filled image, for example by outputting a probability that the image is real.
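Steps 204 and 205 alternate; the sketch below shows one generator update with the discriminator frozen, assuming the `Generator`/`Discriminator` modules from the earlier sketch. The optimizer choice, the BCE target form, and all names are assumptions made for illustration:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def train_generator_step(gen, disc, gray, color, opt_g):
    for p in disc.parameters():
        p.requires_grad_(False)      # the target discriminator stays unchanged
    filled = gen(gray)               # generator color-fills the gray-scale image
    # training target: the discriminator should judge the filled image and
    # the color image the same way
    loss_g = bce(disc(filled), disc(color).detach())
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    for p in disc.parameters():
        p.requires_grad_(True)       # unfreeze before the discriminator's turn
    return loss_g.item()
```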
206: and if the target generator and the target discriminator meet the convergence condition of the color filling model, determining that the target generator and the target discriminator are training results of the color filling model.
In some embodiments, determining whether the target generator and the target discriminator meet the convergence condition of the color filling model may include: adding the loss value of the target generator and the loss value of the target discriminator to obtain a total loss value, and determining that the color filling model meets the convergence condition when the total loss value satisfies the discrimination condition. The discrimination condition may be, for example, that the total loss value is smaller than a preset loss threshold, or larger than it, depending on how the loss is defined: when convergence requires the total loss to fall below the threshold, model precision is inversely related to the total loss value; when convergence requires it to exceed the threshold, model precision is directly related to the total loss value. The loss value of the target generator may be the generation loss value obtained in the last iteration, and the loss value of the target discriminator may be the discrimination loss value obtained in the last iteration.
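A minimal sketch of this convergence check, assuming the "smaller than threshold" reading and a hypothetical threshold value:

```python
LOSS_THRESHOLD = 0.1   # preset loss threshold (illustrative value, not from the patent)

def converged(generator_loss: float, discriminator_loss: float) -> bool:
    total_loss = generator_loss + discriminator_loss
    # under this reading, smaller total loss corresponds to higher model precision
    return total_loss < LOSS_THRESHOLD
```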
Alternatively, after the target generator and the target discriminator are obtained, the image to be filled may be color-filled by the target generator to obtain a filled image, and the filled image may be discriminated by the target discriminator to obtain a discrimination result.
In an embodiment of the present disclosure, training samples of the color filling model are determined; the training samples may include a color image with its true-value label and a gray-scale image with its false-value label, where the gray-scale image may be obtained by gray-scale conversion of the color image. A generator and a discriminator corresponding to the color filling model are constructed, and a target discriminator corresponding to the discriminator's current discrimination parameters is determined. With the target discriminator, the generator can be trained to obtain a target generator corresponding to target generation parameters: during training, the generator color-fills the gray-scale image to obtain a filled image, which together with the color image enables accurate training of the generator. Then, with the trained target generator, the discriminator is trained to obtain a target discriminator corresponding to target discrimination parameters, taking the discriminator's discrimination result on the filled image as the training target, so that the discriminator is also trained accurately. This training of generator and discriminator yields an accurate color filling model, so that the image to be filled can be color-filled accurately, improving both the efficiency and the accuracy of color filling.
As shown in fig. 3, a flowchart of yet another embodiment of a color filling model training method provided in an embodiment of the present disclosure. The method may include the following steps:
301: determining training samples corresponding to the color filling model; the training samples include: a color image with its corresponding true-value label, and a gray-scale image with its corresponding false-value label; the gray-scale image is obtained by converting the color image.
In this embodiment, part of the steps are the same as those shown in fig. 2, and for the specific content of each step, reference may be made to the description of the foregoing embodiment, which is not repeated here.
302: constructing a generator and a discriminator corresponding to the color filling model.
303: determining a target discriminator corresponding to the current discrimination parameters of the discriminator.
304: based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, and taking identical discrimination results from the target discriminator as the training target, training to obtain a target generator corresponding to target generation parameters.
305: based on the target generator, taking the discriminator's discrimination result on the filled image generated by the target generator as the training target, training to obtain a target discriminator corresponding to target discrimination parameters.
306: judging whether the target generator and the target discriminator meet the convergence condition of the color filling model; if so, executing step 307; if not, executing step 308.
Determining whether the target generator and the target discriminator meet the convergence condition of the color filling model may include: calculating the sum of the loss value of the target generator and the loss value of the target discriminator to obtain a total loss value; if the total loss value is determined to satisfy the loss condition, determining that the color filling model meets the convergence condition, and otherwise determining that it does not.
307: the target generator and target discriminator are determined as training results of the color filling model.
308: determining that the training obtained target authentication parameter is the current authentication parameter corresponding to the authenticator, returning to step 303 and continuing execution.
In the embodiment of the disclosure, whether the target generator and the target discriminator meet the convergence condition of the color filling model can be judged, and the accurate training of the color filling model can be realized through the iterative training of the color filling model, so that the accurate color filling model is obtained.
As shown in fig. 4, the difference from the embodiment shown in fig. 2 or fig. 3 is that, based on the target discriminator, using the filled image obtained by the generator color-filling the gray-scale image together with the color image, and taking identical discrimination results from the target discriminator as the training target, training to obtain the target generator corresponding to target generation parameters may include:
401: inputting the color image of the training sample into the target discriminator, and performing image color discrimination on the color image through the target discriminator to obtain a first discrimination result; the first discrimination result is true or false.
402: inputting the gray-scale image into the generator corresponding to candidate generation parameters to obtain a filled image.
403: inputting the filled image into the target discriminator, and performing image color discrimination on the filled image through the target discriminator to obtain a second discrimination result; the second discrimination result is true or false; the label corresponding to the filled image is the false-value label corresponding to the gray-scale image.
404: calculating a generation loss value based on the first discrimination result and the second discrimination result.
Calculating the generation loss value based on the first and second discrimination results may include: calculating the difference between the first discrimination result and the second discrimination result to obtain the generation loss value. The higher the similarity between the filled image and the color image, the better the filling quality; if the difference between the two discrimination results is small, the filled image is close to the color image, and under this constraint the generator obtains an accurate loss value.
The first and second discrimination results may be the probability values output by the discriminator.
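One concrete reading of this loss, computed from the two probability outputs (the absolute-difference form is an assumption; the text only specifies "the difference between the two results"):

```python
import torch

def generation_loss(first_result: torch.Tensor, second_result: torch.Tensor) -> torch.Tensor:
    """first_result: discriminator output on the color image;
    second_result: discriminator output on the filled image."""
    # a small gap between the two discrimination results means the filled
    # image is close to the original color image
    return (first_result - second_result).abs().mean()
```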
405: if the generation loss value is determined to meet the generation termination condition, determining the candidate generation parameters as the target generation parameters, so as to obtain the target generator corresponding to the target generation parameters.
In the embodiment of the present disclosure, the target discriminator is determined first and kept stable while the generator is trained. During training, the target discriminator first discriminates the color image to obtain the first discrimination result. With candidate generation parameters determined, the generator can operate normally: it color-fills the gray-scale image to obtain a filled image, which is input into the target discriminator for image color discrimination to obtain the second discrimination result. Loss calculation is then performed on the first and second discrimination results, and the resulting generation loss value is used to judge the generator's termination condition, improving the efficiency and precision of generator training.
In one possible design, after calculating the generation loss value based on the first discrimination result and the second discrimination result, the method further includes:
if the generation loss value does not meet the generation termination condition, updating the candidate generation parameters of the generator based on the generation loss value, and returning to the step of inputting the gray-scale image into the generator corresponding to the candidate generation parameters to obtain a filled image.
Alternatively, a gradient descent algorithm may be employed to update the candidate generation parameters of the generator with the generation loss value.
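As a sketch of this update, assuming the `gen`/`disc` modules and `generation_loss` from the earlier sketches and a plain SGD optimizer (the patent names gradient descent, not the hyperparameters):

```python
import torch

opt_g = torch.optim.SGD(gen.parameters(), lr=1e-3)   # gradient descent over the candidate params

loss_g = generation_loss(disc(color), disc(gen(gray)))
opt_g.zero_grad()
loss_g.backward()   # gradients of the generation loss w.r.t. the candidate generation parameters
opt_g.step()        # candidate generation parameters updated by gradient descent
```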
In the embodiment of the present disclosure, when the generation loss value does not satisfy the generation termination condition, the candidate generation parameters of the generator can be updated based on the generation loss value, and the step of inputting the gray-scale image into the generator corresponding to the candidate generation parameters to obtain a filled image is executed again. By checking the termination condition, iterative training of the generator is achieved, completing accurate training of the generator.
As shown in fig. 5, the difference from the embodiment shown in fig. 2 or fig. 3 is that, based on the target generator, taking the discriminator's discrimination result on the filled image generated by the target generator satisfying the discrimination condition as the training target, training to obtain the target discriminator corresponding to target discrimination parameters may include the following steps:
501: determining the target gray-scale image corresponding to the filled image whose second discrimination result is true.
502: inputting the target gray-scale image into the target generator to obtain a target filled image produced by the target generator color-filling the target gray-scale image.
503: inputting the target filled image into the discriminator corresponding to candidate discrimination parameters to obtain a third discrimination result corresponding to the target filled image; the third discrimination result is true or false.
The discrimination results are calculated in the same manner as described in the foregoing embodiments, which is not repeated here.
504: calculating a discrimination error value corresponding to the target gray-scale image based on the third discrimination result and the second discrimination result corresponding to the target gray-scale image.
505: if the discrimination error value satisfies the discrimination convergence condition, determining the candidate discrimination parameters as the target discrimination parameters to obtain the corresponding target discriminator.
In the embodiment of the present disclosure, when the discriminator is trained, the target gray-scale image corresponding to a filled image whose second discrimination result is true is determined first, i.e. a gray-scale image whose filled result the discriminator already judges as well filled, providing the gray-scale input for discriminator training. The target gray-scale image is input into the target generator to obtain the target filled image, i.e. the target generator's color filling result. Candidate discrimination parameters of the discriminator are then determined, the target filled image and the color image corresponding to the target gray-scale image are input into the discriminator with those candidate parameters, and image discrimination is performed to obtain the third discrimination result alongside the second. Since the second discrimination result corresponding to the target gray-scale image is true, the discrimination error can be calculated from the third and second discrimination results, and the discrimination convergence condition judged on that error value, so that the candidate discrimination parameters are determined as the target discrimination parameters and an accurate target discriminator is obtained.
In some embodiments, after calculating the discrimination error value corresponding to the target gray-scale image based on the third discrimination result and the second discrimination result corresponding to the target gray-scale image, the method further includes:
if the discrimination error value does not satisfy the discrimination convergence condition, updating the candidate discrimination parameters based on the discrimination error, and returning to the step of inputting the target filled image into the discriminator corresponding to the candidate discrimination parameters to obtain the third discrimination result corresponding to the target filled image.
A gradient descent algorithm may be employed to update the candidate discrimination parameters with the discrimination error.
In the embodiment of the present disclosure, when the discrimination error value does not satisfy the discrimination convergence condition, the candidate discrimination parameters can be updated based on the discrimination error, realizing iterative updating of the discriminator's parameters, and the discriminator is trained continuously until an accurate target discriminator is obtained.
In some embodiments, calculating the discrimination error value corresponding to the target gray-scale image based on the third discrimination result and the second discrimination result corresponding to the target gray-scale image includes:
acquiring the target color image corresponding to the target gray-scale image;
calculating an image loss value based on the target filled image and the target color image corresponding to the target gray-scale image;
calculating a discrimination result loss value based on the third discrimination result and the second discrimination result;
and performing weighted summation of the image loss value and the discrimination result loss value to obtain the discrimination error value corresponding to the target gray-scale image.
The target color image may be the color image in the training sample to which the target gray-scale image belongs.
Optionally, the image loss value may be an image distance between the target filled image and the target color image. The image distance may be a feature distance between the two images: feature extraction is performed on the target filled image and the target color image to obtain a target filling feature and a target color feature, and the feature distance between them is calculated. The feature distance between the target filling feature and the target color feature may be computed with a distance formula such as the Euclidean distance, Hamming distance, or Manhattan distance, which the present disclosure does not limit.
In the embodiment of the present disclosure, when calculating the discrimination error value of the target gray-scale image, the target color image corresponding to the target gray-scale image is acquired first and the image loss value calculated; the discrimination result loss value is also calculated from the third and second discrimination results, with the second discrimination result serving as the discrimination label, so that the result loss can be computed accurately. Weighted summation of the image loss value and the discrimination result loss value then gives the discrimination error value corresponding to the target gray-scale image, achieving accurate calculation of the discrimination error and improving its accuracy.
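A sketch of this weighted error, using the Euclidean feature distance for the image loss. The pre-extracted feature tensors, the BCE choice for the result loss, and the weights w1/w2 are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def discrimination_error(fill_feat, color_feat, third_result, second_result,
                         w1: float = 0.5, w2: float = 0.5) -> torch.Tensor:
    """fill_feat/color_feat: features extracted from the target filled/color images;
    third_result: D(target filled image); second_result: the true label (probability 1.0)."""
    image_loss = torch.norm(fill_feat - color_feat, p=2)               # Euclidean feature distance
    result_loss = F.binary_cross_entropy(third_result, second_result)  # second result as label
    return w1 * image_loss + w2 * result_loss                          # weighted summation
```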
As yet another embodiment, calculating the generation loss value based on the first discrimination result and the second discrimination result includes:
calculating a first loss value corresponding to the first discrimination result;
calculating a second loss value corresponding to the second discrimination result;
calculating the generation loss value based on the first loss value and the second loss value.
The first and second discrimination results are probability values, and the first loss value corresponding to the first discrimination result and the second loss value corresponding to the second discrimination result can be calculated with a binary cross-entropy function. Calculating the generation loss value based on the first and second loss values may specifically include weighting the first loss value and the second loss value to obtain the generation loss value. The weighting coefficients applied to the two loss values may have the same or opposite signs, as determined by the actual loss definition.
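This variant can be sketched as follows; the "true" BCE targets and the weighting coefficients a and b are assumptions, and as the text notes their signs depend on the actual loss definition:

```python
import torch
import torch.nn.functional as F

def generation_loss_bce(first_result: torch.Tensor, second_result: torch.Tensor,
                        a: float = 1.0, b: float = 1.0) -> torch.Tensor:
    real = torch.ones_like(first_result)
    first_loss = F.binary_cross_entropy(first_result, real)                           # loss on D(color)
    second_loss = F.binary_cross_entropy(second_result, torch.ones_like(second_result))  # D(filled) pushed toward "true"
    return a * first_loss + b * second_loss   # weighted combination of the two loss values
```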
In the embodiment of the present disclosure, the generation loss value can be calculated from the first loss value corresponding to the first discrimination result and the second loss value corresponding to the second discrimination result, so that it reflects both the generator's generation loss and the discriminator's discrimination loss; the generation loss value can therefore be used to train the generator accurately, improving the generator's training precision.
As yet another embodiment, constructing the generator and discriminator corresponding to the color filling model may include:
constructing the generator and the discriminator corresponding to the color filling model based on a conditional generative adversarial network algorithm.
In the embodiment of the present disclosure, the generator and the discriminator corresponding to the color filling model can be constructed based on the conditional generative adversarial network algorithm, giving the color filling model an accurate definition and improving its model precision.
As shown in fig. 6, a flowchart of an embodiment of a color filling method provided in an embodiment of the present disclosure. The method may be configured in a color filling apparatus, which may be located in an electronic device, and may include the following steps:
601: in response to an image filling request initiated for an image to be filled, acquiring the color filling model comprising the target generator and the target discriminator; wherein the color filling model is obtained by training with the color filling model training method of any of the above embodiments.
602: inputting the image to be filled into the target generator of the color filling model to obtain a target filled image produced by the target generator color-filling the image to be filled.
603: inputting the target filled image into the target discriminator of the color filling model to obtain the target discriminator's discrimination result on the target filled image.
The color filling model may be provided to the electronic device. It should be noted that words such as "first" and "second" in the present disclosure are used only to distinguish different elements; they do not limit the technical solution and carry no meaning of order or priority.
In practical applications, the method further includes outputting the target filled image and/or the discrimination result on the target filled image to the user.
In the embodiment of the present disclosure, in response to the image filling request initiated for the image to be filled, the color filling model comprising the target generator and the target discriminator is acquired; the image to be filled is color-filled by the target generator to obtain the target filled image, and the target filled image is discriminated by the target discriminator to obtain the discrimination result. The image to be filled can thus be filled quickly and accurately through the trained color filling model, improving color filling efficiency and accuracy.
As shown in fig. 7, a schematic structural diagram of an embodiment of a color filling model training apparatus according to an embodiment of the disclosure is provided, where the apparatus may be located in an electronic device. The color-filling model training apparatus 700 may include the following units:
sample determination unit 701: training samples corresponding to the color filling models are determined; the training samples include: the color image and the true value label corresponding to the color image, the gray image and the false value label corresponding to the gray image; the gray scale image is obtained by color image conversion.
Model building unit 702: and a generator and a discriminator for constructing a color filling model.
An authentication determination unit 703: a target discriminator for determining a correspondence of a current discrimination parameter of the discriminator.
First training unit 704: the target generator is used for training and obtaining the corresponding target generation parameters based on the target discriminator by taking the filling image obtained by the generator for filling the gray image and the color image with the same discrimination result of the target discriminator as the training target.
The second training unit 705: the target discriminator is used for training to obtain target discrimination parameters based on the target generator, wherein the discrimination result of the discriminator on the filling image generated by the target generator meets discrimination conditions as a training target;
The result determination unit 706: and determining that the target generator and the target discriminator are training results of the color filling model if the target generator and the target discriminator meet convergence conditions of the color filling model.
As an embodiment, the apparatus may further include:
a parameter updating unit, used for determining the target discrimination parameters obtained by training as the current discrimination parameters of the discriminator if the target generator and the target discriminator do not meet the convergence condition of the color filling model, and returning to the step of determining the target discriminator corresponding to the current discrimination parameters of the discriminator, until the color filling model meets the convergence condition.
In some embodiments, the first training unit 704 may include:
a first discrimination module, used for inputting the color image of the training sample into the target discriminator, and performing image color discrimination on the color image through the target discriminator to obtain a first discrimination result; the first discrimination result is true or false;
an image filling module, used for inputting the gray-scale image into the generator corresponding to candidate generation parameters to obtain a filled image;
a second discrimination module, used for inputting the filled image into the target discriminator, and performing image color discrimination on the filled image through the target discriminator to obtain a second discrimination result; the second discrimination result is true or false; the label corresponding to the filled image is the false-value label corresponding to the gray-scale image;
a first loss module, used for calculating a generation loss value based on the first discrimination result and the second discrimination result;
and a first determining module, used for determining the candidate generation parameters as the target generation parameters if the generation loss value meets the generation termination condition, so as to obtain the target generator corresponding to the target generation parameters.
In some embodiments, the apparatus may further include:
a second determining module, used for updating the candidate generation parameters of the generator based on the generation loss value if it is determined that the generation loss value does not meet the generation termination condition, and returning to the step of inputting the gray-scale image into the generator corresponding to the candidate generation parameters to obtain a filled image.
As an embodiment, the second training unit 705 may include:
a gray-scale determining module, used for determining the target gray-scale image corresponding to the filled image whose second discrimination result is true;
a first generating module, used for inputting the target gray-scale image into the target generator to obtain the target filled image produced by the target generator color-filling the target gray-scale image;
a filling discrimination module, used for inputting the target filled image into the discriminator corresponding to candidate discrimination parameters to obtain a third discrimination result corresponding to the target filled image; the third discrimination result is true or false;
a second loss module, used for calculating the discrimination error value corresponding to the target gray-scale image based on the third discrimination result and the second discrimination result corresponding to the target gray-scale image;
and a third determining module, used for determining the candidate discrimination parameters as the target discrimination parameters to obtain the corresponding target discriminator if the discrimination error value satisfies the discrimination convergence condition.
In some embodiments, the apparatus may further include:
a fourth determining module, used for updating the candidate discrimination parameters based on the discrimination error if it is determined that the discrimination error value does not satisfy the discrimination convergence condition, and returning to the step of inputting the target filled image into the discriminator corresponding to the candidate discrimination parameters to obtain the third discrimination result corresponding to the target filled image.
In some embodiments, the second loss module may include:
an image acquisition sub-module, used for acquiring the target color image corresponding to the target gray-scale image;
a first calculating sub-module, used for calculating the image loss value based on the target filled image and the target color image corresponding to the target gray-scale image;
a second calculating sub-module, used for calculating the discrimination result loss value based on the third discrimination result and the second discrimination result;
and a discrimination loss sub-module, used for performing weighted summation of the image loss value and the discrimination result loss value to obtain the discrimination error value corresponding to the target gray-scale image.
In some embodiments, the first loss module may include:
a third calculating sub-module, used for calculating the first loss value corresponding to the first discrimination result;
a fourth calculating sub-module, used for calculating the second loss value corresponding to the second discrimination result;
and a generation loss sub-module, used for calculating the generation loss value based on the first loss value and the second loss value.
In some embodiments, the model building unit 702 may include:
an algorithm construction sub-module, used for constructing the generator and the discriminator corresponding to the color filling model based on the conditional generative adversarial network algorithm.
As shown in fig. 8, a schematic structural diagram of an embodiment of a color filling apparatus provided in an embodiment of the present disclosure. The apparatus may be located in an electronic device configured with the color filling method provided by the present disclosure, and the color filling apparatus 800 may include:
Request responding unit 801: used for acquiring, in response to an image filling request initiated for an image to be filled, the color filling model comprising the target generator and the target discriminator; wherein the color filling model is obtained by training with the color filling model training method of any of the above embodiments.
Target generating unit 802: used for inputting the image to be filled into the target generator of the color filling model to obtain the target filled image produced by the target generator color-filling the image to be filled.
Target discriminating unit 803: used for inputting the target filled image into the target discriminator of the color filling model to obtain the target discriminator's discrimination result on the target filled image.
Note that the color filling model in this embodiment is not a color filling model for a specific user and does not reflect the personal information of any specific user. The color images in this embodiment are derived from a public data set.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved all comply with relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 9, the device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above, such as the color filling model training method or the color filling method. For example, in some embodiments, the color filling model training or color filling method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the color filling model training or color filling method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the color filling model training or color filling method by any other suitable means (e.g., by means of firmware).
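For orientation only, the sketch below outlines one alternating training round in the spirit of the method described above, assuming PyTorch; the L1 image loss, binary cross-entropy adversarial losses, and equal weighting are assumptions of this sketch rather than values fixed by the disclosure:

```python
import torch
import torch.nn.functional as F

def train_round(generator, discriminator, gray, color, g_opt, d_opt, weight=0.5):
    """One hypothetical alternating round: generator phase, then discriminator phase."""
    # Generator phase: hold the target discriminator fixed and push the
    # filling image toward the same "true" verdict as the real color image.
    filled = generator(gray)
    pred_fake = discriminator(gray, filled)
    g_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Discriminator phase: the error value is a weighted summation of an
    # image loss (filling image vs. target color image) and a
    # discrimination-result loss (real judged true, filled judged false).
    filled = generator(gray).detach()
    image_loss = F.l1_loss(filled, color)
    pred_real = discriminator(gray, color)
    pred_fake = discriminator(gray, filled)
    result_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
                   + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
    # Note: only result_loss carries gradient to the discriminator's parameters;
    # the detached image_loss term enters the weighted sum as a convergence signal.
    d_loss = weight * image_loss + (1.0 - weight) * result_loss
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    return g_loss.item(), d_loss.item()
```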
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (11)
1. A color filling model training method, comprising:
determining a training sample corresponding to the color filling model; wherein the training sample comprises: a color image and a true-value label corresponding to the color image, and a gray image and a false-value label corresponding to the gray image; the gray image is obtained by converting the color image;
constructing a generator and a discriminator corresponding to the color filling model;
determining a target discriminator corresponding to a current discrimination parameter of the discriminator;
based on the target discriminator, training a target generator corresponding to a target generation parameter by using a filling image, obtained by the generator color-filling the gray image, together with the color image, with the target discriminator producing the same discrimination result for both as the training target;
based on the target generator, training a target discriminator corresponding to a target discrimination parameter by taking the discriminator's discrimination result on the filling image generated by the target generator as the training target;
if the target generator and the target discriminator meet a convergence condition of the color filling model, determining the target generator and the target discriminator to be the training result of the color filling model;
wherein the training, based on the target generator and taking the discriminator's discrimination result on the filling image generated by the target generator as the training target, of the target discriminator corresponding to the target discrimination parameter comprises:
inputting the filling image into the target discriminator, and performing image color discrimination on the filling image through the target discriminator to obtain a second discrimination result, the second discrimination result being true or false; wherein the label corresponding to the filling image is the false-value label corresponding to the gray image;
determining a target gray image corresponding to a filling image whose second discrimination result is true;
inputting the target gray image into the target generator to obtain a target filling image obtained by the target generator color-filling the target gray image;
inputting the target filling image into a discriminator corresponding to a candidate discrimination parameter to obtain a third discrimination result corresponding to the target filling image, the third discrimination result being true or false;
calculating a discrimination error value corresponding to the target gray image based on the third discrimination result and the second discrimination result corresponding to the target gray image;
if the discrimination error value meets a discrimination convergence condition, determining the candidate discrimination parameter as the target discrimination parameter to obtain the target discriminator corresponding to the target discrimination parameter;
wherein the calculating the discrimination error value corresponding to the target gray image based on the third discrimination result and the second discrimination result corresponding to the target gray image comprises:
acquiring a target color image corresponding to the target gray image;
calculating an image loss value based on the target filling image corresponding to the target gray image and the target color image;
calculating a discrimination result loss value based on the third discrimination result and the second discrimination result;
performing weighted summation on the image loss value and the discrimination result loss value to obtain the discrimination error value corresponding to the target gray image.
2. The method according to claim 1, further comprising:
if the target generator and the target discriminator do not meet the convergence condition of the color filling model, determining the target discrimination parameter obtained by training as the current discrimination parameter corresponding to the discriminator, and returning to the step of determining the target discriminator corresponding to the current discrimination parameter of the discriminator, until the color filling model meets the convergence condition.
3. The method according to claim 1 or 2, wherein the training, based on the target discriminator and using the filling image obtained by the generator color-filling the gray image together with the color image, of the target generator corresponding to the target generation parameter, with the target discriminator producing the same discrimination result as the training target, comprises:
inputting the color image of the training sample into the target discriminator, and performing image color discrimination on the color image through the target discriminator to obtain a first discrimination result, the first discrimination result being true or false;
inputting the gray image into a generator corresponding to a candidate generation parameter to obtain the filling image;
calculating a generation loss value based on the first discrimination result and the second discrimination result;
if it is determined that the generation loss value meets a generation termination condition, determining the candidate generation parameter as the target generation parameter to obtain the target generator corresponding to the target generation parameter.
4. The method according to claim 3, wherein the calculating the generation loss value based on the first discrimination result and the second discrimination result comprises:
calculating a first loss value corresponding to the first discrimination result;
calculating a second loss value corresponding to the second discrimination result;
calculating the generation loss value based on the first loss value and the second loss value.
5. The method according to claim 4, wherein after the calculating the generation loss value based on the first loss value and the second loss value, the method further comprises:
if the generation loss value does not meet the generation termination condition, updating the candidate generation parameter of the generator based on the generation loss value, and returning to the step of inputting the gray image into the generator corresponding to the candidate generation parameter to obtain the filling image.
6. The method according to claim 5, wherein after the calculating the discrimination error value corresponding to the target gray image based on the third discrimination result and the second discrimination result corresponding to the target gray image, the method further comprises:
if the discrimination error value does not meet the discrimination convergence condition, updating the candidate discrimination parameter based on the discrimination error value, and returning to the step of inputting the target filling image into the discriminator corresponding to the candidate discrimination parameter to obtain the third discrimination result corresponding to the target filling image.
7. The method according to claim 1, wherein the constructing the generator and the discriminator corresponding to the color filling model comprises:
constructing the generator and the discriminator corresponding to the color filling model based on a conditional generative adversarial network algorithm.
8. A color filling method, comprising:
in response to an image filling request initiated for an image to be filled, acquiring a color filling model corresponding to a target generator and a target discriminator; wherein the color filling model is trained based on the color filling model training method of any one of claims 1-7;
inputting the image to be filled into the target generator of the color filling model to obtain a target filling image obtained by the target generator color-filling the image to be filled;
inputting the target filling image into the target discriminator of the color filling model to obtain a discrimination result of the target discriminator on the target filling image.
9. A color filling model training device, comprising:
a sample determining unit, configured to determine a training sample corresponding to the color filling model; wherein the training sample comprises: a color image and a true-value label corresponding to the color image, and a gray image and a false-value label corresponding to the gray image; the gray image is obtained by converting the color image;
a model construction unit, configured to construct a generator and a discriminator corresponding to the color filling model;
a discrimination determining unit, configured to determine a target discriminator corresponding to a current discrimination parameter of the discriminator;
a first training unit, configured to train, based on the target discriminator and using a filling image obtained by the generator color-filling the gray image together with the color image, a target generator corresponding to a target generation parameter, with the target discriminator producing the same discrimination result for both as the training target;
a second training unit, configured to train, based on the target generator and taking the discriminator's discrimination result on the filling image generated by the target generator as the training target, a target discriminator corresponding to a target discrimination parameter;
a result determining unit, configured to determine the target generator and the target discriminator to be the training result of the color filling model if the target generator and the target discriminator meet a convergence condition of the color filling model;
wherein the first training unit comprises:
a second discrimination module, configured to input the filling image into the target discriminator and perform image color discrimination on the filling image through the target discriminator to obtain a second discrimination result, the second discrimination result being true or false; wherein the label corresponding to the filling image is the false-value label corresponding to the gray image;
wherein the second training unit comprises:
a gray determining module, configured to determine a target gray image corresponding to a filling image whose second discrimination result is true;
a first generation module, configured to input the target gray image into the target generator to obtain a target filling image obtained by the target generator color-filling the target gray image;
a filling discrimination module, configured to input the target filling image into a discriminator corresponding to a candidate discrimination parameter to obtain a third discrimination result corresponding to the target filling image, the third discrimination result being true or false;
a second loss module, configured to calculate a discrimination error value corresponding to the target gray image based on the third discrimination result and the second discrimination result corresponding to the target gray image;
a third determining module, configured to determine the candidate discrimination parameter as the target discrimination parameter of the target discriminator if it is determined that the discrimination error value meets a discrimination convergence condition;
wherein the second loss module comprises:
an image acquisition sub-module, configured to acquire a target color image corresponding to the target gray image;
a first calculation sub-module, configured to calculate an image loss value based on the target filling image corresponding to the target gray image and the target color image;
a second calculation sub-module, configured to calculate a discrimination result loss value based on the third discrimination result and the second discrimination result;
a discrimination loss sub-module, configured to perform weighted summation on the image loss value and the discrimination result loss value to obtain the discrimination error value corresponding to the target gray image.
10. A color filling apparatus, comprising:
a request response unit, configured to acquire, in response to an image filling request initiated for an image to be filled, a color filling model corresponding to a target generator and a target discriminator; wherein the color filling model is trained based on the color filling model training method of any one of claims 1-7;
a target generation unit, configured to input the image to be filled into the target generator of the color filling model to obtain a target filling image obtained by the target generator color-filling the image to be filled;
a target discrimination unit, configured to input the target filling image into the target discriminator of the color filling model to obtain a discrimination result of the target discriminator on the target filling image.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7 or the method of claim 8.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202210546217.7A (CN115131453B) | 2022-05-17 | 2022-05-17 | Color filling model training, color filling method and device and electronic equipment |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN115131453A | 2022-09-30 |
| CN115131453B | 2023-08-04 |
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |