CN115660991A - Model training method, image exposure correction method, device, equipment and medium - Google Patents
- Publication number
- CN115660991A CN115660991A CN202211350088.0A CN202211350088A CN115660991A CN 115660991 A CN115660991 A CN 115660991A CN 202211350088 A CN202211350088 A CN 202211350088A CN 115660991 A CN115660991 A CN 115660991A
- Authority
- CN
- China
- Prior art keywords
- image
- sample image
- standard
- loss value
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The disclosure provides a model training method, an image exposure correction method, a device, equipment, and a medium, and relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing, and deep learning. The scheme includes the following steps: generating a corrected sample image of a sample image by using an image exposure correction model; determining a difference result in the color dimension between the corrected sample image and a standard image corresponding to the sample image, and determining a first loss value based on the difference result; determining, by using a discriminator model, a first discrimination result of the corrected sample image and a second discrimination result of the standard image, and determining a second loss value based on the first discrimination result and the second discrimination result; calculating an overall loss value based on the first loss value and the second loss value; and adjusting parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model. The method can, to a great extent, prevent images corrected by the image exposure correction model from producing large color deviations.
Description
Technical Field
The present disclosure relates to the technical field of artificial intelligence, and specifically to the technical fields of computer vision, image processing, deep learning, and the like.
Background
Images acquired by shooting may have exposure problems. In the related art, exposure correction can be performed on such images by a trained neural network model; however, existing neural network models tend to introduce a large color shift when correcting exposure, so that the corrected image cannot achieve the desired effect.
Disclosure of Invention
The disclosure provides a model training method, an image exposure correction method, a device, equipment, and a medium.
According to a first aspect of the present disclosure, there is provided a training method of an image exposure correction model, the method including:
generating a corrected sample image of the sample image by using the image exposure correction model, wherein the sample image comprises an abnormal exposure area;
determining a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension, and determining a first loss value based on the difference result;
determining, by using a discriminator model, a first discrimination result of the corrected sample image and a second discrimination result of the standard image, and determining a second loss value based on the first discrimination result and the second discrimination result;
calculating an overall loss value based on the first loss value and the second loss value;
and adjusting parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model.
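For illustration, the five steps above can be sketched as a single generator-side training step. This is a minimal sketch, not the disclosed implementation: `generator` and `discriminator` stand in for the image exposure correction model and the discriminator model, the mean absolute pixel difference stands in for the color-dimension difference result, and the combining weights `w1`/`w2` are assumptions (the disclosure only says the overall loss is calculated based on the first and second loss values).

```python
import numpy as np

def training_step(sample, standard, generator, discriminator, w1=1.0, w2=0.1):
    """One generator-side training step of the disclosed scheme (sketch)."""
    # Generate a corrected sample image with the image exposure correction model.
    corrected = generator(sample)
    # First loss value: difference result in the color dimension between the
    # corrected sample image and the standard image (L1 form assumed here).
    first_loss = float(np.abs(corrected - standard).mean())
    # Second loss value: based on the discriminator's results for the
    # corrected sample image and the standard image.
    second_loss = abs(discriminator(standard) - discriminator(corrected))
    # Overall loss value as a weighted sum (weights are assumptions); this
    # value would drive the parameter adjustment of the correction model.
    return w1 * first_loss + w2 * second_loss
```

With a generator that corrects the sample perfectly and any fixed discriminator, both terms vanish and the overall loss is near zero.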
In an embodiment of the present disclosure, the first loss value comprises a first sub-loss value; and determining the difference result in the color dimension between the corrected sample image and the standard image corresponding to the sample image, and determining the first loss value based on the difference result, comprises:
calculating pixel difference values between the corrected sample image and the standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference values.
In the embodiment of the present disclosure, calculating a pixel difference value of the corrected sample image and the standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference value includes:
calculating a first pixel average value of all pixels in the corrected sample image, and calculating a second pixel average value of all pixels in the standard image;
a first sub-loss value is determined based on a difference of the first pixel average value and the second pixel average value.
In the embodiment of the present disclosure, calculating a pixel difference value of the corrected sample image and the standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference value includes:
calculating a first local pixel average value of all pixels corresponding to the abnormal exposure area in the corrected sample image, and calculating a second local pixel average value of all pixels corresponding to the abnormal exposure area in the standard image;
a first sub-loss value is determined based on a difference of the first local pixel average value and the second local pixel average value.
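The local-average embodiment above can be sketched with a boolean mask marking the abnormal exposure area. A hedged sketch: the disclosure only says the first sub-loss value is determined based on the difference of the two local pixel averages, so the absolute difference used here is an assumption.

```python
import numpy as np

def local_mean_sub_loss(corrected, standard, abnormal_mask):
    """First sub-loss over the abnormally exposed region only (sketch)."""
    # First local pixel average: pixels of the corrected sample image that
    # fall inside the abnormal exposure area.
    first_local_avg = corrected[abnormal_mask].mean()
    # Second local pixel average: the same region in the standard image.
    second_local_avg = standard[abnormal_mask].mean()
    # The sub-loss is based on the difference of the two local averages.
    return abs(first_local_avg - second_local_avg)
```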
In an embodiment of the present disclosure, the first loss value comprises a second sub-loss value; and determining the difference result in the color dimension between the corrected sample image and the standard image corresponding to the sample image, and determining the first loss value based on the difference result, comprises:
and acquiring a first feature of the corrected sample image and a second feature of the standard image, and determining a second sub-loss value based on the difference value of the first feature and the second feature.
In an embodiment of the present disclosure, the first feature includes at least one of a content feature map and a global structure feature map of the corrected sample image, and the second feature includes at least one of a content feature map and a global structure feature map of the standard image.
In the embodiment of the present disclosure, determining, by using the discriminator model, the first discrimination result of the corrected sample image and the second discrimination result of the standard image respectively includes:
obtaining a plurality of sample sub-images of the corrected sample image at different scales, inputting each sample sub-image into the discriminator model, and obtaining the first discrimination result of the corrected sample image output by the discriminator model;
and obtaining a plurality of standard sub-images of the standard image at different scales, inputting each standard sub-image into the discriminator model, and obtaining the second discrimination result of the standard image output by the discriminator model.
In the embodiment of the present disclosure, the first discrimination result includes a first probability that the corrected sample image is judged to be a real, normally exposed image, and the second discrimination result includes a second probability that the standard image is judged to be a real, normally exposed image;
determining a second loss value based on the first discrimination result and the second discrimination result, including: a probability difference between the first probability and the second probability is calculated, and a second loss value is determined based on the probability difference.
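Under the same assumption of a simple absolute difference (the disclosure only states that the second loss value is determined based on the probability difference), this step can be sketched as:

```python
def second_loss_from_probabilities(first_probability, second_probability):
    """Second loss value from the discriminator's two outputs (sketch).

    first_probability: probability that the corrected sample image is judged
    a real, normally exposed image; second_probability: the same probability
    for the standard image. As training converges, the corrected image
    becomes hard to tell apart from a real exposure and the gap shrinks.
    """
    return abs(second_probability - first_probability)
```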
According to a second aspect of the present disclosure, there is provided an image exposure correction method, the method including:
inputting an image to be corrected into a target image exposure correction model, wherein the target image exposure correction model is obtained by training based on a training method provided by the first aspect of the disclosure;
and outputting the target image after exposure correction through the target image exposure correction model.
According to a third aspect of the present disclosure, a training apparatus for an image exposure correction model is provided, the training apparatus for the image exposure correction model includes a corrected image generation module, a first loss determination module, a second loss determination module, an overall loss determination module, and a model parameter adjustment module;
the correction image generation module is used for generating a correction sample image of the sample image by using the image exposure correction model, wherein the sample image comprises an abnormal exposure area;
the first loss determining module is used for determining a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension and determining a first loss value based on the difference result;
the second loss determining module is used for respectively determining, by using the discriminator model, a first discrimination result of the corrected sample image and a second discrimination result of the standard image, and determining a second loss value based on the first discrimination result and the second discrimination result;
the overall loss determining module is used for calculating an overall loss value based on the first loss value and the second loss value;
the model parameter adjusting module is used for adjusting parameters of the image exposure correction model based on the overall loss value so as to obtain the target image exposure correction model.
In an embodiment of the present disclosure, the first loss value comprises a first sub-loss value; the first loss determining module, when configured to determine the difference result in the color dimension between the corrected sample image and the standard image corresponding to the sample image, and determine the first loss value based on the difference result, is specifically configured to:
and calculating pixel difference values of the standard image corresponding to the corrected sample image and the sample image, and determining a first sub-loss value based on the pixel difference values.
In an embodiment of the present disclosure, the first loss determining module, when configured to calculate a pixel difference value of the corrected sample image and a standard image corresponding to the sample image, and determine the first sub-loss value based on the pixel difference value, is specifically configured to:
calculating a first pixel average value of all pixels in the corrected sample image, and calculating a second pixel average value of all pixels in the standard image;
a first sub-loss value is determined based on a difference of the first pixel average value and the second pixel average value.
In an embodiment of the present disclosure, the first loss determining module, when configured to calculate a pixel difference value of the corrected sample image and a standard image corresponding to the sample image, and determine the first sub-loss value based on the pixel difference value, is specifically configured to:
calculating a first local pixel average value of all pixels corresponding to the abnormal exposure area in the corrected sample image, and calculating a second local pixel average value of all pixels corresponding to the abnormal exposure area in the standard image;
a first sub-loss value is determined based on a difference of the first local pixel average value and the second local pixel average value.
In an embodiment of the present disclosure, the first loss value comprises a second sub-loss value; the first loss determining module, when configured to determine the difference result in the color dimension between the corrected sample image and the standard image corresponding to the sample image, and determine the first loss value based on the difference result, is specifically configured to:
and taking the first feature of the corrected sample image and the second feature of the standard image, and determining a second sub-loss value based on the difference value of the first feature and the second feature.
In an embodiment of the present disclosure, the first feature includes at least one of a content feature map and a global structure feature map of the corrected sample image, and the second feature includes at least one of a content feature map and a global structure feature map of the standard image.
In an embodiment of the present disclosure, the second loss determining module, when configured to determine the first discrimination result of the corrected sample image and the second discrimination result of the standard image by using the discriminator model, is specifically configured to:
obtaining a plurality of sample sub-images of the corrected sample image at different scales, inputting each sample sub-image into the discriminator model, and obtaining the first discrimination result of the corrected sample image output by the discriminator model;
and obtaining a plurality of standard sub-images of the standard image at different scales, inputting each standard sub-image into the discriminator model, and obtaining the second discrimination result of the standard image output by the discriminator model.
In the embodiment of the present disclosure, the first discrimination result includes a first probability that the corrected sample image is judged to be a real, normally exposed image, and the second discrimination result includes a second probability that the standard image is judged to be a real, normally exposed image;
the second loss determining module, when configured to determine the second loss value based on the first and second discrimination results, is specifically configured to: a probability difference between the first probability and the second probability is calculated, and a second loss value is determined based on the probability difference.
According to a fourth aspect of the present disclosure, there is provided an image exposure correction apparatus including an image input module and an image output module;
the image input module is used for inputting an image to be corrected into a target image exposure correction model, wherein the target image exposure correction model is obtained by training based on a training method provided by the first aspect of the disclosure;
and the image output module is used for outputting the target image after exposure correction through the target image exposure correction model.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of training an image exposure correction model provided by the first aspect of the disclosure or to perform the method of image exposure correction provided by the second aspect of the disclosure.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the training method of the image exposure correction model provided in the first aspect of the present disclosure or execute the image exposure correction method provided in the second aspect of the present disclosure.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of training an image exposure correction model provided by the first aspect of the present disclosure, or implements the method of image exposure correction provided by the second aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
The technical scheme provided by the disclosure has the following beneficial effects:
the training method of the image exposure correction model provided by the embodiment of the disclosure forms the image exposure correction model and a discriminator model into a generative confrontation network, and continuously confronts through the image exposure correction model and the discriminator model to continuously improve the effect of exposure correction of the image exposure correction model.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates a schematic structural diagram of an image exposure correction model provided by the present disclosure;
FIG. 2 is a flow chart illustrating a training method of an image exposure correction model provided by the present disclosure;
FIG. 3 is a flow chart illustrating another method for training an image exposure correction model provided by the present disclosure;
FIG. 4 is a schematic view illustrating an actual scene flow of a training method of an image exposure correction model provided by the present disclosure;
FIG. 5 is a flow chart illustrating an image exposure correction method provided by the present disclosure;
FIG. 6 is a schematic diagram illustrating a training apparatus for an image exposure correction model provided by the present disclosure;
FIG. 7 illustrates a schematic diagram of an image exposure correction apparatus provided by the present disclosure;
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be understood that in the embodiments of the present disclosure, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
The images acquired by shooting may have exposure problems, and in the related art exposure correction can be performed on the images by a trained neural network model; however, when an existing neural network model performs exposure correction on an image, it tends to introduce a large color shift, so that the corrected image cannot achieve the expected effect.
The training method of the image exposure correction model provided by the embodiment of the disclosure combines the image exposure correction model and a discriminator model into a generative adversarial network, and continuously improves the exposure correction effect of the image exposure correction model through ongoing adversarial training between the image exposure correction model and the discriminator model.
The execution subject of the method may be a terminal device, a computer, a server, or another device with data processing capabilities. The execution subject of the method is not limited in this disclosure.
Optionally, the terminal device may be a mobile phone, or may be a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or the like, and the specific type of the terminal device is not limited in the embodiment of the present disclosure.
In some embodiments, the server may be a single server, or may be a server cluster composed of a plurality of servers. In some embodiments, the server cluster may also be a distributed cluster. The present disclosure is also not limited to a specific implementation of the server.
The following describes an exemplary training method of the image exposure correction model.
The present disclosure provides a training method of an image exposure correction model, wherein the image exposure correction model may be a neural network model. In the embodiment of the present disclosure, the image exposure correction model may serve as a Generator model and may form a generative adversarial network (GAN) together with a Discriminator model; the image exposure correction model is trained through continuous adversarial training between the image exposure correction model and the discriminator model. It should be noted that the training method of the image exposure correction model provided in the embodiment of the present disclosure is not exactly the same as existing training methods based on generative adversarial networks; the specific flow of the training method will be described below.
Fig. 1 shows a schematic structural diagram of an image exposure correction model provided by the present disclosure, and as shown in fig. 1, the image exposure correction model includes a feature extraction layer, a feature editing layer, and a feature integration layer, where the feature extraction layer may be an Encoder (Encoder) in a generator model, and the feature integration layer may be a Decoder (Decoder) in the generator model.
Here, taking an image with an exposure problem as the image to be processed, and referring to fig. 1: the image to be processed is input into the feature extraction layer, and the feature extraction layer sequentially extracts feature maps of the image to be processed at different sizes; it can be understood that the sizes of the extracted feature maps decrease step by step. Specifically, the next feature map can be extracted from the image to be processed or from the previous feature map by a set of composite operations, where one set of composite operations includes convolution, an activation operation, and a normalization operation.
The feature extraction layer inputs the feature map extracted by the last composite operation into the feature editing layer; the feature editing layer edits the input feature map using a residual learning block (for example, brightness values can be modified) and then inputs the edited feature map into the feature integration layer.
The feature integration layer reconstructs the edited feature map into an image, and the reconstructed image is the exposure-corrected image. The feature integration layer sequentially converts the edited feature map into feature maps of different sizes, and the size of the feature map obtained at each step increases step by step until the last feature map is converted into an image. Specifically, one feature map can be converted into a larger feature map by a set of composite operations, where one set of composite operations includes deconvolution, an activation operation, and a normalization operation.
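The data flow through the three layers can be sketched in NumPy. This is only a shape-level sketch under stated stand-ins: 2x2 average pooling replaces the convolution + activation + normalization composite operation, a plain additive brightness shift replaces the residual learning block, and nearest-neighbour upsampling replaces deconvolution.

```python
import numpy as np

def encode(image, levels=3):
    """Feature extraction layer (sketch): each step halves the spatial size.

    2x2 average pooling stands in for the real composite operation of
    convolution + activation + normalization; `image` is an HxWxC array.
    """
    feats = [image]
    for _ in range(levels):
        x = feats[-1]
        h, w = x.shape[0] // 2, x.shape[1] // 2
        feats.append(x[:2 * h, :2 * w].reshape(h, 2, w, 2, -1).mean(axis=(1, 3)))
    return feats

def edit(feat, brightness_shift=0.1):
    """Feature editing layer (sketch): an additive brightness shift stands in
    for the residual learning block."""
    return feat + brightness_shift

def decode(feat, levels=3):
    """Feature integration layer (sketch): each step doubles the spatial size;
    nearest-neighbour upsampling stands in for the real composite operation
    of deconvolution + activation + normalization."""
    for _ in range(levels):
        feat = feat.repeat(2, axis=0).repeat(2, axis=1)
    return np.clip(feat, 0.0, 1.0)
```

For a 32x32x3 input with three levels, the feature maps shrink to 16, 8, then 4 pixels per side before the decoder restores the original size.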
Fig. 2 shows a flowchart of a training method of an image exposure correction model provided by the present disclosure, and as shown in fig. 2, the method mainly includes the following steps:
s210: a corrected sample image of the sample image is generated using the image exposure correction model.
The embodiment of the disclosure takes an image with an exposure problem as a sample image, and trains an image exposure correction model by using the sample image. It is understood that the sample image includes an abnormally exposed region, and the abnormally exposed region may include at least one of an over-exposed region and an under-exposed region.
In step S210, the sample image is input to the image exposure correction model, and the abnormal exposure region of the sample image is corrected by the image exposure correction model and then output, where the image output by the image exposure correction model is defined as a corrected sample image.
Specifically, the sample image is input into the feature extraction layer, and feature maps of the sample image at different sizes are sequentially extracted by performing multiple composite operations, where one set of composite operations in the feature extraction layer includes convolution, an activation operation, and a normalization operation. The feature extraction layer inputs the feature map extracted by the last composite operation into the feature editing layer; the feature editing layer edits the input feature map using a residual learning block and then inputs the edited feature map into the feature integration layer. The feature integration layer sequentially converts the edited feature map into feature maps of different sizes, the size of the feature map obtained at each step increasing step by step, until the last feature map is converted into the corrected sample image and output; one set of composite operations in the feature integration layer includes deconvolution, an activation operation, and a normalization operation.
After S210, S220 and S230 may be executed; S220 and S230 may be executed in either order.
S220: and determining a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determining a first loss value based on the difference result.
It should be noted here that the standard image is a normally exposed image, and the picture content in the standard image is identical to the picture content in the sample image; for example, if the picture content of the sample image is a frontal photograph of person A, the picture content of the standard image is also a frontal photograph of person A.
Here, the sample image and the standard image may be two images continuously photographed, in which the sample image has an exposure problem and the standard image is normally exposed; of course, the standard image may also be an image obtained by professional manual correction of the exposure problem of the sample image, and the generation manner of the sample image and the standard image is not limited in the embodiment of the present disclosure.
The image exposure correction model may perform exposure correction on the sample image by adjusting the colors of the sample image. It can be understood that the closer the color of each part of the corrected sample image is to the color of the corresponding part of the standard image, the closer the corrected sample image is to the standard image. In other words, one of the targets of the image exposure correction model is to minimize the difference in the color dimension between the corrected sample image and the standard image corresponding to the sample image. By determining the first loss value based on this difference result and using it as part of the loss in the training process of the image exposure correction model, the images output by the trained image exposure correction model can be made closer to real images.
The process of adjusting the color of the sample image is actually adjusting the pixel value of the corresponding pixel in the sample image, and the image exposure correction model performs exposure correction on the sample image by adjusting the pixel value of the sample image. The pixel difference value between the corrected sample image and the standard image can reflect the difference between the corrected sample image and the standard image corresponding to the sample image in the color dimension, and therefore, the difference result between the corrected sample image and the standard image corresponding to the sample image in the color dimension can include the pixel difference value between the corrected sample image and the standard image.
Optionally, the first loss value may comprise a first sub-loss value. The embodiment of the disclosure may calculate a pixel difference value between the corrected sample image and the standard image corresponding to the sample image, and determine the first sub-loss value based on the pixel difference value. It can be understood that the closer the corrected sample image is to the standard image, the closer the pixel values of corresponding pixels in the two images are, and thus the smaller the pixel difference value between them. In other words, one of the targets of the image exposure correction model is to make the pixel difference value as small as possible; the smaller the pixel difference value, the closer the corrected sample image generated by the image exposure correction model is to the standard image, and the better the image exposure correction model is trained. By taking the first sub-loss value determined from the pixel difference value between the corrected sample image and the standard image as part of the loss in the training process, the images output by the trained image exposure correction model can be made closer to real images in the pixel-value dimension.
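One embodiment described earlier determines this first sub-loss from whole-image pixel averages. A minimal sketch (the absolute difference is an assumption; the disclosure only says the sub-loss is based on the difference of the two averages):

```python
import numpy as np

def global_mean_sub_loss(corrected, standard):
    """First sub-loss from global pixel averages (sketch)."""
    # First pixel average: all pixels of the corrected sample image.
    first_avg = corrected.mean()
    # Second pixel average: all pixels of the standard image.
    second_avg = standard.mean()
    # The sub-loss is based on the difference of the two averages.
    return abs(first_avg - second_avg)
```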
The embodiment of the disclosure can extract features of the corrected sample image and the standard image through a neural network model; the difference between the corrected sample image and the standard image in the feature dimension (especially in high-level features) can also reflect their difference in the color dimension. Therefore, the difference result in the color dimension between the corrected sample image and the standard image corresponding to the sample image may include the feature difference between the two.
Optionally, the first penalty value may include a second sub penalty value. The embodiment of the disclosure can acquire a first feature of the corrected sample image and a second feature of the standard image, and determine a second sub-loss value based on a difference value of the first feature and the second feature.
The first feature and the second feature may each be a feature capable of representing high-level characteristics of the image; for example, the first feature includes at least one of a content feature map and a global structure feature map of the corrected sample image, and the second feature includes at least one of a content feature map and a global structure feature map of the standard image. Optionally, the first feature and the second feature may be extracted by an image classification model, such as VGG19 or ResNet.
It can be understood that the more similar the high-level features of the corrected sample image are to those of the standard image, the closer the corrected sample image is to the standard image. One of the targets of the image exposure correction model is to make the difference between the high-level features of the corrected sample image and those of the standard image as small as possible; by determining the second sub-loss value from the difference value of the first feature and the second feature and using it as part of the loss in the training process, the images output by the trained image exposure correction model can be made closer to real images in the feature dimension.
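The second sub-loss can be sketched as a distance between two feature maps. In practice the first and second features would come from a pretrained model such as VGG19; here plain arrays stand in for the feature maps, and the root-mean-square distance is an assumption (the disclosure only says the sub-loss is based on the difference value of the two features).

```python
import numpy as np

def feature_sub_loss(first_feature, second_feature):
    """Second sub-loss from the high-level feature difference (sketch)."""
    diff = first_feature - second_feature
    # Root-mean-square distance between the two feature maps.
    return float(np.sqrt((diff ** 2).mean()))
```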
S230: and respectively determining a first judgment result of the corrected sample image and a second judgment result of the standard image by using the discriminator model, and determining a second loss value based on the first judgment result and the second judgment result.
In the embodiment of the present disclosure, determining the first determination result of the corrected sample image and the second determination result of the standard image by using the discriminator model respectively includes: obtaining a plurality of sample sub-images with different scales of a corrected sample image, inputting each sample sub-image into a discriminator model, and obtaining a first discrimination result of the corrected sample image based on the discriminator; and acquiring a plurality of standard sub-images of the standard image with different scales, inputting each standard sub-image into the discriminator model, and acquiring a second discrimination result of the standard image based on the discriminator.
Converting the images into sub-images of different scales before inputting them into the discriminator can improve the accuracy of the discrimination result output by the discriminator, and thus the effect of the adversarial training. Here, sub-images of different scales may be acquired through a pyramid operation. Taking the corrected sample image as an example, the corrected sample image may be represented by y′, and a sample sub-image of the corrected sample image may be represented as:
y′_i = P(y′, scale = i)

wherein y′_i represents a sample sub-image, i represents the pyramid layer index, and the number of pyramid layers is the number of sample sub-images of the corrected sample image.
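The pyramid operation P is not spelled out in the disclosure; a minimal sketch, assuming P(y′, scale = i) means i successive 2× average-pooling downsamplings (other pyramid constructions, e.g. Gaussian pyramids, would also fit):

```python
import numpy as np

def pyramid_subimage(image, scale):
    """Downsample an H x W (x C) image `scale` times by 2x average pooling,
    standing in for one pyramid operation P(y', scale=i)."""
    out = image.astype(np.float64)
    for _ in range(scale):
        h, w = out.shape[0] // 2 * 2, out.shape[1] // 2 * 2
        out = out[:h, :w]
        # average each non-overlapping 2x2 block
        out = (out[0::2, 0::2] + out[1::2, 0::2]
               + out[0::2, 1::2] + out[1::2, 1::2]) / 4.0
    return out

# y' and its multi-scale sample sub-images y'_i for a 3-level pyramid
y_prime = np.random.rand(64, 64, 3)
sub_images = [pyramid_subimage(y_prime, i) for i in range(3)]
```

Each sub-image halves the side length of the previous scale, so the discriminator sees the corrected sample image at 64×64, 32×32, and 16×16 in this toy setup.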
In the embodiment of the present disclosure, the first determination result includes a first probability that the correction sample image is recognized as a true normal exposure image, and the second determination result includes a second probability that the standard image is recognized as a true normal exposure image.
When determining the second loss value based on the first and second discrimination results, the disclosed embodiments may calculate a probability difference between the first and second probabilities, and determine the second loss value based on the probability difference. It is understood that, as the effect of correcting the sample image is closer to the standard image, the probability that the corrected sample image is recognized as the true normal exposure image is closer to the probability that the standard image is recognized as the true normal exposure image, that is, the first probability is closer to the second probability. In other words, one of the targets of the image exposure correction model is to make the probability difference between the first probability and the second probability as small as possible, and the smaller the probability difference, the closer the effect of the corrected sample image generated by the image exposure correction model is to the standard image, and the better the training of the image exposure correction model is.
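The paragraph above can be sketched as follows; the squared-difference form of the second loss value and the averaging over scales are assumptions, since the disclosure only states that the loss is determined based on the probability difference:

```python
import numpy as np

def second_loss(first_probs, second_probs):
    """Second loss value from the probability difference between the first
    probability (corrected sample image judged as a real normally exposed
    image) and the second probability (standard image judged as real),
    averaged over the per-scale discriminator outputs.
    The squared-difference form is an assumption, not from the patent."""
    p1 = float(np.mean(first_probs))   # first discrimination result
    p2 = float(np.mean(second_probs))  # second discrimination result
    return (p2 - p1) ** 2

loss = second_loss([0.45, 0.5, 0.4], [0.9, 0.85, 0.95])  # ≈ 0.2025
```

As the corrected sample image approaches the standard image, the first probability approaches the second probability and the loss tends to zero, matching the training target described above.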
S240: an overall loss value is calculated based on the first sub-loss value and the second loss value.
In the embodiment of the present disclosure, the overall loss value may be calculated according to a preset configured operation rule based on the first sub-loss value and the second loss value. For example, a sum of the first sub-loss value and the second loss value may be calculated as the overall loss value; alternatively, a weighted average of the first sub-loss value and the second loss value may be calculated as the overall loss value.
S250: and adjusting parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model.
Fig. 3 shows a flow chart of another training method for an image exposure correction model provided in the present disclosure, and as shown in fig. 3, the method may mainly include the following steps:
S310: a corrected sample image of the sample image is generated using the image exposure correction model.
It should be noted that, for the specific step of S310, reference may be made to the corresponding contents in S210, and details are not described herein again. After S310, steps S320, S330, and S340 may be executed, and these three steps may be executed in any order.
S320: and calculating pixel difference values of the standard image corresponding to the corrected sample image and the sample image, and determining a first sub-loss value based on the pixel difference values.
When calculating the pixel difference value between the corrected sample image and the standard image corresponding to the sample image and determining the first sub-loss value based on the pixel difference value, the embodiment of the disclosure may calculate a first pixel average value over all pixels in the corrected sample image, calculate a second pixel average value over all pixels in the standard image, and determine the first sub-loss value based on the difference value between the first pixel average value and the second pixel average value.
In the embodiment of the disclosure, when the image exposure correction model adjusts the pixel values of the sample image, it may adjust the pixel values of pixels in both the abnormal exposure area and the normal exposure area of the sample image. The difference between the pixel average values over all pixels of the corrected sample image and of the standard image can therefore better reflect the overall effect difference between the two. Using the first sub-loss value determined from this pixel difference value as a loss in the training process of the image exposure correction model makes the effect of the corrected sample image output by the model closer to the standard image as a whole.
Alternatively, when calculating the pixel difference value between the corrected sample image and the standard image corresponding to the sample image and determining the first sub-loss value based on the pixel difference value, the embodiment of the present disclosure may calculate a first local pixel average value over all pixels corresponding to the abnormal exposure area in the corrected sample image, calculate a second local pixel average value over all pixels corresponding to the abnormal exposure area in the standard image, and determine the first sub-loss value based on the difference value between the first local pixel average value and the second local pixel average value.
In the embodiment of the disclosure, when the image exposure correction model adjusts the pixel values of the sample image, it mainly adjusts the pixel values of pixels in the abnormal exposure area of the sample image. The difference between the pixel average values over all pixels in the abnormal exposure areas of the corrected sample image and of the standard image can thus basically reflect the effect difference between the two, and using the first sub-loss value determined from this pixel difference value as a loss in the training process meets the basic requirements of training the image exposure correction model.
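The two variants above (averaging over all pixels, or only over the abnormal exposure area) can be sketched in one helper; the use of an absolute difference of the averages is an assumption, since the patent only speaks of a "difference value":

```python
import numpy as np

def pixel_sub_loss(corrected, standard, mask=None):
    """First sub-loss value as the absolute difference of pixel averages.
    mask=None averages over all pixels (the global variant); a boolean
    mask restricts the average to the abnormal exposure area (the local
    variant)."""
    if mask is None:
        m1, m2 = corrected.mean(), standard.mean()
    else:
        m1, m2 = corrected[mask].mean(), standard[mask].mean()
    return float(abs(m1 - m2))

# toy images: the top half is a hypothetical over-exposed region
corrected = np.full((4, 4), 0.5)
corrected[:2, :] = 0.8
standard = np.full((4, 4), 0.5)
region = np.zeros((4, 4), dtype=bool)
region[:2, :] = True

global_loss = pixel_sub_loss(corrected, standard)         # ≈ 0.15
local_loss = pixel_sub_loss(corrected, standard, region)  # ≈ 0.3
```

The local variant reacts more strongly to the residual error inside the abnormal exposure area, which is why the disclosure offers it as the variant focused on where the model mainly adjusts pixels.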
S330: and acquiring a first feature of the corrected sample image and a second feature of the standard image, and determining a second sub-loss value based on the difference value of the first feature and the second feature.
In the disclosed embodiment, the corrected sample image may be represented by y′, the standard image may be represented by y, and the second sub-loss value may be calculated by the following formula:

L_per = Σ_i c_i · |L_i(y′) − L_i(y)|

wherein L_per represents the second sub-loss value, L_i(y′) represents the first feature of the i-th layer, L_i(y) represents the second feature of the i-th layer, and c_i is a pre-configured coefficient. Here, the difference between the first feature and the second feature may be calculated for each layer, and the differences over all layers may then be summed to obtain the second sub-loss value. Optionally, the embodiment of the present disclosure selects the feature maps of the highest 5 layers of the corrected sample image as the first feature, and the feature maps of the highest 5 layers of the standard image as the second feature.
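A minimal sketch of the formula above, with stand-in arrays in place of real classifier feature maps and an assumed L1 norm for the per-layer difference:

```python
import numpy as np

def perceptual_sub_loss(feats_corrected, feats_standard, coeffs):
    """Second sub-loss L_per = sum_i c_i * |L_i(y') - L_i(y)|, where
    L_i(.) is the i-th layer feature map of a pretrained classifier
    (e.g. VGG19). The feature arrays here are stand-ins, and the mean
    absolute difference per layer is an assumption; the patent only
    says 'difference value'."""
    total = 0.0
    for f1, f2, c in zip(feats_corrected, feats_standard, coeffs):
        total += c * np.abs(f1 - f2).mean()
    return float(total)

# hypothetical 5 highest-layer feature maps for y' and y
feats_y_prime = [np.full((8, 8), 0.6) for _ in range(5)]
feats_y = [np.full((8, 8), 0.5) for _ in range(5)]
loss = perceptual_sub_loss(feats_y_prime, feats_y, [1.0] * 5)  # ≈ 0.5
```

In a real pipeline, `feats_y_prime` and `feats_y` would come from forwarding y′ and y through the image classification model and hooking the top 5 layers.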
S340: and respectively determining a first judgment result of the corrected sample image and a second judgment result of the standard image by using the discriminator model, and determining a second loss value based on the first judgment result and the second judgment result.
It should be noted that, for the specific step of S340, reference may be made to corresponding contents in S230, which is not described herein again.
S350: an overall loss value is calculated based on the first sub-loss value, the second sub-loss value, and the second loss value.
In the embodiment of the present disclosure, the overall loss value may be calculated according to a preset configured operation rule based on the first sub-loss value, the second sub-loss value, and the second loss value. For example, a sum of the first sub-loss value, the second sub-loss value, and the second loss value may be calculated as the overall loss value; alternatively, a weighted average of the first sub-loss value, the second sub-loss value, and the second loss value may be calculated as the overall loss value.
Optionally, the embodiment of the present disclosure may calculate the overall loss value based on the following formula:
L = L_GAN(y, y′) + λ·L_pixel(y, y′) + β·L_per(y, y′)

wherein L represents the overall loss value, L_GAN(y, y′) represents the second loss value, L_pixel(y, y′) represents the first sub-loss value, L_per(y, y′) represents the second sub-loss value, and λ and β are coefficients preset according to actual design requirements.
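The overall loss formula above reduces to a one-line combination; `lam` and `beta` stand for the λ and β coefficients the patent leaves to actual design requirements, and the defaults of 1.0 here are placeholders, not values from the disclosure:

```python
def overall_loss(l_gan, l_pixel, l_per, lam=1.0, beta=1.0):
    """Overall loss L = L_GAN + lambda * L_pixel + beta * L_per."""
    return l_gan + lam * l_pixel + beta * l_per

L = overall_loss(0.2, 0.15, 0.5, lam=0.5, beta=0.1)  # ≈ 0.325
```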
S360: and adjusting parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model.
Fig. 4 is a schematic flow diagram of a training method of an image exposure correction model according to the present disclosure in an actual scene. As shown in fig. 4, the picture contents of the sample image and the standard image are identical, namely a person's portrait, where the sample image has an exposure problem and the standard image is normally exposed.
And inputting the sample image into an image exposure correction model, and outputting a sample correction image after the sample image is subjected to exposure correction by the image exposure correction model.
After obtaining the corrected sample image, the pixel difference value between the corrected sample image and the standard image corresponding to the sample image may be calculated, and the first sub-loss value may be determined based on the pixel difference value.
After obtaining the sample correction image, a plurality of sample sub-images of different scales of the correction sample image and a plurality of standard sub-images of different scales of the standard image may be obtained, each sample sub-image is input to the discriminator model to obtain a first probability corresponding to the correction sample image, each standard sub-image is input to the discriminator model to obtain a second probability corresponding to the standard image, a probability difference between the first probability and the second probability is calculated, and a second loss value is determined based on the probability difference, wherein the first probability represents a probability that the correction sample image is considered as a true normal exposure image, and the second probability represents a probability that the standard image is considered as a true normal exposure image.
After the corrected sample image is obtained, the first feature of the corrected sample image and the second feature of the standard image can be obtained by using the image classification model, and the second sub-loss value can be determined based on the difference value between the first feature and the second feature.
After obtaining the first sub-loss value, the second sub-loss value, and the second loss value, an overall loss value may be calculated based on the first sub-loss value, the second sub-loss value, and the second loss value, and a parameter of the image exposure correction model may be adjusted based on the overall loss value.
After the parameters of the image exposure correction model are adjusted, the above process is repeated until the obtained overall loss value is within the preset loss value range, at which point the training process can be ended and the image exposure correction model at that moment is taken as the target image exposure correction model.
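The stopping rule described above (repeat until the overall loss value falls within the preset loss value range) can be sketched as follows, with a toy step function standing in for a real forward/backward update; treating the range as a single threshold and capping the iterations are both assumptions:

```python
def train_until_converged(step_fn, loss_threshold, max_iters=1000):
    """Repeat the forward/loss/update cycle until the overall loss value
    falls within the preset loss value range (here: at or below a
    threshold). step_fn performs one parameter update and returns the
    new overall loss value."""
    loss = float("inf")
    for _ in range(max_iters):
        loss = step_fn()
        if loss <= loss_threshold:
            break  # the training process can be ended
    return loss

# toy step: loss decays geometrically, standing in for one real update
state = {"loss": 1.0}
def toy_step():
    state["loss"] *= 0.5
    return state["loss"]

final = train_until_converged(toy_step, loss_threshold=0.01)
```

The model at the moment the loop exits would be taken as the target image exposure correction model.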
Fig. 5 shows a schematic flow chart of an image exposure correction method provided by the present disclosure, and as shown in fig. 5, the method mainly includes the following steps:
S510: and inputting the image to be corrected into the target image exposure correction model.
It should be noted that the target image exposure correction model used in step S510 is obtained by training based on the training method of the image exposure correction model provided in the foregoing embodiment.
S520: and outputting the target image after exposure correction through the target image exposure correction model.
Optionally, the image to be corrected is input into a feature extraction layer, which sequentially extracts feature maps of different sizes of the image to be corrected through multiple composite operations, wherein one set of composite operations in the feature extraction layer comprises a convolution, an activation operation, and a normalization operation. The feature extraction layer inputs the feature map extracted by the last composite operation into a feature editing layer; the feature editing layer edits the input feature map using residual learning blocks and then inputs the edited feature map into a feature integration layer. The feature integration layer sequentially converts the edited feature map into feature maps of different sizes, the size of the feature map obtained at each step gradually increasing, until the last feature map is converted into the target image and output, wherein one set of composite operations in the feature integration layer comprises a deconvolution, an activation operation, and a normalization operation.
Based on the same principle as the above-mentioned training method of the image exposure correction model, the embodiment of the present disclosure provides a training device of the image exposure correction model, and fig. 6 shows a schematic diagram of the training device of the image exposure correction model provided by the present disclosure. As shown in fig. 6, the training apparatus 600 for an image exposure correction model includes a corrected image generation module 610, a first loss determination module 620, a second loss determination module 630, an overall loss determination module 640, and a model parameter adjustment module 650.
The corrected image generating module 610 is configured to generate a corrected sample image of the sample image by using the image exposure correction model, where the sample image includes an abnormal exposure area;
the first loss determining module 620 is configured to determine a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determine a first loss value based on the difference result;
the second loss determining module 630 is configured to determine a first determination result of the corrected sample image and a second determination result of the standard image by using the discriminator model, and determine a second loss value based on the first determination result and the second determination result;
the overall loss determination module 640 is configured to calculate an overall loss value based on the first loss value and the second loss value;
the model parameter adjusting module 650 is configured to adjust parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model.
The training device for the image exposure correction model provided by the embodiment of the disclosure forms a generative adversarial network from the image exposure correction model and a discriminator model; through continuous adversarial training between the image exposure correction model and the discriminator model, the exposure correction effect of the image exposure correction model is continuously improved.
In the disclosed embodiment, the first loss value comprises a first sub-loss value; when determining a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determining a first loss value based on the difference result, the first loss determining module 620 is specifically configured to:
and calculating pixel difference values of the standard image corresponding to the corrected sample image and the sample image, and determining a first sub-loss value based on the pixel difference values.
In the embodiment of the present disclosure, when calculating a pixel difference value of the corrected sample image and the standard image corresponding to the sample image and determining the first sub-loss value based on the pixel difference value, the first loss determining module 620 is specifically configured to:
calculating a first pixel average value of all pixels in the corrected sample image, and calculating a second pixel average value of all pixels in the standard image;
a first sub-loss value is determined based on a difference of the first pixel average value and the second pixel average value.
In this embodiment of the disclosure, when the first loss determining module 620 is configured to calculate a pixel difference value of the standard image corresponding to the corrected sample image and the sample image, and determine the first sub-loss value based on the pixel difference value, specifically configured to:
calculating a first local pixel average value of all pixels corresponding to the abnormal exposure area in the corrected sample image, and calculating a second local pixel average value of all pixels corresponding to the abnormal exposure area in the standard image;
a first sub-loss value is determined based on a difference of the first local pixel average and the second local pixel average.
In the disclosed embodiment, the first loss value comprises a second sub-loss value; when determining a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determining a first loss value based on the difference result, the first loss determining module 620 is specifically configured to:
and taking the first feature of the corrected sample image and the second feature of the standard image, and determining a second sub-loss value based on the difference value of the first feature and the second feature.
In an embodiment of the present disclosure, the first feature includes at least one of a content feature map and a global structure feature map of the corrected sample image, and the second feature includes at least one of a content feature map and a global structure feature map of the standard image.
In the embodiment of the present disclosure, the second loss determining module 630, when configured to determine the first discrimination result of the corrected sample image and the second discrimination result of the standard image by using the discriminator model, is specifically configured to:
obtaining a plurality of sample sub-images with different scales of a corrected sample image, inputting each sample sub-image into a discriminator model, and obtaining a first discrimination result of the corrected sample image based on the discriminator;
and acquiring a plurality of standard sub-images of the standard image with different scales, inputting each standard sub-image into the discriminator model, and acquiring a second discrimination result of the standard image based on the discriminator.
In the embodiment of the present disclosure, the first determination result includes a first probability that the correction sample image is determined to be a real normal exposure image, and the second determination result includes a second probability that the standard image is determined to be a real normal exposure image;
the second loss determining module 630, when configured to determine the second loss value based on the first and second discrimination results, is specifically configured to: a probability difference between the first probability and the second probability is calculated, and a second loss value is determined based on the probability difference.
It is understood that the modules of the training apparatus for an image exposure correction model in the embodiment of the present disclosure have functions of implementing the corresponding steps of the training method for an image exposure correction model described above. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above. The modules can be software and/or hardware, and each module can be implemented independently or by integrating a plurality of modules. For the functional description of each module of the training apparatus for the image exposure correction model, reference may be made to the corresponding description of the training method for the image exposure correction model, which is not described herein again.
Based on the same principle as the image exposure correction method described above, the embodiment of the present disclosure provides an image exposure correction apparatus, and fig. 7 shows a schematic diagram of the image exposure correction apparatus provided by the present disclosure. As shown in fig. 7, the image exposure correction apparatus 700 includes an image input module 710 and an image output module 720.
The image input module 710 is configured to input an image to be corrected into a target image exposure correction model, where the target image exposure correction model is obtained by training with the training method provided in the first aspect of the present disclosure.
The image output module 720 is used for outputting the exposure-corrected target image through the target image exposure correction model.
It is to be understood that each of the modules of the image exposure correction apparatus in the embodiments of the present disclosure has a function of implementing the corresponding step of the image exposure correction method described above. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above. The modules can be software and/or hardware, and each module can be implemented independently or by integrating a plurality of modules. For the functional description of each module of the image exposure correction apparatus, reference may be made to the corresponding description of the training method of the image exposure correction model, which is not described herein again.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
In an exemplary embodiment, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments. The electronic device may be the computer or the server described above.
In an exemplary embodiment, the readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method according to the above embodiment.
In an exemplary embodiment, the computer program product comprises a computer program which, when being executed by a processor, carries out the method according to the above embodiments.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806 such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Claims (20)
1. A method of training an image exposure correction model, the method comprising:
generating a corrected sample image of the sample image by using an image exposure correction model, wherein the sample image comprises an abnormal exposure area;
determining a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension, and determining a first loss value based on the difference result;
determining, by using a discriminator model, a first discrimination result of the corrected sample image and a second discrimination result of the standard image respectively, and determining a second loss value based on the first discrimination result and the second discrimination result;
calculating an overall loss value based on the first loss value and the second loss value;
and adjusting parameters of the image exposure correction model based on the overall loss value to obtain a target image exposure correction model.
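As an illustrative, non-limiting sketch of the overall-loss step recited in claim 1: the weighting factors `w_color` and `w_adv` and the function name are assumptions introduced purely for illustration; the claim only states that the overall loss is calculated based on the first and second loss values.

```python
# Hypothetical sketch of the overall-loss calculation in claim 1.
# The weights w_color and w_adv are illustrative assumptions, not
# values recited by the claim.
def overall_loss(first_loss, second_loss, w_color=1.0, w_adv=0.1):
    """Combine the color-dimension loss and the discriminator loss."""
    return w_color * first_loss + w_adv * second_loss
```

The parameters of the image exposure correction model would then be adjusted to reduce this combined value, e.g. by gradient descent in a deep-learning framework.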
2. The method of claim 1, wherein the first loss value comprises a first sub-loss value; the determining a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension, and determining a first loss value based on the difference result, includes:
and calculating pixel difference values of the corrected sample image and a standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference values.
3. The method of claim 2, wherein the calculating pixel difference values of the corrected sample image and the standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference values, comprises:
calculating a first pixel average value of all pixels in the corrected sample image, and calculating a second pixel average value of all pixels in the standard image;
determining the first sub-loss value based on a difference of the first pixel average value and the second pixel average value.
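The computation of claim 3 reduces to comparing global mean intensities. A minimal sketch follows; representing an image as a flat list of pixel values, and using the absolute difference as the loss, are assumptions made purely for illustration (the claim leaves both the representation and the exact difference-to-loss mapping open).

```python
def first_sub_loss_global(corrected, standard):
    # Claim 3: compare the mean over all pixels of the corrected sample
    # image with the mean over all pixels of the standard image.
    mean_corrected = sum(corrected) / len(corrected)
    mean_standard = sum(standard) / len(standard)
    # Absolute difference is one plausible choice for turning the
    # difference of averages into a sub-loss value.
    return abs(mean_corrected - mean_standard)
```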
4. The method of claim 2, wherein the calculating pixel difference values of the corrected sample image and the standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference values, comprises:
calculating a first local pixel average value of all pixels in the corrected sample image corresponding to the abnormal exposure area, and calculating a second local pixel average value of all pixels in the standard image corresponding to the abnormal exposure area;
determining the first sub-loss value based on a difference of the first local pixel average and the second local pixel average.
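Claim 4 restricts the averaging to the abnormal exposure area. In the sketch below, the boolean list `mask` marking that area, the flat-list image representation, and the absolute difference are all hypothetical choices for illustration only.

```python
def first_sub_loss_local(corrected, standard, mask):
    # Claim 4: average only over pixels inside the abnormal exposure
    # area. `mask` is a hypothetical boolean list marking that area.
    c = [p for p, m in zip(corrected, mask) if m]
    s = [p for p, m in zip(standard, mask) if m]
    return abs(sum(c) / len(c) - sum(s) / len(s))
```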
5. The method of claim 1, wherein the first loss value comprises a second sub-loss value; the determining a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension, and determining a first loss value based on the difference result, includes:
acquiring a first feature of the corrected sample image and a second feature of the standard image, and determining the second sub-loss value based on a difference value of the first feature and the second feature.
6. The method of claim 5, wherein the first features comprise at least one of a content feature map and a global structural feature map of the corrected sample image, and the second features comprise at least one of a content feature map and a global structural feature map of the standard image.
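Claims 5 and 6 describe a feature-level comparison. A minimal sketch, with features represented as plain lists and the mean absolute difference used as the sub-loss, is given below; in practice the content or global-structure feature maps would be extracted by a fixed pretrained network, which is outside the scope of this illustration.

```python
def second_sub_loss(features_corrected, features_standard):
    # Claims 5-6: compare features (e.g. content or global-structure
    # feature maps) of the corrected sample image and the standard image.
    # Plain lists stand in for feature maps; mean absolute difference is
    # one plausible instantiation of "based on a difference value".
    diffs = [abs(a - b) for a, b in zip(features_corrected, features_standard)]
    return sum(diffs) / len(diffs)
```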
7. The method according to any one of claims 1 to 6, wherein the determining the first discrimination result of the corrected sample image and the second discrimination result of the standard image using a discriminator model, respectively, comprises:
obtaining a plurality of sample sub-images with different scales of the corrected sample image, inputting each sample sub-image into a discriminator model, and obtaining a first discrimination result of the corrected sample image based on the discriminator;
and acquiring a plurality of standard sub-images of the standard image with different scales, inputting each standard sub-image into the discriminator model, and acquiring a second discrimination result of the standard image based on the discriminator.
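The multi-scale decomposition of claim 7 can be sketched as follows; a 1-D signal with 2x average pooling stands in for 2-D image downsampling, and the function name and `num_scales` parameter are assumptions introduced for illustration. Each resulting sub-image would then be fed to the discriminator model.

```python
def multi_scale_sub_images(signal, num_scales=3):
    # Claim 7: derive sub-images of the same image at several scales.
    # Here a 1-D list and 2x average pooling model the downsampling.
    scales = [list(signal)]
    for _ in range(num_scales - 1):
        prev = scales[-1]
        scales.append([(prev[i] + prev[i + 1]) / 2
                       for i in range(0, len(prev) - 1, 2)])
    return scales
```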
8. The method according to any one of claims 1 to 6, wherein the first discrimination result includes a first probability that the corrected sample image is recognized as a true normal exposure image, and the second discrimination result includes a second probability that the standard image is recognized as a true normal exposure image;
the determining a second loss value based on the first discrimination result and the second discrimination result includes: calculating a probability difference between the first probability and the second probability, and determining the second loss value based on the probability difference.
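A minimal sketch of the probability-based second loss in claim 8: taking the absolute difference between the two probabilities is one plausible instantiation (the claim only requires that the loss be determined based on the probability difference), and the function and parameter names are illustrative assumptions.

```python
def second_loss_from_probabilities(p_corrected_real, p_standard_real):
    # Claim 8: the second loss derives from the gap between the
    # probability that the corrected sample image is judged a real
    # normally exposed image and that probability for the standard image.
    return abs(p_standard_real - p_corrected_real)
```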
9. An image exposure correction method, the method comprising:
inputting an image to be corrected into a target image exposure correction model, wherein the target image exposure correction model is obtained by training based on the training method of any one of claims 1 to 8;
and outputting the target image after exposure correction through the target image exposure correction model.
10. An apparatus for training an image exposure correction model, the apparatus comprising:
the correction image generation module is used for generating a corrected sample image of the sample image by using an image exposure correction model, wherein the sample image comprises an abnormal exposure area;
a first loss determining module, configured to determine a difference result of the corrected sample image and a standard image corresponding to the sample image in a color dimension, and determine a first loss value based on the difference result;
a second loss determining module, configured to determine, by using a discriminator model, a first discrimination result of the corrected sample image and a second discrimination result of the standard image, and determine a second loss value based on the first discrimination result and the second discrimination result;
an overall loss determination module to calculate an overall loss value based on the first loss value and the second loss value;
and the model parameter adjusting module is used for adjusting the parameters of the image exposure correction model based on the overall loss value so as to obtain a target image exposure correction model.
11. The apparatus of claim 10, wherein the first loss value comprises a first sub-loss value; the first loss determining module, when configured to determine a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determine a first loss value based on the difference result, is specifically configured to:
and calculating pixel difference values of the corrected sample image and a standard image corresponding to the sample image, and determining the first sub-loss value based on the pixel difference values.
12. The apparatus according to claim 11, wherein the first loss determining module, when configured to calculate pixel difference values of the corrected sample image and a standard image corresponding to the sample image, and determine a first sub-loss value based on the pixel difference values, is specifically configured to:
calculating a first pixel average value of all pixels in the corrected sample image, and calculating a second pixel average value of all pixels in the standard image;
determining a first sub-loss value based on a difference of the first pixel average value and the second pixel average value.
13. The apparatus according to claim 11, wherein the first loss determining module, when configured to calculate pixel difference values of the corrected sample image and a standard image corresponding to the sample image, and determine a first sub-loss value based on the pixel difference values, is specifically configured to:
calculating a first local pixel average value of all pixels in the corrected sample image corresponding to the abnormal exposure area, and calculating a second local pixel average value of all pixels in the standard image corresponding to the abnormal exposure area;
determining a first sub-loss value based on a difference of the first local pixel average and the second local pixel average.
14. The apparatus of claim 10, wherein the first loss value comprises a second sub-loss value; the first loss determining module, when configured to determine a difference result of the corrected sample image and the standard image corresponding to the sample image in the color dimension, and determine a first loss value based on the difference result, is specifically configured to:
and acquiring a first feature of the corrected sample image and a second feature of the standard image, and determining the second sub-loss value based on a difference value of the first feature and the second feature.
15. The apparatus of claim 14, wherein the first feature comprises at least one of a content feature map and a global structural feature map of the corrected sample image, and the second feature comprises at least one of a content feature map and a global structural feature map of the standard image.
16. The apparatus according to any one of claims 10 to 15, wherein the second loss determining module, when configured to determine the first discrimination result of the corrected sample image and the second discrimination result of the standard image respectively by using a discriminator model, is specifically configured to:
obtaining a plurality of sample sub-images with different scales of the corrected sample image, inputting each sample sub-image into a discriminator model, and obtaining a first discrimination result of the corrected sample image based on the discriminator;
and acquiring a plurality of standard sub-images of the standard image with different scales, inputting each standard sub-image into the discriminator model, and acquiring a second discrimination result of the standard image based on the discriminator.
17. An image exposure correction apparatus, the apparatus comprising:
an image input module, configured to input an image to be corrected into a target image exposure correction model, where the target image exposure correction model is obtained by training based on the training method according to any one of claims 1 to 8;
and the image output module is used for outputting the target image after exposure correction through the target image exposure correction model.
18. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8 or to perform the method of claim 9.
19. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8 or perform the method of claim 9.
20. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-8, or implements the method of claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211350088.0A CN115660991A (en) | 2022-10-31 | 2022-10-31 | Model training method, image exposure correction method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115660991A true CN115660991A (en) | 2023-01-31 |
Family
ID=84995201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211350088.0A Pending CN115660991A (en) | 2022-10-31 | 2022-10-31 | Model training method, image exposure correction method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115660991A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116503686A (en) * | 2023-03-28 | 2023-07-28 | 北京百度网讯科技有限公司 | Training method of image correction model, image correction method, device and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||