CN116721038A - Color correction method, electronic device, and storage medium - Google Patents
- Publication number
- CN116721038A (application number CN202310983691.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- color correction
- model
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The application discloses a color correction method, an electronic device, and a storage medium, relating to the technical field of image processing and used to perform color correction on images so as to improve the image processing effect. The method comprises the following steps: acquiring an image to be processed; acquiring a color correction coefficient of the image to be processed, wherein the color correction coefficient is used for correcting the colors of the pixel points included in the image to be processed on different color channels; and displaying a first image on a display screen of the electronic device based on the color correction coefficient.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a color correction method, an electronic device, and a storage medium.
Background
In the field of image processing technology, high-quality images are an important prerequisite for image processing work. However, due to the influence of environmental factors, camera parameters, and the like, an acquired image often suffers from problems such as image distortion, image blurring, and image defects. As a result, the quality of the acquired image is low, which increases the difficulty of subsequent image processing work.
Currently, when a low-quality image is acquired, one or more techniques such as image enhancement, image deblurring, and image restoration are generally used to process the acquired image so as to improve its quality.
However, relying only on techniques such as image enhancement, image deblurring, and image restoration yields a poor image processing effect.
Disclosure of Invention
The application provides a color correction method, an electronic device, and a storage medium, which are used to perform color correction on an image so as to improve the image processing effect.
In order to achieve the above purpose, the application adopts the following technical solutions:
in a first aspect, a color correction method is provided and applied to an electronic device.
The method may include: acquiring an image to be processed; acquiring a color correction coefficient of the image to be processed; based on the color correction coefficient, a first image is displayed on a display screen of the electronic device.
The color correction coefficient is used for correcting the colors of the pixel points included in the image to be processed on different color channels. In this way, colors on different color channels can be corrected from the dimension of the pixel point, so that color correction of the image is realized and the image processing effect is improved.
The first image is an image obtained by performing color correction on the image to be processed based on the color correction coefficient. Therefore, by displaying the color-corrected image on the display screen of the electronic device, an image with higher color quality and closer to the real colors can be shown to the user, which improves the display effect of the image.
In a possible implementation manner of the first aspect, the image to be processed may be a second image, or the image to be processed may also be an image obtained by performing image processing on the second image. That is, the color correction method provided by the application can perform color correction not only on the original second image, but also on the image processed based on the second image. Therefore, the types of the images to be processed are enriched, and the flexibility of color correction is improved.
The second image is an image to be displayed by the first application on the display screen. For example, the first application may be a camera application, a gallery application, a video application, a game application, or a clipping application. Accordingly, the second image may be a preview image or a shot image in the camera application, an image stored in the gallery application, a video image to be played in the video application, a game image in the game application, or an image to be clipped uploaded by the user in the clipping application.
Correspondingly, in the case that the image to be processed is an image obtained by performing image processing on the second image, acquiring the image to be processed includes: receiving an operation of a user triggering the first application to display an image; in response to the operation, acquiring the second image to be displayed; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the first application to display the second image by performing the triggering operation in the first application; the second image to be displayed can then be obtained directly, and the image to be processed is obtained through image processing. In this way, the second image to be displayed, and in turn the image to be processed, can be acquired quickly and efficiently based on the first application.
And displaying a first image on a display screen of the electronic device, comprising: the first image is displayed in an interface of the first application. At this time, the first image displayed in the interface of the first application is an image obtained after performing color correction on the image to be processed.
Taking different types of first applications as examples, and taking the image to be processed as an image obtained by performing image processing on the second image, the process of acquiring the image to be processed is described below by way of example.
In another possible implementation manner of the first aspect, the first application is a camera application, and the second image is a preview image acquired by the electronic device in response to an operation of opening the camera application by a user, or an image shot by the electronic device in response to a shooting operation of the user in the camera application.
In the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the image to be processed includes: receiving an operation of starting the camera application by a user; responding to the operation of starting the camera application, and acquiring the second image acquired by the electronic equipment; performing image processing on the second image to obtain the image to be processed; or, receiving shooting operation of a user in the camera application; responding to the shooting operation, and acquiring the second image shot by the electronic equipment; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the camera application to display the preview image or the shot image by performing a triggering operation (such as a starting operation of the camera application or a shooting operation of the camera application) in the camera application, and at this time, the preview image or the shot image can be directly obtained, and further, the preview image or the shot image after the image processing can be obtained through the image processing.
And displaying a first image on a display screen of the electronic device, comprising: displaying the first image in a preview interface of the camera application; or, displaying the first image in an image viewing interface of the camera application. At this time, the preview image or the shot image displayed in the interface of the camera application is the preview image or the shot image obtained after the color correction.
In another possible implementation manner of the first aspect, the first application is a gallery application, and the second image is an image stored in the gallery application.
In the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the image to be processed includes: receiving an operation of starting the gallery application by a user; responding to the operation of starting the gallery application, and acquiring the second image from a local gallery; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the gallery application to display the image stored in the gallery application by implementing a triggering operation (such as an opening operation of the gallery application) in the gallery application, and at this time, the image stored in the gallery application can be directly obtained, and then the image stored in the gallery application after the image processing can be obtained through the image processing.
And displaying a first image on a display screen of the electronic device, comprising: displaying the thumbnail of the first image in an image list interface of the gallery application. At this time, the image displayed in the interface of the gallery application is the stored image obtained after the color correction.
In another possible implementation manner of the first aspect, the first application is a video application, and the second image is a video image to be played in the video application.
In the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the image to be processed includes: receiving an operation of playing the video in the video application by a user; responding to the operation of playing the video, and acquiring the second image to be played from a server; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the video application to display the video image to be played by implementing a triggering operation (such as a playing operation of the video application) in the video application, and at this time, the video image to be played can be directly obtained, and then the video image to be played after the image processing can be obtained through the image processing.
And displaying a first image on a display screen of the electronic device, comprising: the first image is displayed in a playback interface of the video application. At this time, the video image displayed in the interface of the video application is the video image obtained after the color correction.
In another possible implementation manner of the first aspect, the first application is a game application, and the second image is a game image.
In the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the image to be processed includes: receiving an operation of starting the game application by a user; acquiring the second image to be displayed from a server in response to the operation of starting the game application; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the game application to display the game image by performing a trigger operation (e.g., a start operation of the game application) in the game application, and the game image can be directly acquired at this time, and further, the image-processed game image can be acquired through image processing.
And displaying a first image on a display screen of the electronic device, comprising: the first image is displayed in a game interface of the game application. At this time, the game image displayed in the interface of the game application is the game image obtained after the color correction.
In another possible implementation manner of the first aspect, the first application is a clipping application, and the second image is an image to be clipped uploaded by a user in the clipping application.
In the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the image to be processed includes: receiving an operation of uploading an image in a clipping application by a user; acquiring an uploaded second image in response to the uploading of the image; and performing image processing on the second image to obtain the image to be processed. That is, the user can trigger the editing application to display the uploaded image to be edited by performing a triggering operation (an image uploading operation) in the editing application, and at this time, the uploaded image to be edited can be directly obtained, and then, the image-processed image to be edited can be obtained through image processing.
And displaying a first image on a display screen of the electronic device, comprising: the first image is displayed in a clip interface of the clipping application. At this time, the image displayed in the interface of the clipping application is the image to be clipped obtained after the color correction.
In summary, a plurality of types of first applications and a process of acquiring an image to be processed based on the corresponding types of applications are provided. Therefore, the second image to be displayed can be obtained quickly and efficiently, and the image after image processing can be obtained through image processing.
In another possible implementation manner of the first aspect, the color correction method provided by the present application may be performed using a first color correction model. Wherein:
acquiring a color correction coefficient of the image to be processed, including: inputting the image to be processed into a first color correction model, and predicting the color correction coefficient of the image to be processed through a predictor model of the first color correction model. In this way, prediction of the color correction coefficient of the image to be processed is achieved by setting a predictor model for predicting the color correction coefficient in the first color correction model.
Displaying a first image on a display screen of the electronic device based on the color correction coefficient, comprising: performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image; the first image is displayed on a display screen of the electronic device. In this way, color correction of the image to be processed is achieved by setting a correction sub-model for color correction in the first color correction model.
In this possible implementation, a scheme for performing color correction based on a first color correction model is provided. Through the first color correction model, correction of colors on different color channels can be achieved from the dimension of the pixel points, so that color correction for images is achieved, and the effect of image processing is improved. Furthermore, the image obtained after the color correction is displayed on the display screen of the electronic equipment can display the image with higher color quality and closer to the real color effect for the user, and the display effect of the image is improved.
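The first-color-correction-model scheme above can be illustrated with a minimal PyTorch sketch. The patent does not specify layer sizes or the exact correction rule; the channel widths, the single feature-extraction block, and the element-wise multiplicative correction below are all assumptions for illustration only.

```python
# Hypothetical sketch of the "first color correction model": a predictor
# sub-model estimates per-pixel, per-channel correction coefficients, and a
# correction sub-model applies them to the image to be processed.
import torch
import torch.nn as nn


class PredictorSubModel(nn.Module):
    """Predicts one correction coefficient per pixel per color channel."""

    def __init__(self, channels: int = 3, features: int = 16):
        super().__init__()
        # Feature extraction layer followed by a convolution layer, as
        # described in the patent; depths and widths here are illustrative.
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.BatchNorm2d(features),
            nn.ReLU(inplace=True),
        )
        self.conv = nn.Conv2d(features, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.feature_extraction(x))


class FirstColorCorrectionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.predictor = PredictorSubModel()

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        coeff = self.predictor(image)  # (N, 3, H, W) correction coefficients
        # Correction sub-model: assumed element-wise multiplicative correction.
        return image * coeff


model = FirstColorCorrectionModel()
out = model(torch.rand(1, 3, 8, 8))   # first image, same shape as the input
```

Because the coefficients have the same spatial and channel layout as the input, the correction is applied independently per pixel and per color channel, matching the per-pixel, per-channel description above.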
The above-described scheme of performing color correction based on the first color correction model is exemplarily described below based on two types included in the image to be processed.
For example, in the case where the image to be processed is the second image, acquiring the color correction coefficient of the image to be processed includes: and inputting the second image into a first color correction model, and predicting the color correction coefficient of the second image through a predictor model of the first color correction model. Displaying a first image on a display screen of the electronic device based on the color correction coefficient, comprising: performing color correction on the second image based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image; the first image is displayed on a display screen of the electronic device.
For another example, in the case that the image to be processed is an image obtained by performing image processing on the second image, the obtaining the color correction coefficient of the image to be processed includes: and inputting the image after image processing into a first color correction model, and predicting the color correction coefficient of the image after image processing through a predictor model of the first color correction model. Displaying a first image on a display screen of the electronic device based on the color correction coefficient, comprising: performing color correction on the image processed by the image based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image; the first image is displayed on a display screen of the electronic device.
In another possible implementation manner of the first aspect, predicting, by a predictor model of the first color correction model, the color correction coefficient of the image to be processed includes: extracting features of the image to be processed through the feature extraction layer of the predictor model to obtain color features of the image to be processed on different color channels; and performing convolution processing on the color features of the image to be processed on different color channels through the convolution layer of the predictor model to obtain color correction coefficients of the image to be processed on different color channels.
In this possible implementation manner, the feature extraction layer and the convolution layer are set in the predictor model, the feature extraction layer is used for extracting color features of the image to be processed on different color channels, and the convolution layer is used for carrying out convolution processing on the extracted color features so as to obtain the color correction coefficient. Thus, the predictor model formed by the feature extraction layer and the convolution layer can be used for predicting and obtaining the color correction coefficients of the image to be processed on different color channels.
In another possible implementation manner of the first aspect, in a case that the image to be processed is an image obtained by performing image processing on the second image, the color correction method provided by the present application may be performed by using a second color correction model. Wherein:
acquiring an image to be processed includes: inputting the second image into a second color correction model, and performing image processing on the second image through a processing sub-model of the second color correction model to obtain the image to be processed. In this way, by setting the processing sub-model for image processing in the second color correction model, image processing of the second image is realized.
Acquiring a color correction coefficient of the image to be processed, including: and predicting the color correction coefficient of the second image through the predictor model of the second color correction model to serve as the color correction coefficient of the image to be processed. In this way, prediction of the color correction coefficients is achieved by setting a predictor model for predicting the color correction coefficients in the second color correction model.
Displaying a first image on a display screen of the electronic device based on the color correction coefficient, comprising: performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the second color correction model to obtain the first image; the first image is displayed on a display screen of the electronic device. In this way, by setting the correction sub-model for color correction in the second color correction model, color correction of the image after image processing is achieved.
In this possible implementation, a scheme for performing color correction based on the second color correction model is provided. The model with the color correction function and the image processing function can process images to be processed, and can effectively improve the color quality of the images, so that the image processing effect is improved. Furthermore, the image obtained after the color correction is displayed on the display screen of the electronic equipment can display the image with higher color quality and closer to the real color effect for the user, and the display effect of the image is improved.
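The second-color-correction-model scheme can likewise be sketched in PyTorch. Note that the predictor reads the original second image, while the correction is applied to the processed result. The resolution-preserving processing stub, the layer widths, and the multiplicative correction below are all assumptions, not details from the patent.

```python
# Hypothetical sketch of the "second color correction model": a processing
# sub-model performs image processing on the second image, a predictor
# sub-model predicts correction coefficients from the original second image,
# and a correction sub-model applies them to the processed image.
import torch
import torch.nn as nn


class SecondColorCorrectionModel(nn.Module):
    def __init__(self, channels: int = 3, features: int = 8):
        super().__init__()
        # Processing sub-model: a resolution-preserving stand-in for image
        # processing such as denoising or enhancement (illustrative only).
        self.processing = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )
        # Predictor sub-model: per-pixel, per-channel coefficients.
        self.predictor = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )

    def forward(self, second_image: torch.Tensor) -> torch.Tensor:
        to_be_processed = self.processing(second_image)  # image to be processed
        coeff = self.predictor(second_image)             # from the original image
        return to_be_processed * coeff                   # correction sub-model


first_image = SecondColorCorrectionModel()(torch.rand(1, 3, 8, 8))
```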
In another possible implementation manner of the first aspect, predicting the color correction coefficient of the second image by the predictor model of the second color correction model includes: extracting features of the second image through the feature extraction layer of the predictor model to obtain color features of the second image on different color channels; and performing convolution processing on the color features of the second image on different color channels through the convolution layer of the predictor model to obtain color correction coefficients of the second image on different color channels.
In this possible implementation manner, the feature extraction layer and the convolution layer are set in the predictor model, the feature extraction layer is used to extract the color features of the second image on different color channels, and the convolution layer is used to perform convolution processing on the extracted color features to obtain the color correction coefficient. In this way, a predictor model formed by the feature extraction layer and the convolution layer is provided, and color correction coefficients of the second image on different color channels can be predicted.
In another possible implementation manner of the first aspect, the feature extraction layer includes at least two layers of object model structures, and the object model structures include a convolution layer, a batch normalization layer and an activation function layer.
In this possible implementation, a feature extraction layer is provided that is based on a structure of at least two layers of object models. The target model structure comprises a convolution layer, a batch standardization layer and an activation function layer. Therefore, the constructed feature extraction layer has good feature extraction capability, and can extract more abundant and effective color features, so that the color correction coefficient of the image to be processed is predicted based on the extracted color features, and the prediction accuracy of the color correction coefficient can be improved.
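The target model structure described above (convolution layer, batch normalization layer, activation function layer, stacked at least twice to form the feature extraction layer) can be sketched as follows. The channel widths, kernel size, and the choice of ReLU as the activation function are assumptions; the patent only fixes the conv/batch-norm/activation ordering.

```python
# Illustrative sketch of the "target model structure": one convolution layer,
# one batch normalization layer, and one activation function layer, with the
# feature extraction layer formed by stacking at least two such units.
import torch
import torch.nn as nn


def target_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


# Feature extraction layer: at least two target model structures stacked.
feature_extraction = nn.Sequential(
    target_block(3, 16),
    target_block(16, 16),
)

features = feature_extraction(torch.rand(2, 3, 16, 16))
```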
In another possible implementation manner of the first aspect, the color correction coefficient includes a first correction coefficient of each pixel point included in the image to be processed on a different color channel. Accordingly, color correction may be performed based on the first correction coefficient of each pixel point on a different color channel included in the color correction coefficient.
The corresponding process may include: and carrying out first adjustment processing on color values of all pixel points included in the image to be processed on the corresponding color channels based on first correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain the first image.
That is, by performing the first adjustment processing in combination with the first correction coefficients on the different color channels and the color values on the corresponding color channels, correction of the color values on the color channels from the pixel point dimension can be achieved, that is, color correction of the image to be processed is achieved, so that a first image with higher color quality is obtained.
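The first adjustment processing described above can be sketched with NumPy. The patent does not state the exact adjustment formula; the element-wise multiplication and the clipping to [0, 1] below are assumptions for illustration.

```python
# Minimal sketch of the "first adjustment processing": each pixel's color
# value on each channel is scaled by its first correction coefficient.
import numpy as np


def first_adjustment(image: np.ndarray, first_coeff: np.ndarray) -> np.ndarray:
    """image, first_coeff: (H, W, C) arrays; returns the corrected image."""
    return np.clip(image * first_coeff, 0.0, 1.0)


image = np.full((2, 2, 3), 0.5)   # uniform mid-gray image
coeff = np.ones((2, 2, 3))
coeff[..., 0] = 1.2               # boost the red channel of every pixel by 20%
corrected = first_adjustment(image, coeff)
```

Because the coefficient array has one entry per pixel per channel, the correction acts on the pixel-point dimension exactly as described: each color value is adjusted independently on its own color channel.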
In another possible implementation manner of the first aspect, the color correction coefficients further include a first correction coefficient and a second correction coefficient of each pixel point included in the image to be processed on different color channels. Accordingly, the color correction can be performed by combining the first correction coefficient and the second correction coefficient of each pixel point on different color channels included by the color correction coefficient.
The corresponding process may include: performing second adjustment processing on the color values of the pixel points included in the image to be processed on the corresponding color channels based on the second correction coefficients of the pixel points on different color channels, so as to obtain an image after the second adjustment processing. That is, the second adjustment processing is performed by combining the second correction coefficients on the different color channels with the color values on the corresponding color channels, so that a preliminary correction of the color values on the color channels can be quickly and efficiently implemented from the pixel point dimension.
And based on the first correction coefficient of each pixel point on different color channels included in the image to be processed, performing first adjustment processing on the color value of each pixel point on the corresponding color channel included in the image after the second adjustment processing, so as to obtain the first image. That is, the first adjustment processing is performed by combining the first correction coefficients on the different color channels and the color values on the corresponding color channels after the second adjustment processing, so that the secondary correction of the color values on the color channels can be further realized from the dimension of the pixel point, and further, the color correction of the image to be processed is realized, and the first image with higher color quality is obtained.
In this possible implementation manner, a manner is provided in which color correction is performed by combining a first correction coefficient and a second correction coefficient of each pixel point included in the color correction coefficient on different color channels.
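The two-stage correction above can be sketched with NumPy. The patent does not specify how either coefficient enters the color value; treating the normalized second coefficient as an additive offset, using min-max normalization, and applying the first coefficient multiplicatively are all assumptions made only for illustration.

```python
# Sketch of color correction with both coefficients: the second correction
# coefficient is normalized and applied first (second adjustment processing,
# assumed additive here), then the first correction coefficient scales the
# result (first adjustment processing, assumed multiplicative).
import numpy as np


def normalize(coeff: np.ndarray) -> np.ndarray:
    """Min-max normalization to remove scale differences between values."""
    lo, hi = coeff.min(), coeff.max()
    return (coeff - lo) / (hi - lo) if hi > lo else np.zeros_like(coeff)


def correct(image, first_coeff, second_coeff):
    # Second adjustment processing: preliminary per-pixel correction.
    adjusted = image + normalize(second_coeff)
    # First adjustment processing: per-pixel, per-channel scaling.
    return np.clip(adjusted * first_coeff, 0.0, 1.0)


img = np.full((2, 2, 3), 0.4)
out = correct(
    img,
    first_coeff=np.full((2, 2, 3), 1.0),
    second_coeff=np.linspace(0, 1, 12).reshape(2, 2, 3),
)
```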
In another possible implementation manner of the first aspect, before performing the second adjustment processing on the color value of each pixel point included in the image to be processed on the corresponding color channel based on the second correction coefficient of each pixel point included in the image to be processed on the different color channel, the method further includes: and normalizing the second correction coefficient.
In this possible implementation manner, the second correction coefficient is normalized to eliminate the scale differences between different feature values in the second correction coefficient, so that the subsequent preliminary correction of the color values is performed based on the normalized second correction coefficient, thereby ensuring smooth color correction.
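As an illustrative sketch (not the patent's implementation), the two-stage adjustment above can be read as per-pixel, per-channel multiplicative gains, with the second correction coefficient normalized before use. The function name, the multiplicative reading of the coefficients, and the sum-to-one normalization are all assumptions for this example.

```python
import numpy as np

def two_stage_color_correct(image, first_coef, second_coef):
    """Apply the second, then the first, adjustment processing per pixel and
    per color channel. All arrays have shape (H, W, C). The multiplicative
    interpretation of the coefficients is an assumption, not fixed here.
    """
    # Normalize the second correction coefficient: per-pixel gains made to
    # sum to 1 across channels (one plausible normalization).
    norm = second_coef / (second_coef.sum(axis=-1, keepdims=True) + 1e-8)
    # Second adjustment processing: preliminary per-channel correction.
    adjusted = image * norm
    # First adjustment processing: secondary correction of the adjusted values.
    return adjusted * first_coef
```

With `first_coef` all ones and `second_coef` uniform across the channels, each pixel is simply scaled by 1/C, showing how the normalization alone already rebalances the channels.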
In another possible implementation manner of the first aspect, the method further includes: and performing image processing on the first image to obtain the first image after image processing. That is, after the color correction is performed on the image to be processed to obtain the first image, the image processing can also be performed on the first image to obtain an image with higher image quality.
Displaying a first image on a display screen of the electronic device, comprising: and displaying the first image after the image processing on a display screen of the electronic device. In this way, an image with higher image quality can be displayed for the user.
In another possible implementation manner of the first aspect, the image processing is at least one of image enhancement processing, image super-resolution processing, image restoration processing, image deblurring processing, image denoising processing, image rain removing processing, image defogging processing, quality improvement processing, or high dynamic range processing.
In this possible implementation, multiple types of image processing functions are shown. In this way, the color correction can be performed on the image to be processed in at least one of the image enhancement scene, the image super-resolution scene, the image restoration scene, the image deblurring scene, the image denoising scene, the image rain removing scene, the image defogging scene, the quality improvement scene, or the high dynamic range scene, thereby widening the application range of the color correction.
In a second aspect, a training method of a color correction model is provided and applied to an electronic device.
The method may include: and performing iterative training on the initial model based on the image training data to obtain the color correction model.
In the process of any iteration training, the image training data is input into a model obtained after the previous iteration training, a color correction coefficient of the image training data is obtained through the model, and the image training data is subjected to color correction based on the color correction coefficient, so that an output image after the color correction is obtained. The color correction coefficient is used for correcting the colors of the pixel points included in the image training data on different color channels.
Further, model parameters are adjusted based on the output image and a sample image corresponding to the image training data. The sample image is an image with the color quality reaching the preset requirement. That is, in the model training process, the model parameters are adjusted based on the output image of the model and the sample image of the image training data so as to realize the training optimization of the model, and the model with better color correction capability can be obtained through training.
In the technical scheme, the training method of the color correction model is provided, the image training data is utilized to carry out iterative training on the initial model, and the model with better color correction capability can be obtained through training.
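A minimal sketch of this iterative training, assuming a toy "model" whose only parameters are one multiplicative correction coefficient per color channel, an L2 loss against the sample image, and plain gradient descent (all of these choices are illustrative assumptions, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Image training data, and a sample image whose color quality is taken as the
# target. The "true" gains exist only to synthesize this toy example.
train_image = rng.uniform(0.1, 0.9, size=(8, 8, 3))
true_gain = np.array([1.05, 0.9, 1.0])
sample_image = train_image * true_gain

gain = np.ones(3)   # model parameters: the predicted color correction coefficient
lr = 0.5
for _ in range(200):                          # iterative training
    output = train_image * gain               # color-corrected output image
    err = output - sample_image               # compare output with the sample image
    grad = 2.0 * (err * train_image).mean(axis=(0, 1))
    gain -= lr * grad                         # adjust the model parameters
```

After training, `gain` recovers the per-channel correction that maps the training data onto the sample image, which is the sense in which "a model with better color correction capability can be obtained through training".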
In a possible implementation manner of the second aspect, obtaining a color correction coefficient of the image training data through the model, performing color correction on the image training data based on the color correction coefficient, and obtaining a color-corrected output image, including: and predicting the color correction coefficient of the image training data through a predictor model of the model. That is, by setting the predictor model in the model, the color correction coefficient of the image training data is predicted by the predictor model in the process of any one of the iterative training.
And performing color correction on the image training data based on the color correction coefficient through a correction sub-model of the model to obtain the output image. That is, by setting the correction sub-model in the model, the image training data is color-corrected by using the correction sub-model in any one iterative training process, so as to obtain an output image.
Furthermore, based on the output image and the sample image, model parameters are adjusted so as to realize training optimization of the model, and thus the model with the color correction function is obtained through training.
In another possible implementation manner of the second aspect, obtaining a color correction coefficient of the image training data through the model, performing color correction on the image training data based on the color correction coefficient, and obtaining a color-corrected output image, including: and performing image processing on the image training data through a processing sub-model of the model to obtain an image after image processing. That is, the processing sub-model is set in the model, so that in the process of any iterative training, the image training data is subjected to image processing by using the processing sub-model to obtain an image after image processing.
And, the color correction coefficient of the image training data is predicted by a predictor model of the model. That is, by setting the predictor model in the model, the color correction coefficient of the image training data is predicted by the predictor model in the process of any one of the iterative training.
And performing, through a correction sub-model of the model, color correction on the image after the image processing based on the color correction coefficient, so as to obtain the output image. That is, by setting the correction sub-model in the model, the image after the image processing is color-corrected by using the correction sub-model in any one iterative training process, so as to obtain the output image.
Furthermore, based on the output image and the sample image, model parameters are adjusted so as to realize training optimization of the model, thereby training to obtain the model with both the color correction function and the image processing function.
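The three-sub-model pipeline above can be sketched as follows. The box-blur "processing sub-model" and the gray-world "predictor sub-model" are stand-ins chosen only to make the example runnable; the patent does not specify these operations or their order of data flow in this detail.

```python
import numpy as np

def processing_submodel(image):
    # Stand-in for the processing sub-model: a wrap-around 3x3 box blur
    # acting as a toy denoiser.
    return sum(np.roll(np.roll(image, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def predictor_submodel(image):
    # Stand-in for the predictor sub-model: a gray-world gain per channel,
    # broadcast to one coefficient per pixel per color channel.
    gain = image.mean() / (image.mean(axis=(0, 1)) + 1e-8)
    return np.broadcast_to(gain, image.shape)

def correction_submodel(image, coef):
    # Correction sub-model: apply the per-pixel, per-channel coefficients.
    return image * coef

def model_forward(training_image):
    processed = processing_submodel(training_image)  # image processing first
    coef = predictor_submodel(processed)             # predict the coefficients
    return correction_submodel(processed, coef)      # then color-correct
```

After the gray-world correction, the mean color values of the three channels coincide, which is one concrete sense in which the colors of the output image have been corrected.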
In another possible implementation manner of the second aspect, the image processing is at least one of image enhancement processing, image super-resolution processing, image restoration processing, image deblurring processing, image denoising processing, image rain removing processing, image defogging processing, quality improvement processing, or high dynamic range processing.
In this possible implementation, multiple types of image processing functions are shown. Therefore, a model having at least one image processing function can be obtained by training in combination with at least one of the functions of image enhancement, image super-resolution, image restoration, image inpainting, image deblurring, image denoising, image rain removal, image defogging, quality improvement, or high dynamic range.
In another possible implementation manner of the second aspect, predicting the color correction coefficient of the image training data by a predictor model of the model includes: extracting features of the image training data through a feature extraction layer of the predictor model to obtain color features of the image training data on different color channels; and carrying out convolution processing on the color characteristics of the image training data on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the image training data on different color channels.
In this possible implementation manner, the feature extraction layer and the convolution layer are set in the predictor model, the feature extraction layer is used for extracting color features of the image training data on different color channels, and the convolution layer is used for carrying out convolution processing on the extracted color features so as to obtain the color correction coefficient. In this way, a predictor model formed by the feature extraction layer and the convolution layer is provided, and the color correction coefficient of the image training data can be predicted.
In another possible implementation manner of the second aspect, the feature extraction layer includes at least two layers of object model structures, and the object model structures include a convolution layer, a batch normalization layer and an activation function layer.
In this possible implementation, a feature extraction layer based on at least two layers of the target model structure is provided. The target model structure includes a convolution layer, a batch normalization layer, and an activation function layer. Therefore, the constructed feature extraction layer has good feature extraction capability and can extract richer and more effective color features, so that the color correction coefficient of the image training data is predicted based on the extracted color features, and the prediction accuracy of the color correction coefficient can be improved.
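A minimal sketch of one such "target model structure" and a feature extraction layer that stacks two of them. A 1x1 convolution (a per-pixel channel mix) is used to keep the example short, and per-image statistics stand in for batch normalization; the kernel size, the ReLU activation, and the parameter shapes are all assumptions.

```python
import numpy as np

def target_block(x, weight, bias, eps=1e-5):
    """One target model structure: convolution -> batch normalization ->
    activation. x: (H, W, C_in); weight: (C_in, C_out); bias: (C_out,).
    """
    y = x @ weight + bias                        # 1x1 convolution
    mean = y.mean(axis=(0, 1), keepdims=True)    # normalization statistics
    var = y.var(axis=(0, 1), keepdims=True)
    y = (y - mean) / np.sqrt(var + eps)          # batch normalization
    return np.maximum(y, 0.0)                    # ReLU activation

def feature_extraction_layer(x, params):
    # "At least two layers" of the target model structure, stacked in order.
    for weight, bias in params:
        x = target_block(x, weight, bias)
    return x
```

Stacking the blocks grows the channel dimension from the raw color channels to a richer feature space, from which a convolution head (as described above) can then regress the color correction coefficients.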
In another possible implementation manner of the second aspect, the color correction coefficient includes a first correction coefficient of each pixel point included in the image training data on a different color channel. Accordingly, color correction may be performed based on the first correction coefficient of each pixel point on a different color channel included in the color correction coefficient.
The corresponding process may include: and carrying out first adjustment processing on color values of all pixel points included in the image training data on corresponding color channels based on first correction coefficients of all pixel points included in the image training data on different color channels to obtain the output image.
That is, by performing the first adjustment processing in combination with the first correction coefficients on the different color channels and the color values on the corresponding color channels, correction of the color values on the color channels from the pixel point dimension can be achieved, that is, color correction of the image training data is achieved, and therefore an output image with higher color quality is obtained.
In another possible implementation manner of the second aspect, the color correction coefficients include a first correction coefficient and a second correction coefficient of each pixel point included in the image training data on different color channels. Accordingly, the color correction can be performed by combining the first correction coefficient and the second correction coefficient of each pixel point on different color channels included by the color correction coefficient.
The corresponding process may include: performing second adjustment processing on the color values, on the corresponding color channels, of the pixel points included in the image training data based on the second correction coefficients of those pixel points on the different color channels, so as to obtain an image after the second adjustment processing. That is, the second adjustment processing is performed by combining the second correction coefficients on the different color channels with the color values on the corresponding color channels, so that preliminary correction of the color values on the color channels can be quickly and efficiently implemented from the pixel point dimension.
And based on the first correction coefficient of each pixel point on different color channels included in the image training data, performing first adjustment processing on the color value of each pixel point on the corresponding color channel included in the image after the second adjustment processing, so as to obtain the output image. That is, the first adjustment processing is performed by combining the first correction coefficients on the different color channels and the color values on the corresponding color channels after the second adjustment processing, so that the secondary correction of the color values on the color channels can be further realized from the dimension of the pixel point, and further, the color correction of the image training data is realized, and an output image with higher color quality is obtained.
In this possible implementation manner, a manner is provided in which color correction is performed by combining a first correction coefficient and a second correction coefficient of each pixel point included in the color correction coefficient on different color channels.
In another possible implementation manner of the second aspect, before performing the second adjustment processing on the color value of each pixel point included in the image training data on the corresponding color channel based on the second correction coefficient of each pixel point included in the image training data on the different color channel, the method further includes: and normalizing the second correction coefficient.
In this possible implementation manner, the second correction coefficient is normalized to eliminate the scale differences between different feature values in the second correction coefficient, so that the subsequent preliminary correction of the color values is performed based on the normalized second correction coefficient, thereby ensuring smooth color correction.
In a third aspect, the present application provides an electronic device, comprising: a display screen, a processor and a memory. The display screen is provided with a display function. The memory is for storing program code and the processor is for invoking the program code stored in the memory to implement any one of the methods provided in the first or second aspects.
In a fourth aspect, there is provided a computer readable storage medium comprising program code which, when run on an electronic device, causes the electronic device to perform any one of the methods provided in the first or second aspects.
In a fifth aspect, there is provided a computer program product comprising program code which, when run on an electronic device, causes the electronic device to perform any one of the methods provided in the first or second aspects.
It should be noted that, for the technical effects brought by any implementation manner of the third aspect to the fifth aspect, reference may be made to the technical effects brought by the corresponding implementation manner of the first aspect; details are not described herein again.
Drawings
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic software structure of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a color correction method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a color correction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of color correction according to an embodiment of the present application;
FIG. 7 is a flowchart of a training method of a first color correction model according to an embodiment of the present application;
FIG. 8 is a flowchart of a training method of a second color correction model according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a training device for a color correction model according to an embodiment of the present application;
fig. 10 is a schematic diagram of a training device for a color correction model according to an embodiment of the present application.
Detailed Description
In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. Furthermore, "at least one" means one or more, and "a plurality of" means two or more. The terms "first", "second", and the like do not limit the quantity or the order of execution, and objects described as "first" and "second" are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The color correction method provided by the embodiment of the application can be applied to the technical field of image processing, and particularly can be applied to a color correction scene aiming at an image to be processed.
Further, in some embodiments, the color correction method provided by the embodiments of the present application may be applied to image processing scenarios such as image enhancement (image enhancement), image super-resolution (super-resolution), image restoration (image restoration), image inpainting (inpainting), image deblurring (deblurring), image denoising (denoising), image rain removal (image deraining), image defogging (image defogging), quality improvement algorithms (low-level algorithm), high dynamic range (high-dynamic range, HDR), and the like.
Image enhancement refers to enhancing the contrast of an image to sharpen an otherwise unclear image or to emphasize certain features of interest. Image super-resolution refers to improving the resolution of an image to enrich its texture details. Image restoration refers to restoring, with maximum fidelity, the true image from a degraded image. Image inpainting refers to restoring a lost portion of an image and reconstructing it based on background information. Image deblurring refers to removing the motion blur of a blurred image to restore it to a clear image. Image denoising refers to reducing noise in a digital image. Image rain removal refers to removing raindrops from an image picture. Image defogging refers to processing a haze image to eliminate or reduce the influence of haze on the image. A quality improvement algorithm refers to an algorithm that restores a low-quality image to a high-quality image. High dynamic range is used to provide a wider color range and more image detail to improve image contrast.
In the above image processing scenario, the color correction method provided by the embodiment of the present application may also be applied. In this way, not only the above-described image processing can be performed on the image to be processed, but also color correction for the image to be processed can be achieved.
The above-mentioned image processing scene may be a scene in which image processing is required in the electronic device. For example, the image-processed scene may be in a shooting scene of the electronic device. Accordingly, for a preview image or a shot image obtained by shooting in the shooting process, the color correction method provided by the embodiment of the application can be used for performing color correction on the preview image or the shot image. Alternatively, the image-processed scene may be in a video playback scene of the electronic device. Accordingly, for the video image to be played, the color correction method provided by the embodiment of the application can be used for correcting the color of the video image to be played. Of course, the above-mentioned image processing scene can also be located in other scenes of the electronic device, such as a scene of an image clip, a game screen display scene, and the like, which is not limited in the embodiment of the present application.
In the related art, when a low-quality image is acquired, one or more of the above-mentioned techniques, such as image enhancement, image deblurring, and image restoration, are generally used to process the acquired image so as to improve its quality. However, the image processing effect achieved in the related art by relying only on techniques such as image enhancement, image deblurring, and image restoration is poor.
In view of this, an embodiment of the present application provides a color correction method, which obtains the color correction coefficient of the image to be processed so as to realize correction of the colors on different color channels from the pixel point dimension, thereby realizing color correction of the image and improving the image processing effect. Furthermore, by displaying the image obtained after the color correction on the display screen of the electronic device, an image with higher color quality that is closer to the real color effect can be displayed for the user, improving the display effect of the image.
In one possible implementation, the color correction method provided in the embodiment of the present application may be applied to the electronic device 100 shown in fig. 1. Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application.
The electronic device 100 may be at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, and a wireless terminal.
The electronic device 100 is provided with an image processing function. In some embodiments, the electronic device 100 may run applications supporting image processing, such as camera, gallery, map, navigation, video, and game applications. For example, when an image is captured by using a camera application run by the electronic device 100, the color correction method provided by the embodiment of the present application may be applied to perform color correction on a preview image or a captured image. Or, when playing video by using a video application run by the electronic device 100, the color correction method provided by the embodiment of the present application may be applied to perform color correction on the video image to be played. Of course, the electronic device 100 may also run other types of applications that support image processing, such as image-editing applications.
In the embodiment of the present application, the electronic device 100 is configured to obtain an image to be processed; the color correction coefficient of the image to be processed is obtained, colors of all pixel points included in the image to be processed on different color channels are corrected based on the color correction coefficient, a first image after color correction is obtained, and the first image is displayed on a display screen of the electronic device 100.
Exemplary, a schematic structural diagram of the electronic device 100 in fig. 1 is shown in fig. 2. Fig. 2 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Referring to fig. 2, the electronic device 100 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, a subscriber identity module (subscriber identification module, SIM) card interface 295, and the like. Wherein the sensor module 280 may include a pressure sensor 280A, a touch sensor 280B, etc.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
The memory is used for storing instructions (program code) and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access and reduces the waiting time of the processor 210, thereby improving the efficiency of the system.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc. For example, in the embodiment of the present application, the electronic device 100 may provide the first color correction model or the second color correction model through the NPU to implement the color correction method provided in the embodiment of the present application, so as to implement color correction of the image to be processed.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
In an embodiment of the present application, the processor 210 is configured to invoke the program code stored in the memory, and when the program code runs on the electronic device 100, cause the electronic device 100 to execute the color correction method in the embodiment of the present application.
The charge management module 240 is configured to receive a charge input from a charger.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering and amplifying the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied on the electronic device 100. The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate and amplify the signal, and convert the signal into electromagnetic waves to radiate the electromagnetic waves through the antenna 2.
In some embodiments, antenna 1 and mobile communication module 250 of electronic device 100 are coupled, and antenna 2 and wireless communication module 260 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include a global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others.
The electronic device 100 implements display functions through a GPU, a display screen 294, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is for displaying images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 294, N being a positive integer greater than 1. For example, in an embodiment of the present application, the display screen 294 is used to display the color-corrected first image.
The electronic device 100 may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display screen 294, an application processor, and the like. For example, in the embodiment of the present application, when the electronic device 100 implements the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, the application processor, and the like, the color correction method provided by the embodiment of the present application may be applied to perform color correction on the preview image or the shot image obtained by shooting during the shooting process.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 293, N being a positive integer greater than 1.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functionality through the audio module 270. Such as music playing, recording, etc.
The pressure sensor 280A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 280A may be disposed on display 294.
The touch sensor 280B is also referred to as a "touch panel". The touch sensor 280B may be disposed on the display screen 294, and the touch sensor 280B and the display screen 294 form a touch screen, which is also referred to as a "touch screen". The touch sensor 280B is used to detect a touch operation acting on or near it.
Keys 290 include a power key, a volume key, and the like. The keys 290 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function controls of the electronic device 100.
The motor 291 may generate a vibration alert. The motor 291 may be used for incoming call vibration alerting or for touch vibration feedback.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc.
The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to enable contact and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
It should be noted that the structure shown in fig. 2 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown in fig. 2, may combine some components, or may be arranged with different components.
The software system of the electronic device may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, an Android system with a layered architecture is taken as an example, and the software structure of the electronic equipment is illustrated. Fig. 3 is a schematic software structure of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: an application layer, an application framework layer, Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 3, the application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, bluetooth, music, video, game, and short message.
The color correction method provided by the embodiment of the application can be applied to applications that support image processing, such as the gallery, map, navigation, video, and game applications, to perform color correction on an image to be displayed or played.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), two-dimensional (2D) graphics engines (e.g., SGL), etc.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
Fig. 4 is a flowchart of a color correction method according to an embodiment of the present application. Referring to fig. 4, a description will be given of a case where color correction is performed based on the first color correction model, the method including the following S401 to S404:
S401, the electronic equipment acquires an image to be processed.
In the embodiment of the present application, the image to be processed may be the second image. Alternatively, the image to be processed may be an image obtained by performing image processing on the second image. Thus, two types of images to be processed are provided, and the types of the images to be processed are enriched. Furthermore, color correction can be performed not only on an original image to be displayed, but also on an image processed by the image, so that the flexibility of color correction is improved.
The second image is an image to be displayed on the display screen of the first application. In some embodiments, in the case where the image to be processed is the second image, the process of acquiring the image to be processed may be: and receiving an operation of triggering the first application to display an image by a user, and responding to the operation of triggering the first application to display the image to acquire the second image to be displayed.
In the above-described embodiments, a process of acquiring an image to be processed based on a first application is provided. The second image to be displayed can be quickly and efficiently obtained by triggering the operation of displaying the image by the first application by the user.
For example, taking the first application as a camera application, the second image may be a preview image acquired by the electronic device in response to an operation of opening the camera application by a user, or a shot image captured by the electronic device in response to a shooting operation of the user in the camera application. Accordingly, taking the preview image as an example, the process of acquiring the image to be processed may be: receiving an operation of starting the camera application by a user, and in response to the operation of starting the camera application, acquiring the second image acquired by the electronic device (camera). It should be understood that the preview image refers to an image acquired when a shooting operation has not been performed. Taking a shot image as an example, the process of acquiring the image to be processed may be: receiving a shooting operation of a user in the camera application, and in response to the shooting operation, acquiring the second image shot by the electronic device. It should be understood that the shot image refers to an image acquired after the shooting operation is performed.
As another example, taking the first application as a gallery application, the second image may be an image stored in the gallery application. Accordingly, taking an image stored in a gallery application as an example, the process of acquiring the image to be processed may be: and receiving an operation of starting the gallery application by a user, and responding to the operation of starting the gallery application, and acquiring the second image from a local gallery.
As another example, taking the first application as a video application, the second image may be a video image to be played in the video application. Accordingly, taking a video image to be played as an example, the process of acquiring the image to be processed may be: and receiving the operation of playing the video in the video application by a user, and responding to the operation of playing the video, and acquiring the second image to be played from a server. Wherein the server may store a video stream (comprising multiple frames of images) of the video to be played.
As another example, taking the first application as a game application, the second image may be a game image in the game application. Accordingly, taking a game image as an example, the process of acquiring the image to be processed may be: and receiving an operation of starting the game application by a user, and responding to the operation of starting the game application, and acquiring the second image to be displayed from a server. Wherein the server may also store game images to be displayed.
As another example, taking the first application as a clipping application, the second image may be an image to be clipped that is uploaded by the user in the clipping application. Accordingly, taking an image to be clipped as an example, the process of acquiring the image to be processed may be: and receiving an image uploading operation of a user in the editing application, and responding to the image uploading operation to acquire an uploaded second image.
In the embodiment of the application, the image processing may be at least one of image enhancement processing, image superdivision processing, image restoration processing, image deblurring processing, image denoising processing, image rain removing processing, image defogging processing, quality improvement processing, or high dynamic range processing. In this embodiment, a plurality of types of image processing functions are shown. In this way, the color correction can be performed on the image to be processed in at least one of the image enhancement scene, the image superdivision scene, the image restoration scene, the image deblurring scene, the image denoising scene, the image rain removing scene, the image defogging scene, the quality improvement scene or the high dynamic range scene, so that the application range of the color correction is widened.
In some embodiments, in the case that the image to be processed is an image obtained by performing image processing on the second image, the process of obtaining the image to be processed may be: and receiving an operation of triggering the first application to display an image by a user, and responding to the operation of triggering the first application to display the image to acquire the second image to be displayed. And performing image processing on the second image to obtain the image to be processed.
In the above-described embodiments, a process of acquiring an image to be processed based on a first application is provided. The second image to be displayed can be quickly and efficiently obtained by triggering the operation of displaying the image by the first application by a user, and then the image to be processed after the image processing can be obtained by the image processing.
For example, taking a preview image as an example, the process of acquiring the image to be processed may be: and receiving an operation of starting the camera application by a user, and responding to the operation of starting the camera application to acquire the second image acquired by the electronic equipment. And performing image processing on the second image to obtain the image to be processed. Taking a shot image as an example, the process of acquiring the image to be processed may be: and receiving shooting operation of a user in the camera application, and responding to the shooting operation, and acquiring the second image shot by the electronic equipment. And performing image processing on the second image to obtain the image to be processed.
As another example, taking an image stored in a gallery application as an example, the process of acquiring the image to be processed may be: and receiving an operation of starting the gallery application by a user, and responding to the operation of starting the gallery application, and acquiring the second image from a local gallery. And performing image processing on the second image to obtain the image to be processed.
For another example, taking a video image to be played as an example, the process of obtaining the image to be processed may be: and receiving the operation of playing the video in the video application by a user, and responding to the operation of playing the video, and acquiring the second image to be played from a server. And performing image processing on the second image to obtain the image to be processed.
As another example, taking a game image as an example, the process of acquiring the image to be processed may be: and receiving an operation of starting the game application by a user, and responding to the operation of starting the game application, and acquiring the second image to be displayed from a server. And performing image processing on the second image to obtain the image to be processed.
For another example, taking an image to be clipped as an example, the process of acquiring the image to be processed may be: and receiving an image uploading operation of a user in the editing application, and responding to the image uploading operation to acquire an uploaded second image. And performing image processing on the second image to obtain the image to be processed.
In the above example, there are provided a plurality of types of first applications and a process of acquiring an image to be processed based on the corresponding types of applications. Therefore, the second image to be displayed can be obtained quickly and efficiently, and the image after image processing can be obtained through image processing.
It should be noted that the various images to be processed shown above are merely examples, and a process of acquiring the images to be processed by the electronic device will be described. In other embodiments, the image to be processed may be another type of image, and accordingly, the electronic device may acquire the image to be processed in other manners. The embodiment of the application does not limit the process how to acquire the image to be processed.
S402, the electronic equipment inputs the image to be processed into a first color correction model, and predicts the color correction coefficient of the image to be processed through a predictor model of the first color correction model.
In the embodiment of the application, the predictor model is used for predicting the color correction coefficient of the image to be processed. In some embodiments, the predictor model may be a color coefficient prediction module in the first color correction model.
The color correction coefficient is used for correcting the colors of the pixel points included in the image to be processed on different color channels. It should be appreciated that the image to be processed may include color values for each pixel point on different color channels.
Wherein the color correction coefficients may be in the form of an n×3 feature vector, where n represents the total number of pixel points in the image to be processed, n being a positive integer greater than 1, and 3 represents the 3 RGB color channels (red R, green G, blue B). Correspondingly, the image to be processed may also be in the form of an n×3 feature vector. It should be understood that in the n×3 feature vector, each pixel point corresponds to correction coefficient values on the 3 color channels.
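As an illustrative sketch (under the assumption that the image arrives as an H×W×3 RGB array, which the patent does not specify), the n×3 feature-vector form can be obtained by flattening the spatial dimensions:

```python
import numpy as np

# Hypothetical 4x5 RGB image with color values in [0, 255].
h, w = 4, 5
image = np.random.randint(0, 256, size=(h, w, 3), dtype=np.uint8)

# Flatten the spatial dimensions: each of the n = h*w pixel points keeps
# its 3 color-channel values, giving the n x 3 feature-vector form.
features = image.reshape(-1, 3)

print(features.shape)  # (20, 3)
```

The reshape is lossless, so the image can be recovered from the n×3 form by the inverse `reshape(h, w, 3)`.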
In some embodiments, the color correction coefficients may include first correction coefficients for each pixel included in the image to be processed on different color channels. In the embodiment of the present application, the first correction coefficient is used for performing color correction by means of a first adjustment process (such as a summation operation).
Further, the color correction coefficient may also include a second correction coefficient of each pixel point included in the image to be processed on different color channels. In the embodiment of the present application, the second correction coefficient is used for performing color correction by means of a second adjustment process (such as a product operation). The adjustment strength of the first adjustment process is smaller than that of the second adjustment process.
Referring to fig. 4, the first correction coefficient may be the offset coefficient tensor, that is, the n×3 feature vector (r4, r5, r6) shown in fig. 4. The second correction coefficient may be the adjustment coefficient tensor, that is, the n×3 feature vector (r1, r2, r3) shown in fig. 4.
In some embodiments, the process of predicting the color correction coefficient of the image to be processed by the predictor model may be: extracting features of the image to be processed through the feature extraction layer of the predictor model to obtain color features of the image to be processed on different color channels; and performing convolution processing on the color features of the image to be processed on different color channels through the convolution layer of the predictor model to obtain color correction coefficients of the image to be processed on different color channels.
In the process shown in S402, a process of acquiring the color correction coefficient of the image to be processed by the electronic device is described. The method comprises the steps of setting a feature extraction layer and a convolution layer in a predictor model, extracting color features of an image to be processed on different color channels by using the feature extraction layer, and carrying out convolution processing on the extracted color features by using the convolution layer to obtain the color correction coefficient. Thus, the predictor model formed by the feature extraction layer and the convolution layer can be used for predicting and obtaining the color correction coefficients of the image to be processed on different color channels.
The feature extraction layer may include at least two layers of a target model structure, the target model structure including a convolution layer, a batch normalization layer, and an activation function layer. The target model structure may be, for example, a conv+bn+relu model structure. Referring to fig. 4, taking an example in which the feature extraction layer includes a two-layer target model structure, the two-layer target model structure may be the two-layer conv+bn+relu model structure shown in fig. 4. The convolution layer of the predictor model may be the conv layer shown in fig. 4.
It should be appreciated that in the conv+bn+relu model structure, conv represents the convolution layer, which is used to perform convolution operations. bn (batch normalization) represents the batch normalization layer, which adjusts the distribution of the convolution layer's output data so that it falls within the active region of the activation function layer, thereby accelerating model convergence, improving the model's generalization ability, and preventing vanishing gradients. relu represents the activation function layer, which introduces a nonlinear relationship between layers and thereby shapes the model's output.
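The patent describes the predictor model only at this structural level. As a hedged sketch, a two-layer conv+bn+relu feature extraction followed by a final conv layer might be expressed in PyTorch as follows; the channel counts, kernel sizes, and the 6-channel output split into adjustment and offset tensors are assumptions for illustration, not taken from the patent:

```python
import torch
import torch.nn as nn

class PredictorSubModel(nn.Module):
    """Sketch of the predictor sub-model: two conv+bn+relu blocks
    (the feature extraction layer) followed by a final conv layer that
    outputs per-pixel correction coefficients on 6 channels
    (3 adjustment coefficients r1..r3 + 3 offset coefficients r4..r6)."""
    def __init__(self, mid_channels: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
        )
        # Final conv layer predicting the correction coefficients.
        self.head = nn.Conv2d(mid_channels, 6, kernel_size=1)

    def forward(self, x: torch.Tensor):
        coeffs = self.head(self.features(x))
        # Split into the adjustment tensor and the offset tensor.
        return coeffs[:, :3], coeffs[:, 3:]

model = PredictorSubModel()
image = torch.rand(1, 3, 32, 32)       # to-be-processed image, NCHW layout
r_adjust, r_offset = model(image)
print(r_adjust.shape, r_offset.shape)  # torch.Size([1, 3, 32, 32]) twice
```

Because the head is purely convolutional, the same model applies to images of any spatial size, so the per-pixel coefficient tensors always match the input resolution.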
In the above embodiment, a feature extraction layer configured based on at least two layers of object model structures is provided. The target model structure comprises a convolution layer, a batch standardization layer and an activation function layer. Therefore, the constructed feature extraction layer has good feature extraction capability, and can extract more abundant and effective color features, so that the color correction coefficient of the image to be processed is predicted based on the extracted color features, and the prediction accuracy of the color correction coefficient can be improved.
S403, the electronic equipment performs color correction on the image to be processed based on the color correction coefficient through the correction sub-model of the first color correction model to obtain the first image.
In the embodiment of the application, the first image is an image obtained by performing color correction on the image to be processed based on the color correction coefficient. Illustratively, taking the form of a feature vector of n×3 for the image to be processed as an example, the first image is correspondingly also in the form of a feature vector of n×3.
In some embodiments, taking an example that the color correction coefficient includes a first correction coefficient of each pixel point included in the image to be processed on a different color channel, the process of color correction may correspondingly include: and carrying out first adjustment processing on color values of all pixel points included in the image to be processed on the corresponding color channels based on first correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain the first image.
The first adjustment process may be: and carrying out summation operation on the first correction coefficient of each pixel point included in the image to be processed on different color channels and the color value of each pixel point included in the image to be processed on the corresponding color channel to obtain a first image.
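A minimal numerical sketch of this summation-based first adjustment follows; all sample values are hypothetical, and the clipping to the valid color range is an added assumption rather than a step stated in the patent:

```python
import numpy as np

# Hypothetical to-be-processed image in n x 3 form (n pixels, RGB),
# with color values normalized to [0, 1].
image = np.array([[0.20, 0.40, 0.60],
                  [0.50, 0.50, 0.50]])

# Hypothetical first correction coefficients (offset tensor),
# one per pixel point per color channel.
r_offset = np.array([[0.05, -0.10, 0.00],
                     [0.10, 0.00, -0.05]])

# First adjustment processing: element-wise summation of coefficient
# and color value on the corresponding channel, clipped so the
# corrected values stay inside the valid color range.
first_image = np.clip(image + r_offset, 0.0, 1.0)

print(first_image)
```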
The above-described embodiment describes the process of color correction, taking as an example the first correction coefficient included based on the color correction coefficient. In this way, a manner of performing color correction based on the first correction coefficient of each pixel point on different color channels included in the color correction coefficient is provided. The first adjustment processing is performed by combining the first correction coefficients on different color channels and the color values on the corresponding color channels, so that correction of the color values on the color channels can be realized from the dimension of the pixel point, and color correction of an image to be processed is realized, thereby obtaining a first image with higher color quality.
In other embodiments, taking an example that the color correction coefficient includes a first correction coefficient and a second correction coefficient of each pixel point included in the image to be processed on different color channels, the process of color correction may correspondingly include: and carrying out second adjustment processing on color values of all pixel points included in the image to be processed on the corresponding color channels based on second correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain an image after the second adjustment processing. And carrying out first adjustment processing on color values of all pixel points on corresponding color channels in the image after the second adjustment processing based on first correction coefficients of all pixel points on different color channels in the image to be processed, so as to obtain the first image.
The second adjustment process may be: performing a product operation on the second correction coefficient of each pixel point included in the image to be processed on different color channels and the color value of each pixel point included in the image to be processed on the corresponding color channel, so as to obtain the image after the second adjustment processing. Further, a summation operation is performed on the first correction coefficient of each pixel point on different color channels and the color value of each pixel point on the corresponding color channel in the image after the second adjustment processing, so as to obtain the first image.
The above embodiment describes the process of color correction by taking the case where the color correction coefficient includes both the first correction coefficient and the second correction coefficient as an example. In this way, a way of performing color correction by combining the first correction coefficient and the second correction coefficient of each pixel point on different color channels is provided. The second adjustment processing is performed by combining the second correction coefficients on different color channels and the color values on the corresponding color channels, so that a preliminary correction of the color values on the color channels can be realized rapidly and efficiently from the dimension of the pixel point. Then, the first adjustment processing is performed by combining the first correction coefficients on the different color channels and the color values on the corresponding color channels after the second adjustment processing, so that a secondary correction of the color values on the color channels can be further realized from the dimension of the pixel point, thereby realizing the color correction of the image to be processed and obtaining the first image with higher color quality.
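The two-step correction described above (product operation, then summation operation) can be sketched as follows; the function name, the sample values, and the final clipping step are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def color_correct(image, r_adjust, r_offset):
    """Sketch of the combined correction: a multiplicative second
    adjustment followed by an additive first adjustment.
    All arrays are n x 3 (pixel points x RGB channels)."""
    adjusted = image * r_adjust       # second adjustment: product operation
    corrected = adjusted + r_offset   # first adjustment: summation operation
    return np.clip(corrected, 0.0, 1.0)

# Hypothetical single-pixel example with values in [0, 1].
image = np.array([[0.40, 0.80, 0.20]])
r_adjust = np.array([[0.50, 0.90, 1.00]])   # assumed already normalized
r_offset = np.array([[0.10, -0.02, 0.05]])

first_image = color_correct(image, r_adjust, r_offset)
print(first_image)
```

Note the ordering: the stronger multiplicative adjustment is applied first, and the weaker additive offset then fine-tunes the result, matching the description above.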
In some embodiments, the second correction coefficient is also normalized before the second adjustment process described above is performed. And then executing the subsequent second adjustment processing based on the second correction coefficient after the normalization processing.
The normalization process refers to scaling the characteristic value of the color correction coefficient to be within the [0,1] interval.
In some embodiments, a normalization process is performed on a second correction coefficient of each pixel point included in the color correction coefficient on different color channels through a sigmoid function, so as to obtain the normalized second correction coefficient.
For example, referring to fig. 4, the above adjustment coefficient tensor, that is, the n×3 feature vector (r1, r2, r3) shown in fig. 4, is normalized by a sigmoid function, and the normalized adjustment coefficient tensor is then used to perform the subsequent second adjustment processing.
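A small sketch of the sigmoid normalization described above (the sample coefficient values are hypothetical); strictly speaking, the sigmoid maps values into the open interval (0, 1):

```python
import numpy as np

def sigmoid(x):
    # Maps any real-valued coefficient into the (0, 1) interval.
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw second correction coefficients (adjustment tensor)
# predicted by the predictor sub-model, in n x 3 form.
r_raw = np.array([[-2.0, 0.0, 2.0]])

r_norm = sigmoid(r_raw)
print(r_norm)  # approx [[0.119, 0.5, 0.881]]
```

After this step, the product operation of the second adjustment processing can only attenuate or preserve each channel's color value, never amplify it, which is one consequence of the chosen normalization.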
In this way, the second correction coefficient is normalized to eliminate the dimension influence among different characteristic values in the second correction coefficient, so that the subsequent preliminary correction on the color value is performed based on the normalized second correction coefficient, and the smooth progress of the color correction is ensured.
Based on the above S402 to S403, a color correction method is provided that can realize color correction for an image to be processed. The color correction coefficients of the image to be processed are obtained, so that the correction of colors on different color channels can be realized from the dimension of the pixel point, and the color correction of the image is realized, so that the image processing effect is improved. Furthermore, the image obtained after the color correction is displayed on the display screen of the electronic equipment can display the image with higher color quality and closer to the real color effect for the user, and the display effect of the image is improved.
In some embodiments, the electronic device may be provided with a first control for triggering the switching on or off of the color correction function. For example, the user may turn on or off the color correction function by performing a trigger operation on the first control. By way of example, the triggering operation may be a click operation, a slide operation, or other operation.
Accordingly, in some embodiments, in response to a user' S opening operation of the first control, after acquiring the image to be processed, the electronic device performs subsequent S402 to S403 to display the color-corrected first image on a display screen of the electronic device. Or, in other embodiments, after the electronic device acquires the image to be processed in response to the closing operation of the first control by the user, the subsequent S402 to S403 are not required to be executed, and the image to be processed is displayed on a display screen of the electronic device.
S404, the electronic device displays the first image on a display screen of the electronic device.
In some embodiments, the electronic device displays the first image in an interface of the first application.
For example, taking a first application as a camera application, the first image is displayed in a preview interface of the camera application. The preview interface may be an interface in the camera application that includes a shooting control. Or, displaying the first image in an image viewing interface of the camera application. The image viewing interface may be an interface in a camera application for triggering viewing of a captured image.
For another example, taking a first application as a gallery application, a thumbnail of the first image is displayed in an image list interface of the gallery application. For another example, taking a first application as a video application, the first image is displayed in a playback interface of the video application. As another example, taking a first application as a game application, the first image is displayed in a game interface of the game application. As another example, taking a first application as a clip application, the first image is displayed in a clip interface of the clip application.
Note that the various first applications shown above are merely examples for describing the process by which the electronic device displays the first image. In other embodiments, the first application may be another type of application, and accordingly, the electronic device may also display the first image in other manners. The embodiment of the application does not limit how the first image is displayed.
The above-described S403 to S404 explain the procedure in which the electronic device displays the first image on the display screen of the electronic device based on the color correction coefficient. In this way, an image with higher color quality and closer to the real color effect can be displayed for the user, improving the display effect of the image.
In some embodiments, after obtaining the first image based on S403, the electronic device may further perform image processing on the first image to obtain the first image after image processing. Further, the image-processed first image is displayed on a display screen of the electronic device. In this way, after the color correction is performed on the image to be processed to obtain the first image, the image processing can also be performed on the first image to obtain an image with higher image quality.
The technical solution provided by the embodiment shown in fig. 4 provides a solution for performing color correction based on the first color correction model. The prediction of the color correction coefficients of the image to be processed is achieved by setting, in the first color correction model, a predictor model for predicting the color correction coefficients. The color correction of the image to be processed is achieved by setting, in the first color correction model, a correction sub-model for color correction. Therefore, the first color correction model can correct colors on different color channels from the dimension of the pixel point, realizing color correction of the image and improving the image processing effect. Furthermore, by displaying the color-corrected image on the display screen of the electronic device, an image with higher color quality and closer to the real color effect can be presented to the user, improving the display effect of the image.
For example, in the case where the image to be processed is the second image, the flow shown in fig. 4 may be replaced with: and inputting the second image into a first color correction model, and predicting the color correction coefficient of the second image through a predictor model of the first color correction model. And carrying out color correction on the second image based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image. The first image is displayed on a display screen of the electronic device.
Also for example, in the case where the image to be processed is an image obtained by performing image processing on the second image, the flow shown in fig. 4 may be replaced with: the image after image processing is input into the first color correction model, and the color correction coefficient of the image after image processing is predicted through the predictor model of the first color correction model. Color correction is performed on the image after image processing based on the color correction coefficient through the correction sub-model of the first color correction model to obtain the first image. The first image is displayed on a display screen of the electronic device.
Note that, in the case where the image to be processed is an image obtained by performing image processing on the second image, other implementations are possible in addition to the flow shown in the above example. Fig. 5 is a schematic flow chart of a color correction method according to an embodiment of the present application. Referring to fig. 5, a description will be given of a case where color correction is performed based on the second color correction model, the method including the following S501 to S504:
S501, the electronic equipment inputs a second image into a second color correction model, and performs image processing on the second image through a processing sub-model of the second color correction model to obtain an image to be processed.
In the embodiment of the application, the processing sub-model is used for performing image processing on the second image so as to obtain the image after image processing, that is, the image to be processed. Referring to fig. 5, the image after image processing is the output image of the processing sub-model.
In some embodiments, the image after image processing may be in the form of an N×3 feature vector. It should be appreciated that the image after image processing includes the color values of each pixel point on the different color channels.
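The N×3 layout can be sketched as follows: an H×W RGB image is flattened so that each of the N = H·W rows holds one pixel point's color values on the three color channels. The row-major pixel ordering is an assumption for illustration.

```python
# Sketch: flattening an H×W RGB image into the N×3 form described above,
# where N = H*W and each row holds one pixel point's (R, G, B) color values.
# Row-major pixel ordering is assumed for illustration.

def to_n_by_3(image_rows):
    """image_rows: H rows, each a list of (r, g, b) tuples."""
    return [list(pixel) for row in image_rows for pixel in row]

image = [[(10, 20, 30), (40, 50, 60)],
         [(70, 80, 90), (100, 110, 120)]]   # H = 2, W = 2
flat = to_n_by_3(image)                      # N = 4 rows of 3 color values
assert len(flat) == 4 and len(flat[0]) == 3
```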
In some embodiments, the processing sub-model is a model for providing at least one of an image enhancement function, an image super-resolution function, an image restoration function, an image deblurring function, an image denoising function, an image rain removal function, an image defogging function, a quality improvement function, or a high dynamic range function.
Illustratively, taking the processing sub-model as a model for providing the image enhancement function as an example, the image after image processing is an image obtained after image enhancement processing. Taking the processing sub-model as a model for providing the image super-resolution function as an example, the image after image processing is an image obtained after image super-resolution processing. Other image processing functions are similar and will not be described in detail.
The above embodiment shows a plurality of types of image processing functions, and the processing sub-model may be a model provided with at least one of them. In this way, color correction can be performed on the image after image processing in at least one of an image enhancement scene, an image super-resolution scene, an image restoration scene, an image deblurring scene, an image denoising scene, an image rain removal scene, an image defogging scene, a quality improvement scene, or a high dynamic range scene, widening the application range of the color correction.
In some embodiments, the processing sub-model may be any of a Unet network model, a Transformer network model, or a vector-quantized generative adversarial network (VQ-GAN) model.
The Unet network model is a segmentation network commonly used in the field of image segmentation. In some embodiments, the Unet network model may be an encoder-decoder architecture based on convolutional neural networks that performs pixel-level classification of the input image. The Transformer network model is a model that uses the attention mechanism to increase the model training speed. The VQ-GAN network model is a GAN-based image generation model capable of converting a low-quality image into a high-quality image.
Illustratively, taking the Unet network model as an example, the processing sub-model may include a feature extraction layer (also referred to as a downsampling module) and a deconvolution layer (also referred to as an upsampling module). Accordingly, the process of performing the above image processing through the processing sub-model may be: feature extraction is performed on the second image through the feature extraction layer of the processing sub-model to obtain the image features of the second image. Further, deconvolution processing is performed on the image features through the deconvolution layer of the processing sub-model to obtain the image after image processing.
The process shown in S501 describes how the electronic device acquires the image to be processed. The feature extraction layer is used for extracting the image features, and the deconvolution layer is used for performing deconvolution processing on the extracted image features so as to obtain the image after image processing. In this way, a processing sub-model based on a feature extraction layer and a deconvolution layer is provided, which can produce the image after image processing.
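The downsampling/upsampling data flow of such a processing sub-model can be sketched in miniature as follows. Real Unet-style models use learned convolutions and skip connections; fixed 2×2 average pooling and nearest-neighbour upsampling stand in for the feature extraction layer and the deconvolution layer here purely to show the shape of the computation.

```python
# Toy sketch of the encoder-decoder shape of a Unet-style processing
# sub-model: a feature extraction (downsampling) stage followed by a
# deconvolution (upsampling) stage, shown on a single-channel image.

def downsample(img):
    """2x2 average pooling on a single-channel image (H and W even)."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def upsample(img):
    """Nearest-neighbour 2x upsampling, inverting the pooling above."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

features = downsample([[1, 1, 3, 3],
                       [1, 1, 3, 3],
                       [5, 5, 7, 7],
                       [5, 5, 7, 7]])   # 4x4 -> 2x2 feature map
restored = upsample(features)           # 2x2 -> 4x4 output image
```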
S502, the electronic equipment predicts the color correction coefficient of the second image through the predictor model of the second color correction model, and the color correction coefficient is used as the color correction coefficient of the image to be processed.
In some embodiments, the process of predicting the color correction coefficients of the second image by the predictor model may be: feature extraction is performed on the second image through the feature extraction layer of the predictor model to obtain the color features of the second image on different color channels. Convolution processing is then performed on these color features through the convolution layer of the predictor model to obtain the color correction coefficients of the second image on the different color channels.
In the process shown in S502, a process of the electronic device acquiring the color correction coefficient of the image to be processed is described. And the color correction coefficient is obtained by setting a feature extraction layer and a convolution layer in the predictor model, extracting color features of the second image on different color channels by using the feature extraction layer, and carrying out convolution processing on the extracted color features by using the convolution layer. In this way, a predictor model formed by the feature extraction layer and the convolution layer is provided, and color correction coefficients of the second image on different color channels can be predicted.
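The two-stage shape of the predictor model (feature extraction, then mapping features to coefficients) can be illustrated with a deliberately simplified stand-in, where a per-channel mean replaces the learned color features and a gray-world-style gain replaces the convolution layer. The real coefficients come from trained network weights; this sketch only shows the data flow.

```python
# Illustrative stand-in for the predictor model's two stages.
# Assumptions: the "color feature" is the per-channel mean, and the
# coefficient is a gain that moves that mean toward a target value.

def extract_color_features(pixels):
    """Mean intensity per color channel over N x 3 pixel data."""
    n = len(pixels)
    return [sum(p[c] for p in pixels) / n for c in range(3)]

def predict_coefficients(pixels, target=0.5):
    """One gain per channel; a trained convolution layer would replace this."""
    means = extract_color_features(pixels)
    return [target / m if m > 0 else 1.0 for m in means]

pixels = [(0.25, 0.5, 1.0), (0.25, 0.5, 1.0)]
gains = predict_coefficients(pixels)    # one coefficient per R, G, B channel
```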
The structure of the predictor model of the second color-correction model is the same as that of the predictor model of the first color-correction model shown in S402 in fig. 4. The process of predicting the color correction coefficient of the second image by the predictor model of the second color correction model can refer to S402 in fig. 4, and will not be described again.
S503, the electronic equipment performs color correction on the image to be processed based on the color correction coefficient through the correction sub-model of the second color correction model to obtain the first image.
The correction sub-model of the second color correction model has the same structure as the correction sub-model of the first color correction model shown in S403 in fig. 4. The process of performing color correction by the correction sub-model of the second color correction model can refer to S403 in fig. 4, and will not be described again.
Based on the above S501 to S503, a color correction method is provided that can not only perform image processing (such as image enhancement) but also realize color correction for the image after image processing.
In some embodiments, the electronic device may be provided with a first control. Accordingly, in some embodiments, in response to the user's opening operation on the first control, after acquiring the image to be processed, the electronic device performs the subsequent S502 to S503 to display the color-corrected first image on a display screen of the electronic device. Alternatively, in other embodiments, in response to the user's closing operation on the first control, after acquiring the image to be processed, the electronic device does not need to execute the subsequent S502 to S503 and displays the image after image processing on the display screen of the electronic device.
In other embodiments, the electronic device may be provided with a second control for triggering the turning on or off of the image processing function. The image processing function includes the functions of image processing and color correction. For example, the user may turn on or off the image processing function by performing a trigger operation on the second control.
Accordingly, in some embodiments, in response to the user's opening operation on the second control, after acquiring the second image, the electronic device performs the subsequent S501 to S503 to display the image-processed and color-corrected first image on the display screen of the electronic device. Alternatively, in other embodiments, in response to the user's closing operation on the second control, after acquiring the second image, the electronic device does not need to execute the subsequent S501 to S503 and displays the second image on the display screen of the electronic device.
In still other embodiments, the electronic device may be provided with both the first control and the second control. The user can then select the corresponding control to flexibly turn the color correction function or the image processing function on or off.
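The control logic above can be sketched as a pipeline selector. The stage names are placeholders for the sub-model calls (S501 to S503) and are not part of the patent's terminology:

```python
# Sketch of the control logic: the second control toggles the combined
# image processing + color correction pipeline (S501 to S503), the first
# control toggles color correction only (S402 to S403). Stage names are
# illustrative placeholders for the sub-model calls.

def build_pipeline(color_correction_on, image_processing_on):
    stages = []
    if image_processing_on:            # second control opened
        stages += ["process", "predict", "correct"]
    elif color_correction_on:          # first control opened
        stages += ["predict", "correct"]
    stages.append("display")           # the image is always displayed
    return stages

assert build_pipeline(True, False) == ["predict", "correct", "display"]
assert build_pipeline(False, False) == ["display"]
```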
S504, the electronic device displays the first image on a display screen of the electronic device.
It should be noted that, the content of S504 is referred to S404 in fig. 4, and will not be described again.
The above-described S503 to S504 explain the procedure in which the electronic device displays the first image on the display screen of the electronic device based on the color correction coefficient. In this way, an image with higher color quality and closer to the real color effect can be displayed for the user, improving the display effect of the image.
The technical solution provided by the embodiment shown in fig. 5 provides a solution for performing color correction based on the second color correction model. The image processing of the image to be processed is achieved by setting, in the second color correction model, a processing sub-model for image processing. The prediction of the color correction coefficients is achieved by setting, in the second color correction model, a predictor model for predicting the color correction coefficients. The color correction of the image after image processing is achieved by setting, in the second color correction model, a correction sub-model for color correction. Therefore, this model, which has both a color correction function and an image processing function, can not only perform image processing on the image to be processed but also effectively improve the color quality of the image, improving the effect of image processing. Furthermore, by displaying the color-corrected image on the display screen of the electronic device, an image with higher color quality and closer to the real color effect can be presented to the user, improving the display effect of the image.
Fig. 6 is a schematic diagram of color correction according to an embodiment of the present application. Referring to fig. 6, the image shown in A in fig. 6 refers to an image to be processed in the embodiment of the present application, that is, an image before color correction, and the image shown in B in fig. 6 refers to the first image after color correction. For example, in the image to be processed, the color of the "roof" A1 shown in A in fig. 6 appears indistinct, and its actual color cannot be discerned by the naked eye. After the color correction method provided by the embodiment of the application is applied, the color of the "roof" B1 shown in B in fig. 6 is distinct in the first image, and the naked eye can clearly determine that the roof is red. As another example, in the image to be processed, the color of the "tree" A2 shown in A in fig. 6 is indistinct, its actual color cannot be discerned by the naked eye, and the tree cannot even be distinguished from the building. After the color correction method provided by the embodiment of the application is applied, the color of the "tree" B2 shown in B in fig. 6 is distinct in the first image: the naked eye can clearly distinguish the tree from the building and determine that the tree is green. Therefore, by applying the color correction method provided by the embodiment of the application to the image shown in A, colors on different color channels can be corrected from the dimension of the pixel point, realizing color correction of the image and improving the image processing effect.
For the color correction model mentioned in fig. 4 or fig. 5, before implementing the present solution, the initial model needs to be trained iteratively based on the image training data to obtain the color correction model.
In the process of any one iteration of training, the image training data is input into the model obtained after the previous iteration of training, a color correction coefficient of the image training data is obtained through the model, and color correction is performed on the image training data based on the color correction coefficient to obtain a color-corrected output image. Model parameters are then adjusted based on the output image and the sample image corresponding to the image training data. In this way, a training method for the color correction model is provided: by iteratively training the initial model with the image training data, a model with better color correction capability can be obtained.
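The iterative training described above can be illustrated with a toy loop in which a single scalar gain stands in for the model parameters; real training updates network weights via backpropagation, so this is a sketch of the forward-compare-adjust cycle only.

```python
# Toy version of the iterative training: each iteration runs the "model"
# (a single gain) on the training data, compares the output with the
# sample image, and nudges the parameter to reduce the squared error.
# The scalar gain is an illustrative stand-in for the model parameters.

def train_gain(train_pixels, sample_pixels, lr=0.1, iters=200):
    gain = 1.0                                  # initial model parameter
    for _ in range(iters):
        grad = 0.0
        for t, s in zip(train_pixels, sample_pixels):
            for tv, sv in zip(t, s):
                # Gradient of (gain*tv - sv)^2 with respect to gain.
                grad += 2.0 * (gain * tv - sv) * tv
        gain -= lr * grad / (3 * len(train_pixels))
    return gain

train = [(0.2, 0.4, 0.6)]
sample = [(0.4, 0.8, 1.2)]    # sample colors are exactly 2x the input
g = train_gain(train, sample)  # converges toward 2.0
```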
For the first color correction model shown in fig. 4, fig. 7 is a flowchart of a training method of the first color correction model according to an embodiment of the present application. Referring to fig. 7, taking as an example the process of any one of the iterative training in the model training, the method includes the following S701-S705:
S701, in the process of any iterative training, the electronic device inputs the image training data into the model obtained after the previous iteration of training.
The image training data refers to training data of an initial model.
S702, predicting the color correction coefficient of the image training data through a predictor model of the model.
In some embodiments, feature extraction is performed on the image training data by a feature extraction layer of the predictor model to obtain color features of the image training data on different color channels. And further, carrying out convolution processing on the color characteristics of the image training data on different color channels through the convolution layer of the predictor model to obtain color correction coefficients of the image training data on different color channels.
In this embodiment, by setting a feature extraction layer and a convolution layer in the predictor model, color features of the image training data on different color channels are extracted by the feature extraction layer, and the extracted color features are subjected to convolution processing by the convolution layer to obtain the color correction coefficient. In this way, a predictor model formed by the feature extraction layer and the convolution layer is provided, and the color correction coefficient of the image training data can be predicted.
Wherein the feature extraction layer may comprise at least two layers of object model structures. The target model structure comprises a convolution layer, a batch normalization layer and an activation function layer.
Thus, a feature extraction layer is provided that is based on the construction of at least two layers of object model structures. The target model structure comprises a convolution layer, a batch standardization layer and an activation function layer. Therefore, the constructed feature extraction layer has good feature extraction capability, and can extract more abundant and effective color features, so that the color correction coefficient of the image training data is predicted based on the extracted color features, and the prediction accuracy of the color correction coefficient can be improved.
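One layer of such a target model structure (convolution, then batch normalization, then an activation function) can be sketched on a 1-D signal as follows. The fixed kernel and the ReLU activation are assumptions for illustration; real layers operate on multi-channel feature maps with learned weights.

```python
# Toy version of one "target model structure" layer: convolution, then
# batch normalization, then an activation function, on a 1-D signal.

def conv1d(signal, kernel):
    """Valid convolution (sliding weighted sum) with a fixed kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def batch_norm(values, eps=1e-5):
    """Normalize to zero mean and unit variance over the batch."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

def relu(values):
    """Activation function layer."""
    return [max(0.0, v) for v in values]

# Stacking the three stages, as in one target model structure.
out = relu(batch_norm(conv1d([1.0, 2.0, 4.0, 8.0], [0.5, 0.5])))
```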
It should be noted that, the relevant content of the predictor model in S702 may refer to S402 in fig. 4, and will not be described again.
S703, performing color correction on the image training data based on the color correction coefficient through a correction sub-model of the model to obtain the output image.
In some embodiments, the color correction coefficients include first correction coefficients for each pixel included in the image training data on different color channels. Accordingly, the above-described color correction process may include: and carrying out first adjustment processing on color values of all pixel points included in the image training data on corresponding color channels based on first correction coefficients of all pixel points included in the image training data on different color channels to obtain the output image.
In the above embodiment, a manner of performing color correction based on the first correction coefficient of each pixel point on different color channels included in the color correction coefficient is provided. The first adjustment processing is performed by combining the first correction coefficients on different color channels and the color values on the corresponding color channels, so that correction of the color values on the color channels can be realized from the dimension of the pixel points, and color correction of the image training data is realized, thereby obtaining an output image with higher color quality.
In other embodiments, the color correction coefficients include a first correction coefficient and a second correction coefficient for each pixel included in the image training data on different color channels. Accordingly, the above-described color correction process may include: and carrying out second adjustment processing on the color values of the pixel points on the corresponding color channels included in the image training data based on second correction coefficients of the pixel points on the different color channels included in the image training data, so as to obtain an image after the second adjustment processing. And carrying out first adjustment processing on color values of all pixel points on corresponding color channels in the image after the second adjustment processing based on first correction coefficients of all pixel points on different color channels included in the image training data, so as to obtain the output image.
In the above embodiment, a manner is provided in which color correction is performed by combining the first correction coefficient and the second correction coefficient of each pixel point on different color channels included in the color correction coefficients. The second adjustment processing is first performed by combining the second correction coefficients on different color channels with the color values on the corresponding color channels, so that a primary correction of the color values on the color channels can be realized quickly and efficiently from the dimension of the pixel point. Then, the first adjustment processing is performed by combining the first correction coefficients on different color channels with the color values on the corresponding color channels after the second adjustment processing, so that a secondary correction of the color values is further realized from the dimension of the pixel point, thereby realizing the color correction of the image training data and obtaining an output image with higher color quality.
In some embodiments, the second correction coefficients are also normalized before the second adjustment processing is performed, and the subsequent adjustment processing is then executed based on the normalized second correction coefficients.
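The order of operations in the last three paragraphs (normalize the second correction coefficients, apply the second adjustment, then apply the first adjustment) can be sketched as follows. The min-max normalization, additive second adjustment, and multiplicative first adjustment are assumed forms for illustration; the text above fixes only the order of the steps.

```python
# Sketch of the two-stage correction: normalized second coefficients are
# applied first (assumed additive), then first coefficients (assumed
# multiplicative gains). Both forms are illustrative assumptions.

def normalize(coeffs):
    """Min-max scale second correction coefficients into [0, 1] (an
    assumed form of the normalization mentioned above)."""
    lo, hi = min(coeffs), max(coeffs)
    if hi == lo:
        return [0.0 for _ in coeffs]
    return [(c - lo) / (hi - lo) for c in coeffs]

def two_stage_correct(values, first_coeffs, second_coeffs):
    offsets = normalize(second_coeffs)
    adjusted = [v + o for v, o in zip(values, offsets)]      # second adjustment
    return [a * g for a, g in zip(adjusted, first_coeffs)]   # first adjustment

out = two_stage_correct([0.2, 0.4], [1.0, 2.0], [2.0, 6.0])
```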
Note that, the content related to the color correction in S703 may be referred to S403 in fig. 4, and will not be described again.
S704, the electronic equipment adjusts model parameters based on the output image and the sample image corresponding to the image training data.
The sample image is an image whose color quality reaches a preset requirement. It should be understood that the sample image refers to the higher-color-quality image corresponding to the image training data. For example, the sample image may be a higher-color-quality image acquired with a high-precision camera.
In one possible implementation manner, the process of adjusting the model parameters may be: and determining a model loss value of the iterative training based on the output image and the sample image of the image training data. And adjusting the model parameters according to the model loss value.
Wherein the model loss value is used to represent the difference between the output image of the model and the sample image of the image training data.
In one possible implementation, the model loss value is a cross-entropy loss value (cross entropy loss). Accordingly, the electronic device determines the cross-entropy loss value between the output image and the sample image of the image training data, obtains the model loss value of the iterative training process, and then executes the process of adjusting the model parameters according to the model loss value.
In another possible implementation, the model loss value is a mean square error loss value (mean square error, MSE). Accordingly, the electronic device determines the mean square error loss value between the output image and the sample image of the image training data, obtains the model loss value of the iterative training process, and then executes the process of adjusting the model parameters according to the model loss value.
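The mean square error loss above, computed over the per-pixel, per-channel differences between the output image and the sample image, can be written directly:

```python
# Mean square error between the model's output image and the sample
# image, averaged over every pixel point and color channel.

def mse_loss(output, sample):
    """output, sample: equal-length lists of (r, g, b) pixel tuples."""
    n = 0
    total = 0.0
    for o, s in zip(output, sample):
        for a, b in zip(o, s):
            total += (a - b) ** 2
            n += 1
    return total / n

loss = mse_loss([(0.0, 0.5, 1.0)], [(0.0, 0.5, 0.5)])   # (1.0-0.5)^2 / 3
```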
In the above embodiment, by determining the model loss value, since the model loss value is used to represent the difference between the output image of the model and the sample image of the image training data, the model parameter is adjusted according to the model loss value, so that the learning ability of the model can be improved, and the model with better learning ability can be obtained through training. Of course, in another possible implementation, the electronic device can also obtain other types of model loss values to perform the above-described process of adjusting the model parameters according to the model loss values. The embodiment of the present application is not limited thereto.
After adjusting the model parameters, the electronic device further determines whether the model training satisfies the target condition, and further executes S705 if the model training does not satisfy the target condition. And under the condition that the model training meets the target condition, acquiring the model obtained by training in the iterative process as a first color correction model.
And S705, under the condition that the model with the model parameters adjusted does not meet the target conditions, the electronic equipment executes the next iteration training based on the model with the model parameters adjusted until the model meets the target conditions.
In some embodiments, the target condition is satisfied when at least one of the following conditions holds: the number of iterations of model training reaches a target number; or, the model loss value is less than or equal to a target threshold. The target number is a preset number of training iterations, for example, 100 iterations. The setting of the target number is not limited in the embodiment of the application. The target threshold is a preset fixed threshold, for example, the model loss value being less than 0.0001. The setting of the target threshold is not limited in the embodiment of the application.
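The target condition can be sketched as a simple predicate, using the example values given above (100 iterations, a 0.0001 loss threshold):

```python
# The target condition: training stops once the iteration count reaches
# the target number or the model loss value falls to the target threshold.
# The defaults mirror the example values given in the text above.

def training_done(iteration, loss, target_iters=100, target_loss=0.0001):
    return iteration >= target_iters or loss <= target_loss

assert training_done(100, 0.5)        # iteration budget exhausted
assert training_done(10, 0.00005)     # loss below threshold
assert not training_done(10, 0.5)     # keep training
```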
In the embodiment shown in fig. 7, the predictor model and the corrector model are set in the model, so that in the process of any one iterative training, the color correction coefficient of the image training data is predicted by using the predictor model, and the color correction is performed on the image training data by using the corrector model, so as to obtain an output image. Furthermore, based on the output image and the sample image, model parameters are adjusted so as to realize training optimization of the model, and thus the model with the color correction function is obtained through training.
For the second color correction model shown in fig. 5, fig. 8 is a flowchart of a training method of the second color correction model according to an embodiment of the present application. Referring to fig. 8, taking as an example the process of any one of the iterative training in the model training, the method includes the following S801-S806:
S801, in the process of any iterative training, the electronic device inputs the image training data into the model obtained after the previous iteration of training.
It should be noted that, in the embodiment of the present application, a color coefficient prediction module may be added on top of an original image processing model (such as a Unet network model) for model training, so as to obtain a model with both a color correction coefficient prediction function and an image processing function. Thus, in some embodiments, the image training data may employ the training data of the original processing model. Of course, in other embodiments, the image training data may be acquired anew.
S802, performing image processing on the image training data through a processing sub-model of the model to obtain an image after image processing.
The image processing may be at least one of image enhancement processing, image super-resolution processing, image restoration processing, image deblurring processing, image denoising processing, image rain removal processing, image defogging processing, quality improvement processing, or high dynamic range processing.
The above embodiment shows various types of image processing functions. Therefore, by training in combination with at least one of the image enhancement, image super-resolution, image restoration, image deblurring, image denoising, image rain removal, image defogging, quality improvement, or high dynamic range functions, a model with at least one image processing function can be obtained.
It should be noted that, the relevant content of the processing sub-model in S802 may refer to S501 in fig. 5, and will not be described again.
S803, predicting the color correction coefficient of the image training data through a predictor model of the model.
In some embodiments, feature extraction is performed on the image training data by a feature extraction layer of the predictor model to obtain color features of the image training data on different color channels. And further, carrying out convolution processing on the color characteristics of the image training data on different color channels through the convolution layer of the predictor model to obtain color correction coefficients of the image training data on different color channels.
It should be noted that, for the relevant content of S803, reference may be made to S502 in fig. 5, and the details are not described again.
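The two-step prediction of S803, feature extraction followed by convolution, can be sketched in Python as follows. This is a minimal NumPy illustration rather than the patent's trained network: the box-blur "features", the 1x1 convolution, and all parameter values are hypothetical stand-ins.

```python
import numpy as np

def extract_color_features(image):
    """Toy feature-extraction layer: per-channel local contrast.

    Stands in for the patent's feature extraction layer; a real
    implementation would be a stack of conv/batch-norm/activation blocks.
    """
    # 3x3 box blur per channel, computed with edge padding
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(image, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= 9.0
    return image - blurred  # per-channel "color feature"

def predict_coefficients(image, conv_weight, conv_bias):
    """Toy 1x1 convolution mapping the color features to per-pixel,
    per-channel correction coefficients (same spatial size as the input)."""
    features = extract_color_features(image.astype(np.float64))
    # 1x1 conv over channels: (H, W, 3) @ (3, 3) -> (H, W, 3)
    return features @ conv_weight + conv_bias

rng = np.random.default_rng(0)
img = rng.random((4, 5, 3))             # H=4, W=5, RGB
w = rng.normal(scale=0.1, size=(3, 3))  # illustrative 1x1 conv weights
b = np.ones(3)                          # bias near 1 -> near-identity coefficients
coeffs = predict_coefficients(img, w, b)
print(coeffs.shape)  # (4, 5, 3): one coefficient per pixel per channel
```

In a trained model the convolution weights would be learned so that the coefficients compensate for per-channel color casts; here they merely demonstrate the shape contract of one coefficient per pixel per color channel.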
S804, performing, through a correction sub-model of the model, color correction on the image after image processing based on the color correction coefficient, to obtain the output image.
It should be noted that, for the relevant content of S804, reference may be made to S503 in fig. 5, and the details are not described again.
S805, the electronic device adjusts model parameters based on the output image and the sample image corresponding to the image training data.
In some embodiments, the sample image may be a higher-color-quality image from the training data of the original image processing model. Alternatively, in other embodiments, the sample image may be a higher-color-quality image captured with a high-precision camera.
It should be noted that, for the relevant content of adjusting the model parameters in S805, reference may be made to S704 in fig. 7, and the details are not described again.
After adjusting the model parameters, the electronic device further determines whether the model training satisfies the target condition, and executes S806 if it does not. If the model training satisfies the target condition, the model obtained by training in this iteration is taken as the second color correction model.
S806, if the model with the adjusted model parameters does not meet the target condition, the electronic device performs the next iteration of training based on the model with the adjusted model parameters, until the model meets the target condition.
In the embodiment shown in fig. 8, the processing sub-model, the prediction sub-model, and the correction sub-model are provided in the model. In any iteration of training, the processing sub-model performs image processing on the image training data, the prediction sub-model predicts the color correction coefficient of the image training data, and the correction sub-model performs color correction, so as to obtain an output image. Then, based on the output image and the sample image, the model parameters are adjusted to optimize the model through training, thereby obtaining a model having both a color correction function and an image processing function.
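The iterative loop of S801 to S806 (correct, compare with the sample image, adjust parameters, stop once the target condition is met) can be sketched with a toy model whose only parameters are three per-channel gains. The L2 loss, learning rate, and stopping threshold are illustrative choices, not taken from the patent:

```python
import numpy as np

# Toy stand-in for the training loop: three scalar per-channel gains play
# the role of the model parameters being trained.
rng = np.random.default_rng(1)
train_img = rng.random((8, 8, 3))        # image training data
true_gain = np.array([0.9, 1.1, 1.05])   # unknown "ideal" correction
sample_img = train_img * true_gain       # higher-color-quality sample image

gain = np.ones(3)                        # model parameters being trained
lr = 0.5
for step in range(1000):                 # each pass = one iteration of training
    output = train_img * gain            # color correction with current coefficients
    residual = output - sample_img
    loss = np.mean(residual ** 2)        # compare output with sample image
    if loss < 1e-10:                     # target condition met -> stop
        break
    # adjust model parameters: gradient of the squared error w.r.t. each gain
    grad = 2 * np.mean(residual * train_img, axis=(0, 1))
    gain -= lr * grad

print(np.round(gain, 3))                 # converges toward true_gain
```

The same structure carries over when the "model" is a neural network: the per-channel gains become the network weights, and the gradient step is performed by an optimizer.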
It should be noted that the electronic device executing the model training process in fig. 7 and 8 may be the same as, or different from, the electronic device executing the model application process in fig. 4 and 5. For example, in some embodiments, the electronic device executing the model application process in fig. 4 and 5 may be a terminal, and the electronic device executing the model training process in fig. 7 and 8 may be a server.
It will be appreciated that, in order to implement the above-described functions, the electronic device (such as a terminal) in the embodiment of the present application includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
Fig. 9 is a schematic diagram of a color correction apparatus according to an embodiment of the present application. Referring to fig. 9, the color correction apparatus includes an image acquisition module 901, a coefficient acquisition module 902, and a display module 903. Wherein,
an image acquisition module 901, configured to acquire an image to be processed;
a coefficient obtaining module 902, configured to obtain a color correction coefficient of the image to be processed, where the color correction coefficient is used to correct colors of pixel points included in the image to be processed on different color channels;
the display module 903 is configured to display a first image on a display screen of the electronic device, where the first image is an image obtained by performing color correction on the image to be processed based on the color correction coefficient.
According to the technical solution provided by the embodiment of the present application, by obtaining the color correction coefficient of the image to be processed, colors on different color channels can be corrected at the pixel level, so that color correction of the image is achieved and the image processing effect is improved. Furthermore, by displaying the color-corrected image on the display screen of the electronic device, an image with higher color quality and closer to the real colors can be presented to the user, improving the display effect of the image.
In some embodiments, the image to be processed is a second image; or the image to be processed is an image obtained after the second image is subjected to image processing;
the second image is an image to be displayed on the display screen of the first application.
In some embodiments, in the case that the image to be processed is an image obtained by performing image processing on the second image, the image obtaining module 901 is specifically configured to:
receiving an operation of triggering the first application to display an image by a user;
responding to the operation of triggering the first application to display an image, and acquiring the second image to be displayed;
performing image processing on the second image to obtain the image to be processed;
the display module 903 is specifically configured to:
the first image is displayed in an interface of the first application.
In some embodiments, the coefficient acquisition module 902 is specifically configured to:
inputting the image to be processed into a first color correction model, and predicting the color correction coefficient of the image to be processed through a predictor model of the first color correction model;
the display module 903 is specifically configured to:
performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image;
The first image is displayed on a display screen of the electronic device.
In some embodiments, the coefficient acquisition module 902 is specifically configured to:
extracting the characteristics of the image to be processed through the characteristic extraction layer of the predictor model to obtain the color characteristics of the image to be processed on different color channels;
and carrying out convolution processing on the color characteristics of the image to be processed on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the image to be processed on different color channels.
In some embodiments, in the case that the image to be processed is an image obtained by performing image processing on the second image, the image obtaining module 901 is specifically configured to:
inputting the second image into a second color correction model, and performing image processing on the second image through a processing sub-model of the second color correction model to obtain the image to be processed;
the coefficient obtaining module 902 is specifically configured to:
predicting the color correction coefficient of the second image through a predictor model of the second color correction model to serve as the color correction coefficient of the image to be processed;
the display module 903 is specifically configured to:
Performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the second color correction model to obtain the first image;
the first image is displayed on a display screen of the electronic device.
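The data flow of these modules (the processing sub-model transforms the second image, the prediction sub-model derives coefficients from the original second image, and the correction sub-model applies them to the processed image) can be sketched as follows; all three sub-models are hypothetical stand-ins, not the patent's trained networks:

```python
import numpy as np

def processing_submodel(image):
    """Illustrative image processing, e.g. a simple brightness enhancement."""
    return np.clip(image * 1.2, 0.0, 1.0)

def predictor_submodel(image):
    """Illustrative per-pixel gains derived from the *unprocessed* input."""
    return 1.0 + 0.1 * (image.mean(axis=2, keepdims=True) - image)

def correction_submodel(image, coeffs):
    """Apply the per-pixel, per-channel coefficients to the processed image."""
    return image * coeffs

second_image = np.random.default_rng(3).random((4, 4, 3))
to_process = processing_submodel(second_image)    # the image to be processed
coeffs = predictor_submodel(second_image)         # predicted from the second image
first_image = correction_submodel(to_process, coeffs)
print(first_image.shape)  # (4, 4, 3)
```

Note the ordering: the coefficients come from the second image itself, while the correction is applied to the output of the processing sub-model, matching the module descriptions above.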
In some embodiments, the coefficient acquisition module 902 is specifically configured to:
extracting features of the second image through a feature extraction layer of the predictor model to obtain color features of the second image on different color channels;
and carrying out convolution processing on the color characteristics of the second image on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the second image on different color channels.
In some embodiments, the feature extraction layer includes at least two layers of object model structures including a convolution layer, a batch normalization layer, and an activation function layer.
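A forward pass of one such object model structure (convolution layer, batch normalization layer, activation function layer), stacked twice to satisfy the "at least two layers" condition, can be sketched as follows. This is an illustrative NumPy forward pass with random weights, not the patent's trained feature extraction layer:

```python
import numpy as np

def conv_bn_act(x, weight, bias, eps=1e-5):
    """One object model structure: 3x3 conv -> batch norm -> ReLU.

    x: (H, W, C_in); weight: (3, 3, C_in, C_out); bias: (C_out,).
    Minimal forward pass only; real layers also carry learned BN scale/shift.
    """
    h, w_, cin = x.shape
    cout = weight.shape[-1]
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros((h, w_, cout))
    for i in range(h):
        for j in range(w_):
            patch = padded[i:i + 3, j:j + 3, :]          # (3, 3, C_in)
            out[i, j] = np.tensordot(patch, weight, axes=3) + bias
    # batch normalization over the spatial dimensions, per output channel
    mean = out.mean(axis=(0, 1))
    var = out.var(axis=(0, 1))
    out = (out - mean) / np.sqrt(var + eps)
    return np.maximum(out, 0.0)                          # ReLU activation

rng = np.random.default_rng(2)
x = rng.random((6, 6, 3))
w1 = rng.normal(size=(3, 3, 3, 8))
w2 = rng.normal(size=(3, 3, 8, 3))
# "at least two layers" of the object model structure, stacked
y = conv_bn_act(conv_bn_act(x, w1, np.zeros(8)), w2, np.zeros(3))
print(y.shape)  # (6, 6, 3)
```

The 3x3 kernel size, ReLU activation, and channel widths are assumptions; the patent only specifies the conv/batch-norm/activation composition.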
In some embodiments, the color correction coefficients include first correction coefficients for each pixel included in the image to be processed on different color channels;
the display module 903 includes a correction module for:
and carrying out first adjustment processing on color values of all pixel points included in the image to be processed on the corresponding color channels based on first correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain the first image.
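As an illustration, a multiplicative form of the first adjustment processing is sketched below. The multiplicative form is an assumption, since the patent specifies only a per-pixel, per-channel adjustment based on the first correction coefficient:

```python
import numpy as np

def first_adjustment(image, first_coeff):
    """Scale each pixel's color value on each channel by that pixel's
    first correction coefficient; both arrays have shape (H, W, C)."""
    return image * first_coeff

img = np.array([[[0.2, 0.4, 0.6]]])    # one pixel, three color channels
coeff = np.array([[[1.5, 1.0, 0.5]]])  # per-channel coefficients for that pixel
print(first_adjustment(img, coeff))    # [[[0.3 0.4 0.3]]]
```

Because the coefficient array has the same shape as the image, each color channel of each pixel is adjusted independently, which is what distinguishes this scheme from a single global white-balance gain.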
In some embodiments, the color correction coefficients include a first correction coefficient and a second correction coefficient for each pixel included in the image to be processed on different color channels;
the display module 903 includes a correction module for:
performing second adjustment processing on color values of all pixel points included in the image to be processed on corresponding color channels based on second correction coefficients of all pixel points included in the image to be processed on different color channels to obtain an image after the second adjustment processing;
and carrying out first adjustment processing on color values of all pixel points on corresponding color channels in the image after the second adjustment processing based on first correction coefficients of all pixel points on different color channels in the image to be processed, so as to obtain the first image.
In some embodiments, the apparatus further comprises a processing module for:
and normalizing the second correction coefficient.
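One way to realize the normalize-then-adjust sequence of this embodiment is sketched below. The sigmoid normalization, the additive role of the second correction coefficient, and the multiplicative role of the first are all assumptions; the patent does not fix the form of either adjustment:

```python
import numpy as np

def normalize(second_coeff):
    """Illustrative normalization: squash raw coefficients into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-second_coeff))

def correct(image, first_coeff, second_coeff):
    offset = normalize(second_coeff)
    adjusted = image + offset            # second adjustment processing
    return adjusted * first_coeff        # first adjustment processing

img = np.zeros((2, 2, 3))
first = np.full((2, 2, 3), 2.0)
second = np.zeros((2, 2, 3))             # sigmoid(0) = 0.5
out = correct(img, first, second)
print(out[0, 0])  # [1. 1. 1.]  i.e. (0 + 0.5) * 2.0 on each channel
```

Applying the normalized second coefficient before the first coefficient matches the order described above: the second adjustment produces an intermediate image, and the first adjustment then yields the first image.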
In some embodiments, the apparatus further comprises a processing module for:
performing image processing on the first image to obtain a first image after image processing;
the display module 903 is configured to:
and displaying the first image after the image processing on a display screen of the electronic device.
In some embodiments, the image processing is at least one of image enhancement processing, image super-resolution processing, image restoration processing, image deblurring processing, image denoising processing, image rain removal processing, image defogging processing, quality improvement processing, or high dynamic range processing.
Fig. 10 is a schematic diagram of a training device for a color correction model according to an embodiment of the present application. Referring to fig. 10, the training apparatus of the color correction model includes a training module 1001.
The training module 1001 is configured to perform iterative training on the initial model based on the image training data to obtain the color correction model;
in the process of any iteration training, inputting the image training data into a model obtained after the previous iteration training, obtaining a color correction coefficient of the image training data through the model, carrying out color correction on the image training data based on the color correction coefficient to obtain a color-corrected output image, wherein the color correction coefficient is used for correcting the colors of all pixel points included in the image training data on different color channels; and adjusting model parameters based on the output image and a sample image corresponding to the image training data, wherein the sample image is an image with the color quality reaching a preset requirement.
The technical solution provided by the embodiment of the present application is a training method of a color correction model, in which iterative training is performed on an initial model using image training data. During model training, the model parameters are adjusted based on the output image of the model and the sample image corresponding to the image training data, so as to optimize the model through training; a model with better color correction capability can thus be obtained.
In some embodiments, the training module 1001 is specifically configured to:
predicting color correction coefficients of the image training data through a predictor model of the model;
and carrying out color correction on the image training data based on the color correction coefficient through a correction sub-model of the model to obtain the output image.
In some embodiments, the training module 1001 is specifically configured to:
performing image processing on the image training data through a processing sub-model of the model to obtain an image after image processing;
predicting color correction coefficients of the image training data through a predictor model of the model;
and carrying out color correction on the image processed by the image based on the color correction coefficient through a correction sub-model of the model to obtain the output image.
In some embodiments, the image processing is at least one of image enhancement processing, image super-resolution processing, image restoration processing, image deblurring processing, image denoising processing, image rain removal processing, image defogging processing, quality improvement processing, or high dynamic range processing.
In some embodiments, the training module 1001 is specifically configured to:
extracting features of the image training data through a feature extraction layer of the predictor model to obtain color features of the image training data on different color channels;
and carrying out convolution processing on the color characteristics of the image training data on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the image training data on different color channels.
In some embodiments, the feature extraction layer includes at least two layers of object model structures including a convolution layer, a batch normalization layer, and an activation function layer.
In some embodiments, the color correction coefficients include first correction coefficients for each pixel included in the image training data on different color channels;
the training module 1001 is specifically configured to:
And carrying out first adjustment processing on color values of all pixel points included in the image training data on corresponding color channels based on first correction coefficients of all pixel points included in the image training data on different color channels to obtain the output image.
In some embodiments, the color correction coefficients include a first correction coefficient and a second correction coefficient for each pixel included in the image training data on different color channels;
the training module 1001 is specifically configured to:
performing second adjustment processing on color values of each pixel point included in the image training data on the corresponding color channel based on second correction coefficients of each pixel point included in the image training data on different color channels to obtain an image after the second adjustment processing;
and carrying out first adjustment processing on color values of all pixel points on corresponding color channels in the image after the second adjustment processing based on first correction coefficients of all pixel points on different color channels included in the image training data, so as to obtain the output image.
In some embodiments, the apparatus further comprises a processing module for:
and normalizing the second correction coefficient.
The embodiment of the present application also provides an electronic device, comprising: a display screen, a processor, and a memory. The display screen has a display function. The processor is connected to the memory; the memory is configured to store program code, and the processor executes the program code stored in the memory, so as to implement the color correction method provided by the embodiments of the present application.
The embodiment of the present application also provides a computer-readable storage medium storing program code; when the program code runs on the electronic device, the electronic device is caused to execute the functions or steps executed by the electronic device in the above method embodiments.
The embodiment of the present application also provides a computer program product comprising program code that, when run on the electronic device, causes the electronic device to execute the functions or steps executed by the electronic device in the above method embodiments.
The electronic device, the computer readable storage medium or the computer program product provided by the embodiments of the present application are configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the electronic device, the computer readable storage medium or the computer program product can refer to the beneficial effects in the corresponding method provided above, and are not repeated herein.
It will be apparent to those skilled in the art from this disclosure that, for convenience and brevity, only the above-described division of functional modules is illustrated, and in practical applications, the above-described functional allocation may be performed by different functional modules, that is, the internal structure of the apparatus (e.g., electronic device) is divided into different functional modules, so as to perform all or part of the above-described functions. The specific working processes of the above-described system, apparatus (e.g., electronic device) and unit may refer to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed systems, apparatuses (e.g., electronic devices) and methods may be implemented in other ways. For example, the above-described embodiments of an apparatus (e.g., an electronic device) are merely illustrative, and the division of the module or unit is merely a logical function division, and may be implemented in other ways, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes: flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a specific implementation of the present application, and the protection scope of the present application is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (21)
1. A color correction method, applied to an electronic device, comprising:
acquiring an image to be processed;
acquiring a color correction coefficient of the image to be processed, wherein the color correction coefficient is used for correcting colors of pixel points included in the image to be processed on different color channels;
and displaying a first image on a display screen of the electronic device based on the color correction coefficient, wherein the first image is an image obtained by performing color correction on the image to be processed based on the color correction coefficient.
2. The method according to claim 1, wherein the image to be processed is a second image; or the image to be processed is an image obtained after the second image is subjected to image processing;
The second image is an image to be displayed on the display screen of the first application.
3. The method according to claim 2, wherein, in the case where the image to be processed is an image obtained by image processing the second image, the acquiring the image to be processed includes:
receiving an operation of triggering the first application to display an image by a user;
responding to the operation of triggering the first application to display an image, and acquiring the second image to be displayed;
performing image processing on the second image to obtain the image to be processed;
the displaying a first image on a display screen of the electronic device includes:
the first image is displayed in an interface of the first application.
4. The method according to claim 2, wherein the acquiring the color correction coefficients of the image to be processed comprises:
inputting the image to be processed into a first color correction model, and predicting the color correction coefficient of the image to be processed through a predictor model of the first color correction model;
the displaying a first image on a display screen of the electronic device based on the color correction coefficient includes:
Performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the first color correction model to obtain the first image;
and displaying the first image on a display screen of the electronic device.
5. The method of claim 4, wherein predicting the color correction coefficients of the image to be processed by the predictor model of the first color correction model comprises:
extracting the characteristics of the image to be processed through the characteristic extraction layer of the predictor model to obtain the color characteristics of the image to be processed on different color channels;
and carrying out convolution processing on the color characteristics of the image to be processed on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the image to be processed on different color channels.
6. The method according to claim 2, wherein, in the case where the image to be processed is an image obtained by image processing the second image, the acquiring the image to be processed includes:
inputting the second image into a second color correction model, and performing image processing on the second image through a processing sub-model of the second color correction model to obtain the image to be processed;
The obtaining the color correction coefficient of the image to be processed comprises the following steps:
predicting the color correction coefficient of the second image through a predictor model of the second color correction model, to serve as the color correction coefficient of the image to be processed;
the displaying a first image on a display screen of the electronic device based on the color correction coefficient includes:
performing color correction on the image to be processed based on the color correction coefficient through a correction sub-model of the second color correction model to obtain the first image;
and displaying the first image on a display screen of the electronic device.
7. The method of claim 6, wherein predicting the color correction coefficients of the second image by the predictor model of the second color correction model comprises:
extracting the characteristics of the second image through the characteristic extraction layer of the predictor model to obtain the color characteristics of the second image on different color channels;
and carrying out convolution processing on the color characteristics of the second image on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the second image on different color channels.
8. The method of claim 5 or 7, wherein the feature extraction layer comprises at least two layers of object model structures including a convolution layer, a batch normalization layer, and an activation function layer.
9. The method according to claim 4 or 6, wherein the color correction coefficients include first correction coefficients for each pixel included in the image to be processed on different color channels;
the step of performing color correction on the image to be processed based on the color correction coefficient to obtain the first image includes:
and carrying out first adjustment processing on color values of all pixel points included in the image to be processed on corresponding color channels based on first correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain the first image.
10. The method according to claim 4 or 6, wherein the color correction coefficients include a first correction coefficient and a second correction coefficient for each pixel included in the image to be processed on different color channels;
the step of performing color correction on the image to be processed based on the color correction coefficient to obtain the first image includes:
Performing second adjustment processing on color values of all pixel points included in the image to be processed on corresponding color channels based on second correction coefficients of all pixel points included in the image to be processed on different color channels, so as to obtain an image after the second adjustment processing;
and carrying out first adjustment processing on color values of all pixel points on corresponding color channels in the image after the second adjustment processing based on first correction coefficients of all pixel points on different color channels in the image to be processed, so as to obtain the first image.
11. The method according to claim 10, wherein, before performing the second adjustment processing on the color value of each pixel point included in the image to be processed on the corresponding color channel based on the second correction coefficient of each pixel point included in the image to be processed on the different color channel, the method further includes:
and normalizing the second correction coefficient.
12. A method of training a color correction model for an electronic device, the method comprising:
performing iterative training on the initial model based on the image training data to obtain the color correction model;
In the process of any iterative training, inputting the image training data into a model obtained after the previous iterative training, obtaining a color correction coefficient of the image training data through the model, performing color correction on the image training data based on the color correction coefficient to obtain a color-corrected output image, wherein the color correction coefficient is used for correcting the colors of all pixel points included in the image training data on different color channels; and adjusting model parameters based on the output image and sample images corresponding to the image training data, wherein the sample images are images with color quality reaching preset requirements.
13. The method of claim 12, wherein the obtaining the color correction coefficient of the image training data by the model, performing color correction on the image training data based on the color correction coefficient, and obtaining the color-corrected output image, comprises:
predicting color correction coefficients of the image training data through a predictor model of the model;
and carrying out color correction on the image training data based on the color correction coefficient through a correction sub-model of the model to obtain the output image.
14. The method of claim 12, wherein the obtaining the color correction coefficient of the image training data by the model, performing color correction on the image training data based on the color correction coefficient, and obtaining the color-corrected output image, comprises:
performing image processing on the image training data through a processing sub-model of the model to obtain an image after image processing;
predicting color correction coefficients of the image training data through a predictor model of the model;
and carrying out color correction on the image after image processing based on the color correction coefficient through a correction sub-model of the model to obtain the output image.
15. The method according to claim 13 or 14, wherein predicting color correction coefficients of the image training data by a predictor model of the model comprises:
extracting features of the image training data through a feature extraction layer of the predictor model to obtain color features of the image training data on different color channels;
and carrying out convolution processing on the color characteristics of the image training data on different color channels through the convolution layer of the prediction submodel to obtain color correction coefficients of the image training data on different color channels.
16. The method of claim 15, wherein the feature extraction layer comprises at least two layers of object model structures, the object model structures comprising a convolution layer, a batch normalization layer, and an activation function layer.
17. The method of claim 12, wherein the color correction coefficients comprise a first correction coefficient for each pixel included in the image training data on each of the different color channels;
and performing color correction on the image training data based on the color correction coefficients to obtain the color-corrected output image comprises:
performing first adjustment processing on the color value of each pixel included in the image training data on the corresponding color channel, based on the first correction coefficient of that pixel on the corresponding color channel, to obtain the output image.
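Claim 17 leaves the concrete form of the "first adjustment processing" open; a common reading is a per-pixel, per-channel multiplicative gain, sketched here under that assumption:

```python
import numpy as np

def first_adjustment(image, coeff1):
    """Hypothetical first adjustment (claim 17): scale each color value
    by its own per-pixel, per-channel first correction coefficient.
    The multiplicative form is an assumption; the claim only says the
    color values are adjusted based on the coefficients."""
    assert image.shape == coeff1.shape, "one coefficient per color value"
    return np.clip(image * coeff1, 0.0, 1.0)
```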
18. The method according to claim 12, wherein the color correction coefficients comprise a first correction coefficient and a second correction coefficient for each pixel included in the image training data on each of the different color channels;
and performing color correction on the image training data based on the color correction coefficients to obtain the color-corrected output image comprises:
performing second adjustment processing on the color value of each pixel included in the image training data on the corresponding color channel, based on the second correction coefficient of that pixel on the corresponding color channel, to obtain an image after the second adjustment processing;
and performing first adjustment processing on the color value of each pixel of the image after the second adjustment processing on the corresponding color channel, based on the first correction coefficient of that pixel on the corresponding color channel, to obtain the output image.
19. The method according to claim 18, further comprising, before the second adjustment processing is performed on the color value of each pixel included in the image training data on the corresponding color channel based on the second correction coefficient of that pixel:
normalizing the second correction coefficients.
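Claims 18 and 19 likewise do not fix the two adjustment operations or the normalization. One plausible sketch treats the normalized second coefficient as an additive offset applied before a multiplicative first adjustment; the tanh normalization and both operation forms are assumptions:

```python
import numpy as np

def correct_two_stage(image, coeff1, coeff2):
    """Hypothetical claims 18-19 flow. Assumptions: the second
    correction coefficients are normalized to (-1, 1) with tanh
    (claim 19), applied as an additive offset (second adjustment),
    and the first coefficients act as a multiplicative gain
    (first adjustment). The claims leave all three unspecified."""
    coeff2_n = np.tanh(coeff2)                   # normalization, claim 19
    shifted = image + coeff2_n                   # second adjustment
    return np.clip(shifted * coeff1, 0.0, 1.0)   # first adjustment
```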
20. An electronic device comprising a display screen, a memory, and a processor, wherein the display screen is configured to display content; the memory is configured to store program code; and the processor is configured to invoke the program code to perform the method of any one of claims 1-11 or 12-19.
21. A computer-readable storage medium comprising program code which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1-11 or 12-19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310983691.0A CN116721038A (en) | 2023-08-07 | 2023-08-07 | Color correction method, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116721038A (en) | 2023-09-08
Family
ID=87873725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310983691.0A Pending CN116721038A (en) | 2023-08-07 | 2023-08-07 | Color correction method, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116721038A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109087269A (en) * | 2018-08-21 | 2018-12-25 | 厦门美图之家科技有限公司 | Low light image Enhancement Method and device |
CN109523485A (en) * | 2018-11-19 | 2019-03-26 | Oppo广东移动通信有限公司 | Image color correction method, device, storage medium and mobile terminal |
CN113168673A (en) * | 2019-04-22 | 2021-07-23 | 华为技术有限公司 | Image processing method and device and electronic equipment |
CN114299180A (en) * | 2021-12-29 | 2022-04-08 | 苏州科达科技股份有限公司 | Image reconstruction method, device, equipment and storage medium |
CN114331927A (en) * | 2020-09-28 | 2022-04-12 | Tcl科技集团股份有限公司 | Image processing method, storage medium and terminal equipment |
CN114841863A (en) * | 2021-01-30 | 2022-08-02 | 华为技术有限公司 | Image color correction method and device |
US20220301123A1 (en) * | 2021-03-17 | 2022-09-22 | Ali Mosleh | End to end differentiable machine vision systems, methods, and media |
CN115861119A (en) * | 2022-12-20 | 2023-03-28 | 上海工业自动化仪表研究院有限公司 | Rock slag image color cast correction method based on deep convolutional neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220207680A1 (en) | Image Processing Method and Apparatus | |
EP3410390B1 (en) | Image processing method and device, computer readable storage medium and electronic device | |
JP7266672B2 (en) | Image processing method, image processing apparatus, and device | |
JP6803982B2 (en) | Optical imaging method and equipment | |
CN111179282B (en) | Image processing method, image processing device, storage medium and electronic apparatus | |
JP2022505115A (en) | Image processing methods and equipment and devices | |
CN113810598A (en) | Photographing method and device | |
CN112954251B (en) | Video processing method, video processing device, storage medium and electronic equipment | |
WO2021077878A1 (en) | Image processing method and apparatus, and electronic device | |
US11508046B2 (en) | Object aware local tone mapping | |
CN115359105B (en) | Depth-of-field extended image generation method, device and storage medium | |
CN113096022B (en) | Image blurring processing method and device, storage medium and electronic device | |
US12088908B2 (en) | Video processing method and electronic device | |
CN116012262B (en) | Image processing method, model training method and electronic equipment | |
CN115550544B (en) | Image processing method and device | |
CN115546858B (en) | Face image processing method and electronic equipment | |
CN113810622B (en) | Image processing method and device | |
CN116721038A (en) | Color correction method, electronic device, and storage medium | |
CN115460343A (en) | Image processing method, apparatus and storage medium | |
CN116320784B (en) | Image processing method and device | |
CN116452437B (en) | High dynamic range image processing method and electronic equipment | |
CN117745620B (en) | Image processing method and electronic equipment | |
CN113747046B (en) | Image processing method, device, storage medium and electronic equipment | |
CN116723416B (en) | Image processing method and electronic equipment | |
WO2024148968A1 (en) | Image preview method and terminal device |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||