CN111738957A - Intelligent beautifying method and system for image, electronic equipment and storage medium - Google Patents
- Publication number
- CN111738957A (application number CN202010595315.0A)
- Authority
- CN
- China
- Prior art keywords: image, beautification, target, training, beautified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/77
- G06V10/40 — Extraction of image or video features (G06V10/00 — Arrangements for image or video recognition or understanding)
- G06T2207/10024 — Color image (G06T2207/10 — Image acquisition modality)
- G06T2207/20081 — Training; Learning (G06T2207/20 — Special algorithmic details)
Abstract
The invention discloses an intelligent beautification method and system for images, an electronic device and a storage medium. The method comprises the following steps: obtaining a sample data set, wherein the sample data set comprises a plurality of sample image pairs, and each sample image pair comprises a training image and a corresponding beautified image; performing feature extraction processing on the training image; training a preset beautification coefficient estimation model by using the training image after the feature extraction processing and the corresponding beautified image to obtain a target beautification coefficient estimation model; acquiring a target image, and performing feature extraction processing on the target image; processing the target image after the feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image; and beautifying the target image according to the beautification estimation coefficient to obtain a target beautified image corresponding to the target image. The invention can solve the problems of complicated operation and low efficiency of image beautification in the prior art.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an intelligent image beautifying method, an intelligent image beautifying system, electronic equipment and a storage medium.
Background
Images display content intuitively, are an important medium for conveying information, and are widely used by online travel agencies (OTAs). Displaying images effectively and accurately can greatly improve the user experience and increase the user conversion rate. Because the sources of OTA images are diverse, most images showing key content such as hotels and destinations are shot by amateur photographers, and image quality is strongly affected by the photographer's skill, the illumination of the scene, and the shooting equipment. Therefore, improving the aesthetics of poor-quality images — by adjusting illumination, sharpness, color tone and the like without changing the picture content — can greatly improve the user experience and has high application value.
At present, image beautification is mainly performed manually with image processing software. The operation is cumbersome, and for large batches of images such manual processing is inefficient.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, an object of the present invention is to provide an intelligent image beautification method, system, electronic device and storage medium, so as to solve the problems of complicated operation and low efficiency of image beautification in the prior art.
In order to achieve the above object, the present invention provides an intelligent beautification method for an image, comprising:
obtaining a sample data set, wherein the sample data set comprises a plurality of sample image pairs, and the sample image pairs comprise training images and corresponding beautified images;
carrying out feature extraction processing on the training image;
training a preset beautification coefficient estimation model by using the training image after the feature extraction processing and the corresponding beautified image to obtain a target beautification coefficient estimation model;
acquiring a target image, and performing feature extraction processing on the target image;
processing the target image after feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image;
and beautifying the target image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the target image.
In a preferred embodiment of the present invention, after obtaining the target beautification image corresponding to the target image, the method further includes:
and processing the target image and the target beautified image corresponding to the target image by using a preset beautification quality evaluation model to obtain the probability that the target image is effectively beautified, and judging whether the target image is effectively beautified according to the probability.
In a preferred embodiment of the present invention, the method further comprises:
when the target image is judged to be effectively beautified, displaying a target beautification image corresponding to the target image;
and when the target image is judged not to be effectively beautified, displaying the target image.
In a preferred embodiment of the present invention, before performing the feature extraction process on the training image, the method further includes: performing down-sampling processing on the training image;
before performing the feature extraction processing on the target image, the method further includes: and performing down-sampling processing on the target image.
In a preferred embodiment of the present invention, the beautification coefficient estimation model is a bilateral grid network model as shown in the following formula (3):

S = (1/W_p) · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

wherein S represents the beautification estimation coefficient, W_p represents the network weight at point p of the input image, F represents the preset neighborhood range of point p, G_{σs} represents a Gaussian kernel function over the spatial domain, G_{σr} represents a Gaussian kernel function over the pixel-value domain, ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
In a preferred embodiment of the present invention, the step of training the preset beautification coefficient estimation model by using the training image after the feature extraction processing and the corresponding beautification image to obtain the target beautification coefficient estimation model includes:
processing the training image after the feature extraction processing by using the beautification coefficient estimation model to obtain a beautification estimation coefficient corresponding to the training image;
beautifying the training image according to the beautification estimation coefficient to obtain a target beautified image corresponding to the training image;
constructing a loss function based on the target beautified image and the beautified image corresponding to the training image;
and adjusting parameters of the beautification coefficient estimation model based on the loss function, and returning to the step of processing the training image after feature extraction processing by using the beautification coefficient estimation model until the loss function meets a preset condition to obtain a target beautification coefficient estimation model.
In a preferred embodiment of the present invention, the step of constructing the loss function based on the target beautified image and the beautified image corresponding to the training image includes:
calculating a reconstruction loss L_r according to formula (5) based on the target beautified image and the beautified image corresponding to the training image:

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

wherein I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient;
calculating a color loss L_c according to formula (6) based on the target beautified image and the beautified image corresponding to the training image:

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

wherein F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the corresponding position in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle between F(I′)_P and F(Î)_P;
constructing the loss function from the reconstruction loss L_r and the color loss L_c.
In order to achieve the above object, the present invention provides an intelligent beautification system for an image, comprising:
the system comprises a sample acquisition module, a processing module and a processing module, wherein the sample acquisition module is used for acquiring a sample data set, the sample data set comprises a plurality of sample image pairs, and the sample image pairs comprise training images and corresponding beautified images;
the first feature extraction module is used for carrying out feature extraction processing on the training image;
the model training module is used for training a preset beautification coefficient estimation model by using the training image after the feature extraction processing and the corresponding beautified image to obtain a target beautification coefficient estimation model;
the target image acquisition module is used for acquiring a target image;
the second feature extraction module is used for carrying out feature extraction processing on the target image;
the model processing module is used for processing the target image after the feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image;
and the beautification processing module is used for beautifying the target image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the target image.
In a preferred embodiment of the present invention, the system further comprises:
and the beautification evaluation module is used for processing the target image and the target beautification image corresponding to the target image by using a preset beautification quality evaluation model to obtain the probability of effective beautification of the target image, and judging whether the target image is effectively beautified according to the probability.
In a preferred embodiment of the present invention, the system further comprises:
and the display module is used for displaying the target beautification image corresponding to the target image when the target image is effectively beautified, and displaying the target image when the target image is not effectively beautified.
In a preferred embodiment of the present invention, the system further comprises:
the first down-sampling module is used for performing down-sampling processing on the training image before the first feature extraction module performs feature extraction processing on the training image;
and the second down-sampling module is used for performing down-sampling processing on the target image before the second feature extraction module performs feature extraction processing on the target image.
In a preferred embodiment of the present invention, the beautification coefficient estimation model is a bilateral grid network model as shown in the following formula (3):

S = (1/W_p) · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

wherein S represents the beautification estimation coefficient, W_p represents the network weight at point p of the input image, F represents the preset neighborhood range of point p, G_{σs} represents a Gaussian kernel function over the spatial domain, G_{σr} represents a Gaussian kernel function over the pixel-value domain, ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
In a preferred embodiment of the present invention, the model training module includes:
the model processing unit is used for processing the training image after the feature extraction processing by using the beautification coefficient estimation model to obtain the beautification estimation coefficient corresponding to the training image;
the beautification unit is used for beautifying the training image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the training image;
a loss function construction unit, configured to construct a loss function based on the target beautified image and the beautified image corresponding to the training image;
and the iterative training unit is used for adjusting the parameters of the beautification coefficient estimation model based on the loss function and re-calling the model processing unit until the loss function meets the preset condition to obtain the target beautification coefficient estimation model.
In a preferred embodiment of the present invention, the loss function constructing unit includes:
a reconstruction loss calculating subunit, configured to calculate a reconstruction loss L_r according to formula (5) based on the target beautified image and the beautified image corresponding to the training image:

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

wherein I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient;
a color loss calculating subunit, configured to calculate a color loss L_c according to formula (6) based on the target beautified image and the beautified image corresponding to the training image:

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

wherein F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the corresponding position in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle between F(I′)_P and F(Î)_P;
a loss combining subunit, configured to construct the loss function from the reconstruction loss L_r and the color loss L_c.
In order to achieve the above object, the present invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the aforementioned method when executing the computer program.
In order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the aforementioned method.
By adopting the technical scheme, the invention has the following beneficial effects:
the method utilizes a deep learning method to learn the training image and the corresponding beautified image, can rapidly and effectively beautify the image, efficiently improves the quality of the image under the condition of not changing the content of the image, can greatly save the operation and maintenance cost, ensures the attractiveness and accuracy of image display, and effectively improves the service experience of users in OTA and other scenes.
Drawings
FIG. 1 is a flow chart of an intelligent beautification method for images according to embodiment 1 of the present invention;
FIG. 2 is a flowchart of an intelligent beautification method for images according to embodiment 2 of the present invention;
FIG. 3 is a block diagram of an image intelligent beautification system according to embodiment 3 of the present invention;
FIG. 4 is a block diagram of a model training module according to embodiment 3 of the present invention;
FIG. 5 is a block diagram of a loss function constructing unit according to embodiment 3 of the present invention;
FIG. 6 is a block diagram of an image intelligent beautification system according to embodiment 4 of the present invention;
fig. 7 is a hardware architecture diagram of an electronic device according to embodiment 5 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Example 1
The embodiment provides an intelligent beautification method for an image, as shown in fig. 1, the method includes the following steps:
S1, obtaining a sample data set, wherein the sample data set comprises a plurality of sample image pairs, and each sample image pair comprises a training image and a corresponding beautified image. In this embodiment, a large number of images (e.g., 20000) can be randomly extracted from an image library in advance; several visual experts score each image based on factors such as lighting, color tone, and sharpness, and the average of the scores serves as the aesthetic evaluation score of the corresponding image. The 1000 highest-scoring images are selected as one part of the training images, and each such training image also serves as its own beautified image; meanwhile, 1000 lower-scoring images are selected as another part of the training images, and an art designer adjusts their color, tone, brightness contrast and the like using image beautification tools such as Photoshop, yielding 1000 beautified images as the beautified images of the corresponding training images. Accordingly, a sample set comprising 2000 image pairs is constructed. It should be understood that the numbers 1000, 2000, etc. are merely exemplary, and the present invention does not limit the number of training images in any way.
And S2, performing down-sampling processing on the training image to obtain a low-resolution image corresponding to the training image. For example, the training images are uniformly down-sampled to a fixed resolution of 256 × 256. The purpose of the down-sampling in this embodiment is to reduce the computation required by subsequent processing. It should be noted that the present invention does not set any limit on the down-sampling resolution.
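As an illustrative sketch only (the patent fixes the 256 × 256 target size but not the resampling method, so nearest-neighbour index selection and the name `downsample` are assumptions), step S2 can be written in plain NumPy:

```python
import numpy as np

def downsample(image, out_h=256, out_w=256):
    """Down-sample an (H, W, ...) image to a fixed low resolution by
    nearest-neighbour index selection (resampling method is an assumption;
    only the 256 x 256 target comes from the text)."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return image[rows][:, cols]

hi_res = np.random.rand(1024, 768, 3)      # full-resolution training image
print(downsample(hi_res).shape)  # (256, 256, 3)
```

Any standard resizing routine (bilinear, area averaging) would serve the same purpose; the point is only that feature extraction then operates on a fixed, small input.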
And S3, performing feature extraction processing on the down-sampled training image (namely, the low-resolution image obtained in step S2) to obtain a low-resolution feature image corresponding to the training image. Specifically, a VGG16 model pre-trained on ImageNet may first be used to encode the low-resolution image, producing a feature encoding result; a preset feature extraction network comprising three convolutional layers then performs feature extraction on the encoding result to obtain the corresponding low-resolution feature image.
And S4, training the preset beautification coefficient estimation model by using the training image (namely the low-resolution characteristic image) after the characteristic extraction processing and the corresponding beautification image to obtain the target beautification coefficient estimation model.
For example, given an input image I, a beautified image Î is generated by a series of transformations F:

Î = F(I)    (1)

The transformations can equivalently be expressed through a beautification estimation coefficient S applied element-wise to I:

Î = S ⊙ I, i.e., S = F(I) ⊘ I    (2)

wherein S represents the beautification estimation coefficient, and ⊙ and ⊘ denote element-wise multiplication and division.
In this embodiment, the beautification coefficient estimation model adopts a bilateral grid network model as shown in the following formula (3):

S = (1/W_p) · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

wherein S represents the beautification estimation coefficient, W_p represents the network weight at point p of the input image, F represents the preset neighborhood range of point p, G_{σs} represents a Gaussian kernel function over the spatial domain, and G_{σr} represents a Gaussian kernel function over the pixel-value domain; for an n-dimensional argument x, G_σ(x) = exp(−‖x‖² / (2σ²)), where n represents the dimension. ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
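The bilateral weighting in formula (3) can be illustrated with a minimal grey-scale NumPy sketch, under the assumption that the stripped equation has the standard bilateral-filter form normalized by the total weight W_p; `sigma_s`, `sigma_r`, and the neighborhood radius are illustrative choices, not the patent's values:

```python
import numpy as np

def gaussian(x, sigma):
    """Gaussian kernel G_sigma(x) = exp(-x^2 / (2 sigma^2))."""
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

def bilateral_coefficient(I, p, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral-filter-style weighted average over the neighborhood F
    of a single grey-scale pixel p = (row, col), per formula (3)."""
    y0, x0 = p
    h, w = I.shape
    num, W_p = 0.0, 0.0
    for y in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
        for x in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
            spatial = gaussian(np.hypot(y - y0, x - x0), sigma_s)  # G over ||p - q||
            range_w = gaussian(abs(I[y, x] - I[y0, x0]), sigma_r)  # G over |I_p - I_q|
            num += spatial * range_w * I[y, x]                     # weight * I_q
            W_p += spatial * range_w                               # accumulate W_p
    return num / W_p

# On a constant image every range weight is G(0) = 1, so the weighted
# average reduces to the pixel value itself.
I = np.full((5, 5), 0.5)
print(bilateral_coefficient(I, (2, 2)))  # 0.5
```

In the actual model these weights are learned network parameters over a bilateral grid rather than fixed Gaussian kernels; the sketch only shows the spatial-domain/value-domain double weighting that the formula describes.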
Specifically, the embodiment trains the beautification coefficient estimation model by the following steps:
and S41, processing a certain training image after the characteristic extraction processing by using the beautification coefficient estimation model to obtain the beautification estimation coefficient S corresponding to the training image.
S42, according to the beautification estimation coefficient S, performing beautification processing on the training image I according to the following formula (4) to obtain the target beautified image Î corresponding to the training image:

Î = S ⊙ I    (4)
It should be noted that the training image I in this step refers to a full-resolution training image before down-sampling.
And S43, constructing a loss function based on the target beautified image and the beautified image corresponding to the training image. In this embodiment, the process of constructing the loss function is as follows:
S431, based on the target beautified image and the beautified image corresponding to the training image, calculating the reconstruction loss L_r according to formula (5):

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

wherein I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient.
S432, based on the target beautified image and the beautified image corresponding to the training image, calculating the color loss L_c according to formula (6):

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

wherein F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the corresponding position in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle between the two vectors.
S433, constructing the loss function from the above reconstruction loss L_r and color loss L_c. In the present embodiment, the loss function can be obtained by adding the two, i.e., L = L_r + L_c.
By setting the reconstruction loss, this embodiment constrains the gray levels of the target beautified image to be close to those of the beautified image; by setting the color loss, it constrains the colors of the target beautified image to be close to those of the beautified image, thereby achieving a better beautification effect.
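A minimal NumPy sketch of the two losses and their sum, assuming mean-squared error for the reconstruction loss and a mean per-pixel RGB angle for the color loss (both aggregation choices are assumptions about the stripped formulas):

```python
import numpy as np

def reconstruction_loss(target_beautified, reference_beautified):
    """L_r, formula (5): squared error between the model's target beautified
    image (S * I) and the reference beautified image I' (mean aggregation
    is an assumption)."""
    return float(np.mean((target_beautified - reference_beautified) ** 2))

def color_loss(target_beautified, reference_beautified, eps=1e-8):
    """L_c, formula (6): angle between the RGB 3-vectors of corresponding
    pixels, averaged over all pixels (averaging is an assumption)."""
    a = target_beautified.reshape(-1, 3)
    b = reference_beautified.reshape(-1, 3)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def total_loss(target_beautified, reference_beautified):
    # Step S433: the loss function is the sum L = L_r + L_c
    return (reconstruction_loss(target_beautified, reference_beautified)
            + color_loss(target_beautified, reference_beautified))

a = np.full((2, 2, 3), 0.5)
b = np.full((2, 2, 3), 0.6)
print(reconstruction_loss(a, b) > 0, color_loss(a, a) < 1e-3)  # True True
```

The angle term is scale-invariant, which is why a separate reconstruction term is still needed to pin down brightness.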
And S44, adjusting the parameters of the beautification coefficient estimation model based on the loss function, and returning to step S41 to iteratively train the beautification coefficient estimation model until the loss function satisfies a preset condition (e.g., converges to a stable value below a preset threshold); the beautification coefficient estimation model obtained at that point is the target beautification coefficient estimation model. In this embodiment, the parameters of the beautification coefficient estimation model are preferably updated by back-propagating the loss.
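The loop S41–S44 can be illustrated with a deliberately tiny stand-in model: a single global coefficient s fitted by gradient descent on the reconstruction loss. The real model is the bilateral grid network; everything here (the scalar model, learning rate, tolerance) is illustrative:

```python
import numpy as np

def train_coefficient(I, I_prime, lr=0.1, max_iters=1000, tol=1e-8):
    """Toy version of steps S41-S44: predict a (here, scalar) beautification
    coefficient s (S41), beautify as I_hat = s * I (S42), compute the loss
    against the reference I' (S43), and update s until the loss meets the
    preset condition (S44)."""
    s = 1.0
    for _ in range(max_iters):
        I_hat = s * I                                 # S42: beautify
        loss = np.mean((I_hat - I_prime) ** 2)        # S43: reconstruction loss
        if loss < tol:                                # S44: preset condition met
            break
        grad = np.mean(2.0 * (I_hat - I_prime) * I)   # dL/ds
        s -= lr * grad                                # S44: parameter update
    return s

I = np.full((4, 4), 0.5)
print(round(train_coefficient(I, 0.8 * I), 3))  # 0.8
```

The structure — forward pass, loss, backward pass, convergence test — is the same whether the parameter is one scalar or a full network's weights.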
And S5, when the target image needs to be beautified, acquiring the target image, and performing down-sampling and feature extraction processing on the target image. The downsampling and feature extraction processing procedure of this step may refer to the aforementioned steps S2 and S3.
And S6, processing the target image after the characteristic extraction processing by using the target beautification coefficient estimation model to obtain the beautification estimation coefficient of the target image.
And S7, processing the target image according to the beautification estimation coefficient of the target image to obtain the target beautified image corresponding to the target image. Specifically, the target beautified image Î₀ corresponding to the target image can be obtained by the following formula (7):

Î₀ = S₀ ⊙ I₀    (7)

wherein S₀ represents the beautification estimation coefficient of the target image, and I₀ represents the full-resolution target image.
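Since the coefficients are estimated from the down-sampled image but applied at full resolution, formula (7) implies expanding the coefficient map to the size of I₀. A hedged NumPy sketch (nearest-neighbour expansion is an assumption; a bilateral-grid model would instead slice the grid with a learned guidance map):

```python
import numpy as np

def apply_coefficients(S0, I0):
    """Formula (7): element-wise beautification of the full-resolution
    target image I0 by its beautification estimation coefficients S0,
    after expanding S0 to the size of I0."""
    h, w = I0.shape[:2]
    rows = np.arange(h) * S0.shape[0] // h   # map each output row to S0
    cols = np.arange(w) * S0.shape[1] // w   # map each output column to S0
    return S0[rows][:, cols] * I0

S0 = np.full((256, 256), 1.2)     # low-resolution coefficient map
I0 = np.random.rand(1024, 1024)   # full-resolution target image
print(apply_coefficients(S0, I0).shape)  # (1024, 1024)
```

This is what lets the expensive estimation run at 256 × 256 while the cheap element-wise multiplication runs at full resolution.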
In summary, the beautified images corresponding to the training images remedy defects in dimensions such as illumination, sharpness and color tone and have high aesthetic quality. By learning from the training images and their corresponding beautified images with a deep learning method, this embodiment can quickly and effectively perform intelligent beautification on images, efficiently improving image quality without changing image content. It can greatly reduce operation and maintenance costs, ensures attractive and accurate image display, and effectively improves the user experience in OTA and similar scenarios.
Example 2
As shown in fig. 2, after step S7 of embodiment 1, the intelligent beautification method for images of this embodiment adds the following steps:
S8, processing the target image and the target beautified image corresponding to the target image by using a preset beautification quality evaluation model to obtain the probability that the target image is effectively beautified, and judging whether the target image is effectively beautified according to the probability. For example, a probability threshold τ is set: when the obtained probability is greater than or equal to τ, the target image is judged to be effectively beautified; otherwise, the target image is judged not to be effectively beautified.
Preferably, the beautification quality evaluation model adopts a network model consisting of a plurality of (for example, 16) convolutional layers. In this embodiment, a plurality of images, each annotated with a label indicating whether its beautification is effective, are used in advance to iteratively train the beautification quality evaluation model. During training, the cross-entropy loss function L shown in the following formula (8) is used to learn the weight coefficients of the beautification quality evaluation model until L satisfies a predetermined condition (e.g., converges to a stable value below a predetermined threshold):

L = −[y·log(ŷ) + (1 − y)·log(1 − ŷ)]    (8)

wherein ŷ represents the probability, predicted by the model, that the corresponding image is effectively beautified, and y represents the manually annotated label of whether the corresponding image is effectively beautified.
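A small NumPy sketch of binary cross-entropy as in formula (8), together with the threshold decision of step S8 (the threshold value τ = 0.5 and the function names are illustrative):

```python
import numpy as np

def cross_entropy(y_hat, y, eps=1e-12):
    """Formula (8): binary cross-entropy between the predicted probability
    y_hat that beautification is effective and the manual label y."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # avoid log(0)
    return float(-(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat)))

def is_effectively_beautified(y_hat, tau=0.5):
    # Step S8 decision: compare the predicted probability with threshold tau
    # (tau = 0.5 is an illustrative choice).
    return y_hat >= tau

print(round(cross_entropy(0.9, 1), 4))  # 0.1054
```

The loss is near zero when a confident prediction matches the label and grows without bound as a confident prediction contradicts it, which is what drives the weight coefficients toward reliable effectiveness estimates.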
And S9, when the target image is predicted to be effectively beautified, displaying the target beautified image corresponding to the target image.
S10, when the target image is predicted not to be beautified effectively, displaying the target image.
Through the steps, the embodiment can ensure the output quality of the beautified image.
It should be noted that, for the sake of simplicity, embodiments 1 and 2 are described as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Example 3
The present embodiment provides an intelligent beautification system for images, as shown in fig. 3, the system 10 includes:
the sample acquisition module 11 is configured to acquire a sample data set, where the sample data set includes a plurality of sample image pairs, and the sample image pairs include training images and corresponding beautified images. In this embodiment, a large-scale (e.g. 20000) images can be randomly extracted from the image library in advance; scoring the image by a plurality of visual experts based on factors such as light, tone, definition and the like of the image, wherein the average value of the scoring is used as an aesthetic evaluation score of the corresponding image; selecting 1000 images with the highest scores as a part of training images, and taking the training images as corresponding beautified images; meanwhile, 1000 images with lower scores are selected as another part of training images, and an art designer uses image beautification tools such as Photoshop and the like to adjust the images in the aspects of color, tone, brightness contrast and the like, so that 1000 beautified images are obtained as beautified images of corresponding training images. Accordingly, an image pair comprising 2000 pairs of samples is constructed. It should be understood that the foregoing numbers 1000, 2000, etc. are merely exemplary, and the present invention is not limited in any way to the number of training images.
And the first down-sampling module 12 is configured to perform down-sampling processing on the training image to obtain a low-resolution image corresponding to the training image. For example, the training images are uniformly down-sampled to a fixed resolution of 256 × 256. The purpose of the down-sampling in this embodiment is to reduce the computational cost of subsequent processing. It should be noted that the present invention places no limit on the down-sampling resolution.
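A minimal down-sampling routine consistent with the above might look like the following block-averaging sketch (a real system would typically use an image library's resize; the function name and the divisibility assumption are illustrative only):

```python
import numpy as np

def downsample(image, size=256):
    """Block-average an H×W×C image down to size×size.

    Simple stand-in for the uniform down-sampling step; assumes
    H, W >= size. Excess rows/columns are cropped before pooling.
    """
    h, w, c = image.shape
    fh, fw = h // size, w // size              # integer block sizes
    cropped = image[:size * fh, :size * fw]    # crop to an exact multiple
    return cropped.reshape(size, fh, size, fw, c).mean(axis=(1, 3))
```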
The first feature extraction module 13 is configured to perform feature extraction processing on the down-sampled training image (i.e., the low-resolution image obtained in step S2) to obtain a low-resolution feature image corresponding to the training image. Specifically, a VGG16 model pre-trained on ImageNet may first be used to perform feature encoding on the low-resolution image to obtain a feature encoding result; then, a preset feature extraction network comprising three convolutional layers performs feature extraction processing on the feature encoding result to obtain the corresponding low-resolution feature image.
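The three-convolutional-layer extraction network can be sketched in miniature as below. The single-channel "valid" convolution and the ReLU non-linearity are simplifying assumptions, and the VGG16 encoding step is omitted since it requires a pretrained model:

```python
import numpy as np

def conv2d(x, kernel):
    """Single 'valid' 2-D convolution, standing in for one conv layer."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(x, kernels):
    """Stack three conv layers, mirroring the description in the text.
    The ReLU non-linearity between layers is an assumption."""
    for k in kernels:
        x = np.maximum(conv2d(x, k), 0.0)
    return x
```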
And a model training module 14, configured to train a preset beautification coefficient estimation model by using the training image (i.e., the low-resolution feature image) after the feature extraction processing and the corresponding beautified image, so as to obtain a target beautification coefficient estimation model.
For example, assuming an input image I, a beautified image Î is generated by a series of transformations F:

Î = F(I)    (1)

Equivalently, the transformations can be carried by a coefficient map S applied element-wise to the image:

Î = S ⊙ I    (2)

where S represents the beautification estimation coefficient.
In this embodiment, the beautification coefficient estimation model adopts a bilateral mesh network model as shown in the following formula (3):

S_p = W_p · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

where S represents the beautification estimation coefficient, W_p represents the network weight at point p on the input image, F represents the preset neighborhood of point p, G_{σs} represents the spatial-domain Gaussian kernel function, G_{σr} represents the pixel-value-domain Gaussian kernel function (for an n-dimensional argument, G_σ(x) = exp(−x² / (2σ²)) / ((2π)^{n/2} σⁿ), where n represents the dimension), ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
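For illustration, a bilateral weighting of this form can be evaluated directly, without the learned network. Replacing the learned weight W_p with plain 1/Z normalization is a simplifying assumption made only for this sketch:

```python
import numpy as np

def bilateral_coefficients(I, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Per-pixel coefficient S_p: a normalized sum over the neighborhood F
    of p, weighted by a spatial Gaussian and a pixel-value Gaussian.
    I is a grayscale image in [0, 1]."""
    H, W = I.shape
    S = np.zeros_like(I)
    for py in range(H):
        for px in range(W):
            num, den = 0.0, 0.0
            for qy in range(max(0, py - radius), min(H, py + radius + 1)):
                for qx in range(max(0, px - radius), min(W, px + radius + 1)):
                    # Spatial-domain Gaussian on the pixel distance
                    g_s = np.exp(-((py - qy) ** 2 + (px - qx) ** 2) / (2 * sigma_s ** 2))
                    # Pixel-value-domain Gaussian on the pixel difference
                    g_r = np.exp(-(I[py, px] - I[qy, qx]) ** 2 / (2 * sigma_r ** 2))
                    num += g_s * g_r * I[qy, qx]
                    den += g_s * g_r
            S[py, px] = num / den
    return S
```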
Specifically, as shown in fig. 4, the model training module 14 of the present embodiment includes:
the model processing unit 141 is configured to process a training image after the feature extraction processing by using the beautification coefficient estimation model to obtain the beautification estimation coefficient S corresponding to the training image.
A beautification unit 142, configured to beautify the training image I according to the beautification estimation coefficient S by the following formula (4), so as to obtain a target beautified image Î corresponding to the training image:

Î = S ⊙ I    (4)
It should be noted that the training image I in this step refers to a full-resolution training image before down-sampling.
And a loss function constructing unit 143 configured to construct a loss function based on the target beautified image and the beautified image corresponding to the training image. In the present embodiment, as shown in fig. 5, the loss function constructing unit 143 includes:
A reconstruction loss calculation subunit 1431, configured to calculate a reconstruction loss L_r according to formula (5) based on the target beautified image and the beautified image corresponding to the training image:

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

where I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient.
A color loss calculating subunit 1432, configured to calculate a color loss L_c according to formula (6) based on the target beautified image and the beautified image corresponding to the training image:

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

where F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the position corresponding to point P in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle difference between F(I′)_P and F(Î)_P.
A loss combining subunit 1433, configured to construct the loss function from the foregoing reconstruction loss L_r and color loss L_c. In the present embodiment, the reconstruction loss L_r and the color loss L_c can be added to obtain the loss function.
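A sketch of the two loss terms and their sum is given below. The mean-squared form of the reconstruction loss and the arccos form of the angle difference are reconstructions from the surrounding text, not verbatim from the patent:

```python
import numpy as np

def reconstruction_loss(pred, target):
    """L_r: mean squared error between the target beautified image and
    the ground-truth beautified image (as reconstructed above)."""
    return float(np.mean((pred - target) ** 2))

def color_loss(pred, target, eps=1e-8):
    """L_c: mean angle between corresponding RGB vectors of the two
    images (H×W×3 arrays). eps guards against division by zero."""
    dot = np.sum(pred * target, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1) + eps
    cos = np.clip(dot / norm, -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def total_loss(pred, target):
    # The embodiment sums the two terms
    return reconstruction_loss(pred, target) + color_loss(pred, target)
```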
By setting the reconstruction loss, this embodiment can constrain the gray levels of the target beautified image to be close to those of the beautified image; by setting the color loss, the colors of the target beautified image can be constrained to be close to those of the beautified image, thereby achieving a better beautification effect.
An iterative training unit 144, configured to adjust parameters of the beautification coefficient estimation model based on the loss function, and re-invoke the model processing unit 141 for iterative training until the loss function satisfies a predetermined condition (e.g., converges to be stable and smaller than a predetermined value); the beautification coefficient estimation model obtained at that point is the target beautification coefficient estimation model. In this embodiment, the parameters of the beautification coefficient estimation model are preferably updated by back-propagating the loss.
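The iterate-until-the-loss-is-small procedure can be illustrated with a drastically simplified stand-in model: one global coefficient s instead of the per-pixel map S, and a hand-derived gradient playing the role of back-propagation. Everything here is a sketch of the loop structure, not of the actual network:

```python
import numpy as np

def train_coefficient(I, I_target, lr=0.1, max_iter=500, tol=1e-8):
    """Fit a single scalar s so that s*I approximates the beautified
    target, stopping once the loss is below a predetermined value."""
    s = 1.0
    loss = float("inf")
    for _ in range(max_iter):
        pred = s * I
        loss = np.mean((pred - I_target) ** 2)
        if loss < tol:                                # stopping condition
            break
        grad = np.mean(2 * (pred - I_target) * I)     # dL/ds by hand
        s -= lr * grad                                # gradient step
    return s, loss
```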
And a target image acquiring module 15, configured to acquire a target image.
And a second down-sampling module 16, configured to down-sample the target image.
And the second feature extraction module 17 is configured to perform feature extraction processing on the target image.
And the model processing module 18 is configured to process the target image after the feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image.
And the beautification processing module 19 is configured to process the target image according to the beautification estimation coefficient of the target image to obtain a target beautified image corresponding to the target image. Specifically, the target beautified image Î₀ corresponding to the target image can be obtained by the following formula (7):

Î₀ = S₀ ⊙ I₀    (7)

where S₀ represents the beautification estimation coefficient of the target image, and I₀ represents the full-resolution target image.
In summary, the beautified image corresponding to each training image remedies deficiencies in dimensions such as illumination, image sharpness and color tone, and has high aesthetic quality. By learning from the training images and the corresponding beautified images with a deep learning method, this embodiment can rapidly and effectively perform intelligent beautification on images, efficiently improving image quality without changing image content. It can greatly save operation and maintenance costs, ensure the aesthetic quality and accuracy of image display, and effectively improve the user experience in scenarios such as OTA.
Example 4
As shown in fig. 6, the intelligent image beautification system 10 of this embodiment adds the following modules on the basis of embodiment 3:
The beautification evaluation module 21 is configured to process the target image and the target beautified image corresponding to the target image by using a preset beautification quality evaluation model to obtain the probability that the target image is effectively beautified, and to determine whether the target image is effectively beautified according to the probability. For example, a probability threshold τ is set; when the obtained probability reaches τ, the target image is judged to be effectively beautified, and otherwise the target image is judged not to be effectively beautified.
Preferably, the beautification quality evaluation model adopts a network model consisting of a plurality of (for example, 16) convolutional layers. In this embodiment, the beautification quality evaluation model is iteratively trained in advance using a plurality of images together with a label annotated for each image indicating whether its beautification is effective; during training, the cross-entropy loss function L shown in the following formula (8) is used to learn the weight coefficients of the beautification quality evaluation model until L satisfies a predetermined condition (e.g., converges to be stable and smaller than a predetermined value):

L = −[y · log ŷ + (1 − y) · log(1 − ŷ)]    (8)

where ŷ represents the probability of effective beautification predicted by the model, and y represents the manually annotated label indicating whether the corresponding image is effectively beautified.
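The binary cross-entropy between a predicted probability and a manual label can be computed as follows (the clipping epsilon is an implementation detail added here for numerical safety, not part of the patent):

```python
import numpy as np

def cross_entropy(y_hat, y, eps=1e-12):
    """Cross-entropy between the predicted probability y_hat of
    effective beautification and the manual label y in {0, 1}."""
    y_hat = np.clip(y_hat, eps, 1 - eps)   # avoid log(0)
    return float(-(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
```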
The display module 22 is configured to display the target beautified image corresponding to the target image when it is predicted that the target image is effectively beautified, and to display the target image itself when it is predicted that the target image is not effectively beautified.
Through the module, the embodiment can ensure the output quality of image beautification.
Example 5
The present embodiment provides an electronic device, which may take the form of a computing device (for example, a server device), and includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the intelligent image beautification method provided in embodiment 1 or 2 when executing the computer program.
Fig. 7 shows a schematic diagram of a hardware structure of the present embodiment, and as shown in fig. 7, the electronic device 9 specifically includes:
at least one processor 91, at least one memory 92, and a bus 93 for connecting the various system components (including the processor 91 and the memory 92), wherein:
the bus 93 includes a data bus, an address bus, and a control bus.
The processor 91 executes various functional applications and data processing by running the computer program stored in the memory 92, such as the intelligent image beautification method provided in embodiment 1 or 2 of the present invention.
The electronic device 9 may further communicate with one or more external devices 94 (e.g., a keyboard, a pointing device, etc.). Such communication may be through an input/output (I/O) interface 95. Also, the electronic device 9 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 96. The network adapter 96 communicates with the other modules of the electronic device 9 via the bus 93. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 9, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems, etc.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the electronic device are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module, according to embodiments of the application. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Example 6
The present embodiment provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the steps of the intelligent beautification method for images provided in embodiment 1 or 2.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present invention can also be implemented in the form of a program product comprising program code; when the program product is run on a terminal device, the program code causes the terminal device to carry out the steps of the intelligent image beautification method described in embodiment 1 or 2.
Where program code for carrying out the invention is written in any combination of one or more programming languages, the program code may be executed entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.
Claims (16)
1. An intelligent beautification method for images is characterized by comprising the following steps:
obtaining a sample data set, wherein the sample data set comprises a plurality of sample image pairs, and the sample image pairs comprise training images and corresponding beautified images;
carrying out feature extraction processing on the training image;
training a preset beautification coefficient estimation model by using the training image after the feature extraction processing and the corresponding beautified image to obtain a target beautification coefficient estimation model;
acquiring a target image, and performing feature extraction processing on the target image;
processing the target image after the feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image;
and beautifying the target image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the target image.
2. The method for intelligently beautifying images according to claim 1, wherein after obtaining the target beautifying image corresponding to the target image, the method further comprises:
and processing the target image and the target beautifying image corresponding to the target image by using a preset beautifying quality evaluation model to obtain the probability of effective beautifying of the target image, and judging whether the target image is effectively beautified according to the probability.
3. The method for intelligent beautification of images as recited in claim 2, further comprising:
when the target image is judged to be effectively beautified, displaying a target beautification image corresponding to the target image;
and when the target image is judged not to be effectively beautified, displaying the target image.
4. The method for intelligent beautification of images as claimed in claim 1, wherein prior to performing feature extraction processing on the training images, the method further comprises: performing down-sampling processing on the training image;
before performing the feature extraction processing on the target image, the method further includes: and performing down-sampling processing on the target image.
5. The method for intelligent beautification of images according to claim 1, wherein the beautification coefficient estimation model is a bilateral mesh network model represented by the following formula (3):
S_p = W_p · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

where S represents the beautification estimation coefficient, W_p represents the network weight at point p on the input image, F represents the preset neighborhood of point p, G_{σs} represents the spatial-domain Gaussian kernel function, G_{σr} represents the pixel-value-domain Gaussian kernel function, ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
6. The method for intelligently beautifying images according to claim 1, wherein the step of training a preset beautification coefficient estimation model by using the training images after feature extraction processing and the corresponding beautification images to obtain a target beautification coefficient estimation model comprises:
processing the training image after the feature extraction processing by using the beautification coefficient estimation model to obtain a beautification estimation coefficient corresponding to the training image;
beautifying the training image according to the beautification estimation coefficient to obtain a target beautified image corresponding to the training image;
constructing a loss function based on the target beautified image and the beautified image corresponding to the training image;
and adjusting parameters of the beautification coefficient estimation model based on the loss function, and returning to the step of processing the training image after feature extraction processing by using the beautification coefficient estimation model until the loss function meets a preset condition to obtain a target beautification coefficient estimation model.
7. The method for intelligent beautification of images as claimed in claim 6, wherein the step of constructing a loss function based on the target beautified image and the beautified image corresponding to the training image comprises:
calculating a reconstruction loss L_r according to formula (5) based on the target beautified image and the beautified image corresponding to the training image:

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

where I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient;
calculating a color loss L_c according to formula (6) based on the target beautified image and the beautified image corresponding to the training image:

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

where F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the position corresponding to point P in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle difference between F(I′)_P and F(Î)_P;
constructing the loss function according to the reconstruction loss L_r and the color loss L_c.
8. An intelligent beautification system for images, comprising:
the system comprises a sample acquisition module, a processing module and a processing module, wherein the sample acquisition module is used for acquiring a sample data set, the sample data set comprises a plurality of sample image pairs, and the sample image pairs comprise training images and corresponding beautified images;
the first feature extraction module is used for carrying out feature extraction processing on the training image;
the model training module is used for training a preset beautification coefficient estimation model by utilizing the training images and the corresponding beautified images after the feature extraction processing to obtain a target beautification coefficient estimation model;
the target image acquisition module is used for acquiring a target image;
the second feature extraction module is used for carrying out feature extraction processing on the target image;
the model processing module is used for processing the target image after the feature extraction processing by using the target beautification coefficient estimation model to obtain a beautification estimation coefficient of the target image;
and the beautification processing module is used for beautifying the target image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the target image.
9. The intelligent image beautification system of claim 8, further comprising:
and the beautification evaluation module is used for processing the target image and the target beautification image corresponding to the target image by using a preset beautification quality evaluation model to obtain the probability of effective beautification of the target image, and judging whether the target image is effectively beautified according to the probability.
10. The intelligent image beautification system of claim 9, further comprising:
and the display module is used for displaying the target beautification image corresponding to the target image when the target image is effectively beautified, and displaying the target image when the target image is not effectively beautified.
11. The intelligent image beautification system of claim 8, further comprising:
the first down-sampling module is used for performing down-sampling processing on the training image before the first feature extraction module performs feature extraction processing on the training image;
and the second down-sampling module is used for performing down-sampling processing on the target image before the second feature extraction module performs feature extraction processing on the target image.
12. The intelligent beautification of images system of claim 8, wherein the beautification coefficient estimation model is a bilateral mesh network model as shown in equation (3) below:
S_p = W_p · Σ_{q∈F} G_{σs}(‖p − q‖) · G_{σr}(|I_p − I_q|) · I_q    (3)

where S represents the beautification estimation coefficient, W_p represents the network weight at point p on the input image, F represents the preset neighborhood of point p, G_{σs} represents the spatial-domain Gaussian kernel function, G_{σr} represents the pixel-value-domain Gaussian kernel function, ‖p − q‖ represents the pixel distance between point p and point q, |I_p − I_q| represents the pixel difference between point p and point q, and I_q represents the pixel at point q.
13. The intelligent image beautification system of claim 8, wherein the model training module comprises:
the model processing unit is used for processing the training image after the feature extraction processing by using the beautification coefficient estimation model to obtain the beautification estimation coefficient corresponding to the training image;
the beautification unit is used for beautifying the training image according to the beautification estimation coefficient to obtain a target beautification image corresponding to the training image;
a loss function construction unit, configured to construct a loss function based on the target beautified image and the beautified image corresponding to the training image;
and the iterative training unit is used for adjusting the parameters of the beautification coefficient estimation model based on the loss function and re-calling the model processing unit until the loss function meets the preset condition to obtain the target beautification coefficient estimation model.
14. The intelligent image beautification system of claim 13, wherein the loss function construction unit comprises:
a reconstruction loss calculating subunit, configured to calculate a reconstruction loss L_r according to formula (5) based on the target beautified image and the beautified image corresponding to the training image:

L_r = ‖Î − I′‖² = ‖S ⊙ I − I′‖²    (5)

where I′ represents the beautified image corresponding to the training image, Î represents the target beautified image corresponding to the training image, and S represents the corresponding beautification estimation coefficient;
a color loss calculating subunit, configured to calculate a color loss L_c according to formula (6) based on the target beautified image and the beautified image corresponding to the training image:

L_c = Σ_P ∠(F(I′)_P, F(Î)_P)    (6)

where F(I′)_P represents the three-dimensional vector formed by the RGB pixel values at point P in the beautified image corresponding to the training image, F(Î)_P represents the three-dimensional vector formed by the RGB pixel values at the position corresponding to point P in the target beautified image corresponding to the training image, and ∠(F(I′)_P, F(Î)_P) represents the angle difference between F(I′)_P and F(Î)_P;
a loss combining subunit, configured to construct the loss function according to the reconstruction loss L_r and the color loss L_c.
15. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of intelligent beautification of images as claimed in any one of claims 1 to 7 when executing the computer program.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for intelligent beautification of images as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010595315.0A CN111738957A (en) | 2020-06-28 | 2020-06-28 | Intelligent beautifying method and system for image, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111738957A true CN111738957A (en) | 2020-10-02 |
Family
ID=72651281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010595315.0A Pending CN111738957A (en) | 2020-06-28 | 2020-06-28 | Intelligent beautifying method and system for image, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111738957A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487223A (en) * | 2020-12-08 | 2021-03-12 | Oppo广东移动通信有限公司 | Image processing method and device and electronic equipment |
WO2022121701A1 (en) * | 2020-12-08 | 2022-06-16 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113378885A (en) * | 2021-05-13 | 2021-09-10 | 武汉科技大学 | New user aesthetic preference calibration and classification method based on preferred image pair |
CN113378885B (en) * | 2021-05-13 | 2022-10-14 | 武汉科技大学 | New user aesthetic preference calibration and classification method based on preferred image pair |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||