CN109685785A - A kind of image quality measure method, apparatus and electronic equipment - Google Patents
A kind of image quality measure method, apparatus and electronic equipment
- Publication number: CN109685785A (application number CN201811563932.1A)
- Authority: CN (China)
- Prior art keywords: image, evaluation, value, training, sample
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/0002—Inspection of images, e.g. flaw detection
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/20—Special algorithmic details
          - G06T2207/20081—Training; Learning
          - G06T2207/20084—Artificial neural networks [ANN]
        - G06T2207/30—Subject of image; Context of image processing
          - G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides an image quality evaluation method, an image quality evaluation apparatus and an electronic device. An image to be evaluated is determined and input into an evaluation network model, which outputs an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters. The evaluation network model is a model obtained by training according to a training set, and the training set comprises a plurality of sample images and, for each sample image, a sample value for each image parameter in the preset number of image parameters. Because a plurality of image parameters are considered when the model is trained, and joint training is performed for the plurality of image parameters, the accuracy of image quality evaluation is improved.
Description
Technical Field
The invention relates to the technical field of computer applications, and in particular to an image quality evaluation method and apparatus, and an electronic device.
Background
Most current video applications and video websites allow users to upload videos and determine a video cover picture from the uploaded video. However, the quality of the images in videos uploaded by users is generally poor, and in order to obtain a better-quality image as the video cover picture, quality evaluation needs to be performed on the plurality of images included in the video.
At present, the quality evaluation of images mainly aims at two image parameters of sharpness and brightness. Specifically, when the quality of an image is evaluated, the sharpness of the image is extracted according to a set sharpness algorithm, the brightness of the image is extracted according to a set brightness algorithm, and an image with high sharpness and high brightness is selected from a plurality of images as a video cover picture. Here, the sharpness and brightness of the image are extracted by different algorithms, and the relationship between the sharpness and brightness is not considered, so that the image quality cannot be effectively evaluated, and an image with good quality cannot be accurately determined.
Disclosure of Invention
The embodiment of the invention aims to provide an image quality evaluation method, an image quality evaluation device and electronic equipment so as to improve the accuracy of image quality evaluation. The specific technical scheme is as follows:
in order to achieve the above object, an embodiment of the present invention provides an image quality evaluation method, where the method includes:
determining an image to be evaluated;
inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters;
wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters.
Optionally, the image parameters include: one or more of image definition, color saturation and preset evaluation; the preset evaluation includes an image attractiveness evaluation and an image scene evaluation.
Optionally, the evaluation network model includes evaluation network submodels corresponding to the preset number of image parameters one to one;
the step of inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters includes:
and respectively inputting the image to be evaluated into each evaluation network submodel in the evaluation network submodels to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters.
Optionally, the evaluation network model is obtained by training through the following steps:
acquiring a preset neural network model and the training set;
inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
calculating a training loss value according to the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter;
determining whether the neural network model converges according to the training loss value;
if not, adjusting parameter values in the neural network model, and returning to the step of inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
and if so, determining the current neural network model as the evaluation network model.
Optionally, the step of calculating a training loss value according to the obtained evaluation value and a sample value of each image parameter of each sample image included in the training set includes:
and inputting the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter into a preset loss function to obtain a training loss value.
Optionally, the step of calculating a training loss value according to the obtained evaluation value and a sample value of each image parameter of each sample image included in the training set includes:
for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter;
and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
Optionally, the step of determining a training loss value according to the loss value for each image parameter and a preset training weight for each image parameter includes:
determining a training loss value s according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
In order to achieve the above object, an embodiment of the present invention further provides an image quality evaluation apparatus, including:
the determining module is used for determining an image to be evaluated;
the evaluation module is used for inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters;
wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters.
Optionally, the image parameters include: one or more of image definition, color saturation and preset evaluation; the preset evaluation includes an image attractiveness evaluation and an image scene evaluation.
Optionally, the evaluation network model includes evaluation network submodels corresponding to the preset number of image parameters one to one;
the evaluation module is specifically configured to:
and respectively inputting the image to be evaluated into each evaluation network submodel in the evaluation network submodels to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters.
Optionally, the apparatus further comprises:
a training module for training the evaluation network model;
the training module is specifically configured to:
acquiring a preset neural network model and the training set;
inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
calculating a training loss value according to the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter;
determining whether the neural network model converges according to the training loss value;
if not, adjusting parameter values in the neural network model, and returning to the step of inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
and if so, determining the current neural network model as the evaluation network model.
Optionally, the apparatus further comprises a loss value determining module,
the loss value determination module is specifically configured to:
and inputting the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter into a preset loss function to obtain a training loss value.
Optionally, the loss value determining module is specifically configured to:
for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter;
and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
Optionally, the loss value determining module is specifically configured to:
determining a training loss value s according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement any of the above method steps when executing the program stored in the memory.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program implements any of the above method steps when executed by a processor.
Therefore, the embodiment of the invention provides an image quality evaluation method, an image quality evaluation device and electronic equipment, which can determine an image to be evaluated; inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters; wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of a preset number of image parameters. Therefore, in the embodiment of the invention, a plurality of image parameters are considered when the model is trained, and the combined training is carried out aiming at the plurality of image parameters, so that the accuracy of image quality evaluation is improved.
Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a schematic flow chart of an image quality evaluation method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an image quality evaluation method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an image quality assessment according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image quality evaluation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
In order to solve the problem that the image quality cannot be effectively evaluated only by considering the sharpness and brightness of an image in the conventional evaluation of the quality of a video cover picture, the embodiment of the invention provides an image quality evaluation method which can be applied to electronic equipment or a server and can improve the accuracy of image quality evaluation.
The following describes the above image quality evaluation method with reference to specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart of an image quality evaluation method according to an embodiment of the present invention, where the method includes the following steps:
step S101: and determining an image to be evaluated.
In the embodiment of the invention, the image to be evaluated can be an image uploaded to video software or a video website by a user or an image acquired from a network.
Step S102: inputting the image to be evaluated into the evaluation network model to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters. The evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters.
In the embodiment of the present invention, the image parameters may include characteristic parameters inherent in the image itself, such as image sharpness and color saturation, and may also include a preset evaluation of the image, where the preset evaluation is an evaluation given in advance by a person such as a user, for example an evaluation of the attractiveness of the image or an evaluation of the image scene.
The predetermined number of image parameters can be used to evaluate the image quality. For example, an image in which the prediction evaluation values of a plurality of image parameters are all high is determined as an image of high quality. In a specific implementation, one or more of the above image parameters may be selected to perform quality evaluation on the image.
In the embodiment of the present invention, each image parameter of the sample images included in the training set is known, and a sample value of the image parameter of each sample image may be determined by analyzing the sample image, or may be determined manually, and specifically, which manner is used for determining may be selected according to an actual situation. For example, with respect to image sharpness or color saturation, the sample image may be analyzed to obtain the true image sharpness or color saturation of the sample image. The evaluation of the attraction degree of the sample image may be manually determined, and if there is an attractive scene or person in the sample image, the evaluation of the attraction degree of the sample image is high. The scene evaluation of the sample image can also be artificially determined, and if the scenes contained in the sample image are rich, the image scene evaluation is high.
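To make the labeling step above concrete, the sketch below (not part of the patent; OpenCV is assumed, and the Laplacian-variance heuristic for sharpness, the HSV-saturation average, the function names and the sample path are all illustrative assumptions) shows how objective sample values for image sharpness and color saturation might be obtained by analyzing a sample image, while attractiveness and scene evaluations would still be assigned manually:

```python
# A minimal sketch (assumed implementation, not from the patent): deriving
# objective sample values for sharpness and color saturation from a sample image.
import cv2
import numpy as np

def sharpness_score(image_bgr: np.ndarray) -> float:
    """Variance of the Laplacian; a larger value indicates a sharper image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def saturation_score(image_bgr: np.ndarray) -> float:
    """Mean of the HSV saturation channel, normalized to [0, 1]."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 1].mean() / 255.0)

if __name__ == "__main__":
    img = cv2.imread("sample.jpg")  # hypothetical path to a sample image
    print("sharpness:", sharpness_score(img))
    print("saturation:", saturation_score(img))
```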
In the embodiment of the invention, in order to intuitively reflect the quality of each image parameter, a plurality of grades can be set for each image parameter. Taking "attractiveness" as an example, three levels may be set and labeled level 1, level 2 and level 3, with level 1 indicating "unattractive", level 2 indicating "fairly attractive", and level 3 indicating "very attractive".
In the embodiment of the present invention, the evaluation network model may include a preset number of evaluation network submodels, where the number of evaluation network submodels is the same as the number of image parameters to be evaluated. One evaluation network sub-model corresponds to one image parameter. In order to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters, the image to be evaluated may be respectively input into each evaluation network sub-model in the preset number of evaluation network sub-models, so as to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters. The evaluation network submodel is pre-trained according to the training set.
As an example, referring to fig. 3, the image parameters may include 4 of image sharpness, color saturation, image attraction evaluation and image scene evaluation, and the evaluation network model includes 4 evaluation network submodels, a first evaluation network submodel corresponding to the image sharpness, a second evaluation network submodel corresponding to the color saturation, a third evaluation network submodel corresponding to the image attraction evaluation and a fourth evaluation network submodel corresponding to the image scene evaluation.
After the images to be evaluated are respectively input into the 4 trained evaluation network submodels, the first evaluation network submodel can output evaluation values aiming at the image definition; the second evaluation network submodel may output an evaluation value for color saturation; the third evaluation network sub-model may output an evaluation value for the image attractiveness evaluation; the fourth evaluation network sub-model may output an evaluation value for the image scene evaluation.
In the embodiment of the present invention, each neural network sub-model may be a recurrent neural network model, a convolutional neural network model, a recurrent convolutional neural network model, a deep neural network model, or the like. The embodiment of the present invention is not limited thereto.
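As an illustration of the sub-model structure described above, the following sketch (assumptions: PyTorch, a toy convolutional backbone, and the four image parameters from the example — none of which are prescribed by the patent) builds an evaluation network model from one evaluation sub-model per image parameter and runs an image to be evaluated through all of them:

```python
# A minimal sketch of per-parameter evaluation sub-models (assumed architecture).
import torch
import torch.nn as nn

class EvaluationSubModel(nn.Module):
    """One sub-model producing a single evaluation value for one image parameter."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

class EvaluationNetworkModel(nn.Module):
    """Evaluation network model: one sub-model per image parameter."""
    def __init__(self, parameter_names):
        super().__init__()
        self.parameter_names = parameter_names
        self.submodels = nn.ModuleList(EvaluationSubModel() for _ in parameter_names)

    def forward(self, x):
        # Returns one evaluation value per image parameter, e.g. {"sharpness": ...}.
        return {name: sub(x) for name, sub in zip(self.parameter_names, self.submodels)}

model = EvaluationNetworkModel(["sharpness", "color_saturation", "attractiveness", "scene"])
scores = model(torch.randn(1, 3, 224, 224))  # dummy tensor standing in for the image to be evaluated
```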
Therefore, the image quality evaluation method provided by the embodiment of the invention can determine the image to be evaluated, input the image to be evaluated into the evaluation network model, and obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters. The evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters. Because a plurality of image parameters, including both characteristic parameters inherent in the image and preset evaluation parameters, are considered when the model is trained, and joint training is performed for the plurality of image parameters, the accuracy of image quality evaluation is improved.
In the embodiment of the present invention, referring to fig. 2, the training process of evaluating the network model may refer to the following steps:
step S201: and acquiring a preset neural network model and a training set.
In the embodiment of the present invention, the preset neural network model may include a preset number of neural network submodels, the neural network submodels correspond to the image parameters one to one, and the process of model training is a process of updating the parameters in the neural network submodels.
The training set includes a plurality of sample images, and each sample image has a sample value for each of a predetermined number of image parameters.
Step S202: and inputting a plurality of sample images into the neural network model to obtain the evaluation value of each sample image for each image parameter.
In this step, if the preset neural network model includes a preset number of neural network submodels, the sample image is respectively input into the plurality of neural network submodels included in the preset neural network model, and the evaluation value for the plurality of image parameters is obtained. The specific process is substantially the same as step S102 in the embodiment shown in fig. 1, and reference may be made to step S102, which is not described herein again.
Step S203: a training loss value is calculated based on the obtained evaluation value and the sample value of each image parameter for each sample image included in the training set.
In an embodiment of the invention, the loss value of the entire neural network model may be determined by the loss values of a plurality of neural network submodels.
The step S203 may specifically include the following steps:
step S203 a: for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter.
In this step, a loss value may be calculated for each image parameter. Specifically, in step S202, an evaluation value for the image parameter may be obtained. And then, the loss value aiming at the image parameter can be calculated by combining the sample value aiming at the image parameter contained in the training set.
In the embodiment of the present invention, the evaluation value for each image parameter and the sample value for that image parameter included in the training set may be substituted into a preset loss function to obtain the loss value for the image parameter. The loss value may be obtained by using, but is not limited to, the Mean Squared Error (MSE) formula as the loss function.
Step S203 b: and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
In the embodiment of the invention, the loss values of the image parameters can be weighted and calculated to obtain the final training loss value. The weight values of the respective image parameters may be preset by the user.
In the embodiment of the present invention, the training loss value s may be determined according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
As an example, suppose the training weights of image sharpness, color saturation, image attractiveness evaluation and image scene evaluation preset by the user are 0.2, 0.1, 0.5 and 0.2, respectively, and the loss value for image sharpness is a, the loss value for color saturation is b, the loss value for the image attractiveness evaluation is c, and the loss value for the image scene evaluation is d; the final training loss value of the quality evaluation is then 0.2a + 0.1b + 0.5c + 0.2d.
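The weighted combination can be written down directly; the sketch below (an assumed PyTorch implementation, not the patent's code) computes an MSE loss per image parameter and the weighted training loss s = ∑ w_i·s_i using the example weights above:

```python
# Sketch of the weighted training loss s = sum_i w_i * s_i, with MSE as the
# per-parameter loss and the example weights from the text (PyTorch assumed).
import torch
import torch.nn.functional as F

weights = {"sharpness": 0.2, "color_saturation": 0.1,
           "attractiveness": 0.5, "scene": 0.2}

def training_loss(evaluations: dict, sample_values: dict) -> torch.Tensor:
    """Both dicts map an image parameter name to a (batch,) tensor of values."""
    total = torch.zeros(())
    for name, w in weights.items():
        s_i = F.mse_loss(evaluations[name], sample_values[name])  # loss for the i-th parameter
        total = total + w * s_i                                   # w_i * s_i
    return total

# Example usage with random stand-in values.
preds = {k: torch.rand(8) for k in weights}
labels = {k: torch.rand(8) for k in weights}
print(training_loss(preds, labels))
```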
Step S204: determining whether the neural network model converges according to the training loss value, if yes, executing step S205; otherwise, the step S202 is executed.
Step S205: and determining the current neural network model as the evaluation network model.
In the embodiment of the invention, the final loss value can be compared with the preset loss threshold value to judge whether the neural network model converges. If not, adjusting the parameter values in the neural network model, and returning to execute step S202. If the neural network model is converged, the training of the neural network model is completed, and the current neural network model can be determined as the evaluation network model.
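Putting steps S201 to S205 together, a joint training loop might look like the following sketch (all concrete choices here — the toy linear sub-models, the Adam optimizer, the batch of random sample images, and the convergence threshold of 0.05 — are assumptions for illustration, not part of the patent):

```python
# Sketch of the joint training loop: forward all sample images through every
# sub-model, compute the weighted loss, check convergence against a preset
# threshold, and otherwise adjust the parameter values.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
params = ["sharpness", "color_saturation", "attractiveness", "scene"]
weights = {"sharpness": 0.2, "color_saturation": 0.1, "attractiveness": 0.5, "scene": 0.2}

# Toy stand-in for the preset neural network model: one tiny sub-model per image parameter.
model = nn.ModuleDict({p: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1)) for p in params})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training set: sample images plus a sample value per image parameter.
images = torch.randn(16, 3, 32, 32)
sample_values = {p: torch.rand(16) for p in params}

loss_threshold = 0.05  # preset convergence threshold (assumed value)
for step in range(1000):
    optimizer.zero_grad()
    loss = torch.zeros(())
    for p in params:
        evaluation = model[p](images).squeeze(1)                             # evaluation value per sample image
        loss = loss + weights[p] * F.mse_loss(evaluation, sample_values[p])  # w_i * s_i
    if loss.item() < loss_threshold:   # model deemed converged
        break
    loss.backward()                    # otherwise adjust the parameter values
    optimizer.step()
```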
It can be seen that, in the embodiment of the present invention, one neural network sub-model can be constructed for each image parameter. And the plurality of neural network submodels are used as an integral neural network model for training, and in the training process, the loss values of the neural network submodels can be integrated to calculate the final training loss value of the whole neural network model. And adjusting parameter values of each neural network submodel according to the final training loss value, so as to realize the combined training of a plurality of neural network submodels. Compared with the method of training for a certain image parameter independently, the combined training result is more accurate.
Based on the same inventive concept, according to the above embodiment of the image quality assessment method, an embodiment of the present invention further provides an image quality assessment apparatus, referring to fig. 4, which may include the following modules:
a determining module 401, configured to determine an image to be evaluated.
The evaluation module 402 is configured to input the image to be evaluated into an evaluation network model, so as to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters;
wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of a preset number of image parameters.
In an embodiment of the present invention, the image parameters include: one or more of image definition, color saturation and preset evaluation; the preset evaluation includes an image attraction degree evaluation and an image scene evaluation.
In the embodiment of the invention, the evaluation network model comprises evaluation network submodels which correspond to the preset number of image parameters one by one.
The evaluation module 402 is specifically configured to: and respectively inputting the image to be evaluated into each evaluation network submodel in the evaluation network submodels to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters.
In the embodiment of the present invention, on the basis of the apparatus shown in fig. 4, the apparatus may further include: and the training module is used for training the evaluation network model.
The training module is specifically configured to:
acquiring a preset neural network model and a training set;
inputting a plurality of sample images into a neural network model to obtain an evaluation value of each sample image for each image parameter;
calculating a training loss value according to the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter;
determining whether the neural network model converges according to the training loss value;
if not, adjusting parameter values in the neural network model, and returning to the step of inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
and if so, determining the current neural network model as the evaluation network model.
In this embodiment of the present invention, on the basis of the apparatus shown in fig. 4, the apparatus may further include a loss value determining module, where the loss value determining module is specifically configured to:
and inputting the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter into a preset loss function to obtain a training loss value.
In this embodiment of the present invention, the loss value determining module may be specifically configured to:
for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter;
and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
In this embodiment of the present invention, the loss value determining module may be specifically configured to:
determining a training loss value s according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
Therefore, in the embodiment of the invention, the image to be evaluated can be determined; inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters; wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of a preset number of image parameters. Therefore, in the embodiment of the invention, a plurality of image parameters are considered when the model is trained, and the combined training is carried out aiming at the plurality of image parameters, so that the accuracy of image quality evaluation is improved.
Based on the same inventive concept, according to the above-mentioned embodiment of the image quality assessment method, an electronic device is further provided in the embodiments of the present invention, as shown in fig. 5, which includes a processor 501, a communication interface 502, a memory 503, and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 complete mutual communication through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501 is configured to implement the embodiment of the image quality evaluation method shown in fig. 1 to 4 when executing the program stored in the memory 503. The image quality evaluation method comprises the following steps:
determining an image to be evaluated; inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters; wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of a preset number of image parameters.
The communication bus 504 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 504 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication interface 502 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory 503 may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory 503 may also be at least one storage device located remotely from the aforementioned processor.
The processor 501 may be a general-purpose processor, such as a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Based on the same inventive concept, according to the above-mentioned image quality assessment method embodiment, in yet another embodiment provided by the present invention, there is further provided a computer-readable storage medium having stored therein a computer program, which when executed by a processor implements any of the image quality assessment method steps shown in fig. 1-4 above.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the embodiments of the image quality evaluation apparatus, the electronic device and the computer-readable storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method. The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (15)
1. An image quality evaluation method, characterized in that the method comprises:
determining an image to be evaluated;
inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters;
wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters.
2. The method of claim 1, wherein the image parameters comprise: one or more of image definition, color saturation and preset evaluation; the preset evaluation includes an image attractiveness evaluation and an image scene evaluation.
3. The method according to claim 1 or 2, wherein the evaluation network model comprises evaluation network submodels corresponding one-to-one to the preset number of image parameters;
the step of inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters includes:
and respectively inputting the image to be evaluated into each evaluation network submodel in the evaluation network submodels to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters.
4. The method according to claim 1 or 2, wherein the evaluation network model is obtained by training using the following steps:
acquiring a preset neural network model and the training set;
inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
calculating a training loss value according to the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter;
determining whether the neural network model converges according to the training loss value;
if not, adjusting parameter values in the neural network model, and returning to the step of inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
and if so, determining the current neural network model as the evaluation network model.
5. The method of claim 4, wherein the step of calculating a training loss value according to the obtained evaluation values and the sample value of each image parameter of each sample image included in the training set comprises:
and inputting the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter into a preset loss function to obtain a training loss value.
6. The method of claim 5, wherein the step of calculating a training loss value according to the obtained evaluation values and the sample value of each image parameter of each sample image included in the training set comprises:
for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter;
and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
7. The method of claim 6, wherein the step of determining the training loss value according to the loss value for each image parameter and the preset training weight for each image parameter comprises:
determining a training loss value s according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
8. An image quality evaluation apparatus characterized by comprising:
the determining module is used for determining an image to be evaluated;
the evaluation module is used for inputting the image to be evaluated into an evaluation network model to obtain an evaluation value of the image to be evaluated for each image parameter in a preset number of image parameters;
wherein, the evaluation network model is a model obtained by training according to a training set, and the training set comprises: a plurality of sample images, and a sample value of each sample image for each image parameter of the preset number of image parameters.
9. The apparatus of claim 8, wherein the image parameters comprise: one or more of image definition, color saturation and preset evaluation; the preset evaluation includes an image attractiveness evaluation and an image scene evaluation.
10. The apparatus according to claim 8 or 9, wherein the evaluation network model comprises evaluation network submodels corresponding to the preset number of image parameters one to one;
the evaluation module is specifically configured to:
and respectively inputting the image to be evaluated into each evaluation network submodel in the evaluation network submodels to obtain the evaluation value of the image to be evaluated for each image parameter in the preset number of image parameters.
11. The apparatus of claim 8 or 9, further comprising:
a training module for training the evaluation network model;
the training module is specifically configured to:
acquiring a preset neural network model and the training set;
inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
calculating a training loss value according to the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter;
determining whether the neural network model converges according to the training loss value;
if not, adjusting parameter values in the neural network model, and returning to the step of inputting the plurality of sample images into the neural network model to obtain an evaluation value of each sample image for each image parameter;
and if so, determining the current neural network model as the evaluation network model.
12. The apparatus of claim 11, further comprising a loss value determination module,
the loss value determination module is specifically configured to:
and inputting the obtained evaluation values and the sample value of each sample image included in the training set for each image parameter into a preset loss function to obtain a training loss value.
13. The apparatus of claim 12, wherein the loss value determining module is specifically configured to:
for each image parameter, determining a loss value for the image parameter according to the obtained evaluation value of each sample image for the image parameter and the sample value of each sample image included in the training set for the image parameter;
and determining a training loss value according to the loss value aiming at each image parameter and a preset training weight of each image parameter.
14. The apparatus of claim 13, wherein the loss value determining module is specifically configured to:
determining a training loss value s according to the following formula:

s = ∑_{i=1}^{N} w_i · s_i

wherein s_i is the loss value for the i-th image parameter, w_i is the preset training weight of the i-th image parameter, and N is the number of image parameters.
15. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the bus,
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811563932.1A CN109685785A (en) | 2018-12-20 | 2018-12-20 | A kind of image quality measure method, apparatus and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811563932.1A CN109685785A (en) | 2018-12-20 | 2018-12-20 | A kind of image quality measure method, apparatus and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109685785A true CN109685785A (en) | 2019-04-26 |
Family
ID=66188006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811563932.1A Pending CN109685785A (en) | 2018-12-20 | 2018-12-20 | A kind of image quality measure method, apparatus and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109685785A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133948A (en) * | 2017-05-09 | 2017-09-05 | 电子科技大学 | Image blurring and noise evaluating method based on multitask convolutional neural networks |
CN108446651A (en) * | 2018-03-27 | 2018-08-24 | 百度在线网络技术(北京)有限公司 | Face identification method and device |
CN108960087A (en) * | 2018-06-20 | 2018-12-07 | 中国科学院重庆绿色智能技术研究院 | A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria |
CN109002812A (en) * | 2018-08-08 | 2018-12-14 | 北京未来媒体科技股份有限公司 | A kind of method and device of intelligent recognition video cover |
Non-Patent Citations (3)
Title |
---|
WEILONG HOU 等: "《Blind Image Quality Assessment via Deep Learning》", 《 IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS》 * |
金鑫: "《图像美学质量评价技术发展趋势》", 《科技导报》 * |
陈汝洪: "《影像构成基础》", 30 April 2016 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211119A (en) * | 2019-06-04 | 2019-09-06 | 厦门美图之家科技有限公司 | Image quality measure method, apparatus, electronic equipment and readable storage medium storing program for executing |
CN110378883A (en) * | 2019-07-11 | 2019-10-25 | 北京奇艺世纪科技有限公司 | Picture appraisal model generating method, image processing method, device, computer equipment and storage medium |
CN110996169A (en) * | 2019-07-12 | 2020-04-10 | 北京达佳互联信息技术有限公司 | Method, device, electronic equipment and computer-readable storage medium for clipping video |
CN110807476A (en) * | 2019-10-17 | 2020-02-18 | 新华三信息安全技术有限公司 | Password security level classification method and device and electronic equipment |
CN110807476B (en) * | 2019-10-17 | 2022-11-18 | 新华三信息安全技术有限公司 | Password security level classification method and device and electronic equipment |
WO2021082819A1 (en) * | 2019-10-31 | 2021-05-06 | 北京金山云网络技术有限公司 | Image generation method and apparatus, and electronic device |
CN110838106B (en) * | 2019-10-31 | 2023-04-14 | 国网河北省电力有限公司电力科学研究院 | Multi-dimensional evaluation method for image recognition software of secondary equipment of transformer substation |
US11836898B2 (en) | 2019-10-31 | 2023-12-05 | Beijing Kingsoft Cloud Network Technology Co., Ltd. | Method and apparatus for generating image, and electronic device |
CN110838106A (en) * | 2019-10-31 | 2020-02-25 | 国网河北省电力有限公司电力科学研究院 | Multi-dimensional evaluation method for image recognition software of secondary equipment of transformer substation |
CN110956615A (en) * | 2019-11-15 | 2020-04-03 | 北京金山云网络技术有限公司 | Image quality evaluation model training method and device, electronic equipment and storage medium |
CN110956615B (en) * | 2019-11-15 | 2023-04-07 | 北京金山云网络技术有限公司 | Image quality evaluation model training method and device, electronic equipment and storage medium |
US11546577B2 (en) | 2019-12-18 | 2023-01-03 | Beijing Baidu Netcom Science Technology Co., Ltd. | Video jitter detection method and apparatus |
EP3817392A1 (en) * | 2019-12-18 | 2021-05-05 | Beijing Baidu Netcom Science Technology Co., Ltd. | Video jitter detection method and apparatus |
CN111915595A (en) * | 2020-08-06 | 2020-11-10 | 北京金山云网络技术有限公司 | Image quality evaluation method, and training method and device of image quality evaluation model |
CN112950581A (en) * | 2021-02-25 | 2021-06-11 | 北京金山云网络技术有限公司 | Quality evaluation method and device and electronic equipment |
CN113011468A (en) * | 2021-02-25 | 2021-06-22 | 上海皓桦科技股份有限公司 | Image feature extraction method and device |
CN112950581B (en) * | 2021-02-25 | 2024-06-21 | 北京金山云网络技术有限公司 | Quality evaluation method and device and electronic equipment |
CN112950579A (en) * | 2021-02-26 | 2021-06-11 | 北京金山云网络技术有限公司 | Image quality evaluation method and device and electronic equipment |
CN112950579B (en) * | 2021-02-26 | 2024-05-31 | 北京金山云网络技术有限公司 | Image quality evaluation method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109685785A (en) | A kind of image quality measure method, apparatus and electronic equipment | |
WO2020239015A1 (en) | Image recognition method and apparatus, image classification method and apparatus, electronic device, and storage medium | |
CN106778820B (en) | Identification model determining method and device | |
CN107920257B (en) | Video key point real-time processing method and device and computing equipment | |
CN108810642B (en) | Bullet screen display method and device and electronic equipment | |
CN110211119B (en) | Image quality evaluation method and device, electronic equipment and readable storage medium | |
CN107967693A (en) | Video Key point processing method, device, computing device and computer-readable storage medium | |
CN112241976A (en) | Method and device for training model | |
JP2020522061A (en) | Sample weight setting method and device, and electronic device | |
US11301669B2 (en) | Face recognition system and method for enhancing face recognition | |
WO2021139448A1 (en) | Method and apparatus for correcting new model on basis of multiple source models, and computer device | |
US20240153271A1 (en) | Method and apparatus for selecting cover of video, computer device, and storage medium | |
CN109740621B (en) | Video classification method, device and equipment | |
CN108335131A (en) | A kind of method, apparatus and electronic equipment for estimating age of user section | |
CN111222553A (en) | Training data processing method and device of machine learning model and computer equipment | |
CN113326821A (en) | Face driving method and device for video frame image | |
CN110689496B (en) | Method and device for determining noise reduction model, electronic equipment and computer storage medium | |
CN109359675B (en) | Image processing method and apparatus | |
CN112434717A (en) | Model training method and device | |
CN108804670B (en) | Data recommendation method and device, computer equipment and storage medium | |
CN111597383A (en) | Video heat level prediction method and device | |
CN111353597B (en) | Target detection neural network training method and device | |
CN111428125B (en) | Ordering method, ordering device, electronic equipment and readable storage medium | |
CN110765852B (en) | Method and device for acquiring face direction in image | |
CN116645282A (en) | Data processing method and system based on big data |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190426 |