CN112418292B - Image quality evaluation method, device, computer equipment and storage medium - Google Patents

Image quality evaluation method, device, computer equipment and storage medium

Info

Publication number
CN112418292B
CN112418292B (application number CN202011288901.7A)
Authority
CN
China
Prior art keywords
image
feature
training
network
evaluated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011288901.7A
Other languages
Chinese (zh)
Other versions
CN112418292A (en)
Inventor
陈昊
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202011288901.7A
Publication of CN112418292A
Priority to PCT/CN2021/090416 (published as WO2022105117A1)
Application granted
Publication of CN112418292B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image quality evaluation method, apparatus, computer device and storage medium, which belong to the technical field of artificial intelligence. The method comprises: receiving an image to be evaluated, and extracting image features of the image to be evaluated by using an image feature extractor; performing feature vector conversion on the image features of the image to be evaluated, converting the image features into feature vectors; and constructing a network regression function, calculating a regression value of the feature vectors by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vectors. In addition, the application also relates to blockchain technology: the image to be evaluated can be stored in a blockchain. The application constructs the image quality evaluation system by simplifying a deep learning network and adopting a machine regression mode, so that the system adapts quickly to a variety of scenes.

Description

Image quality evaluation method, device, computer equipment and storage medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a method, a device, computer equipment and a storage medium for evaluating image quality.
Background
Currently, in a range of intelligent image applications, judging whether the quality of an input image is adequate is often the key that gates the subsequent series of operations. In general, image quality evaluation tries to simulate human judgment as closely as possible, and there are two main approaches at present:
1. Feature-engineering methods, whose basic idea is to define important indexes, such as image brightness and edge sharpness, and determine image quality from these hand-crafted indexes. Representative evaluation schemes include SSIM (Structural Similarity), FSIM (Feature Similarity) and NIQE (Natural Image Quality Evaluator).
2. Methods based on deep convolutional networks, which typically use a neural network to fit human judgments.
However, both methods have certain defects. Image quality evaluation based on hand-defined indexes is often limited by insufficient index definitions, so its generalization ability is weak; moreover, designing the feature indexes requires a high mathematical level and rich experience from the designer, and such schemes cannot adapt quickly to a variety of scenes.
Image quality evaluation based on deep convolutional networks, in turn, has a high computing-resource cost and is of limited use in some settings (such as mobile terminals): the computing power and memory of current mobile terminals are constrained, which makes deploying a deep convolutional network difficult.
Disclosure of Invention
The embodiment of the application aims to provide a method, an apparatus, a computer device and a storage medium for evaluating image quality, so as to solve the technical problem that existing image quality evaluation schemes are limited to particular application scenes and cannot adapt quickly to a variety of scenes.
In order to solve the above technical problems, an embodiment of the present application provides a method for evaluating image quality, which adopts the following technical scheme:
A method of image quality assessment, comprising:
constructing an image generation network, and training the image generation network through a training sample set in a preset database to obtain an image feature extractor;
Receiving an image to be evaluated, and extracting image features of the image to be evaluated by using an image feature extractor;
Performing feature vector conversion on image features of the image to be evaluated, and converting the image features into feature vectors;
And constructing a network regression function, calculating a regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
Further, before the step of constructing the image generation network and training the image generation network through the training sample set in the preset database to obtain the image feature extractor, the method further comprises:
acquiring image data in a preset database, and preprocessing the image data;
Labeling the preprocessed image data, and randomly combining the labeled image data to obtain a training sample set and a verification data set;
The training sample set and the verification data set are stored in a preset database.
Further, the image generation network comprises an encoding layer and a decoding layer, the encoding layer comprises a plurality of convolution kernels, the decoding layer comprises a plurality of deconvolution kernels, the convolution kernels correspond to the deconvolution kernels one by one, the image generation network is constructed, the image generation network is trained through a training sample set in a preset database, and the image feature extractor is obtained, wherein the method specifically comprises the following steps:
extracting training samples in the training sample set, and sequentially importing each training sample into a coding layer of an image generation network;
training a coding layer in an image generation network by utilizing each training sample to obtain a plurality of convolution kernels after training;
Screening a plurality of convolution kernels after training based on a deep learning compression algorithm, and removing redundant items in the plurality of convolution kernels;
and constructing an image feature extractor by using a plurality of convolution kernels with redundancy removed.
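The patent does not name a specific compression algorithm for removing redundant convolution kernels; magnitude-based filter pruning is one common possibility. A minimal sketch under that assumption, with kernels represented as nested lists of weights:

```python
# Hypothetical sketch: prune "redundant" convolution kernels by L1-norm magnitude.
# The patent only refers to "a deep learning compression algorithm"; magnitude-based
# filter pruning is an assumed stand-in, not the patented method itself.

def l1_norm(kernel):
    """Sum of absolute weights of one convolution kernel (nested lists)."""
    if isinstance(kernel, (int, float)):
        return abs(kernel)
    return sum(l1_norm(w) for w in kernel)

def prune_kernels(kernels, keep_ratio=0.75):
    """Keep the keep_ratio fraction of kernels with the largest L1 norm."""
    ranked = sorted(kernels, key=l1_norm, reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

# Four 2x2 kernels; the near-zero one is treated as redundant.
kernels = [
    [[0.9, -0.8], [0.7, 0.6]],
    [[0.01, 0.0], [0.02, -0.01]],   # near-zero: likely redundant
    [[0.5, 0.4], [-0.3, 0.2]],
    [[1.2, -1.1], [0.9, 0.8]],
]
kept = prune_kernels(kernels, keep_ratio=0.75)
```

The kernels surviving this screening would then form the image feature extractor.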
Further, after the step of training the coding layer of the image generation network by using each training sample to obtain a plurality of trained convolution kernels, the method further comprises:
collecting training results of each convolution kernel in the coding layer;
The training result of each convolution kernel is imported into the corresponding deconvolution kernel, and the corresponding deconvolution kernel is trained through the training result of each convolution kernel, so as to obtain a plurality of fully trained deconvolution kernels.
Further, after the step of importing the training result of each convolution kernel into the corresponding deconvolution kernel and training the corresponding deconvolution kernel by the training result of each convolution kernel to obtain a plurality of deconvolution kernels after training, the method further comprises the steps of:
Extracting verification samples from the verification data set, and importing the verification samples into the image generation network;
Respectively carrying out feature extraction on the verification samples by using a plurality of convolution kernels after training to obtain feature extraction results of a plurality of verification samples;
Respectively importing the feature extraction results of a plurality of verification samples into corresponding deconvolution kernels to perform feature reduction to obtain feature reduction results;
fitting by using a back propagation algorithm based on the feature reduction result and the verification sample to obtain a prediction error;
And comparing the prediction error with a preset threshold, and if the prediction error is larger than the preset threshold, performing iterative updating on the image generation network until the prediction error is smaller than or equal to the preset threshold, and acquiring the image generation network.
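The validate-and-iterate logic described above can be sketched as a loop that keeps updating parameters while the prediction error on the verification samples exceeds the preset threshold. The toy one-parameter "network" and gradient-descent update below are stand-ins for the patent's convolutional encoder-decoder, purely to illustrate the control flow:

```python
# Hypothetical sketch of the validate-and-iterate loop: update the network
# while the reconstruction error on verification samples exceeds a threshold.
# The model here (reconstruction = params * sample) is a deliberate toy.

def reconstruction_error(params, samples):
    """Mean squared error between each sample and its toy 'reconstruction'."""
    return sum((params * x - x) ** 2 for x in samples) / len(samples)

def train_until_converged(samples, threshold=1e-4, lr=0.1, max_iters=1000):
    params = 0.0                     # deliberately bad initial parameter
    err = reconstruction_error(params, samples)
    iters = 0
    while err > threshold and iters < max_iters:
        # gradient of the MSE above w.r.t. params (back-propagation analogue)
        grad = sum(2 * (params * x - x) * x for x in samples) / len(samples)
        params -= lr * grad
        err = reconstruction_error(params, samples)
        iters += 1
    return params, err, iters

params, err, iters = train_until_converged([1.0, 2.0, 3.0])
```

The perfect reconstruction here is params = 1, and the loop stops as soon as the error falls to or below the threshold, mirroring the acceptance test in the claim.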
Further, the step of performing feature vector conversion on the image features of the image to be evaluated and converting the image features into feature vectors specifically comprises:
And carrying out feature vector conversion on the image features of the image to be evaluated based on the spatial pyramid pooling, and converting the image features into feature vectors.
Further, constructing a network regression function, calculating a regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector, wherein the method specifically comprises the following steps:
constructing an initial regression function based on a Bayesian algorithm;
extracting parameters of the image feature extractor, and calculating feature weights based on the parameters of the image feature extractor;
importing the characteristic weight into an initial regression function to obtain a network regression function;
and importing the feature vector into a network regression function, calculating a regression value of the feature vector, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
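The patent names a Bayesian regression but does not give its form; as an illustrative stand-in, the sketch below uses a one-dimensional Bayesian ridge (MAP) estimate, where the prior strength alpha and the toy feature/score data are assumptions:

```python
# Illustrative stand-in for the "network regression function" step: a
# one-dimensional Bayesian ridge (MAP) estimate. The prior strength alpha
# and the toy data are assumptions, not the patent's actual construction.

def bayesian_ridge_1d(xs, ys, alpha=1.0):
    """MAP weight for y ~ w*x with a zero-mean Gaussian prior of precision
    alpha: w = sum(x*y) / (sum(x*x) + alpha)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

def regress(w, feature_value):
    """Regression value of one (scalar) feature under the learned weight."""
    return w * feature_value

# Toy training pairs where the quality score is roughly 0.5 * feature
xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.5, 1.0, 1.5, 2.0]
w = bayesian_ridge_1d(xs, ys, alpha=0.1)
score = regress(w, 2.0)
```

In the patent's scheme the learned weight would come from the image feature extractor's parameters and the regression would act on the full feature vector rather than a scalar.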
In order to solve the above technical problems, the embodiment of the present application further provides an image quality evaluation device, which adopts the following technical scheme:
an apparatus for image quality assessment, comprising:
the building module is used for building an image generation network, training the image generation network through a training sample set in a preset database, and obtaining an image feature extractor;
The extraction module is used for receiving the image to be evaluated and extracting image features of the image to be evaluated by utilizing the image feature extractor;
The conversion module is used for carrying out feature vector conversion on the image features of the image to be evaluated and converting the image features into feature vectors;
and the evaluation module is used for constructing a network regression function, calculating the regression value of the feature vector by utilizing the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
In order to solve the above technical problems, the embodiment of the present application further provides a computer device, which adopts the following technical schemes:
A computer device, comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the method of image quality assessment as described in any of the above.
In order to solve the above technical problems, an embodiment of the present application further provides a computer readable storage medium, which adopts the following technical schemes:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, perform the steps of a method of image quality assessment as described in any of the above.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
The application discloses a method, an apparatus, a computer device and a storage medium for evaluating image quality, which belong to the technical field of artificial intelligence. The method receives an image to be evaluated and extracts its image features with an image feature extractor; performs feature vector conversion on those image features, converting them into feature vectors; and constructs a network regression function, calculates the regression value of the feature vectors with it, and determines the quality of the image to be evaluated according to that regression value. The image quality evaluation system is constructed by simplifying a deep learning network and adopting a machine regression mode: during evaluation, the image features are acquired through the trained deep learning network, the regression value of the image features is then calculated with the network regression function, and the quality of the image to be evaluated is finally determined from that regression value. Moreover, because the final evaluation is produced by the network regression function, a mathematical explanation can be given for the evaluation result, which helps the user analyse problems intuitively.
Drawings
In order to illustrate the solution of the present application more clearly, the drawings required for describing its embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 shows a flow chart of one embodiment of a method of image quality assessment according to the present application;
FIG. 3 shows a flow chart of one embodiment of step S201 in FIG. 2;
FIG. 4 shows a flow chart of one embodiment of step S204 in FIG. 2;
FIG. 5 is a schematic view showing the structure of an embodiment of an apparatus for image quality evaluation according to the present application;
fig. 6 shows a schematic structural diagram of an embodiment of a computer device according to the application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the method for evaluating image quality provided by the embodiment of the present application is generally executed by a server/terminal device, and accordingly, the apparatus for evaluating image quality is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow chart of one embodiment of a method of image quality assessment according to the present application is shown. The image quality evaluation method comprises the following steps:
s201, constructing an image generation network, and training the image generation network through a training sample set in a preset database to obtain an image feature extractor.
The image generation network can be constructed based on a deep convolutional neural network model. A convolutional neural network (Convolutional Neural Networks, CNN) is a feedforward neural network that contains convolution computations and has a deep structure, and it is one of the representative algorithms of deep learning. Convolutional neural networks have representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, so they are also called "shift-invariant artificial neural networks". A convolutional neural network is constructed by imitating the biological visual perception mechanism and supports both supervised and unsupervised learning; the sharing of convolution kernel parameters within its convolution layers and the sparsity of inter-layer connections enable a convolutional neural network to learn grid-like features (such as pixels and audio) with a small amount of computation, with a stable effect and without additional feature-engineering requirements on the data.
The image generation network comprises an encoding layer (encoder) and a decoding layer (decoder). The encoder comprises a plurality of convolution kernels and the decoder a plurality of deconvolution kernels, in one-to-one correspondence; a communication channel is established between each convolution kernel of the encoder and the corresponding deconvolution kernel of the decoder, so that after a convolution kernel extracts image features, the extracted features can be passed directly through the channel to the corresponding deconvolution kernel. The encoder is a fully convolutional layer used to extract image features from the input image, and the image feature extractor is built from this part. The decoder is a deconvolution layer used to decode the extracted image features and restore them to the input image; the purpose of this restoration is to verify the encoder. When the image generation network is constructed, loss functions L1 and L2 are set for the encoder and the decoder respectively, and when the image generation network is iteratively updated, the update can be based on the L1 and L2 loss functions.
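The L1 and L2 losses mentioned above are, in the conventional reading, mean absolute error and mean squared error; how the patent weights or combines them is not detailed, so the sketch below only shows the standard formulas on illustrative reconstruction data:

```python
# Standard L1 (mean absolute) and L2 (mean squared) losses, as referenced for
# the encoder and decoder. How the patent combines them is not specified;
# these are the conventional formulas on illustrative data.

def l1_loss(pred, target):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def l2_loss(pred, target):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

restored = [0.9, 0.2, 0.4]   # decoder output (illustrative pixels)
original = [1.0, 0.0, 0.5]   # input image pixels (illustrative)
loss1 = l1_loss(restored, original)
loss2 = l2_loss(restored, original)
```

A perfect reconstruction drives both losses to zero, which is what the decoder's verification role relies on.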
Specifically, an image generation network is constructed based on a deep convolutional neural network model, a training sample set is obtained from a preset database, and the image generation network is trained with the training sample set; after the trained image generation network is obtained, an image feature extractor is constructed from the convolution kernels in the encoder of the image generation network.
S202, receiving the image to be evaluated, and extracting image features of the image to be evaluated by using an image feature extractor.
Specifically, when an image evaluation requirement arises, an image evaluation instruction is received, the image to be evaluated is obtained based on the instruction, and the image features of the image to be evaluated are extracted by the constructed image feature extractor. It should be noted that the image feature extractor is built from the compressed encoder sub-network; it outputs multi-scale image features during extraction, the image features of one layer being the image input of the next layer. In a specific embodiment of the present application, 5 feature-extraction convolution layers are constructed; when the input image is 512x512 in size, the five extracted image features, scale feature 0 through scale feature 4, have sizes 512x512, 256x256, 128x128, 64x64 and 32x32 respectively.
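The listed scale sizes (512, 256, 128, 64, 32) are consistent with the first convolution layer preserving resolution and each subsequent layer halving it; that halving schedule is inferred from the sizes rather than stated in the text. A small sketch:

```python
# The scale sizes 512, 256, 128, 64, 32 follow if the first layer preserves
# resolution and every later layer halves it (stride 2). This schedule is
# inferred from the listed sizes, not explicitly stated by the patent.

def scale_sizes(input_size, num_layers):
    """Sizes of scale feature 0..num_layers-1 under the halving schedule."""
    sizes = [input_size]                  # scale feature 0: same as input
    for _ in range(num_layers - 1):
        sizes.append(sizes[-1] // 2)      # each later layer halves resolution
    return sizes

sizes = scale_sizes(512, 5)
```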
In this embodiment, the electronic device (e.g., the server/terminal device shown in fig. 1) on which the image quality evaluation method operates may receive the image evaluation instruction through a wired connection manner or a wireless connection manner. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connection, wiFi connection, bluetooth connection, wiMAX connection, zigbee connection, UWB (ultra wideband) connection, and other now known or later developed wireless connection.
S203, converting the feature vector of the image feature of the image to be evaluated, and converting the image feature into the feature vector.
Specifically, after the image features of the image to be evaluated are extracted by the image feature extractor, they are converted into feature vectors by spatial pyramid pooling (Spatial Pyramid Pooling, SPP). The resulting feature vectors all have the same size, and converting the image features into feature vectors makes it convenient to compute their regression values with the network regression function in a subsequent step. Spatial pyramid pooling converts feature maps of arbitrary size into feature vectors of fixed size, which are then fed to the fully connected layer.
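A minimal sketch of spatial pyramid pooling: max-pool the feature map over a pyramid of grids (here 1x1, 2x2 and 4x4, the common choice in the SPP literature, not necessarily the patent's levels) so that inputs of different sizes all yield a vector of the same fixed length:

```python
# Sketch of spatial pyramid pooling (SPP): max-pool a 2D feature map over a
# pyramid of grids so any input size yields a fixed-length vector. The levels
# (1, 2, 4), giving 1 + 4 + 16 = 21 values, are an assumed common choice.

def spp(feature_map, levels=(1, 2, 4)):
    """feature_map: 2D list (rows x cols). Returns a fixed-length vector."""
    rows, cols = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:                      # n x n pooling grid
        for i in range(n):
            for j in range(n):
                r0, r1 = rows * i // n, rows * (i + 1) // n
                c0, c1 = cols * j // n, cols * (j + 1) // n
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out

# Two feature maps of different sizes produce vectors of identical length.
small = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
large = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
v_small, v_large = spp(small), spp(large)
```

This size-invariance is exactly what lets the fixed-size vector be handed to the fully connected layer or, in this application, the regression function.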
S204, constructing a network regression function, calculating a regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
In a specific embodiment of the application, the image evaluation task is split into an image feature extraction process and a regression evaluation process. The regression value of the feature vector is calculated with the constructed network regression function and normalized so that it falls into a value range between 0 and 1; this regression value can be regarded as a comprehensive score over several dimensions of the image features, and the quality of the image to be evaluated is finally determined according to the regression value of the feature vector. The network regression function over the multi-dimensional image features is constructed mainly by means of Bayesian kernel regression.
Specifically, after the image features are converted into feature vectors, a network regression function is constructed based on Bayesian kernel regression, the regression value of the feature vectors is calculated with the network regression function and normalized, and the quality of the image to be evaluated is determined according to the normalized regression value: a regression value of 1 indicates excellent image quality, while a regression value of 0 indicates unqualified quality.
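The patent only requires a normalized value in [0, 1]; the logistic squashing and the 0.5 pass threshold in the sketch below are illustrative assumptions:

```python
import math

# Sketch of the final step: squash a raw regression value into [0, 1] and map
# it to a quality verdict. The logistic function and the 0.5 threshold are
# illustrative assumptions; the patent only requires a value in [0, 1].

def normalize(raw):
    """Logistic squashing of an unbounded regression value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-raw))

def verdict(score, threshold=0.5):
    return "qualified" if score >= threshold else "unqualified"

good = normalize(3.0)    # strongly positive raw regression value
bad = normalize(-3.0)    # strongly negative raw regression value
```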
The application discloses a method, an apparatus, a computer device and a storage medium for evaluating image quality, which belong to the technical field of artificial intelligence. The method receives an image to be evaluated and extracts its image features with an image feature extractor; performs feature vector conversion on those image features, converting them into feature vectors; and constructs a network regression function, calculates the regression value of the feature vectors with it, and determines the quality of the image to be evaluated according to that regression value. The image quality evaluation system is constructed by simplifying a deep learning network and adopting a machine regression mode: during evaluation, the image features are acquired through the trained deep learning network, the regression value of the image features is then calculated with the network regression function, and the quality of the image to be evaluated is finally determined from that regression value. Moreover, because the final evaluation is produced by the network regression function, a mathematical explanation can be given for the evaluation result, which helps the user analyse problems intuitively.
Further, before the step of constructing the image generation network and training the image generation network through the training sample set in the preset database to obtain the image feature extractor, the method further comprises:
acquiring image data in a preset database, and preprocessing the image data;
Labeling the preprocessed image data, and randomly combining the labeled image data to obtain a training sample set and a verification data set;
The training sample set and the verification data set are stored in a preset database.
Specifically, image data is obtained from a preset database, the image data is marked, and the quality index of the image data can be marked during marking. And randomly combining the marked image data to obtain a training sample set and a verification data set, wherein the marked image data can be randomly divided into 10 equal sample subsets, 9 sample subsets are randomly combined to serve as the training sample set, the rest sample subsets serve as the verification data set, and the training sample set and the verification data set are stored in a preset database.
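The random division into ten equal subsets with a 9-to-1 split described above can be sketched as follows (the function name, the fixed shuffling seed, and the choice of held-out subset are illustrative assumptions):

```python
import random

def split_train_validation(samples, n_subsets=10, seed=42):
    """Shuffle the labeled samples, cut them into n_subsets equal subsets,
    randomly combine n_subsets - 1 of them as the training sample set, and
    keep the remaining subset as the verification data set."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    size = len(shuffled) // n_subsets
    subsets = [shuffled[i * size:(i + 1) * size] for i in range(n_subsets)]
    held_out = rng.randrange(n_subsets)  # index of the verification subset
    validation = subsets[held_out]
    training = [s for i, sub in enumerate(subsets) if i != held_out for s in sub]
    return training, validation
```

For 100 labeled samples this yields a training set of 90 samples and a verification set of 10, with every sample appearing exactly once.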
Further, referring to fig. 3, fig. 3 shows a flowchart of a specific embodiment of step S201 in fig. 2, where the image generating network includes an encoding layer and a decoding layer, the encoding layer includes a plurality of convolution kernels, the decoding layer includes a plurality of deconvolution kernels, the convolution kernels correspond to the deconvolution kernels one by one, the image generating network is constructed, the image generating network is trained by a training sample set in a preset database, and the step of obtaining an image feature extractor specifically includes:
s301, extracting training samples in a training sample set, and sequentially importing each training sample into a coding layer of an image generation network;
s302, training a coding layer in an image generation network by utilizing each training sample to obtain a plurality of convolution kernels after training;
S303, screening a plurality of trained convolution kernels based on a deep learning compression algorithm, and removing redundant items in the plurality of convolution kernels;
s304, constructing an image feature extractor by using a plurality of convolution kernels with redundancy removed.
The deep learning compression (Deep Compression) algorithm first trains the neural network and obtains the weights of all its convolution layers, then sets a weight threshold, deletes the connections whose weights fall below the threshold, and retrains; redundant items are removed step by step through this iterative training. Finally, the weights retained in the neural network are clustered and shared: the value of each cluster centre stands in for all the weights assigned to it, and the number and positions of the cluster centres are continuously adjusted to obtain a good model compression effect, after which Huffman coding is applied to the weights. With the Deep Compression method the neural network can be compressed without loss of precision, typically by a factor of 35 to 49, and the stored model is more efficient at inference.
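A minimal sketch of the first two Deep Compression steps described above, pruning by a weight threshold and cluster-based weight sharing (the threshold, the initial cluster centres, and the helper names are illustrative assumptions; the final Huffman-coding step is omitted):

```python
def prune_weights(weights, threshold):
    """Delete (zero out) weights whose magnitude falls below the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def share_weights(weights, centers, iters=10):
    """Crude 1-D k-means weight sharing: each surviving weight is replaced
    by its nearest cluster-centre value, so only the centres need be stored."""
    centers = list(centers)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for w in weights:
            if w == 0.0:
                continue  # pruned weights stay zero
            i = min(range(len(centers)), key=lambda k: abs(w - centers[k]))
            groups[i].append(w)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    quantized = [0.0 if w == 0.0 else
                 centers[min(range(len(centers)), key=lambda k: abs(w - centers[k]))]
                 for w in weights]
    return quantized, centers
```

After pruning and sharing, a layer's weights are described by a handful of centre values plus per-weight cluster indices, which is what makes the subsequent Huffman coding effective.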
Specifically, the training samples in the training sample set are extracted and imported one by one into the coding layer (encoder) of the image generation network, in which a plurality of convolution kernels are preset. Each training sample is used to train the convolution kernels of the encoder, yielding a plurality of trained convolution kernels. A weight threshold is then set, the trained convolution kernels are screened with the deep learning compression algorithm, and the convolution kernels whose weights fall below the threshold are deleted, removing the redundant items; the image feature extractor is constructed from the remaining convolution kernels.
Further, after the step of training the coding layer of the image generation network with each training sample to obtain a plurality of trained convolution kernels, the method further comprises:
collecting training results of each convolution kernel in the coding layer;
The training result of each convolution kernel is imported into the corresponding deconvolution kernel, and the corresponding deconvolution kernel is trained through the training result of each convolution kernel, so that a plurality of deconvolution kernels with complete training are obtained.
Specifically, the training result of each convolution kernel in the coding layer (encoder) is collected and labeled, and the labeled training results are used to train the corresponding deconvolution kernels in the decoding layer (decoder), yielding a plurality of fully trained deconvolution kernels.
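The one-to-one pairing of a convolution kernel with its deconvolution kernel can be illustrated with a toy example (real image kernels are 2-D and learned; this hypothetical 1-D sketch only shows how a transposed convolution on the decoder side restores the length that a 'valid' convolution on the encoder side reduced):

```python
def conv1d(signal, kernel):
    """'Valid' 1-D convolution: the encoder side shrinks the input."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def deconv1d(features, kernel):
    """Transposed 1-D convolution: the decoder side restores the
    original length from the encoder's feature map."""
    k = len(kernel)
    out = [0.0] * (len(features) + k - 1)
    for i, f in enumerate(features):
        for j in range(k):
            out[i + j] += f * kernel[j]
    return out
```

A length-4 signal convolved with a length-2 kernel produces 3 features, and the paired transposed convolution maps those 3 features back to length 4, mirroring the encoder/decoder correspondence in the text.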
Further, after the step of importing the training result of each convolution kernel into the corresponding deconvolution kernel and training the corresponding deconvolution kernel by the training result of each convolution kernel to obtain a plurality of deconvolution kernels after training, the method further comprises the steps of:
Extracting the verification samples in the verification data set, and importing them into the image generation network;
Respectively carrying out feature extraction on the verification samples by using a plurality of convolution kernels after training to obtain feature extraction results of a plurality of verification samples;
Respectively importing the feature extraction results of a plurality of verification samples into corresponding deconvolution kernels to perform feature reduction to obtain feature reduction results;
fitting by using a back propagation algorithm based on the feature reduction result and the verification sample to obtain a prediction error;
And comparing the prediction error with a preset threshold, and if the prediction error is larger than the preset threshold, performing iterative updating on the image generation network until the prediction error is smaller than or equal to the preset threshold, and acquiring the image generation network.
Here, the back propagation algorithm, i.e. the error back propagation algorithm (Backpropagation, BP), is a learning algorithm for multi-layer neural networks; it is based on gradient descent and is used to compute the error of a deep learning network. The input-output relationship of a BP network is essentially a mapping: an n-input, m-output BP neural network performs a continuous mapping from n-dimensional Euclidean space to a finite field in m-dimensional Euclidean space, and this mapping is highly nonlinear. The learning process of the BP algorithm consists of a forward propagation pass and a backward propagation pass. In the forward pass, the input information is processed layer by layer from the input layer through the hidden layers to the output layer. In the backward pass, the error is propagated back layer by layer, yielding the partial derivative of the objective function with respect to the weights of each neuron; together these derivatives form the gradient of the objective function with respect to the weight vector, which serves as the basis for modifying the weights.
Specifically, the verification samples in the verification data set are extracted and imported into the image generation network. Features are extracted from each verification sample with the trained convolution kernels and restored through the corresponding deconvolution kernels, and the back propagation algorithm then computes a prediction error. The prediction error is compared with a preset error threshold; if it is larger than the threshold, the image generation network is iteratively updated based on the loss functions L1 and L2 of the coding layer (encoder) and decoding layer (decoder) until the prediction error is smaller than or equal to the threshold, at which point the verified image generation network is obtained.
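The validate-compare-update loop above can be sketched generically (the callables, the iteration cap, and the toy usage below are assumptions; in the patent the update would be a gradient step driven by the encoder/decoder losses L1 and L2):

```python
def train_until_converged(network, validate, update, error_threshold, max_iters=100):
    """Keep iteratively updating the image generation network while the
    validation prediction error stays above the preset threshold."""
    for _ in range(max_iters):
        error = validate(network)
        if error <= error_threshold:
            return network, error
        network = update(network)
    return network, validate(network)
```

As a toy usage, treating the "network" as a single number with error `abs(n - 5)` and update `n + 1`, the loop stops once the error reaches the threshold.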
Further, the step of performing feature vector conversion on the image features of the image to be evaluated and converting the image features into feature vectors specifically includes:
And carrying out feature vector conversion on the image features of the image to be evaluated based on the spatial pyramid pooling, and converting the image features into feature vectors.
Specifically, after the image features of the image to be evaluated are extracted by the image feature extractor, they are converted into feature vectors of a consistent size by spatial pyramid pooling; converting the image features into feature vectors makes it possible to calculate their regression values with the network regression function in a subsequent step. Spatial pyramid pooling can convert a feature map of any size into a feature vector of fixed size, which is then sent to the fully connected layer.
In a specific embodiment of the application, the image features are pooled by a spatial pyramid and converted into feature vectors in fully connected form: spatial pyramid pooling performs pooling operations on the image features at several different scales and finally concatenates the results into a feature vector for the fully connected layer.
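A minimal max-pooling sketch of spatial pyramid pooling, showing that the output length depends only on the pyramid levels and not on the input size (the two-level pyramid and max pooling are illustrative assumptions; the patent does not fix the pooling operator or levels):

```python
def spp(feature_map, levels=(1, 2)):
    """Pool a 2-D feature map of any size into a fixed-length vector:
    at pyramid level n the map is cut into an n x n grid and each cell
    contributes its maximum, so the output length is sum(n*n for n in levels)."""
    h, w = len(feature_map), len(feature_map[0])
    vec = []
    for n in levels:
        for r in range(n):
            for c in range(n):
                r0, r1 = r * h // n, (r + 1) * h // n
                c0, c1 = c * w // n, (c + 1) * w // n
                cell = [feature_map[i][j]
                        for i in range(r0, max(r1, r0 + 1))
                        for j in range(c0, max(c1, c0 + 1))]
                vec.append(max(cell))
    return vec
```

With levels (1, 2) a 4x4 map and a 2x3 map both produce a 5-element vector, which is what lets feature maps of arbitrary size feed a fixed-width fully connected layer.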
Further, referring to fig. 4, fig. 4 shows a flowchart of a specific embodiment of step S204 in fig. 2, a step of constructing a network regression function, calculating a regression value of a feature vector by using the network regression function, and determining a quality of an image to be evaluated according to the regression value of the feature vector, which specifically includes:
S401, constructing an initial regression function based on a Bayesian algorithm;
s402, extracting parameters of an image feature extractor, and calculating feature weights based on the parameters of the image feature extractor;
S403, importing the characteristic weight into an initial regression function to obtain a network regression function;
S404, importing the feature vector into a network regression function, calculating a regression value of the feature vector, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
Specifically, an initial regression function is constructed based on a Bayesian equation, which is specifically as follows:

Yi = h(zi) + xiβ + εi

where Yi is the Bayesian regression value, i is the sequence number of the input image, h is the high-dimensional response function, z is the image feature, x is the potential factor, β is the weight, and ε is the modulation factor. The response function h can be solved based on a kernel function approach, so h can be written as follows:

h(z) = Σm αmK(z, zm)
where α is the pre-coefficient of the kernel function; the kernel function here is a Gaussian kernel, so K(z, z') can be rewritten as:

K(z, z') = exp(-‖z - z'‖²/σ²)
where exp is the exponential function and M is the training set capacity, i.e. the number of samples. The above K is further rewritten as:

K(z, z') = exp(-Σm rm(zm - z'm)²)
wherein each rm herein satisfies the following condition:

rm ~ δmf1(rm) + (1 - δm)P0
where m = 1, …, M; rm is the probability value of the conditional probability in Bayes' theorem; f1 is a probability density function; and δm ~ Bernoulli(π), where Bernoulli denotes the Bernoulli distribution. With this spike-and-slab prior, the regression process is modified into a regression based on a Bayesian Gaussian kernel.
Specifically, after converting image features into feature vectors, constructing an initial regression function based on a Bayesian algorithm, extracting parameters of an image feature extractor, calculating feature weights based on the parameters of the image feature extractor, normalizing the feature weights, importing the normalized feature weights into the initial regression function to obtain a network regression function, importing the feature vectors into the network regression function, calculating regression values of the feature vectors, and determining the quality of an image to be evaluated according to the regression values of the feature vectors.
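A sketch of evaluating the kernel part of such a regression, i.e. the feature-weighted Gaussian kernel and the kernel expansion of the response function h (the feature weights r, pre-coefficients α, and sample features below are illustrative values; the Bayesian inference that would produce them is not shown):

```python
import math

def gaussian_kernel(z1, z2, r):
    """Feature-weighted Gaussian kernel: K(z, z') = exp(-sum_m r_m (z_m - z'_m)^2)."""
    return math.exp(-sum(rm * (a - b) ** 2 for rm, a, b in zip(r, z1, z2)))

def kernel_regression(z, train_z, alpha, r):
    """Kernel expansion of the response function: h(z) = sum_j alpha_j K(z, z_j)."""
    return sum(a * gaussian_kernel(z, zj, r) for a, zj in zip(alpha, train_z))
```

Evaluating a new feature vector against the stored sample features in this way yields the raw regression value that is subsequently normalized to decide image quality.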
It should be emphasized that, to further ensure the privacy and security of the image to be evaluated, the image to be evaluated may also be stored in a node of a blockchain.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The blockchain (Blockchain), essentially a de-centralized database, is a string of data blocks that are generated in association using cryptographic methods, each of which contains information from a batch of network transactions for verifying the validity (anti-counterfeit) of its information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Those skilled in the art will appreciate that implementing all or part of the processes of the methods of the embodiments described above may be accomplished by way of computer readable instructions, stored on a computer readable storage medium, which when executed may comprise processes of embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 5, as an implementation of the method shown in fig. 2 described above, the present application provides an embodiment of an apparatus for evaluating image quality, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus for evaluating image quality according to the present embodiment includes:
the construction module 501 is configured to construct an image generation network, train the image generation network through a training sample set in a preset database, and obtain an image feature extractor;
The extracting module 502 is configured to receive an image to be evaluated, and extract image features of the image to be evaluated by using an image feature extractor;
the conversion module 503 is configured to perform feature vector conversion on the image features of the image to be evaluated, and convert the image features into feature vectors;
the evaluation module 504 is configured to construct a network regression function, calculate a regression value of the feature vector using the network regression function, and determine a quality of the image to be evaluated according to the regression value of the feature vector.
Further, the image quality evaluation apparatus further includes:
the preprocessing module is used for acquiring image data in a preset database and preprocessing the image data;
the marking module is used for marking the preprocessed image data and randomly combining the marked image data to obtain a training sample set and a verification data set;
and the storage module is used for storing the training sample set and the verification data set into a preset database.
Further, the image generating network includes an encoding layer and a decoding layer, the encoding layer includes a plurality of convolution kernels, the decoding layer includes a plurality of deconvolution kernels, the convolution kernels correspond to the deconvolution kernels one by one, and the building module 501 specifically includes:
the extraction unit is used for extracting training samples in the training sample set and sequentially importing each training sample into a coding layer of the image generation network;
The first training unit is used for training the coding layer in the image generation network by utilizing each training sample to obtain a plurality of convolution kernels after training;
The compression unit is used for screening the plurality of trained convolution kernels based on a deep learning compression algorithm and removing redundant items in the plurality of convolution kernels;
And the construction unit is used for constructing the image feature extractor by using a plurality of convolution kernels with redundancy removed.
Further, the image quality evaluation apparatus further includes:
The acquisition unit is used for acquiring the training result of each convolution kernel in the coding layer;
The second training unit is used for importing the training result of each convolution kernel into the corresponding deconvolution kernel, training the corresponding deconvolution kernel through the training result of each convolution kernel, and obtaining a plurality of deconvolution kernels after training.
Further, the image quality evaluation apparatus further includes:
the verification unit is used for extracting a verification sample in the verification data set and importing the verification data set into the image generation network;
The convolution unit is used for respectively carrying out feature extraction on the verification samples by utilizing a plurality of convolution kernels after training to obtain feature extraction results of a plurality of verification samples;
the reduction unit is used for respectively importing the feature extraction results of the plurality of verification samples into corresponding deconvolution cores to perform feature reduction to obtain feature reduction results;
the fitting unit is used for fitting by using a back propagation algorithm based on the feature reduction result and the verification sample to obtain a prediction error;
And the iteration unit is used for comparing the prediction error with a preset threshold value, and if the prediction error is larger than the preset threshold value, carrying out iteration update on the image generation network until the prediction error is smaller than or equal to the preset threshold value, and acquiring the image generation network.
Further, the conversion module specifically includes:
And the conversion unit is used for carrying out feature vector conversion on the image features of the image to be evaluated based on the spatial pyramid pooling and converting the image features into feature vectors.
Further, the evaluation module 504 specifically includes:
The function construction unit is used for constructing an initial regression function based on a Bayesian algorithm;
A parameter extraction unit for extracting parameters of the image feature extractor and calculating feature weights based on the parameters of the image feature extractor;
The importing unit is used for importing the characteristic weight into the initial regression function to obtain a network regression function;
the evaluation unit is used for leading the feature vector into the network regression function, calculating the regression value of the feature vector and determining the quality of the image to be evaluated according to the regression value of the feature vector.
The application discloses a device for image quality evaluation, which belongs to the technical field of artificial intelligence. The device constructs an image generation network and trains it through a training sample set in a preset database to obtain an image feature extractor; receives an image to be evaluated and extracts its image features with the image feature extractor; performs feature vector conversion on the image features of the image to be evaluated to convert them into feature vectors; and constructs a network regression function, calculates a regression value of the feature vector with the network regression function, and determines the quality of the image to be evaluated according to the regression value of the feature vector. The application constructs an image quality evaluation system by simplifying a deep learning network and adopting a machine regression mode: during image quality evaluation, the image features are acquired through the trained deep learning network, the regression value of the image features is then calculated with the network regression function, and the quality of the image to be evaluated is finally determined from that regression value. Because the final evaluation is made through the network regression function, a mathematical explanation can be given for the evaluation result, which makes it convenient for users to analyze problems intuitively.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 6, fig. 6 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 6 comprises a memory 61, a processor 62 and a network interface 63 communicatively connected to each other via a system bus. It is noted that only a computer device 6 having the components 61-63 is shown in the figure, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 61 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 61 may be an internal storage unit of the computer device 6, such as a hard disk or memory of the computer device 6. In other embodiments, the memory 61 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the computer device 6. Of course, the memory 61 may also comprise both an internal storage unit and an external storage device of the computer device 6. In this embodiment, the memory 61 is typically used to store the operating system installed on the computer device 6 and various types of application software, such as the computer readable instructions of the method of image quality evaluation. Further, the memory 61 may be used to temporarily store various types of data that have been output or are to be output.
The processor 62 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 62 is typically used to control the overall operation of the computer device 6. In this embodiment, the processor 62 is configured to execute computer readable instructions stored in the memory 61 or process data, such as computer readable instructions for executing the method of image quality evaluation.
The network interface 63 may comprise a wireless network interface or a wired network interface, which network interface 63 is typically used for establishing a communication connection between the computer device 6 and other electronic devices.
The application discloses a computer device, which belongs to the technical field of artificial intelligence. The computer device constructs an image generation network and trains it through a training sample set in a preset database to obtain an image feature extractor; receives an image to be evaluated and extracts its image features with the image feature extractor; performs feature vector conversion on the image features of the image to be evaluated to convert them into feature vectors; and constructs a network regression function, calculates a regression value of the feature vector with the network regression function, and determines the quality of the image to be evaluated according to the regression value of the feature vector. The application constructs an image quality evaluation system by simplifying a deep learning network and adopting a machine regression mode: during image quality evaluation, the image features are acquired through the trained deep learning network, the regression value of the image features is then calculated with the network regression function, and the quality of the image to be evaluated is finally determined from that regression value. Because the final evaluation is made through the network regression function, a mathematical explanation can be given for the evaluation result, which makes it convenient for users to analyze problems intuitively.
The present application also provides another embodiment, namely, a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the method for image quality assessment as described above.
The application discloses a storage medium, which belongs to the technical field of artificial intelligence. The stored instructions construct an image generation network and train it through a training sample set in a preset database to obtain an image feature extractor; receive an image to be evaluated and extract its image features with the image feature extractor; perform feature vector conversion on the image features of the image to be evaluated to convert them into feature vectors; and construct a network regression function, calculate a regression value of the feature vector with the network regression function, and determine the quality of the image to be evaluated according to the regression value of the feature vector. The application constructs an image quality evaluation system by simplifying a deep learning network and adopting a machine regression mode: during image quality evaluation, the image features are acquired through the trained deep learning network, the regression value of the image features is then calculated with the network regression function, and the quality of the image to be evaluated is finally determined from that regression value. Because the final evaluation is made through the network regression function, a mathematical explanation can be given for the evaluation result, which makes it convenient for users to analyze problems intuitively.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, not all of them; the preferred embodiments of the application are shown in the drawings, which do not limit the scope of the patent claims. This application may be embodied in many different forms; these embodiments are provided so that the understanding of the disclosure of the application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the content of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the application.

Claims (8)

1. A method of image quality assessment, comprising:
Constructing an image generation network, and training the image generation network through a training sample set in a preset database to obtain an image feature extractor;
receiving an image to be evaluated, and extracting image features of the image to be evaluated by utilizing the image feature extractor;
performing feature vector conversion on the image features of the image to be evaluated, and converting the image features into feature vectors;
constructing a network regression function, calculating a regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector;
The image generation network comprises an encoding layer and a decoding layer, the encoding layer comprises a plurality of convolution kernels, the decoding layer comprises a plurality of deconvolution kernels, the convolution kernels are in one-to-one correspondence with the deconvolution kernels, the image generation network is constructed, the image generation network is trained through a training sample set in a preset database, and the image feature extractor is obtained, wherein the image feature extractor comprises the following steps:
extracting the training samples from the training sample set, and sequentially importing each training sample into the coding layer of the image generation network;
training the coding layer in the image generation network by using each training sample to obtain a plurality of trained convolution kernels;
screening the plurality of trained convolution kernels based on a deep learning compression algorithm, and removing redundant items among the plurality of convolution kernels; and
constructing the image feature extractor by using the plurality of convolution kernels from which the redundant items have been removed;
wherein after the step of training the coding layer in the image generation network by using each training sample to obtain a plurality of trained convolution kernels, the method further comprises:
collecting training results of each convolution kernel in the coding layer;
importing the training result of each convolution kernel into the corresponding deconvolution kernel, and training the corresponding deconvolution kernel through the training result of each convolution kernel to obtain a plurality of trained deconvolution kernels, wherein a communication channel is established between each convolution kernel and its corresponding deconvolution kernel, and the training result of each convolution kernel is transmitted to the corresponding deconvolution kernel through the communication channel.
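The one-to-one pairing of convolution kernels in the coding layer with deconvolution kernels in the decoding layer, as claimed above, can be illustrated with a minimal NumPy sketch (all names and sizes here are illustrative, not taken from the patent): each coding kernel produces a feature map, and its matching transposed-convolution kernel restores the original spatial size.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2-D image with a single kernel."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def deconv2d(fmap, kernel):
    """Transposed convolution: full correlation with the flipped kernel,
    restoring the spatial size consumed by conv2d."""
    kh, kw = kernel.shape
    padded = np.pad(fmap, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d(padded, kernel[::-1, ::-1])

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]   # coding layer
deconv_kernels = [k.copy() for k in kernels]                # one-to-one decoding layer

features = [conv2d(image, k) for k in kernels]              # four 6x6 feature maps
recons = [deconv2d(f, dk) for f, dk in zip(features, deconv_kernels)]
reconstruction = np.mean(recons, axis=0)                    # back to 8x8
print(features[0].shape, reconstruction.shape)
```

In a real network the deconvolution kernels would be learned from the per-kernel training results passed over the "communication channel" (skip connections, in modern terms); here they are simply copies to keep the shape arithmetic visible.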
2. The method of image quality assessment according to claim 1, wherein before the step of constructing an image generation network and training the image generation network through a training sample set in a preset database to obtain an image feature extractor, the method further comprises:
acquiring image data in the preset database, and preprocessing the image data;
labeling the preprocessed image data, and randomly combining the labeled image data to obtain a training sample set and a verification data set;
and storing the training sample set and the verification data set into the preset database.
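The label-shuffle-split procedure of claim 2 might look like the following sketch. The quality labels, the 80/20 split ratio, and the in-memory dictionary standing in for the preset database are all assumptions for illustration.

```python
import random

def build_datasets(labeled_records, train_ratio=0.8, seed=42):
    """Shuffle labeled image records and split them into a training
    sample set and a verification data set."""
    records = list(labeled_records)
    random.Random(seed).shuffle(records)       # random combination of labeled data
    cut = int(len(records) * train_ratio)
    return records[:cut], records[cut:]

# (image, quality label) pairs standing in for preprocessed, labeled image data
records = [(f"img_{i:03d}.png", i % 5) for i in range(100)]
train_set, val_set = build_datasets(records)

database = {}                                  # stand-in for the preset database
database["train"], database["val"] = train_set, val_set
print(len(train_set), len(val_set))
```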
3. The method of image quality assessment according to claim 2, wherein after the step of importing the training result of each convolution kernel into the corresponding deconvolution kernel and training the corresponding deconvolution kernel through the training result of each convolution kernel to obtain a plurality of trained deconvolution kernels, the method further comprises:
extracting the verification samples from the verification data set, and importing the verification samples into the image generation network;
performing feature extraction on the verification samples respectively by using the plurality of trained convolution kernels, to obtain feature extraction results of the verification samples;
importing the feature extraction results of the plurality of verification samples into the corresponding deconvolution kernels respectively to perform feature restoration, to obtain feature restoration results;
fitting, by using a back propagation algorithm, based on the feature restoration results and the verification samples, to obtain a prediction error; and
comparing the prediction error with a preset threshold, and if the prediction error is greater than the preset threshold, iteratively updating the image generation network until the prediction error is less than or equal to the preset threshold, thereby obtaining the trained image generation network.
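The validate-and-iterate loop of claim 3 reduces, in sketch form, to computing a reconstruction error on the verification samples and updating the network until the error falls below the preset threshold. The toy "network" below is a single scalar gain trained by gradient descent; the threshold, learning rate, and error definition are all assumptions made to keep the loop visible.

```python
import numpy as np

def prediction_error(gain, samples):
    """Mean squared error between samples and their 'reconstruction'."""
    recon = gain * samples                     # stand-in for conv -> deconv round trip
    return float(np.mean((recon - samples) ** 2))

def iterate_until_fit(samples, threshold=1e-4, lr=0.05, max_steps=1000):
    gain = 0.0                                 # deliberately bad initial network
    err = prediction_error(gain, samples)
    steps = 0
    while err > threshold and steps < max_steps:
        grad = np.mean(2 * (gain * samples - samples) * samples)  # dMSE/dgain
        gain -= lr * grad                      # iterative update of the network
        err = prediction_error(gain, samples)
        steps += 1
    return gain, err

rng = np.random.default_rng(1)
verification = rng.standard_normal(64)         # stand-in verification samples
gain, err = iterate_until_fit(verification)
print(round(gain, 3), err)
```

The loop exits only once the prediction error is at or below the threshold, mirroring the claimed stopping condition; `gain` converges toward 1, i.e. perfect reconstruction.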
4. The method of image quality assessment according to any one of claims 1 to 3, wherein the step of performing feature vector conversion on the image features of the image to be evaluated and converting the image features into feature vectors comprises:
and carrying out feature vector conversion on the image features of the image to be evaluated based on spatial pyramid pooling, and converting the image features into feature vectors.
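Spatial pyramid pooling maps a feature map of any size to a fixed-length vector by pooling over a fixed grid of bins at several scales, which is what lets the evaluation accept images of arbitrary size. A minimal single-channel sketch using max pooling (the pyramid levels 1x1, 2x2, 4x4 giving 21 values are the classic choice, assumed here; the patent does not fix them):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool `fmap` over an n x n grid for each pyramid level and
    concatenate the bin maxima into one fixed-length feature vector."""
    H, W = fmap.shape
    feats = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * H // n, (i + 1) * H // n
                c0, c1 = j * W // n, (j + 1) * W // n
                feats.append(fmap[r0:r1, c0:c1].max())
    return np.array(feats)

rng = np.random.default_rng(2)
v_small = spatial_pyramid_pool(rng.standard_normal((6, 9)))
v_large = spatial_pyramid_pool(rng.standard_normal((17, 31)))
print(v_small.shape, v_large.shape)   # same length despite different input sizes
```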
5. The method of image quality assessment according to claim 4, wherein the step of constructing a network regression function, calculating a regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector comprises:
constructing an initial regression function based on a Bayesian algorithm;
extracting parameters of the image feature extractor, and calculating feature weights based on the parameters of the image feature extractor;
importing the feature weights into the initial regression function to obtain the network regression function; and
importing the feature vector into the network regression function, calculating the regression value of the feature vector, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
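As a sketch of the regression step: derive feature weights from the extractor's parameters, fold them into a regression function, and map the feature vector to a quality score. The kernel-norm weighting and the sigmoid squashing to (0, 1) are assumptions for illustration; the claim names a Bayesian algorithm without fixing a formula.

```python
import numpy as np

def feature_weights(extractor_kernels):
    """Weight each feature by the L2 norm of its kernel, normalized to sum to 1."""
    norms = np.array([np.linalg.norm(k) for k in extractor_kernels])
    return norms / norms.sum()

def quality_score(feature_vector, weights, bias=0.0):
    """Weighted regression value squashed to (0, 1) as a quality score."""
    z = float(np.dot(weights, feature_vector)) + bias
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
kernels = [rng.standard_normal((3, 3)) for _ in range(5)]   # extractor parameters
weights = feature_weights(kernels)
features = rng.standard_normal(5)           # one pooled feature per kernel
score = quality_score(features, weights)
print(round(score, 3))
```

A threshold on `score` (e.g. accept above 0.5) would then turn the regression value into a pass/fail quality decision.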
6. An apparatus for image quality assessment, characterized in that the apparatus for image quality assessment implements the steps of the method for image quality assessment according to any one of claims 1 to 5, the apparatus for image quality assessment comprising:
the building module is used for building an image generation network, training the image generation network through a training sample set in a preset database, and obtaining an image feature extractor;
the extraction module is used for receiving the image to be evaluated and extracting the image characteristics of the image to be evaluated by utilizing the image characteristic extractor;
the conversion module is used for carrying out feature vector conversion on the image features of the image to be evaluated and converting the image features into feature vectors;
and the evaluation module is used for constructing a network regression function, calculating the regression value of the feature vector by using the network regression function, and determining the quality of the image to be evaluated according to the regression value of the feature vector.
7. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, implement the steps of the method of image quality assessment according to any one of claims 1 to 5.
8. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, implement the steps of the method of image quality assessment according to any one of claims 1 to 5.
CN202011288901.7A 2020-11-17 2020-11-17 Image quality evaluation method, device, computer equipment and storage medium Active CN112418292B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011288901.7A CN112418292B (en) 2020-11-17 2020-11-17 Image quality evaluation method, device, computer equipment and storage medium
PCT/CN2021/090416 WO2022105117A1 (en) 2020-11-17 2021-04-28 Method and device for image quality assessment, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011288901.7A CN112418292B (en) 2020-11-17 2020-11-17 Image quality evaluation method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112418292A CN112418292A (en) 2021-02-26
CN112418292B true CN112418292B (en) 2024-05-10

Family

ID=74832061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011288901.7A Active CN112418292B (en) 2020-11-17 2020-11-17 Image quality evaluation method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112418292B (en)
WO (1) WO2022105117A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418292B (en) * 2020-11-17 2024-05-10 平安科技(深圳)有限公司 Image quality evaluation method, device, computer equipment and storage medium
WO2022217496A1 (en) * 2021-04-14 2022-10-20 中国科学院深圳先进技术研究院 Image data quality evaluation method and apparatus, terminal device, and readable storage medium
CN113112518B (en) * 2021-04-19 2024-03-26 深圳思谋信息科技有限公司 Feature extractor generation method and device based on spliced image and computer equipment
CN113486939A (en) * 2021-06-30 2021-10-08 平安证券股份有限公司 Method, device, terminal and storage medium for processing pictures
CN117135306A (en) * 2022-09-15 2023-11-28 深圳Tcl新技术有限公司 Television definition debugging method and device
CN115984843A (en) * 2022-12-06 2023-04-18 北京信息科技大学 Remanufacturing raw material evaluation method and device, storage medium and electronic equipment
CN117830246A (en) * 2023-12-27 2024-04-05 广州极点三维信息科技有限公司 Image analysis and quality evaluation method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949277A * 2019-03-04 2019-06-28 西北大学 OCT image quality evaluation method based on ranking learning and a simplified residual network
JP2020014042A (en) * 2018-07-13 2020-01-23 日本放送協会 Image quality evaluation device, learning device and program
CN110766658A (en) * 2019-09-23 2020-02-07 华中科技大学 Non-reference laser interference image quality evaluation method
CN111242036A (en) * 2020-01-14 2020-06-05 西安建筑科技大学 Crowd counting method based on encoding-decoding structure multi-scale convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10002415B2 (en) * 2016-04-12 2018-06-19 Adobe Systems Incorporated Utilizing deep learning for rating aesthetics of digital images
US10540589B2 (en) * 2017-10-24 2020-01-21 Deep North, Inc. Image quality assessment using similar scenes as reference
CN110033446B (en) * 2019-04-10 2022-12-06 西安电子科技大学 Enhanced image quality evaluation method based on twin network
CN112418292B (en) * 2020-11-17 2024-05-10 平安科技(深圳)有限公司 Image quality evaluation method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
WO2022105117A1 (en) 2022-05-27
CN112418292A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN110929622B (en) Video classification method, model training method, device, equipment and storage medium
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
CN111444340A (en) Text classification and recommendation method, device, equipment and storage medium
CN110659723B (en) Data processing method and device based on artificial intelligence, medium and electronic equipment
CN113139628B (en) Sample image identification method, device and equipment and readable storage medium
CN112164002B (en) Training method and device of face correction model, electronic equipment and storage medium
CN113761153B (en) Picture-based question-answering processing method and device, readable medium and electronic equipment
JP2023500222A (en) Sequence mining model training method, sequence data processing method, sequence mining model training device, sequence data processing device, computer equipment, and computer program
CN116580257A (en) Feature fusion model training and sample retrieval method and device and computer equipment
US20220101121A1 (en) Latent-variable generative model with a noise contrastive prior
CN113628059A (en) Associated user identification method and device based on multilayer graph attention network
CN114282059A (en) Video retrieval method, device, equipment and storage medium
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN112529149A (en) Data processing method and related device
CN116821113A (en) Time sequence data missing value processing method and device, computer equipment and storage medium
CN116958325A (en) Training method and device for image processing model, electronic equipment and storage medium
CN112950501B (en) Noise field-based image noise reduction method, device, equipment and storage medium
CN112966150A (en) Video content extraction method and device, computer equipment and storage medium
CN113822291A (en) Image processing method, device, equipment and storage medium
CN113344060A (en) Text classification model training method, litigation shape classification method and device
CN111915701A (en) Button image generation method and device based on artificial intelligence
CN116798052B (en) Training method and device of text recognition model, storage medium and electronic equipment
CN117938951B (en) Information pushing method, device, computer equipment and storage medium
CN113139490B (en) Image feature matching method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant