CN111325733A - Image quality evaluation method combining low-level vision and high-level vision statistical characteristics - Google Patents

Image quality evaluation method combining low-level vision and high-level vision statistical characteristics Download PDF

Info

Publication number
CN111325733A
CN111325733A
Authority
CN
China
Prior art keywords
image, level, image quality, level vision, low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010112724.0A
Other languages
Chinese (zh)
Inventor
刘玉涛 (Liu Yutao)
李秀 (Li Xiu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202010112724.0A priority Critical patent/CN111325733A/en
Publication of CN111325733A publication Critical patent/CN111325733A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection


Abstract

The invention provides an image quality evaluation method combining low-level vision and high-level vision statistical characteristics, which comprises the following steps: S1, locally normalizing the image and extracting low-level visual statistical features; S2, sparsely representing the image, calculating the representation residual, and extracting high-level visual statistical features; S3, training an artificial neural network to learn a mapping model from the image features extracted in steps S1 and S2 to image quality, and using the model to predict image quality. By drawing on the characteristics of low-level human vision and high-level brain activity, the invention extracts low-level and high-level features and obtains a mapping model from image features to image quality, which can effectively measure the loss of perceived image quality and accurately evaluate the quality of an image.

Description

Image quality evaluation method combining low-level vision and high-level vision statistical characteristics
Technical Field
The invention relates to the technical field of image processing, in particular to an image quality evaluation method combining low-level vision and high-level vision statistical characteristics.
Background
Since the beginning of the twenty-first century, with the rapid development of internet, digital media, and communication technology, digital images have become an important medium for exchanging information. In recent years, the large-scale popularization of digital devices such as digital cameras, smartphones, and tablet computers has made image and video acquisition highly convenient. However, digital images are susceptible to various kinds of distortion during acquisition, compression, storage, and transmission, and their quality is inevitably affected to some extent. For example, camera shake and defocus during shooting can blur the acquired image, and noise may be introduced during transmission. Therefore, accurately evaluating the quality of an image is of great significance for industrial applications of digital images.
In terms of image quality evaluation indexes, the Structural Similarity method (SSIM), proposed by Wang, Z. et al. in the article "Image quality assessment: From error visibility to structural similarity" (IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612), judges the quality of images by measuring their structural similarity. Chandler, D. M. and Hemami, S. S., in the article "VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images" (IEEE Trans. Image Process., vol. 16, no. 9, pp. 2284-2298), designed the Visual Signal-to-Noise Ratio (VSNR) algorithm. The algorithm proceeds in two steps: the first step judges, via the visual masking effect, whether the distortion is visible to human eyes; if not, the algorithm considers the image to have the best visual quality, and if so, it estimates the image quality by computing the low-level contrast distortion of the visual system and the mid-level image edge distortion. The Visual Saliency-induced Index (VSI), proposed by Zhang, L. et al. in the paper "VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment" (IEEE Trans. Image Process., vol. 23, no. 10, pp. 4270-4281), first studies the change in visual saliency caused by image distortion, then uses visual saliency as an image-quality feature reflecting the degree of distortion, and finally combines the visual-saliency feature with the gradient-magnitude feature to predict the distortion of the image.
Liu, Y. et al., in the article "Reduced-Reference Image Quality Assessment in Free-Energy Principle and Sparse Representation" (IEEE Trans. Multimedia, vol. 20, no. 2, pp. 379-391), calculate the information entropy of the sparse-representation residual to evaluate the quality of an image. Liu, A. et al., in the paper "Image Quality Assessment Based on Gradient Similarity" (IEEE Trans. Image Process., vol. 21, no. 4, pp. 1500-1512), designed the Gradient Similarity Index (GSI), which first calculates the gradient-magnitude similarity between the original image and the distorted image, then improves the gradient-magnitude similarity calculation based on the masking property of the human visual system, and finally estimates image quality by adaptively pooling brightness, contrast, and structure. Moorthy et al., in the paper "Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality" (IEEE Trans. Image Process., vol. 20, no. 12, pp. 3350-3364), established the Distortion Identification-based Image Verity and INtegrity Evaluation index (DIIVINE). Wu, J. et al., in the paper "Perceptual Quality Metric With Internal Generative Mechanism" (IEEE Trans. Image Process., vol. 22, no. 1, pp. 43-54), propose the Internal Generative Mechanism (IGM) algorithm, which assumes a generative model inside the brain, responsible for understanding and inferring the image and producing a corresponding predicted image; an autoregressive (AR) model is then used to simulate this internal generative model, and the image is decomposed into a predictable part and an unpredictable part. The quality of the predictable part is predicted with SSIM, that of the unpredictable part with PSNR, and the two are combined to predict the overall quality of the image.
Disclosure of Invention
The invention mainly aims to provide an image quality evaluation method and device combining low-level vision and high-level vision statistical characteristics so as to effectively measure the loss of image perception quality and accurately evaluate the quality of an image.
To achieve the above object, the present invention provides an image quality evaluation method combining low-level vision and high-level vision statistical characteristics, the method comprising the steps of:
s1, carrying out local normalization on the image, and extracting low-level visual statistical characteristics;
s2, sparsely representing the image, calculating a representation residual error, and extracting high-level visual statistical characteristics;
s3, training an artificial neural network, learning a mapping model from the image features extracted in steps S1 and S2 to image quality, and using the model to predict image quality.
Preferably, in step S1, the image is locally normalized using its local mean and variance to obtain a normalized-coefficient image, and a segment of the normalized-coefficient distribution is then taken as the low-level visual feature vector describing image quality variation.
Preferably, in step S2, the input image is sparsely represented, a representation residual is then calculated, and a segment of the residual distribution is taken as the high-level visual feature vector describing image quality variation.
Preferably, in step S3, an artificial neural network with a four-layer structure is designed, comprising three hidden layers and a linear regression layer; the network is then trained to obtain a model mapping image features to image quality, and the model is used to predict image quality.
An image quality evaluation device combining the low-level vision and the high-level vision statistical characteristics comprises a computer readable storage medium and a processor, wherein the computer readable storage medium stores an executable program, and the executable program is executed by the processor to realize the image quality evaluation method combining the low-level vision and the high-level vision statistical characteristics.
A computer-readable storage medium storing an executable program which, when executed by a processor, implements the method for image quality assessment that combines low-level vision with high-level vision statistical features.
The invention has the beneficial effects that:
the invention provides an image quality evaluation method combining low-level vision and high-level vision statistical characteristics. In the method, statistical characteristics of low-level vision and high-level vision of an image are respectively extracted, then a mapping model of the extracted image characteristics to image quality is learned by utilizing a neural network, and the model is utilized to predict the image quality. The invention extracts the low-level and high-level characteristics by means of the low-level human vision characteristics and the activity of the high-level brain, learns the mapping from the vision characteristics to the image quality by utilizing a neural network, obtains a mapping model from the image characteristics to the image quality, and estimates the image quality.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic diagram of an embodiment of an image quality evaluation method combining the statistical characteristics of the low-level vision and the high-level vision according to the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The invention provides an image quality evaluation method combining low-level vision and high-level vision statistical characteristics. In the method, statistical characteristics of low-level vision and high-level vision of an image are respectively extracted, then a mapping model of the extracted image characteristics to image quality is learned by utilizing a neural network, and the model is utilized to predict the image quality.
Fig. 1 is a schematic diagram of an embodiment of the image quality evaluation method combining low-level vision and high-level vision statistical characteristics according to the present invention. As shown in fig. 1, an embodiment of the present invention provides an image quality evaluation method combining low-level vision and high-level vision statistical features, comprising the following steps: S1, locally normalizing the image and extracting low-level visual statistical features; S2, sparsely representing the image, calculating the representation residual, and extracting high-level visual statistical features; S3, training an artificial neural network to learn a mapping model from the image features extracted in steps S1 and S2 to image quality, which is used for predicting image quality. By drawing on the characteristics of low-level human vision and high-level brain activity, the invention extracts low-level and high-level features, uses a neural network to learn the mapping from visual features to image quality, obtains a mapping model from image features to image quality, and estimates image quality.
In some embodiments, the above image quality evaluation method combining the low-level vision and the high-level vision statistical features is implemented as follows:
In the embodiment of the present invention, first, the image is locally normalized to obtain normalization coefficients, where the normalization coefficient of the image may be calculated as:

\hat{I}(x,y) = \frac{I(x,y) - \mu(x,y)}{\sigma(x,y) + C}

where I is the input image, (x, y) denotes the pixel position, \hat{I} is the normalized-coefficient image, C is a small stabilizing constant, and \mu(x,y) and \sigma(x,y) are the mean and standard deviation of the local patch centered at (x, y); that is, the normalization coefficients are obtained by removing the local mean from the original image and normalizing by the local deviation. \mu(x,y) and \sigma(x,y) are calculated as:

\mu(x,y) = \sum_{s=-S}^{S}\sum_{t=-T}^{T} \omega_{s,t}\, I(x+s, y+t)

\sigma(x,y) = \sqrt{\sum_{s=-S}^{S}\sum_{t=-T}^{T} \omega_{s,t}\,\big(I(x+s, y+t) - \mu(x,y)\big)^{2}}

where \omega = \{\omega_{s,t} \mid s = -S,\dots,S;\ t = -T,\dots,T\} is a symmetric Gaussian filter; the local image block has width 2S and height 2T, and both S and T take the value 16. Then, the interval [-2, 2] is divided evenly into 20 subintervals with step 0.2, and the number of normalization coefficients falling in each subinterval forms a 20-dimensional feature vector.
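The local-normalization feature extraction of step S1 can be sketched in Python as follows. This is an illustrative approximation, not the patent's reference implementation: the Gaussian-window sigma, the stabilizing constant `c`, and the function name `low_level_features` are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_level_features(image, bins=20, lo=-2.0, hi=2.0, c=1.0):
    """Local normalization followed by a histogram over [-2, 2]
    split into 20 subintervals of width 0.2 (step S1 sketch)."""
    img = image.astype(np.float64)
    # Local mean and deviation via a symmetric Gaussian window;
    # sigma is an assumption standing in for the patent's S = T = 16 window.
    mu = gaussian_filter(img, sigma=16 / 6.0)
    var = gaussian_filter(img * img, sigma=16 / 6.0) - mu * mu
    sigma = np.sqrt(np.maximum(var, 0.0))
    norm = (img - mu) / (sigma + c)          # normalization coefficients
    counts, _ = np.histogram(norm, bins=bins, range=(lo, hi))
    return counts                             # 20-dimensional feature vector
```

For a typical 8-bit image most normalization coefficients fall inside [-2, 2], so the 20 bin counts capture the bulk of the coefficient distribution.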
Then, the image is sparsely represented (physiological research shows that the human visual system perceives external visual signals in a sparse manner). For the input image I, image blocks are first extracted and each block is sparsely represented. Suppose an extracted image block is x_k; the extraction process can be expressed as:

x_k = R_k(I)

where R_k(\cdot) is an image-block extraction operator that extracts the image block at position k, with k = 1, 2, 3, \dots

For the image block x_k, its sparse representation over the dictionary D is the sparse vector \alpha_k (most elements of \alpha_k are 0 or close to 0) satisfying:

\hat{\alpha}_k = \arg\min_{\alpha_k}\ \|x_k - D\alpha_k\|_2^2 + \lambda\|\alpha_k\|_p

The first term is a fidelity term and the second term is a sparsity constraint; \lambda is a constant that balances the proportion of the two terms. p is 0 or 1: with p = 0 the sparsity term counts the non-zero coefficients, which matches the sparsity we require, but the 0-norm optimization problem is non-convex and hard to solve; the alternative is to set p = 1, which turns the above expression into a convex optimization problem. Thus, p is set to 1. Solving the above formula with the Orthogonal Matching Pursuit (OMP) algorithm yields the sparse representation coefficients \hat{\alpha}_k of the image block x_k, so that x_k can be sparsely represented as \hat{x}_k = D\hat{\alpha}_k. The sparse representation of the entire image I can then be written as:

I' = \sum_k R_k^{\top}\big(D\hat{\alpha}_k\big)

where I' represents the sparse representation of image I. Then, the representation residual I - I' is calculated, the interval (-50, 50) is divided into 100 equal subintervals of width 1, and the number of residual values falling in each subinterval is taken as a feature, giving a 100-dimensional feature vector.
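The sparse-representation residual features of step S2 can be sketched with scikit-learn's OMP solver. This is a minimal sketch under stated assumptions: the patch size, the number of non-zero coefficients, and the random demonstration dictionary are illustrative choices, not values fixed by the patent (in practice the dictionary D would be learned, e.g. with K-SVD).

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def high_level_features(image, dictionary, patch=8, n_nonzero=6,
                        bins=100, lo=-50.0, hi=50.0):
    """OMP sparse coding of non-overlapping patches, then a histogram
    of the representation residual I - I' over (-50, 50) split into
    100 unit-wide subintervals (step S2 sketch)."""
    h, w = image.shape
    recon = np.zeros_like(image, dtype=np.float64)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            x = image[i:i + patch, j:j + patch].reshape(-1)  # x_k = R_k(I)
            omp.fit(dictionary, x)                           # alpha_k over D
            recon[i:i + patch, j:j + patch] = (
                dictionary @ omp.coef_).reshape(patch, patch)  # D alpha_k
    residual = image - recon                                  # I - I'
    counts, _ = np.histogram(residual, bins=bins, range=(lo, hi))
    return counts                                              # 100-dimensional vector
```

Each patch is vectorized and coded against the dictionary columns, so `dictionary` must have as many rows as there are pixels per patch (64 for 8x8 patches).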
An artificial neural network comprising four layers is designed: three hidden layers and a linear regression layer. The bottom layer of the network takes the extracted image features f_1, f_2, \dots, f_n as input, and the network outputs the image quality. The sizes of the three hidden layers are 200, 40, and 6, respectively.
The designed network is trained in three steps. In the first step, each hidden layer is pre-trained by an unsupervised method as a sparse autoencoder; each sparse autoencoder is trained with the L-BFGS algorithm, the number of iterations is set to 1000, and the sigmoid function is used as the activation function. The loss function used to train each layer is:

J(W,b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\big\|h_{W,b}(x^{(i)}) - x^{(i)}\big\|^{2} + \frac{\gamma}{2}\|W\|_{2}^{2} + \beta\sum_{j}\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big)

where

\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}

and

\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j\big(x^{(i)}\big)

Here, W denotes the network weights, b denotes the hidden-layer biases, h_{W,b}(\cdot) is the output of each neuron of the hidden layer, \hat{\rho}_j is the average activation value of hidden unit j (with a_j(x^{(i)}) its activation on the i-th training sample), and \rho is the expected average activation value; \rho is set to 0.1, \beta is set to 3, and \gamma is a weight-decay parameter set to 0.0001. The training loss function of the linear regression layer is:

L = \|Y - \mathrm{Label}\|_{2}^{2}

where Y represents the output of the linear regression layer and Label represents the subjective quality score of the image. After training is completed, for a new image, its features are extracted and input into the network, and the network outputs its quality score.
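A compact approximation of step S3 can be written with scikit-learn. Note the hedge: `MLPRegressor` does not perform the patent's layer-wise sparse-autoencoder pre-training (the KL sparsity term); this sketch only mirrors the architecture (hidden layers of 200, 40, 6 units), the sigmoid activation, L-BFGS optimization, and the weight-decay value 0.0001. The feature matrix and quality labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_quality_model():
    # Architecture and hyperparameters follow the patent's description;
    # end-to-end training here replaces its three-step pre-train/fine-tune scheme.
    return MLPRegressor(hidden_layer_sizes=(200, 40, 6),
                        activation='logistic',   # sigmoid activation
                        solver='lbfgs',          # L-BFGS optimizer
                        alpha=1e-4,              # weight decay (gamma = 0.0001)
                        max_iter=1000)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 120))    # 20-D low-level + 100-D high-level features
y = rng.uniform(0, 100, size=50)  # subjective scores (Label), synthetic here
model = build_quality_model().fit(X, y)
pred = model.predict(X[:5])       # predicted quality scores for 5 images
```

At prediction time, the 120-dimensional feature vector of a new image is fed to the trained model and the regression output is taken as its quality score.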
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (9)

1. An image quality evaluation method combining low-level vision and high-level vision statistical characteristics is characterized by comprising the following steps:
s1, carrying out local normalization on the image, and extracting low-level visual statistical characteristics;
s2, sparsely representing the image, calculating a representation residual error, and extracting high-level visual statistical characteristics;
s3, training the artificial neural network, learning a mapping model of the image features extracted from steps S1 and S2 to image quality to predict the image quality.
2. The method for evaluating image quality by combining the statistical characteristics of low-level vision and high-level vision as claimed in claim 1, wherein in step S1, the image is locally normalized using its local mean and variance to obtain a normalized-coefficient image, and a segment of the normalized-coefficient distribution is then taken as the low-level visual feature vector describing image quality variation.
3. The method for evaluating image quality by combining statistical features of low-level vision and high-level vision as claimed in claim 2, wherein said step S1 specifically comprises: locally normalizing the image to obtain local normalization coefficients, the normalization coefficient of the image being calculated as:

\hat{I}(x,y) = \frac{I(x,y) - \mu(x,y)}{\sigma(x,y) + C}

where I is the input image, (x, y) denotes the pixel position, \hat{I} is the normalized-coefficient image, C is a small stabilizing constant, and \mu(x,y) and \sigma(x,y) are the mean and standard deviation of the local patch centered at (x, y), so that the normalization coefficients are obtained by removing the local mean from the original image and normalizing by the local deviation; \mu(x,y) and \sigma(x,y) are calculated as:

\mu(x,y) = \sum_{s=-S}^{S}\sum_{t=-T}^{T} \omega_{s,t}\, I(x+s, y+t)

\sigma(x,y) = \sqrt{\sum_{s=-S}^{S}\sum_{t=-T}^{T} \omega_{s,t}\,\big(I(x+s, y+t) - \mu(x,y)\big)^{2}}

wherein \omega = \{\omega_{s,t} \mid s = -S,\dots,S;\ t = -T,\dots,T\} is a symmetric Gaussian filter, the local image block has width 2S and height 2T, and both S and T take the value 16; then the interval [-2, 2] is divided evenly into 20 subintervals with step 0.2, and the number of normalization coefficients falling in each subinterval forms a 20-dimensional low-level visual feature vector.
4. The method for evaluating image quality according to any of claims 1 to 3, wherein in step S2, the input image is sparsely represented, a representation residual is then calculated, and a segment of the residual distribution is taken as the high-level visual feature vector describing image quality variation.
5. The method for evaluating image quality by combining statistical features of low-level vision and high-level vision according to claim 4, wherein said step S2 specifically comprises: sparsely representing the image; for an image I, image blocks are first extracted and each block is sparsely represented; supposing an extracted image block is x_k, the extraction process is represented as:

x_k = R_k(I)

wherein R_k(\cdot) is an image-block extraction operator that extracts the image block at position k, with k = 1, 2, 3, \dots;

for the image block x_k, its sparse representation over the dictionary D is the sparse vector \alpha_k, most elements of \alpha_k being 0 or close to 0, satisfying:

\hat{\alpha}_k = \arg\min_{\alpha_k}\ \|x_k - D\alpha_k\|_2^2 + \lambda\|\alpha_k\|_p

wherein the first term is a fidelity term, the second term is a sparsity constraint, \lambda is a constant balancing the proportion of the two terms, and p is 0 or 1; p is preferably set to 1 so that the above expression becomes a convex optimization problem, which is solved with the Orthogonal Matching Pursuit (OMP) algorithm to obtain the sparse representation coefficients \hat{\alpha}_k of the image block x_k, so that x_k is sparsely represented as \hat{x}_k = D\hat{\alpha}_k; the sparse representation of the entire image I is:

I' = \sum_k R_k^{\top}\big(D\hat{\alpha}_k\big)

wherein I' represents the sparse representation of image I; the sparse-representation residual is then calculated, the interval (-50, 50) is divided into 100 equal subintervals of width 1, and the number of residual values falling in each subinterval is taken as a feature, giving a 100-dimensional high-level visual feature vector.
6. The method for evaluating image quality according to any of claims 1 to 5, wherein in step S3, an artificial neural network with a four-layer structure is designed, comprising three hidden layers and a linear regression layer, and then the network is trained to obtain a model for mapping image characteristics to image quality, so as to use the model to predict the image quality.
7. The method for evaluating image quality by combining statistical features of low-level vision and high-level vision according to claim 6, wherein said step S3 specifically comprises: designing a neural network comprising four layers, namely three hidden layers and one linear regression layer, the bottom layer of the network taking the extracted image features f_1, f_2, \dots, f_n as input and the network outputting the image quality, the sizes of the three hidden layers being 200, 40, and 6, respectively;

training the designed network in three steps, wherein in the first step each hidden layer is pre-trained by an unsupervised method as a sparse autoencoder, each sparse autoencoder is trained with the L-BFGS algorithm, the number of iterations is set to 1000, the sigmoid function is used as the activation function, and the loss function for training each layer is:

J(W,b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\big\|h_{W,b}(x^{(i)}) - x^{(i)}\big\|^{2} + \frac{\gamma}{2}\|W\|_{2}^{2} + \beta\sum_{j}\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big)

wherein

\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}

and

\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j\big(x^{(i)}\big)

wherein W represents the network weights, b represents the hidden-layer biases, h_{W,b}(\cdot) is the output of each neuron of the hidden layer, \hat{\rho}_j is the average activation value of hidden unit j (with a_j(x^{(i)}) its activation on the i-th training sample), \rho is the expected average activation value, \rho is set to 0.1, \beta is set to 3, and \gamma is a weight-decay parameter set to 0.0001; the training loss function of the linear regression layer is:

L = \|Y - \mathrm{Label}\|_{2}^{2}

wherein Y represents the output of the linear regression layer and Label represents the subjective quality score of the image; after training is completed, for a new image, its features are extracted and input into the network, and the network outputs its quality score.
8. An image quality evaluation apparatus combining low-level vision and high-level vision statistical features, comprising a computer-readable storage medium and a processor, wherein the computer-readable storage medium stores an executable program, and wherein the executable program, when executed by the processor, implements the image quality evaluation method combining low-level vision and high-level vision statistical features according to any one of claims 1 to 7.
9. A computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the method for evaluating image quality by combining low-level vision and high-level vision statistical features according to any one of claims 1 to 7.
CN202010112724.0A 2020-02-24 2020-02-24 Image quality evaluation method combining low-level vision and high-level vision statistical characteristics Withdrawn CN111325733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010112724.0A CN111325733A (en) 2020-02-24 2020-02-24 Image quality evaluation method combining low-level vision and high-level vision statistical characteristics


Publications (1)

Publication Number Publication Date
CN111325733A true CN111325733A (en) 2020-06-23

Family

ID=71165226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010112724.0A Withdrawn CN111325733A (en) 2020-02-24 2020-02-24 Image quality evaluation method combining low-level vision and high-level vision statistical characteristics

Country Status (1)

Country Link
CN (1) CN111325733A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669270A (en) * 2020-12-21 2021-04-16 北京金山云网络技术有限公司 Video quality prediction method and device and server
CN117611516A (en) * 2023-09-04 2024-02-27 北京智芯微电子科技有限公司 Image quality evaluation, face recognition, label generation and determination methods and devices


Similar Documents

Publication Publication Date Title
EP3968179A1 (en) Place recognition method and apparatus, model training method and apparatus for place recognition, and electronic device
EP3084682B1 (en) System and method for identifying faces in unconstrained media
Huang et al. Identification of the source camera of images based on convolutional neural network
CN111954250B (en) Lightweight Wi-Fi behavior sensing method and system
CN111325733A (en) Image quality evaluation method combining low-level vision and high-level vision statistical characteristics
CN112883231B (en) Short video popularity prediction method, system, electronic equipment and storage medium
CN111311595A (en) No-reference quality evaluation method for image quality and computer readable storage medium
He et al. A visual residual perception optimized network for blind image quality assessment
CN108830829B (en) Non-reference quality evaluation algorithm combining multiple edge detection operators
CN111694977A (en) Vehicle image retrieval method based on data enhancement
CN110717423A (en) Training method and device for emotion recognition model of facial expression of old people
Ji et al. Blind image quality assessment with semantic information
CN113810683B (en) No-reference evaluation method for objectively evaluating underwater video quality
CN114119560A (en) Image quality evaluation method, system, and computer-readable storage medium
CN112614110A (en) Method and device for evaluating image quality and terminal equipment
CN110796177B (en) Method for effectively reducing neural network overfitting in image classification task
CN104598866A (en) Face-based social intelligence promotion method and system
CN111354048A (en) Quality evaluation method and device for camera-oriented acquired pictures
Li et al. Learning a blind quality evaluator for UGC videos in perceptually relevant domains
WO2024066927A1 (en) Training method and apparatus for image classification model, and device
CN112801058B (en) UML picture identification method and system
Zhao et al. Research on No-reference Image Quality Assessment Algorithm Based on Generative Adversarial Networks
Khaing et al. Convolutional Neural Network for Blind Image Quality Assessment
Aydi An Image Quality Assessment Method based on Sparse Neighbor Significance.
Lv et al. Underwater Image Enhancement Based on Shallow Underwater Neural Network

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20200623