CN112767311A - Non-reference image quality evaluation method based on convolutional neural network - Google Patents

Non-reference image quality evaluation method based on convolutional neural network

Info

Publication number
CN112767311A
CN112767311A (application CN202011625910.0A)
Authority
CN
China
Prior art keywords
map
network
image
distortion
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011625910.0A
Other languages
Chinese (zh)
Inventor
颜成钢 (Yan Chenggang)
陈子阳 (Chen Ziyang)
张继勇 (Zhang Jiyong)
孙垚棋 (Sun Yaoqi)
张勇东 (Zhang Yongdong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2020-12-31
Filing date: 2020-12-31
Publication date: 2021-05-07
Application filed by Hangzhou Dianzi University
Priority to CN202011625910.0A
Publication of CN112767311A
Legal status: Withdrawn

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F 18/2411 — Pattern recognition; classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/462 — Extraction of image or video features; salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/60 — Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06T 2207/20081 — Indexing scheme for image analysis or image enhancement; training; learning
    • G06T 2207/20084 — Indexing scheme for image analysis or image enhancement; artificial neural networks [ANN]
    • G06T 2207/30168 — Indexing scheme for image analysis or image enhancement; image quality inspection

Abstract

The invention discloses a no-reference image quality evaluation method based on a convolutional neural network. First, a distortion map and a natural map are preprocessed to obtain a similarity map, and a neural network is then constructed from the distortion map and the similarity map. Based on the adversarial generation concept of the GAN framework, the generator part integrates the skip-connection characteristic of the U-net framework and the denseblock structure of the DenseNet framework; the discriminator part is a simple classification network. Finally, the constructed neural network is trained. By drawing on and combining the characteristics of the GAN, U-net and DenseNet networks, the method constructs a more effective neural network and realizes image-to-image conversion and migration more effectively: it not only yields better image-to-image results, but its predicted quality scores also correlate strongly with the true quality scores and show small errors.

Description

Non-reference image quality evaluation method based on convolutional neural network
Technical Field
The invention belongs to the field of image processing. It presents an image quality evaluation method and relates to the application of generative adversarial networks, a deep-learning technique, to image quality evaluation.
Background
Nowadays, with the rapid development of internet and communication technology, digital images have become an important medium for information transfer in people's daily lives. According to statistics, since 2011 the total number of digital photographs produced worldwide has reached hundreds of billions, and this number has grown year by year. However, images are susceptible to many kinds of distortion during acquisition, storage, compression and transmission, which degrades image quality. How to evaluate image quality accurately and reliably has therefore become an important research hotspot. Since most images are ultimately viewed by people, the most reliable approach is subjective quality evaluation, in which organized viewers score image quality according to their own experience. As the number of images grows, however, subjective quality evaluation becomes difficult to carry out and cannot be applied in real-time image processing systems. Researchers have therefore proposed objective quality evaluation methods, which assess image quality by means of designed objective algorithms.
Existing objective quality evaluation methods are classified into three categories, depending on whether the original image is referenced: full-reference, partial-reference and no-reference quality evaluation methods. Although a large number of methods have been proposed in each of the three categories, objective quality evaluation research is still not mature, which shows mainly in the following respects. First, because the human visual perception mechanism is not yet deeply understood, existing objective methods based on measuring signal distortion cannot accurately simulate subjective quality evaluation. Second, in the design of no-reference quality evaluation methods, most methods still need subjective quality scores to train the quality evaluation model. Third, existing objective algorithms still perform poorly when evaluating distorted images from real scenes. Establishing an objective image quality evaluation mechanism capable of accurately reflecting the subjective perception of the human eye therefore has far-reaching significance. In recent years, relevant research organizations have conducted intensive research on planar image quality evaluation algorithms, producing evaluation indexes such as peak signal-to-noise ratio (PSNR), mean square error (MSE) and structural similarity (SSIM). However, images involve more factors than such indexes capture, such as depth maps.
In recent years, deep learning has become a research hotspot in machine learning and neural networks. Deep learning can imitate the deep, hierarchical way the human brain processes data to obtain layered feature representations of the internal structure and relations of raw data, so that the learned network parameters accord with the brain's processing results, which improves the stability and generalization ability of the trained network to a certain extent.
Most existing no-reference quality evaluation methods assume that subjective quality scores are known: they usually require a large number of training sample images and the corresponding subjective scores to train a quality prediction model. By contrast, no-reference methods that do not rely on subjective quality scores are still few, and their performance cannot yet match that of the score-supervised methods.
Disclosure of Invention
The invention aims to provide a no-reference image quality evaluation method based on a convolutional neural network that addresses the deficiencies of the prior art.
The method is built on the adversarial generation concept of the GAN framework. In the generator part, the skip-connection characteristic of the U-net framework and the denseblock structure of the DenseNet framework are integrated, which greatly improves the feature extraction capability of the original framework. In the discriminator part, the network judges images block by block. The loss function combines a cross-entropy term with an L1-norm loss. Generator and discriminator models are then trained iteratively until both perform well. Finally, inputting a distorted picture into the trained generator yields a high-quality similarity map of the target picture.
A no-reference image quality evaluation method based on a convolutional neural network comprises the following steps:
Step 1: preprocess the distortion map and the natural map to obtain a similarity map.
Step 2: construct a neural network from the existing distortion map X and the similarity map Z, i.e., SSIM_MAP.
Based on the adversarial generation concept of the GAN framework, the generator integrates the skip-connection characteristic of the U-net framework and the denseblock structure of the DenseNet framework, improving the feature extraction capability of the original framework. The discriminator is a simple classification network.
Step 3: train the constructed neural network.
the invention has the beneficial effects that:
firstly, the method belongs to the category of no-reference quality evaluation. By using the trained neural network framework, the quality of the distorted image can be evaluated under the condition of no natural image (original image).
Under the condition that the no-reference quality evaluation method generally performs image feature extraction based on SVR (support vector machine), the method performs feature extraction by adopting a more effective hybrid neural network.
Under the condition that the discriminator usually discriminates the whole graph, the method adopts a more effective block discrimination method, so that the training speed is higher and the experimental effect is better.
The method respectively draws and combines the characteristics of the GAN network, the U-net network and the densenet network, and constructs a more effective neural network. From the results, it does more efficiently implement graph-to-graph conversion and migration. The experimental results not only have better results in the graph-to-graph implementation, but also the simulated mass fraction has strong correlation with the real mass fraction and has smaller error.
Drawings
Fig. 1 is a schematic diagram of a network structure according to the present invention.
Detailed Description
The present invention is further described below.
A no-reference image quality evaluation method based on a convolutional neural network comprises the following specific implementation steps:
Step 1: preprocess the distortion map and the natural map to obtain a similarity map.
1-1. Calculate the luminance comparison l(x, y):
For the distortion map X and the natural map Y, let μ_x denote the luminance information of the distortion map X and μ_y the luminance information of the natural map Y:

μ_x = (1/N) Σ_{i=1..N} x_i,   μ_y = (1/N) Σ_{i=1..N} y_i

wherein x_i, y_i are the pixel point values of the images.
The luminance comparison of the distortion map X and the natural map Y can be expressed as:

l(x, y) = (2 μ_x μ_y + C1) / (μ_x² + μ_y² + C1)

wherein C1 is an extremely small number set to prevent the denominator from being 0.
1-2. Calculate the contrast comparison c(x, y):
Let σ_x denote the contrast information of the distortion map X and σ_y the contrast information of the natural map Y:

σ_x = sqrt( (1/(N−1)) Σ_{i=1..N} (x_i − μ_x)² ),   σ_y = sqrt( (1/(N−1)) Σ_{i=1..N} (y_i − μ_y)² )

The contrast comparison of the distortion map X and the natural map Y is expressed as:

c(x, y) = (2 σ_x σ_y + C2) / (σ_x² + σ_y² + C2)

wherein C2 is an extremely small number set to prevent the denominator from being 0.
1-3. Calculate the structure comparison s(x, y):
Introduce the correlation σ_xy:

σ_xy = (1/(N−1)) Σ_{i=1..N} (x_i − μ_x)(y_i − μ_y)

The structure comparison of the distortion map X and the natural map Y can be expressed as:

s(x, y) = (σ_xy + C3) / (σ_x σ_y + C3)

wherein C3 is an extremely small number set to prevent the denominator from being 0.
1-4. Calculate the similarity map from the obtained luminance, contrast and structure comparisons:

SSIM_MAP(x, y) = l(x, y)^a · c(x, y)^b · s(x, y)^c

where a, b, c are the weights of the luminance, contrast and structure terms.
The quality score MSSIM of the distortion map X may be found from SSIM_MAP:

MSSIM = mean(SSIM_MAP)

where mean(·) is the averaging operation.
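For illustration, the following is a minimal NumPy/SciPy sketch of this step-1 preprocessing. It is a sketch under assumptions, not the patent's implementation: the statistics are computed over local windows (so that a per-pixel SSIM_MAP results), the images are taken to be single-channel float arrays in [0, 1], the weights are fixed at a = b = c = 1, and the constants C1, C2, C3 follow common SSIM conventions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, win=11, C1=1e-4, C2=9e-4, C3=None):
    """Similarity map of distortion map x against natural map y.
    x, y: 2-D float arrays in [0, 1] (single channel, an assumption here)."""
    if C3 is None:
        C3 = C2 / 2.0                               # common convention, assumed
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)   # local luminance
    var_x = uniform_filter(x * x, win) - mu_x ** 2                # local variance
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov = uniform_filter(x * y, win) - mu_x * mu_y                # local correlation
    sd_x = np.sqrt(np.maximum(var_x, 0.0))
    sd_y = np.sqrt(np.maximum(var_y, 0.0))
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)     # l(x, y)
    c = (2 * sd_x * sd_y + C2) / (var_x + var_y + C2)             # c(x, y)
    s = (cov + C3) / (sd_x * sd_y + C3)                           # s(x, y)
    return l * c * s                                # SSIM_MAP with a = b = c = 1

def mssim(x, y):
    """Quality score of the distortion map: MSSIM = mean(SSIM_MAP)."""
    return float(np.mean(ssim_map(x, y)))
```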
Step 2: construct a neural network from the existing distortion map X and the similarity map Z, i.e., SSIM_MAP.
The specific operation is as follows:
2-1. Generator network:
(1) The distortion map X, of size 256 × 256 with 3 channels, is used as input.
(2) Features are extracted from the distortion map X by denseblock structures followed by convolutional layers, yielding a feature map of size 1 × 1 with 512 channels.
a. First, one convolutional layer extracts from the distortion map a low-level feature map of size 128 × 128 with 64 channels.
b. The features of the low-level feature map are then further extracted by 7 denseblock stages; one convolutional layer follows each denseblock structure to reduce the feature map size and thereby distill the feature information. Each denseblock consists of 4 convolutional layers and only refines the features, changing neither the size nor the channel count of the feature map; size and channel count change only at the intervening convolutional layers. The specific dimensions are shown in fig. 1.
(3) The fake similarity map Z' is recovered by 8 deconvolution layers; Z' has a size of 256 × 256 and 3 channels. The feature map size of each layer is shown in the drawing.
c. Each convolutional layer has a corresponding deconvolution layer and is connected to it by a skip connection, so the intermediate feature map is passed to the deconvolution layer as input information for restoring the target picture. There are thus 7 skip connections in total, and the 7 deconvolution layers output an information map of size 128 × 128 with 128 channels.
d. Finally, one further layer turns the information map into the target picture of size 256 × 256 with 3 channels, namely the fake similarity map Z'.
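A condensed PyTorch sketch of this generator is given below for illustration. Since the exact per-layer sizes are specified only in fig. 1, the channel widths (enc_w, dec_w), the growth rate, the activations and the output scaling are assumptions rather than the patent's specification; only the stated shape of the architecture is reproduced: one head convolution (256 → 128), 7 denseblock-plus-convolution stages down to 1 × 1 × 512, and 8 deconvolution layers with 7 skip connections back up to a 256 × 256 × 3 map.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Four densely connected convolutions; as stated in step b, the block only
    refines features and leaves the size and channel count of its input
    unchanged (the growth rate of 32 is an assumption)."""
    def __init__(self, ch, growth=32):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch + i * growth, growth, 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(4))
        self.fuse = nn.Conv2d(ch + 4 * growth, ch, 1)    # back to `ch` channels

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        enc_w = [64, 128, 256, 512, 512, 512, 512, 512]  # e0..e7, assumed widths
        self.head = nn.Conv2d(3, enc_w[0], 4, 2, 1)      # 256x256x3 -> 128x128x64
        self.down = nn.ModuleList(                       # 7 denseblock+conv stages
            nn.Sequential(DenseBlock(enc_w[i]), nn.LeakyReLU(0.2, True),
                          nn.Conv2d(enc_w[i], enc_w[i + 1], 4, 2, 1))
            for i in range(7))                           # ends at 1x1x512
        dec_w = [512, 512, 512, 512, 256, 128, 128]      # d1..d7, assumed widths
        ins = [enc_w[7]] + [dec_w[i] + enc_w[6 - i] for i in range(6)]
        self.up = nn.ModuleList(                         # 7 deconvs, each fed a skip
            nn.Sequential(nn.ReLU(True),
                          nn.ConvTranspose2d(ins[i], dec_w[i], 4, 2, 1))
            for i in range(7))                           # ends at 128x128x128
        self.tail = nn.ConvTranspose2d(dec_w[6] + enc_w[0], 3, 4, 2, 1)  # -> 256x256x3

    def forward(self, x):
        skips, h = [], self.head(x)
        for stage in self.down:
            skips.append(h)                              # keep e0..e6 for the skips
            h = stage(h)
        for i, stage in enumerate(self.up):
            h = torch.cat([stage(h), skips[6 - i]], dim=1)   # skip connection
        return torch.tanh(self.tail(h))    # fake similarity map Z' (scaling assumed)
```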
2-2. Discriminator network
(1) The distortion map X is concatenated with the true similarity map Z to form a new picture X-Z, and with the fake similarity map Z' to form X-Z'. Both X-Z and X-Z' have a size of 256 × 256 and 6 channels, and both serve as inputs to the discriminator network.
(2) The input first passes through 6 convolutional layers, which turn the 256 × 256 × 6 input picture into a 4 × 4 × 1 patch map; each pixel value of this map corresponds to a 64 × 64 image block of the input picture.
(3) Each pixel value of the 4 × 4 × 1 patch map is an output prediction label; the overall prediction label is determined by the average of the 16 pixel values.
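Continuing the sketch above, one possible form of this block-wise discriminator follows; the channel widths are again assumed, and only the stated facts are taken from the text (256 × 256 × 6 input, six convolutional layers, 4 × 4 × 1 output, averaged overall label).

```python
class Discriminator(nn.Module):
    """Six stride-2 convolutions map the 256x256x6 concatenated pair to a
    4x4x1 patch map; per the text, each of the 16 outputs judges one 64x64
    block of the input, and their average is the overall prediction label."""
    def __init__(self):
        super().__init__()
        widths = [6, 64, 128, 256, 512, 512]             # assumed widths
        layers = []
        for cin, cout in zip(widths[:-1], widths[1:]):   # 5 convs: 256 -> 8
            layers += [nn.Conv2d(cin, cout, 4, 2, 1), nn.LeakyReLU(0.2, True)]
        layers += [nn.Conv2d(widths[-1], 1, 4, 2, 1), nn.Sigmoid()]  # 8 -> 4x4x1
        self.net = nn.Sequential(*layers)

    def forward(self, x, z):
        patches = self.net(torch.cat([x, z], dim=1))     # input is X-Z or X-Z'
        return patches, patches.mean(dim=(1, 2, 3))      # per-block + overall label
```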
Step 3: train the constructed neural network.
G(·) represents the generator and D(·) represents the discriminator.
3-1. Discriminator training process.
When the discriminator parameters are trained, the generator parameters are kept fixed and only the discriminator parameters participate in the iterative updates. The discriminator loss function L_D is defined as:

L_D = (1/N) Σ_{i=1..N} [ log D(x_i, z_i) + log(1 − D(x_i, G(x_i))) ]

Because the discriminator is being trained, D(x_i, z_i) should output 'true', i.e. the larger the better, while D(x_i, G(x_i)) should output 'false', i.e. the smaller the better. Training of the discriminator is completed by maximizing the loss function L_D.
3-2. Generator training process.
When the generator parameters are trained, the discriminator parameters are kept fixed and only the generator parameters participate in the iterative updates. The generator loss function L_G is defined as:

L_G = L_GAN + λ L_L1

where λ is a weighting coefficient,

L_GAN = (1/N) Σ_{i=1..N} log(1 − D(x_i, G(x_i)))

represents the loss function of a conventional GAN network, and

L_L1 = (1/N) Σ_{i=1..N} ‖ z_i − G(x_i) ‖₁

represents the conventional L1 loss function. Because the generator is being trained, D(x_i, G(x_i)) in L_GAN should output 'true', i.e. the larger the better, while the smaller the L1 loss L_L1, the better. Training of the generator is completed by minimizing the loss function L_G.
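One alternating training iteration can then be sketched as follows. Binary cross-entropy is used for the adversarial terms (minimizing it corresponds to the maximization and minimization described above, with the common non-saturating form for the generator), and the L1 weight λ = 100 is an assumed value, not taken from the patent.

```python
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, x, z, lam=100.0):
    """x: batch of distortion maps, z: matching real similarity maps."""
    # 3-1: update the discriminator with the generator parameters fixed.
    with torch.no_grad():
        z_fake = G(x)                                    # fake similarity map Z'
    d_real, _ = D(x, z)                                  # should tend to 'true'
    d_fake, _ = D(x, z_fake)                             # should tend to 'false'
    loss_D = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 3-2: update the generator with the discriminator parameters fixed.
    z_fake = G(x)                                        # recompute with gradients
    d_fake, _ = D(x, z_fake)
    loss_G = (F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
              + lam * F.l1_loss(z_fake, z))              # L_GAN + lambda * L_L1
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```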
3-3. Result prediction.
The generator and the discriminator are trained iteratively, by maximizing or minimizing their respective objective loss functions, into the optimal generator G*(·) and the optimal discriminator D*(·).
Inputting a distorted picture into the trained optimal generator G*(·) then yields the desired similarity map.
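As a usage sketch (the checkpoint file name and the stand-in input are hypothetical), quality prediction with the trained generator reduces to a single forward pass; the mean of the predicted similarity map plays the role of the MSSIM score, on whatever scale the training targets were normalized to.

```python
G = Generator()
G.load_state_dict(torch.load("generator_best.pth", map_location="cpu"))  # hypothetical file
G.eval()
with torch.no_grad():
    x = torch.rand(1, 3, 256, 256)       # stand-in for a loaded distorted picture
    z_pred = G(x)                        # predicted similarity map
    score = z_pred.mean().item()         # MSSIM-style predicted quality score
```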

Claims (4)

1. A no-reference image quality evaluation method based on a convolutional neural network, characterized by comprising the following steps:
step 1: preprocessing the distortion map and the natural map to obtain a similarity map;
step 2: constructing a neural network from the existing distortion map X and the similarity map Z, i.e., SSIM_MAP;
based on the adversarial generation concept of the GAN framework, the generator integrates the skip-connection characteristic of the U-net framework and the denseblock structure of the DenseNet framework, improving the feature extraction capability of the original framework; the discriminator is a simple classification network;
step 3: training the constructed neural network.
2. The no-reference image quality evaluation method based on a convolutional neural network according to claim 1, characterized in that step 1 specifically operates as follows:
1-1. calculate the luminance comparison l(x, y):
for the distortion map X and the natural map Y, let μ_x denote the luminance information of the distortion map X and μ_y the luminance information of the natural map Y:

μ_x = (1/N) Σ_{i=1..N} x_i,   μ_y = (1/N) Σ_{i=1..N} y_i

wherein x_i, y_i are the pixel point values of the images;
the luminance comparison of the distortion map X and the natural map Y can be expressed as:

l(x, y) = (2 μ_x μ_y + C1) / (μ_x² + μ_y² + C1)

wherein C1 is an extremely small number set to prevent the denominator from being 0;
1-2. calculate the contrast comparison c(x, y):
let σ_x denote the contrast information of the distortion map X and σ_y the contrast information of the natural map Y:

σ_x = sqrt( (1/(N−1)) Σ_{i=1..N} (x_i − μ_x)² ),   σ_y = sqrt( (1/(N−1)) Σ_{i=1..N} (y_i − μ_y)² )

the contrast comparison of the distortion map X and the natural map Y is expressed as:

c(x, y) = (2 σ_x σ_y + C2) / (σ_x² + σ_y² + C2)

wherein C2 is an extremely small number set to prevent the denominator from being 0;
1-3. calculate the structure comparison s(x, y):
introduce the correlation σ_xy:

σ_xy = (1/(N−1)) Σ_{i=1..N} (x_i − μ_x)(y_i − μ_y)

the structure comparison of the distortion map X and the natural map Y can be expressed as:

s(x, y) = (σ_xy + C3) / (σ_x σ_y + C3)

wherein C3 is an extremely small number set to prevent the denominator from being 0;
1-4. calculate the similarity map from the obtained luminance, contrast and structure comparisons:

SSIM_MAP(x, y) = l(x, y)^a · c(x, y)^b · s(x, y)^c

wherein a, b, c are the weights of the luminance, contrast and structure terms;
the quality score MSSIM of the distortion map X may be found from SSIM_MAP:

MSSIM = mean(SSIM_MAP)

where mean(·) is the averaging operation.
3. The no-reference image quality evaluation method based on a convolutional neural network according to claim 2, characterized in that step 2 specifically operates as follows:
2-1. generator network:
(1) the distortion map X, of size 256 × 256 with 3 channels, is used as input;
(2) features are extracted from the distortion map X by denseblock structures followed by convolutional layers, yielding a feature map of size 1 × 1 with 512 channels;
a. first, one convolutional layer extracts from the distortion map a low-level feature map of size 128 × 128 with 64 channels;
b. the features of the low-level feature map are then further extracted by 7 denseblock stages, with one convolutional layer after each denseblock structure to reduce the feature map size and thereby distill the feature information; each denseblock consists of 4 convolutional layers and only refines the features, changing neither the size nor the channel count of the feature map; size and channel count change only at the intervening convolutional layers;
(3) the fake similarity map Z' is recovered by 8 deconvolution layers; Z' has a size of 256 × 256 and 3 channels;
c. each convolutional layer has a corresponding deconvolution layer and is connected to it by a skip connection, so the intermediate feature map is passed to the deconvolution layer as input information for restoring the target picture; there are thus 7 skip connections in total, and the 7 deconvolution layers output an information map of size 128 × 128 with 128 channels;
d. finally, one further layer turns the information map into the target picture of size 256 × 256 with 3 channels, namely the fake similarity map Z';
2-2. discriminator network
(1) the distortion map X is concatenated with the true similarity map Z to form a new picture X-Z, and with the fake similarity map Z' to form X-Z'; both X-Z and X-Z' have a size of 256 × 256 and 6 channels, and both serve as inputs to the discriminator network;
(2) the input first passes through 6 convolutional layers, which turn the 256 × 256 × 6 input picture into a 4 × 4 × 1 patch map, each pixel value of which corresponds to a 64 × 64 image block of the input picture;
(3) each pixel value of the 4 × 4 × 1 patch map is an output prediction label; the overall prediction label is determined by the average of the 16 pixel values.
4. The no-reference image quality evaluation method based on a convolutional neural network according to claim 3, characterized in that step 3 specifically operates as follows:
G(·) represents the generator and D(·) represents the discriminator;
3-1. discriminator training process;
when the discriminator parameters are trained, the generator parameters are kept fixed and only the discriminator parameters participate in the iterative updates; the discriminator loss function L_D is defined as:

L_D = (1/N) Σ_{i=1..N} [ log D(x_i, z_i) + log(1 − D(x_i, G(x_i))) ]

because the discriminator is being trained, D(x_i, z_i) should output 'true', i.e. the larger the better, while D(x_i, G(x_i)) should output 'false', i.e. the smaller the better; training of the discriminator is completed by maximizing the loss function L_D;
3-2. generator training process;
when the generator parameters are trained, the discriminator parameters are kept fixed and only the generator parameters participate in the iterative updates; the generator loss function L_G is defined as:

L_G = L_GAN + λ L_L1

where λ is a weighting coefficient,

L_GAN = (1/N) Σ_{i=1..N} log(1 − D(x_i, G(x_i)))

represents the loss function of a conventional GAN network, and

L_L1 = (1/N) Σ_{i=1..N} ‖ z_i − G(x_i) ‖₁

represents the conventional L1 loss function; because the generator is being trained, D(x_i, G(x_i)) in L_GAN should output 'true', i.e. the larger the better, while the smaller the L1 loss L_L1, the better; training of the generator is completed by minimizing the loss function L_G;
3-3. result prediction;
the generator and the discriminator are trained iteratively, by maximizing or minimizing their respective objective loss functions, into the optimal generator G*(·) and the optimal discriminator D*(·);
inputting a distorted picture into the trained optimal generator G*(·) yields the desired similarity map.
CN202011625910.0A — filed 2020-12-31, priority 2020-12-31 — Non-reference image quality evaluation method based on convolutional neural network — Withdrawn — CN112767311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625910.0A CN112767311A (en) 2020-12-31 2020-12-31 Non-reference image quality evaluation method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011625910.0A CN112767311A (en) 2020-12-31 2020-12-31 Non-reference image quality evaluation method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN112767311A (en) — 2021-05-07

Family

ID=75698982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625910.0A Withdrawn CN112767311A (en) 2020-12-31 2020-12-31 Non-reference image quality evaluation method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112767311A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114371442A (en) * 2022-01-05 2022-04-19 哈尔滨工程大学 Underwater DOA estimation method of U-net neural network based on DenseBlock


Similar Documents

Publication Publication Date Title
Ying et al. From patches to pictures (PaQ-2-PiQ): Mapping the perceptual space of picture quality
Niu et al. 2D and 3D image quality assessment: A survey of metrics and challenges
CN108391121B (en) No-reference stereo image quality evaluation method based on deep neural network
Yang et al. Predicting stereoscopic image quality via stacked auto-encoders based on stereopsis formation
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
Yue et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry
CN113554599B (en) Video quality evaluation method based on human visual effect
CN111047543A (en) Image enhancement method, device and storage medium
CN114066747A (en) Low-illumination image enhancement method based on illumination and reflection complementarity
Niu et al. Siamese-network-based learning to rank for no-reference 2D and 3D image quality assessment
CN112950480A (en) Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention
CN116309178A (en) Visible light image denoising method based on self-adaptive attention mechanism network
CN112767311A (en) Non-reference image quality evaluation method based on convolutional neural network
CN111127386B (en) Image quality evaluation method based on deep learning
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN111127587B (en) Reference-free image quality map generation method based on countermeasure generation network
CN111127392B (en) No-reference image quality evaluation method based on countermeasure generation network
CN114821174B (en) Content perception-based transmission line aerial image data cleaning method
Liu et al. Progressive knowledge transfer based on human visual perception mechanism for perceptual quality assessment of point clouds
CN110020986A (en) The single-frame image super-resolution reconstruction method remapped based on Euclidean subspace group two
CN110858304A (en) Method and equipment for identifying identity card image
CN113628121B (en) Method and device for processing and training multimedia data
CN112529866A (en) Remote operation and maintenance reference-free video quality evaluation method based on deep transmission CNN structure
CN113469998B (en) Full-reference image quality evaluation method based on subjective and objective feature fusion
CN117408893B (en) Underwater image enhancement method based on shallow neural network

Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
WW01 — Invention patent application withdrawn after publication (application publication date: 2021-05-07)