CN108257131A - A 3D image quality evaluation method - Google Patents

A 3D image quality evaluation method

Info

Publication number
CN108257131A
Authority
CN
China
Legal status
Pending
Application number
CN201810158525.6A
Other languages
Chinese (zh)
Inventor
胡新平
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN201810158525.6A priority Critical patent/CN108257131A/en
Publication of CN108257131A publication Critical patent/CN108257131A/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20228 Disparity calculation for image-based rendering

Abstract

The present invention proposes a 3D image quality evaluation method comprising the following steps. Step 1): extract natural-scene features of the 2D images in the wavelet domain and predict the distortion type of the left and right views of the 3D image; extract the energy-spectrum statistical features of the disparity map and predict the distortion type of the 3D image disparity map; classify the distortion type of the 3D image with a deep belief network. Step 2): establish the mapping between the statistical features and image quality to obtain the quality of the 3D image. Advantageous effect: compared with existing 3D image algorithms, the algorithm proposed by the technical scheme is more consistent with subjective quality assessment and generalizes better.

Description

A 3D image quality evaluation method
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to a 3D image quality evaluation method.
Background art
Vision is one of the main channels through which people obtain external information. With the rapid development of 3D film and display devices, 3D video and images are becoming one of the most popular multimedia forms in daily life. Owing to bandwidth limits and the physical characteristics of devices, distortions of different types and degrees are introduced during the acquisition, transmission and display of 3D images (Zhang Yan, An Ping, Zhang Qiuwen, et al. Binocular stereoscopic video minimum discernible distortion model and its application in quality assessment [J]. Journal of Electronics & Information Technology, 2012, 34(3): 698-703), so the 3D video and image quality people receive often fails to meet their needs. Quality-evaluation indices that account for stereoscopic 3D perception are therefore required. Stereoscopic image quality assessment (IQA) is an important research topic in image processing: it can be used to monitor image quality in real-time systems, and it also serves as a benchmark for 3D image processing algorithms. Compared with a traditional 2D image, a 3D image consists of three parts: the left view, the right view and depth information. The quality evaluation of stereoscopic images is therefore much more complex than that of 2D images.
Depending on whether the depth information of the 3D image is considered during quality evaluation, existing 3D image quality evaluation algorithms fall into two classes (Shao F, Lin W, Gu S, et al. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics [J]. IEEE Transactions on Image Processing, 2013, 22(5): 1940-1953). The first class uses 2D metrics, i.e., traditional 2D image quality indices such as (Song Y, Yu M, Zheng K H, et al. New objective stereo image quality metric using human visual characteristics and phase congruency [J]. Advanced Materials Research, 2013, 816: 506-511), which predict the quality of 2D images well; the quality of the left and right views of the 3D image is obtained with such a 2D index and then combined into the quality of the 3D image. These algorithms are relatively simple, but because they ignore the depth information of the 3D image, their scores are inconsistent with human subjective quality. The second class takes the depth information peculiar to 3D images into account. Since binocular disparity provides depth information, depth is considered an important factor in 3D visual perception, so the second class assesses both the 2D image quality and the depth quality and combines the two scores into the score of the stereoscopic image. Zhang et al. (Zhang L, Tong M H, Marks T K, et al. SUN: A Bayesian framework for saliency using natural statistics [J]. Journal of Vision, 2008, 8(7): 1-20) compute image quality with the 2D metrics C4 and structural similarity (SSIM) and combine the two qualities to obtain the 3D image quality. You et al. (You J, Korhonen J, Perkis A. Spatial and temporal pooling of image quality metrics for perceptual video quality assessment on packet loss streams [C]. 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010: 1002-1005) tried many 2D metrics on stereoscopic images and proposed a model that combines the SSIM score of a 2D view with the absolute difference of the average disparity map to obtain the 3D image quality. These indices achieve good results for symmetrically distorted 3D images but are unsatisfactory for asymmetrically distorted ones. To improve subjective consistency for both symmetric and asymmetric distortion, this application presents an evaluation algorithm based on the 3D image distortion type: it first predicts the distortion type, symmetric or asymmetric, from the natural-scene-statistics features of the 3D image, then establishes the mapping between each distortion type and image quality, and combines the quality scores of the two distortion classes to obtain the image quality.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art by providing a no-reference 3D image quality evaluation algorithm based on natural-scene-statistics features, which is realized by the following technical scheme:
The 3D image quality evaluation method comprises the following steps:
Step 1): extract natural-scene features of the 2D images in the wavelet domain and predict the distortion type of the left and right views of the 3D image; extract the energy-spectrum statistical features of the disparity map and predict the distortion type of the 3D image disparity map; classify the 3D image with a deep belief network;
Step 2): establish the mapping between the statistical features and image quality to obtain the quality of the 3D image.
A further design of the 3D image quality evaluation method is that in step 1) the left- and right-view distortion comprises five distortion types, the five distortion types being WN, FastFading, JPEG, JPEG2000 and Blur; the disparity-map distortion type is predicted by extracting the energy-spectrum statistical features of the disparity map and comprises two distortion types, the two distortion types being symmetric distortion and asymmetric distortion.
A further design of the 3D image quality evaluation method is that in step 2) the quality of the 3D image is obtained according to formula (1), wherein p_i denotes the probability of symmetric distortion, p_ij the probability of asymmetric distortion, q_ij the probability of each of the five distortion types, and Q_L, Q_R the quality of the left and right views.
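The body of formula (1) is an image in the original filing and did not survive text extraction, so the exact pooling rule is unknown. As a loudly hedged illustration only, a probability-weighted pooling consistent with the surrounding description (symmetric-distortion probability, asymmetric-distortion probability, five per-type probabilities, and left/right view qualities Q_L, Q_R) might be sketched as follows; `pool_3d_quality` and the min-pooling of the asymmetric branch are assumptions, not the patent's formula:

```python
import numpy as np

def pool_3d_quality(p_sym, p_asym, q, Q_L, Q_R):
    """Hypothetical pooling consistent with the description of formula (1).

    p_sym  : probability that the 3D image is symmetrically distorted
    p_asym : probability that it is asymmetrically distorted
    q      : length-5 array of per-distortion-type probabilities
             (WN, FastFading, JPEG, JPEG2000, Blur)
    Q_L, Q_R : length-5 arrays of left/right view quality scores
    """
    q = np.asarray(q, dtype=float)
    Q_L = np.asarray(Q_L, dtype=float)
    Q_R = np.asarray(Q_R, dtype=float)
    # Symmetric distortion: both views degrade alike, so average the views.
    Q_sym = np.sum(q * (Q_L + Q_R) / 2.0)
    # Asymmetric distortion: the worse view tends to dominate perception.
    Q_asym = np.sum(q * np.minimum(Q_L, Q_R))
    return p_sym * Q_sym + p_asym * Q_asym

score = pool_3d_quality(0.7, 0.3, [0.2] * 5, [0.8] * 5, [0.6] * 5)
```

Under symmetric distortion both views degrade alike, so averaging is a natural combiner; under asymmetric distortion binocular rivalry tends to let the worse view dominate, which the `min` models crudely.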
A further design of the 3D image quality evaluation method is that the deep belief network in step 1) comprises one visible layer, three hidden layers and one regression layer. The input of the visible layer is the NSS feature vector of the left or right view of the 3D image, and the visible layer contains 220 nodes; each hidden layer contains 100 nodes; the linear regression layer contains two nodes, representing symmetric and asymmetric distortion respectively. The joint probability distribution of the visible and hidden layers is obtained by formula (2):
p(X, h1, h2, h3) = p(X | h1) · p(h1 | h2) · p(h2, h3)   (2)
In the formula, X denotes the input left-image or right-image feature, and h1, h2, h3 denote the three hidden layers.
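For concreteness, the layer sizes above (a 220-node visible layer, three 100-node hidden layers, a 2-node regression output) can be wired into a minimal forward pass. The random initial weights below merely stand in for trained ones, and the softmax output is an illustrative stand-in for the patent's linear regression layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
layer_sizes = [220, 100, 100, 100]            # visible layer + three hidden layers
weights = [rng.normal(0, 0.01, (m, n))        # weight matrix between adjacent layers
           for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]
W_out = rng.normal(0, 0.01, (100, 2))         # 2 outputs: symmetric vs asymmetric
b_out = np.zeros(2)

def dbn_forward(x):
    """Propagate a 220-dim NSS feature vector through the layer stack."""
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)                # hidden-layer activations
    logits = h @ W_out + b_out
    p = np.exp(logits - logits.max())         # stable softmax
    return p / p.sum()                        # class probabilities

probs = dbn_forward(rng.normal(size=220))
```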
A further design of the 3D image quality evaluation method is that the deep belief network in step 1) needs to be tuned, as follows:
1) Unsupervised learning: the weights of each hidden layer are trained by a greedy learning method, layer by layer from the bottom up, to obtain the training result; the first layer is set as a second-order Gaussian model, and each layer is relatively independent;
2) Real-time supervised fine-tuning: the weights of each layer are adjusted in real time according to the training results;
3) Linear regression: through unsupervised learning and supervised fine-tuning, the weight of each layer is obtained, and a regression model between the output of the NSS features and the distortion type is established.
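The greedy, layer-by-layer unsupervised step in 1) is the standard restricted-Boltzmann-machine pretraining used for deep belief nets. A compact sketch of a one-step contrastive-divergence (CD-1) update for binary layers, illustrative rather than the patent's exact training code and omitting the second-order Gaussian first layer, could read:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm_cd1(data, n_hidden, epochs=10, lr=0.05, seed=0):
    """Greedy pretraining of one RBM layer with 1-step contrastive divergence."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    a = np.zeros(n_visible)                    # visible bias
    b = np.zeros(n_hidden)                     # hidden bias
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W + b)              # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = sigmoid(h0 @ W.T + a)             # reconstruction
        ph1 = sigmoid(v1 @ W + b)              # negative phase
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
        a += lr * (v0 - v1).mean(axis=0)
        b += lr * (ph0 - ph1).mean(axis=0)
    return W, b

# Stack layers greedily: each layer's hidden activations feed the next RBM.
X = (np.random.default_rng(1).random((64, 220)) > 0.5).astype(float)
W1, b1 = train_rbm_cd1(X, 100)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_rbm_cd1(H1, 100)
```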
A further design of the 3D image quality evaluation method is that the acquisition of the energy-spectrum statistical features comprises the following steps:
a) The combined image CI is defined by formula (3):
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R(x+d, y)·I_R(x+d, y)   (3)
where I_L and I_R denote the left and right views, W_L and W_R the corresponding weights, and d the disparity value;
b) W_L and W_R are obtained according to formula (4), where GE_L(x, y) and GE_R(x+d, y) denote the energy responses of the left and right views over all scales and orientations, and (x, y) denotes the position of the filter;
c) The gradient information of the image is obtained by formula (5), where G_x(x, y) denotes the horizontal gradient value, G_y(x, y) the vertical gradient value, G(x, y) the gradient magnitude and α(x, y) the gradient direction;
d) Finally, the monocular and binocular features of the stereo image are concatenated to obtain the left- and right-image features f_L and f_R: f_L = [g_L, c] and f_R = [g_R, c].
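Formulas (4) and (5) are images in the original filing, but steps a) and c) pin down enough to sketch: formula (3) as written, plus the standard gradient magnitude and direction that c) describes. A minimal numpy version, with the weights W_L and W_R taken as given inputs since formula (4)'s body is missing:

```python
import numpy as np

def cyclopean_image(I_L, I_R, W_L, W_R, d):
    """Formula (3): CI(x,y) = W_L(x,y)·I_L(x,y) + W_R(x+d,y)·I_R(x+d,y).

    d is a per-pixel integer horizontal disparity map, applied row by row.
    """
    H, W = I_L.shape
    xs = np.arange(W)
    CI = np.empty_like(I_L, dtype=float)
    for y in range(H):
        xr = np.clip(xs + d[y], 0, W - 1)     # x + d, clamped at the border
        CI[y] = W_L[y] * I_L[y] + W_R[y, xr] * I_R[y, xr]
    return CI

def gradient_features(img):
    """Step c): horizontal/vertical gradients, magnitude and direction."""
    Gy, Gx = np.gradient(img.astype(float))   # axis 0 = vertical, axis 1 = horizontal
    G = np.sqrt(Gx**2 + Gy**2)                # gradient magnitude
    alpha = np.arctan2(Gy, Gx)                # gradient direction
    return Gx, Gy, G, alpha

rng = np.random.default_rng(0)
I = rng.random((4, 6))
CI = cyclopean_image(I, I, np.full((4, 6), 0.5), np.full((4, 6), 0.5),
                     np.zeros((4, 6), dtype=int))
Gx, Gy, G, alpha = gradient_features(np.tile(np.arange(6.0), (4, 1)))
```

With equal weights of 0.5 and zero disparity, the combined image reduces to the input image, a quick sanity check on the weighting in formula (3).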
A further design of the 3D image quality evaluation method is that the natural-scene features of the 2D images are obtained by formula (6). In formula (6), W and H denote the width and height respectively, x takes positive integer values 1, 2, …, W, y takes positive integer values 1, 2, …, H, σ_θ,λ(x, y) denotes the standard deviation, S_θ,λ(x, y) the divergence, GE_θ,λ(x, y) the amplitude of the Gabor filter response, and μ_θ,λ(x, y) the mean of the response amplitude; λ denotes the wavelength and θ the central angle, which controls the orientation of the filter.
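Formula (6)'s body is likewise missing from the extraction, but the symbols it defines, namely the Gabor response amplitude GE_θ,λ and its mean μ_θ,λ and standard deviation σ_θ,λ over the W×H image, can be computed as follows. The kernel construction and its parameters (σ, kernel size) are illustrative assumptions, not the patent's:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(lam, theta, sigma=4.0, size=21):
    """Complex Gabor kernel with wavelength lam and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_nss_stats(img, lam, theta):
    """Mean and standard deviation of the Gabor response amplitude."""
    k = gabor_kernel(lam, theta)
    GE = np.abs(fftconvolve(img.astype(float), k, mode="same"))  # GE_{theta,lambda}
    return GE.mean(), GE.std()                       # mu and sigma over the image

# Feature vector over several orientations and wavelengths:
img = np.random.default_rng(0).random((64, 64))
feats = [gabor_nss_stats(img, lam, th)
         for lam in (4, 8) for th in np.deg2rad([0, 45, 90, 135])]
```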
The advantages of the present invention are as follows:
The 3D image quality evaluation method of the present invention divides 3D image distortion into symmetric and asymmetric distortion, then establishes the mapping relationship between natural-scene features (Natural Scene Statistics, hereinafter NSS features) and image quality to obtain the quality of the 3D image. Experimental results show that, compared with existing 3D image algorithms, the algorithm proposed by the technical scheme is more consistent with subjective quality assessment and generalizes better.
Description of the drawings
Fig. 1 is the 3D no-reference (NR) model framework of the present invention.
Fig. 2 is Deep Belief Networks (DBNs) structure diagram of the present invention.
Fig. 3 is a schematic comparison of the classification accuracy for the distortion types of the present invention.
Fig. 4 is the correlation diagram with subjective quality.
Specific embodiment
The technical scheme of the present invention is further illustrated below in conjunction with specific embodiments and the accompanying drawings.
As shown in Fig. 1, the 3D image quality evaluation method provided in this embodiment comprises the following steps:
Step 1): extract natural-scene features of the 2D images in the wavelet domain, predict the distortion type of the left and right views of the 3D image, and extract the energy-spectrum statistical features of the disparity map.
Step 2): establish the mapping between the statistical features and image quality to obtain the quality of the 3D image.
In step 1), the left- and right-view distortion comprises five distortion types, the five distortion types being WN, FastFading, JPEG, JPEG2000 and Blur; the disparity-map distortion type is predicted by extracting the energy-spectrum statistical features of the disparity map and comprises two distortion types, the two distortion types being symmetric distortion and asymmetric distortion.
In step 2), the quality of the 3D image is obtained according to formula (1), wherein p_i denotes the probability of symmetric distortion, p_ij the probability of asymmetric distortion, q_ij the probability of each of the five distortion types mentioned above, and Q_L, Q_R the quality of the left and right views.
In step 1), the joint probability distribution is given by formula (2):
p(X, h1, h2, h3) = p(X | h1) · p(h1 | h2) · p(h2, h3)   (2)
In the formula, X denotes the input left-image or right-image feature, and h1, h2, h3 denote the three hidden layers.
To improve the classification accuracy of the proposed algorithm, deep belief networks (Deep Belief Nets, hereinafter DBNs) are used in step 1) to classify the disparity-map distortion type. Experimental results show that the mean classification accuracy of the proposed algorithm reaches 86.6%, with a standard deviation of 2.88. The specific classification results are shown in Fig. 3, which shows that the DBNs classifier used here is more accurate than an SVM classifier. The DBNs of this embodiment comprise one visible layer, three hidden layers and one regression layer; the input of the visible layer is the NSS feature vector of the left or right view of the 3D image, and the visible layer contains 220 nodes; each hidden layer contains 100 nodes; the linear regression layer contains two nodes, representing symmetric and asymmetric distortion respectively. The joint probability distribution of the visible and hidden layers is obtained by formula (2).
The DBNs need to be tuned in step 1), as follows:
1) Unsupervised learning: each layer can be regarded as an RBM (restricted Boltzmann machine); the layer weights are trained by a greedy learning method, layer by layer from the bottom up; the first layer is set as a second-order Gaussian model, and each layer is relatively independent;
2) Real-time supervised fine-tuning: the weights of each layer are adjusted in real time according to the prediction results;
3) Linear regression: through unsupervised training and supervised fine-tuning, the weight of each layer is obtained, and a regression model between the output of the NSS features and the distortion type is established.
The acquisition of the 3D natural-scene features comprises the following steps:
a) The combined image CI is defined by formula (3):
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R(x+d, y)·I_R(x+d, y)   (3)
where I_L and I_R denote the left and right views, W_L and W_R the corresponding weights, and d the disparity value;
b) W_L and W_R are obtained according to formula (4), where GE_L(x, y) and GE_R(x+d, y) denote the energy responses of the left and right views over all scales and orientations, and (x, y) denotes the position of the filter;
c) The gradient information of the image is obtained by formula (5), where G_x(x, y) denotes the horizontal gradient value, G_y(x, y) the vertical gradient value, G(x, y) the gradient magnitude and α(x, y) the gradient direction;
d) Finally, the monocular and binocular features of the stereo image are concatenated to obtain the left- and right-image features f_L and f_R: f_L = [g_L, c] and f_R = [g_R, c].
The natural-scene features of the 2D images are obtained by formula (6). In formula (6), W and H denote the width and height respectively, x takes positive integer values 1, 2, …, W, y takes positive integer values 1, 2, …, H, σ_θ,λ(x, y) denotes the standard deviation, S_θ,λ(x, y) the divergence, GE_θ,λ(x, y) the amplitude of the Gabor filter response, and μ_θ,λ(x, y) the mean of the response amplitude; λ denotes the wavelength and θ the central angle, which controls the orientation of the filter.
This embodiment measures the prediction results with the Spearman rank-order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (LCC); higher SROCC and LCC values indicate better algorithm performance. The proposed algorithm is also compared with existing common 3D no-reference image quality assessment algorithms.
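SROCC and LCC as used here are the standard Spearman rank-order and Pearson linear correlations between predicted scores and subjective scores (e.g., MOS), available directly from scipy:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_metric(predicted, subjective):
    """Spearman rank-order (SROCC) and Pearson linear (LCC) correlations."""
    srocc, _ = spearmanr(predicted, subjective)   # monotonic agreement
    lcc, _ = pearsonr(predicted, subjective)      # linear agreement
    return srocc, lcc

# Example: rank-identical score sequences give SROCC of exactly 1.0.
pred = np.array([0.1, 0.4, 0.35, 0.8, 0.95])
mos = np.array([1.0, 2.5, 2.0, 4.0, 4.8])
srocc, lcc = evaluate_metric(pred, mos)
```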
Tables I and II show the SROCC comparison of the proposed algorithm with the BRISQUE, You, Hewage and MS-SSIM algorithms on databases I and II. The experimental results show that for symmetrically distorted 3D images the proposed algorithm outperforms the existing 3D image quality evaluation algorithms; Table II, which corresponds to the asymmetric-distortion database, shows that the proposed algorithm also has a clear advantage for the quality evaluation of asymmetrically distorted images. Moreover, since no reference information is needed, the proposed algorithm has broader applicability.
Table I: SROCC comparison of common algorithms and the proposed algorithm on database I
Table II: SROCC comparison of common algorithms and the proposed algorithm on database II
Algorithm    WN     JP2K   JPEG   Blur   FF     Combined
BRISQUE      0.932  0.822  0.560  0.720  0.840  0.783
You          0.909  0.894  0.795  0.813  0.891  0.786
MS-SSIM      0.980  0.841  0.842  0.908  0.884  0.889
Hewage       0.880  0.598  0.736  0.028  0.684  0.501
Our Method   0.951  0.870  0.850  0.912  0.942  0.899
Tables III and IV show the LCC comparison of the proposed algorithm with the BRISQUE, You, Hewage and MS-SSIM algorithms; the experimental results show that the proposed algorithm is better than the other algorithms.
Fig. 4 shows the fit between the proposed algorithm and subjective perception. As seen from the figure, the relationship between the proposed algorithm and subjective quality assessment is approximately linear, consistent with the human visual perception system.
Table III: LCC comparison of common algorithms and the proposed algorithm on database I
Algorithm    WN     JP2K   JPEG   Blur   FF     Combined
BRISQUE      0.941  0.847  0.615  0.926  0.852  0.912
You          0.941  0.877  0.487  0.919  0.930  0.881
MS-SSIM      0.942  0.912  0.603  0.942  0.776  0.917
Hewage       0.895  0.904  0.530  0.798  0.669  0.830
Our Method   0.876  0.923  0.693  0.820  0.855  0.912
Table IV: LCC comparison of common algorithms and the proposed algorithm on database II
Algorithm    WN     JP2K   JPEG   Blur   FF     Combined
BRISQUE      0.823  0.840  0.650  0.936  0.870  0.892
You          0.912  0.905  0.830  0.784  0.915  0.800
MS-SSIM      0.957  0.834  0.862  0.963  0.901  0.900
Hewage       0.891  0.662  0.734  0.450  0.745  0.600
Our Method   0.950  0.910  0.941  0.940  0.912  0.912
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Accordingly, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (7)

1. A 3D image quality evaluation method, characterized in that it comprises the following steps:
Step 1): extract natural-scene features of the 2D images in the wavelet domain and predict the distortion type of the left and right views of the 3D image; extract the energy-spectrum statistical features of the disparity map and predict the distortion type of the 3D image disparity map; classify the 3D image with a deep belief network;
Step 2): establish the mapping between the statistical features and image quality to obtain the quality of the 3D image.
2. The 3D image quality evaluation method according to claim 1, characterized in that in step 1) the left- and right-view distortion comprises five distortion types, the five distortion types being WN, FastFading, JPEG, JPEG2000 and Blur; the disparity-map distortion type is predicted by extracting the energy-spectrum statistical features of the disparity map and comprises two distortion types, the two distortion types being symmetric distortion and asymmetric distortion.
3. The 3D image quality evaluation method according to claim 2, characterized in that in step 2) the quality of the 3D image is obtained according to formula (1), wherein p_i denotes the probability of symmetric distortion, p_ij the probability of asymmetric distortion, q_ij the probability of each of the five distortion types, and Q_L, Q_R the quality of the left and right views.
4. The 3D image quality evaluation method according to claim 1, characterized in that in step 1) the deep belief network comprises one visible layer, three hidden layers and one regression layer; the input of the visible layer is the NSS feature vector of the left or right view of the 3D image, and the visible layer contains 220 nodes; each hidden layer contains 100 nodes; the linear regression layer contains two nodes, representing symmetric and asymmetric distortion respectively; the joint probability distribution of the visible and hidden layers is obtained by formula (2):
p(X, h1, h2, h3) = p(X | h1) · p(h1 | h2) · p(h2, h3)   (2)
In the formula, X denotes the input left-image or right-image feature, and h1, h2, h3 denote the three hidden layers.
5. The 3D image quality evaluation method according to claim 1, characterized in that the deep belief network in step 1) needs to be tuned, as follows:
1) Unsupervised learning: the weights of each hidden layer are trained by a greedy learning method, layer by layer from the bottom up, to obtain the training result; the first layer is set as a second-order Gaussian model, and each layer is relatively independent;
2) Real-time supervised fine-tuning: the weights of each layer are adjusted in real time according to the training results;
3) Linear regression: through unsupervised learning and supervised fine-tuning, the weight of each layer is obtained, and a regression model between the output of the NSS features and the distortion type is established.
6. The 3D image quality evaluation method according to claim 1, characterized in that the acquisition of the energy-spectrum statistical features comprises the following steps:
a) The combined image CI is defined by formula (3):
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R(x+d, y)·I_R(x+d, y)   (3)
where I_L and I_R denote the left and right views, W_L and W_R the corresponding weights, and d the disparity value;
b) W_L and W_R are obtained according to formula (4), where GE_L(x, y) and GE_R(x+d, y) denote the energy responses of the left and right views over all scales and orientations, and (x, y) denotes the position of the filter;
c) The gradient information of the image is obtained by formula (5), where G_x(x, y) denotes the horizontal gradient value, G_y(x, y) the vertical gradient value, G(x, y) the gradient magnitude and α(x, y) the gradient direction;
d) Finally, the monocular and binocular features of the stereo image are concatenated to obtain the left- and right-image features f_L and f_R: f_L = [g_L, c] and f_R = [g_R, c].
7. The 3D image quality evaluation method according to claim 1, characterized in that the natural-scene features of the 2D images are obtained by formula (6), wherein W and H denote the width and height respectively, x takes positive integer values 1, 2, …, W, y takes positive integer values 1, 2, …, H, σ_θ,λ(x, y) denotes the standard deviation, S_θ,λ(x, y) the divergence, GE_θ,λ(x, y) the amplitude of the Gabor filter response, and μ_θ,λ(x, y) the mean of the response amplitude; λ denotes the wavelength and θ the central angle, which controls the orientation of the filter.
CN201810158525.6A 2018-02-24 2018-02-24 A 3D image quality evaluation method Pending CN108257131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810158525.6A CN108257131A (en) 2018-02-24 2018-02-24 A kind of 3D rendering quality evaluating method


Publications (1)

Publication Number Publication Date
CN108257131A 2018-07-06

Family

ID=62745282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810158525.6A Pending CN108257131A (en) A 3D image quality evaluation method

Country Status (1)

Country Link
CN (1) CN108257131A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN106960432A (en) * 2017-02-08 2017-07-18 宁波大学 One kind is without with reference to stereo image quality evaluation method
CN107635136A (en) * 2017-09-27 2018-01-26 北京理工大学 View-based access control model is perceived with binocular competition without with reference to stereo image quality evaluation method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yanqing Li et al.: "No-Reference Stereoscopic Image Quality Assessment Using Natural Scene Statistics", 2017 2nd International Conference on Multimedia and Image Processing *
Tian Weijun (田维军) et al.: "No-reference stereoscopic image quality assessment based on deep learning", Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *

Similar Documents

Publication Publication Date Title
Zhang et al. Learning structure of stereoscopic image for no-reference quality assessment with convolutional neural network
Fang et al. Objective quality assessment of screen content images by uncertainty weighting
Chao et al. Salgan360: Visual saliency prediction on 360 degree images with generative adversarial networks
CN105744256B (en) Based on the significant objective evaluation method for quality of stereo images of collection of illustrative plates vision
CN108765414B (en) No-reference stereo image quality evaluation method based on wavelet decomposition and natural scene statistics
CN104537647B (en) A kind of object detection method and device
CN108391121B (en) No-reference stereo image quality evaluation method based on deep neural network
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
CN107396095B (en) A kind of no reference three-dimensional image quality evaluation method
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN104867138A (en) Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
CN107481236A (en) A kind of quality evaluating method of screen picture
CN107959848A (en) Universal no-reference video quality evaluation algorithms based on Three dimensional convolution neutral net
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN110674925B (en) No-reference VR video quality evaluation method based on 3D convolutional neural network
CN105654142A (en) Natural scene statistics-based non-reference stereo image quality evaluation method
CN108259893B (en) Virtual reality video quality evaluation method based on double-current convolutional neural network
CN109685772B (en) No-reference stereo image quality evaluation method based on registration distortion representation
CN104866864A (en) Extreme learning machine for three-dimensional image quality objective evaluation
Tan et al. Computational aesthetics of photos quality assessment based on improved artificial neural network combined with an autoencoder technique
CN111915589A (en) Stereo image quality evaluation method based on hole convolution
CN105898279B (en) A kind of objective evaluation method for quality of stereo images
CN109523590B (en) 3D image depth information visual comfort evaluation method based on sample
CN108848365B (en) A kind of reorientation stereo image quality evaluation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180706