CN108648207B - Stereo image quality evaluation method based on segmented stack type self-encoder - Google Patents
Stereo image quality evaluation method based on segmented stack type self-encoder
- Publication number: CN108648207B (application CN201810444082.7A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/13 — Physics; Computing; Image data processing: image analysis; segmentation; edge detection
- G06T7/90 — Physics; Computing; Image data processing: image analysis; determination of colour characteristics
- G06T2207/30168 — Indexing scheme for image analysis or image enhancement: image quality inspection
Abstract
The invention relates to a stereo image quality evaluation method based on a segmented stacked auto-encoder. Primary edge features are extracted from the sum map, the difference map and the monocular map under unsupervised conditions and fed into three trained segmented stacked auto-encoders (S-SAE) to obtain abstract deep edge features. The primary color features of the color maps are then encoded with a stacked auto-encoder (SAE) to obtain abstract deep color features. Finally, the deep feature vectors of the stereo images are fitted to the corresponding MOS values, and the quality score of the stereo image under test is predicted from its deep feature vector.
Description
Technical Field
The invention belongs to the field of image processing and relates to a no-reference objective quality evaluation method for stereo images.
Background
With its rapid development, stereoscopic display technology has been widely applied in many fields. Compared with planar images, stereo images give viewers a brand-new sense of experience and presence, so stereo image processing has attracted much attention. However, owing to equipment, processing methods and other factors, distortion may be introduced during acquisition, compression, transmission and storage, degrading stereo image quality. An evaluation method that can effectively assess stereo image quality is therefore needed. Although subjective quality evaluation is highly reliable, it consumes much manpower and time and offers poor real-time performance; in addition, it is easily disturbed by human and environmental factors, so its results are not stable. In contrast, objective evaluation uses software to assess image quality, requires no large-scale subjective tests, is simple to operate and correlates strongly with subjective results, so it has attracted increasing attention from researchers.
Currently, objective stereo image quality evaluation is divided into three categories according to whether the original image is used in the evaluation process: full-reference, reduced-reference and no-reference evaluation. The first two evaluate a stereo image using the original image or part of its information, which greatly limits their applicability. No-reference evaluation needs no reference image and is the approach best suited to practical situations. Because binocular characteristics are not yet fully considered and the binocular stereo processing mechanism is not thoroughly understood, stereo image quality evaluation remains a hot and difficult topic of current research.
Disclosure of Invention
The invention aims to provide a no-reference stereo image quality evaluation method that simulates how human eyes perceive and process stereo images. The invention extracts primary features of the stereo image, such as edges and colors, and converts them through segmented stacked auto-encoders or a stacked auto-encoder into deep-level features that better match the abstraction of human visual characteristics, so as to evaluate stereo image quality more comprehensively, accurately and objectively. The technical scheme is as follows:
A stereo image quality evaluation method based on a segmented stacked auto-encoder extracts primary edge features from the sum map, difference map and monocular map under unsupervised conditions and inputs them into three trained segmented stacked auto-encoders (S-SAE) to obtain abstract deep edge features; the primary color features of the color maps are then encoded with a stacked auto-encoder (SAE) to obtain abstract deep color features; finally, the deep feature vectors of the stereo images are fitted to the corresponding MOS values, and the quality score of the stereo image under test is predicted from its deep feature vector. The method comprises the following steps:
The first step: synthesize the sum map (S), difference map (D) and monocular map (C) of the left and right LoG maps
Filter the image pair with a Laplacian of Gaussian (LoG) filter to obtain the left and right LoG maps; the LoG parameters are set to (n, σ) ∈ {(3, 0.5), (7, 1), (13, 2)}, where σ is the standard deviation of the Laplacian of Gaussian operator, so that left and right LoG maps at three edge thicknesses are obtained; then compute the sum map, difference map and monocular map of each pair of left and right LoG maps;
The second step: extract the primary edge features of the sum, difference and monocular maps
Fit the MSCN coefficient histogram of the sum map with a generalized Gaussian distribution (GGD) model, and take the variance and shape parameters of the GGD as 2 features of the sum map; fit the MSCN neighborhood coefficient histograms in the 4 directions of the sum map (horizontal, vertical, main diagonal and secondary diagonal) with 4 asymmetric generalized Gaussian distribution (AGGD) models, and take the 4 parameters of each of the 4 AGGD models (mean, variance, shape and size) as features of the sum map, extracting 16 features; in addition, take the amplitude, variance and entropy information of the sum map as 3 further features. Since there are left and right LoG maps at three edge thicknesses, there are sum maps at three edge thicknesses. Following these steps, a 21-dimensional feature vector is extracted from the sum map at each edge thickness, so a 63-dimensional primary edge feature vector is finally extracted from the sum maps;
The feature extraction method for the difference map and the monocular map is the same as for the sum map, and a 63-dimensional primary edge feature vector is likewise extracted from each of them;
The third step: train 3 segmented stacked auto-encoders (S-SAE)
Randomly select 50% of the image pairs in the image library to train three segmented stacked auto-encoders: the primary edge features extracted from the sum, difference and monocular maps serve as samples for training the three S-SAEs respectively;
Each segmented stacked auto-encoder consists of 3 local stacked auto-encoders (local SAE). The segmented stacked auto-encoder divides its input into three segments according to the different image edge thicknesses, with 21 features per segment, feeds the segments into the 3 local SAEs and trains the 3 local SAEs separately. Training yields 3 local SAEs with three hidden layers each, with layer unit counts 21-18-14-12; finally, the output layers of the local SAEs are concatenated to obtain the segmented stacked auto-encoder of the sum map, S-SAE-S;
Following the same steps, the segmented stacked auto-encoder of the difference map (S-SAE-D) and that of the monocular map (S-SAE-C) are trained; the layer unit counts of the local SAEs of these two S-SAEs are also 21-18-14-12;
The primary edge feature vectors of the sum, difference and monocular maps are input into the trained S-SAE-S, S-SAE-D and S-SAE-C, and the three segmented stacked auto-encoders encode them into abstract deep edge features respectively;
Fit the three color maps of the left image (the RG map, BY map and Lum map) and the three color maps of the right image (the RG map, BY map and Lum map) with 6 AGGD models; extract the shape, left variance and right variance of each AGGD model, and also compute the kurtosis and skewness of the 6 fitted models, so a 30-dimensional primary color feature vector is extracted;
Randomly select 50% of the image pairs in the image library to train a stacked auto-encoder (SAE) whose layer unit counts are 30-25-20-15; the primary color feature vector is input into the trained SAE, which encodes it into an abstract deep color feature vector;
The seventh step: computing stereo image local quality score
Randomly selecting 80% of image pairs in the image library as a training set, and using the image pairs in the training setWith corresponding MOS trainingA corresponding support vector regression (SVR-S); using SVR-S prediction and the quality score of the image, the remaining 20% of the image pairs in the image library are used as a test setBy the method, the mass fractions of the difference image, the monocular image and the color image are obtained respectivelyAnd
The eighth step: compute the objective quality score of the stereo image
(1) Compute the local quality score related to edge information using dynamic weights
Weight the sum-map quality score and the difference-map quality score to obtain the sum-difference quality score Q_SD:

Q_SD = W_D·Q_D + (1 − W_D)·Q_S (1)

where the weight W_D of the difference map is computed from μ_L and μ_R, the means of L and R, and σ_L and σ_R, their variances; C_1 and C_2 are constants, C_1 = 0.6, C_2 = 5;
Combine the sum-difference quality score and the monocular-map quality score to obtain the edge feature quality score Q_edge:

Q_edge = W_C·Q_C + (1 − W_C)·Q_SD (2)

(2) Compute the stereo image quality score
The quality score of the edges is assigned a higher weight:

Q = W_edge·Q_edge + W_color·Q_color (3)

where the edge weight W_edge = 0.8 and the color weight W_color = 0.2.
The objective quality evaluation method for stereo images provided by the invention is based on the edge and color information of the image, incorporates the operating mechanism of the whole visual perception pathway, uses segmented stacked auto-encoders to simulate the process by which human eyes process image information, and establishes a no-reference objective quality evaluation model for stereo images. The objective evaluation results obtained are highly consistent with subjective evaluation results and reflect stereo image quality more accurately.
Drawings
Fig. 1 is a flowchart of the stereo image quality evaluation method based on segmented stacked auto-encoders, Fig. 2 shows the RG map, BY map and Lum map of the left image of an image pair, and Fig. 3 is a structural diagram of a segmented stacked auto-encoder.
Detailed Description
The invention relates to a stereo image quality evaluation method based on a segmented stacked auto-encoder. Three segmented stacked auto-encoders convert the primary edge feature vectors of the sum, difference and monocular maps into abstract deep edge feature vectors; a stacked auto-encoder then encodes the primary color features of the color maps into an abstract deep color feature vector. These deep feature vectors reflect the degree of distortion of the stereo image, so the quality of a distorted stereo image can be evaluated from them. The method comprises the following steps:
The first step: synthesize the sum, difference and monocular maps of the left and right LoG maps (L_LoG, R_LoG)
To simulate the process by which retinal nerve cells extract image edges, the image pair is filtered with a Laplacian of Gaussian (LoG) filter: the image pair is input into an n × n Gaussian low-pass filter, a 3 × 3 weighted mask window is applied to the 3 × 3 region centered at (i, j), and the correlation value (convolution sum) of the window is computed. The parameters are set to (n, σ) ∈ {(3, 0.5), (7, 1), (13, 2)}, where σ is the standard deviation of the Laplacian of Gaussian operator, so LoG maps at 3 edge thicknesses are obtained. Then the sum map (S), difference map (D) and monocular map (C) of the left and right LoG maps are computed as follows:
S(i, j) = L_LoG(i, j) + R_LoG(i, j) (1)
D(i, j) = L_LoG(i, j) − R_LoG(i, j) (2)
C(i, j) = W_L(i, j)·L_LoG(i, j) + W_R((i + d(i, j)), j)·R_LoG((i + d(i, j)), j) (3)
where L_LoG is the left LoG map, R_LoG is the right LoG map, d is the disparity, and W_L and W_R are the weights of the left and right LoG maps, obtained by normalizing the Gabor filter energy responses:

W_L = GE_L / (GE_L + GE_R), W_R = GE_R / (GE_L + GE_R)

where GE_L and GE_R denote the energy response values of the left and right LoG maps over all scales and orientations, respectively.
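The first-step computation can be sketched in plain NumPy. This is an illustrative sketch, not the patent's implementation: the disparity d is assumed to be zero and a fixed weight `wl` stands in for the Gabor-energy weights W_L and W_R.

```python
import numpy as np

def log_kernel(n, sigma):
    """Laplacian-of-Gaussian kernel of size n x n with standard deviation sigma."""
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()          # zero-mean so flat regions give no response

def convolve2d(img, ker):
    """Naive 'same' correlation with zero padding (LoG is symmetric, so
    correlation equals convolution)."""
    n = ker.shape[0]
    p = n // 2
    padded = np.pad(img, p)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + n, j:j + n] * ker)
    return out

def sum_diff_monocular(left, right, n=3, sigma=0.5, wl=0.5):
    """Compute S, D and a simplified monocular map C of the left/right LoG maps.
    Assumes zero disparity and a fixed weight wl instead of the patent's
    Gabor-energy weights (an illustrative simplification)."""
    ker = log_kernel(n, sigma)
    L = convolve2d(left.astype(float), ker)
    R = convolve2d(right.astype(float), ker)
    S = L + R                    # eq. (1)
    D = L - R                    # eq. (2)
    C = wl * L + (1 - wl) * R    # eq. (3) with d = 0, W_L = wl
    return S, D, C
```

With identical left and right views, the difference map is zero and the sum map is twice the monocular map, which is a quick sanity check on the three formulas.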
The second step is that: extracting primary edge features of sum, difference and monocular imagesAnd
fitting MSCN coefficient histogram of the sum graph by using a Generalized Gaussian Distribution (GGD) model, and taking variance and shape parameters of the GGD as 2 characteristics of the sum graph; respectively fitting MSCN neighborhood coefficient histograms in 4 directions of a horizontal direction, a vertical direction, a main diagonal line and a secondary diagonal line of a sum graph by using 4 Asymmetric Generalized Gaussian Distribution (AGGD) models, taking 4 parameters of a mean value, a variance, a shape and a size of the 4 AGGD models as the characteristics of the sum graph, and extracting 16 characteristics; in addition, the amplitude, variance, entropy information of the sum graph are taken as 3 features of the sum graph. Since there are left and right LoG maps of three edge thicknesses, there are sum maps of three edge thicknesses. According to the steps, 21-dimensional feature vectors can be extracted from the sum graph of each edge thickness, and 63-dimensional primary edge feature vectors are finally extracted from the sum graph
The feature extraction method of the difference image and the monocular image is completely the same as that of the sum image, and 63-dimensional primary edge feature vectors are extracted from both the difference image and the monocular imageAnd
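The MSCN (mean-subtracted, contrast-normalized) coefficients whose histograms are fitted above can be computed as in BRISQUE-style models. A minimal NumPy sketch follows; the 7 × 7 Gaussian window and its standard deviation are assumptions, as the patent text does not state them.

```python
import numpy as np

def gaussian_kernel(n=7, sigma=7 / 6):
    """Normalized 2-D Gaussian weighting window (size/sigma are assumptions)."""
    ax = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def filt(img, ker):
    """Naive 'same' correlation with edge-replicated padding."""
    n = ker.shape[0]
    p = n // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + n, j:j + n] * ker)
    return out

def mscn(img, C=1.0):
    """Mean-subtracted contrast-normalized coefficients of an image."""
    img = img.astype(float)
    k = gaussian_kernel()
    mu = filt(img, k)                                   # local weighted mean
    sigma = np.sqrt(np.abs(filt(img**2, k) - mu**2))    # local weighted std
    return (img - mu) / (sigma + C)
```

For a constant image the coefficients are exactly zero, since the local mean equals the pixel value everywhere.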
The third step: train 3 segmented stacked auto-encoders (S-SAE)
Randomly select 50% of the image pairs in the image library to train the 3 S-SAEs: the primary edge features extracted from the sum, difference and monocular maps serve as samples for training the three segmented stacked auto-encoders respectively.
The segmented stacked auto-encoder consists of 3 local stacked auto-encoders (local SAE). It divides its input into three segments according to edge thickness, with 21 features per segment, feeds the segments into the 3 local SAEs, and trains the 3 local SAEs separately, using the same training method as an ordinary stacked auto-encoder. Training yields 3 local SAEs with three hidden layers each, with layer unit counts 21-18-14-12; finally, the output layers of the local SAEs are concatenated to obtain the segmented stacked auto-encoder of the sum map (S-SAE-S).
Following the same steps, the segmented stacked auto-encoder of the difference map (S-SAE-D) and that of the monocular map (S-SAE-C) are trained; the layer unit counts of the local SAEs of these two S-SAEs are also 21-18-14-12.
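The encoding (forward) pass of one segmented stacked auto-encoder described above can be sketched as follows. The weights here are random stand-ins, since the layer-wise reconstruction training is not shown, and the tanh activation is an assumption; the 63 → 3 × 21 split and the 21-18-14-12 layer sizes follow the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
LAYERS = [21, 18, 14, 12]          # unit counts per layer of one local SAE

def init_encoder():
    """Random (untrained) weights for one local SAE encoder stack."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(LAYERS[:-1], LAYERS[1:])]

def encode(x, enc):
    """Pass a segment through the stacked encoder layers."""
    for W, b in enc:
        x = np.tanh(x @ W + b)     # nonlinearity per hidden layer (assumed tanh)
    return x

def s_sae_encode(f63, encoders):
    """Split a 63-dim primary feature vector into three 21-dim segments
    (one per edge thickness), encode each with its local SAE, and
    concatenate the three 12-dim outputs."""
    segs = np.split(f63, 3)
    return np.concatenate([encode(s, e) for s, e in zip(segs, encoders)])

encoders = [init_encoder() for _ in range(3)]
deep = s_sae_encode(rng.standard_normal(63), encoders)
```

The concatenated deep edge feature is 3 × 12 = 36-dimensional per map under this layer configuration.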
The fourth step: encode deep edge features
The primary edge feature vectors of the sum, difference and monocular maps are input into the trained S-SAE-S, S-SAE-D and S-SAE-C, and the three segmented stacked auto-encoders encode them into abstract deep edge feature vectors respectively.
The fifth step: extract the primary color features of the color maps
The visual system processes color signals in the lateral geniculate nucleus (LGN), which encodes color excitation by comparing the activity of different cones; opponent coding is used here to simulate the processing of color information in the LGN. In the retina, cones are divided into three types: L-cones, M-cones and S-cones, sensitive to long (red-related), medium (green) and short (blue) wavelengths respectively. There are three opponent coding channels: the red-green channel (RG), the blue-yellow channel (BY) and the luminance channel (Lum).
The color maps are obtained by computing the MSCN coefficients after a logarithmic transformation of the RGB images of the image pair.
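The patent does not spell out the exact opponent transform, so the sketch below uses one common definition of the three channels purely for illustration; the coefficients are assumptions, not the patent's.

```python
import numpy as np

def opponent_channels(rgb):
    """Simple opponent color channels (one common textbook definition;
    the patent's exact transform is not specified in this text)."""
    R, G, B = [rgb[..., k].astype(float) for k in range(3)]
    rg = R - G                     # red-green opponency
    by = 0.5 * (R + G) - B         # blue-yellow opponency
    lum = (R + G + B) / 3.0        # luminance
    return rg, by, lum
```

On an achromatic (gray) image both opponent channels vanish and only the luminance channel carries signal, matching the intent of opponent coding.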
Fit the color maps of the left image (RG map, BY map and Lum map) and the color maps of the right image (RG map, BY map and Lum map) with 6 AGGD models; extract the shape, left variance and right variance of each AGGD, and also compute the kurtosis and skewness of the 6 fitted models; a 30-dimensional primary color feature vector is thus extracted from the color maps.
The sixth step: encode deep color features
Randomly select 50% of the image pairs in the image library to train a stacked auto-encoder (SAE) whose layer unit counts are 30-25-20-15. The primary color feature vector is input into the trained SAE, which encodes it into an abstract deep color feature vector.
The seventh step: computing stereo image local quality score
Randomly selecting 80% of image pairs in the image library as a training set, and using the image pairs in the training setWith corresponding MOS trainingA corresponding support vector regression machine (SVR-S); using SVR-S prediction and the quality score of the image, the remaining 20% of the image pairs in the image library are used as a test setBy the method, the mass fractions of the difference image, the monocular image and the color image are obtained respectivelyAnd
The eighth step: compute the objective quality score of the stereo image
(1) Compute the local quality score related to edge information using dynamic weights
Weight the sum-map quality score and the difference-map quality score to obtain the sum-difference quality score Q_SD:

Q_SD = W_D·Q_D + (1 − W_D)·Q_S (9)

where the weight W_D of the difference map is computed from μ_L and μ_R, the means of L and R, and σ_L and σ_R, their variances; C_1 and C_2 are constants, C_1 = 0.6, C_2 = 5.
Combine the sum-difference quality score and the monocular-map quality score to obtain the edge feature quality score Q_edge:

Q_edge = W_C·Q_C + (1 − W_C)·Q_SD (10)

(2) Compute the stereo image quality score
Since edge information is more important than color information in stereo image quality evaluation, the quality score of the edges is assigned a higher weight:

Q = W_edge·Q_edge + W_color·Q_color (11)

Experiments show that the algorithm performs best when the edge weight W_edge = 0.8 and the color weight W_color = 0.2.
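The score-fusion steps above reduce to three weighted sums, sketched below. The dynamic weights `w_d` and `w_c` are taken as given inputs here, since their exact formulas from the image statistics are not reproduced in this text.

```python
def fuse_quality(q_s, q_d, q_c, q_color, w_d, w_c,
                 w_edge=0.8, w_color=0.2):
    """Combine the four local quality scores per eqs. (9)-(11).
    w_d and w_c are the dynamic weights; their computation from the
    image statistics (means/variances of L and R) is omitted here."""
    q_sd = w_d * q_d + (1 - w_d) * q_s          # eq. (9): sum-difference score
    q_edge = w_c * q_c + (1 - w_c) * q_sd       # eq. (10): edge feature score
    return w_edge * q_edge + w_color * q_color  # eq. (11): final score
```

If all four local scores are equal, the final score equals that common value for any choice of dynamic weights, since each stage is a convex combination.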
Claims (1)
1. A stereo image quality evaluation method based on a segmented stacked auto-encoder, in which primary edge features are extracted from the sum map, difference map and monocular map under unsupervised conditions and input into three trained segmented stacked auto-encoders (S-SAE) to obtain abstract deep edge features; the primary color features of the color maps are then encoded with a stacked auto-encoder (SAE) to obtain abstract deep color features; finally, the deep feature vectors of the stereo images are fitted to the corresponding MOS values, and the quality score of the stereo image under test is predicted from its deep feature vector; the method comprises the following steps:
The first step: synthesize the sum map S, difference map D and monocular map C of the left and right LoG maps
Filter the image pair with a Laplacian of Gaussian (LoG) filter by inputting the image pair into an n × n Gaussian low-pass filter to obtain the left and right LoG maps; the LoG parameters are set to (n, σ) ∈ {(3, 0.5), (7, 1), (13, 2)}, where σ is the standard deviation of the Laplacian of Gaussian operator, so left and right LoG maps at three edge thicknesses are obtained; then compute the sum map, difference map and monocular map of each pair of left and right LoG maps;
The second step: extract the primary edge features of the sum, difference and monocular maps
Fit the MSCN coefficient histogram of the sum map with a generalized Gaussian distribution (GGD) model, and take the variance and shape parameters of the GGD as 2 features of the sum map; fit the MSCN neighborhood coefficient histograms in the 4 directions of the sum map (horizontal, vertical, main diagonal and secondary diagonal) with 4 asymmetric generalized Gaussian distribution (AGGD) models, take the 4 parameters of each of the 4 AGGD models (mean, variance, shape and size) as features of the sum map, and extract 16 features; in addition, take the amplitude, variance and entropy information of the sum map as 3 further features; since there are left and right LoG maps at three edge thicknesses, there are sum maps at three edge thicknesses; following these steps, a 21-dimensional feature vector is extracted from the sum map at each edge thickness, so a 63-dimensional primary edge feature vector is finally extracted from the sum maps;
The feature extraction method for the difference map and the monocular map is the same as for the sum map, and a 63-dimensional primary edge feature vector is likewise extracted from each of them;
The third step: train 3 segmented stacked auto-encoders (S-SAE)
Randomly select 50% of the image pairs in the image library to train three segmented stacked auto-encoders: the primary edge features extracted from the sum, difference and monocular maps serve as samples for training the three S-SAEs respectively;
Each segmented stacked auto-encoder consists of 3 local stacked auto-encoders (local SAE); the segmented stacked auto-encoder divides its input into three segments according to the different image edge thicknesses, with 21 features per segment, feeds the segments into the 3 local SAEs, and trains the 3 local SAEs separately; training yields 3 local SAEs with three hidden layers each, with layer unit counts 21-18-14-12; finally, the output layers of the local SAEs are concatenated to obtain the segmented stacked auto-encoder of the sum map, S-SAE-S;
Following the same steps, the segmented stacked auto-encoder of the difference map (S-SAE-D) and that of the monocular map (S-SAE-C) are trained; the layer unit counts of the local SAEs of these two S-SAEs are also 21-18-14-12;
The primary edge feature vectors of the sum, difference and monocular maps are input into the trained S-SAE-S, S-SAE-D and S-SAE-C, and the three segmented stacked auto-encoders encode them into abstract deep edge features respectively;
Fit the three color maps of the left image (the RG map, BY map and Lum map) and the three color maps of the right image (the RG map, BY map and Lum map) with 6 AGGD models; extract the shape, left variance and right variance of each AGGD model, and also compute the kurtosis and skewness of the 6 fitted models, so a 30-dimensional primary color feature vector is extracted;
Randomly select 50% of the image pairs in the image library to train a stacked auto-encoder (SAE) whose layer unit counts are 30-25-20-15; the primary color feature vector is input into the trained SAE, which encodes it into an abstract deep color feature vector;
The seventh step: computing stereo image local quality score
Randomly selecting 80% of image pairs in the image library as a training set, and using the image pairs in the training setWith corresponding MOS trainingA corresponding support vector regression (SVR-S); using SVR-S prediction and the quality score of the image, the remaining 20% of the image pairs in the image library are used as a test setBy the method, the mass fractions of the difference image, the monocular image and the color image are obtained respectivelyAnd
The eighth step: compute the objective quality score of the stereo image
(1) Compute the local quality score related to edge information using dynamic weights
Weight the sum-map quality score and the difference-map quality score to obtain the sum-difference quality score Q_SD:

Q_SD = W_D·Q_D + (1 − W_D)·Q_S

where the weight W_D of the difference map is computed from μ_L and μ_R, the means of L and R, and σ_L and σ_R, their variances; C_1 and C_2 are constants, C_1 = 0.6, C_2 = 5;
Combine the sum-difference quality score and the monocular-map quality score to obtain the edge feature quality score Q_edge:

Q_edge = W_C·Q_C + (1 − W_C)·Q_SD

(2) Compute the stereo image quality score
The quality score of the edges is assigned a higher weight:

Q = W_edge·Q_edge + W_color·Q_color

where the edge weight W_edge = 0.8, the color weight W_color = 0.2, and Q_color is the quality score of the color map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810444082.7A CN108648207B (en) | 2018-05-10 | 2018-05-10 | Stereo image quality evaluation method based on segmented stack type self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108648207A CN108648207A (en) | 2018-10-12 |
CN108648207B true CN108648207B (en) | 2021-07-09 |
Family
ID=63754463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810444082.7A Active CN108648207B (en) | 2018-05-10 | 2018-05-10 | Stereo image quality evaluation method based on segmented stack type self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108648207B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102497565A (en) * | 2011-12-05 | 2012-06-13 | 天津大学 | Method for measuring brightness range influencing comfort degree of stereoscopic image |
EP2790406A4 (en) * | 2011-12-05 | 2015-07-22 | Nippon Telegraph & Telephone | Video quality evaluation device, method and program |
CN104408716A (en) * | 2014-11-24 | 2015-03-11 | 宁波大学 | Three-dimensional image quality objective evaluation method based on visual fidelity |
CN104658001A (en) * | 2015-03-10 | 2015-05-27 | 浙江科技学院 | Non-reference asymmetric distorted stereo image objective quality assessment method |
CN104658001B (en) * | 2015-03-10 | 2017-04-19 | 浙江科技学院 | Non-reference asymmetric distorted stereo image objective quality assessment method |
CN107679543A (en) * | 2017-02-22 | 2018-02-09 | 天津大学 | Sparse autocoder and extreme learning machine stereo image quality evaluation method |
Non-Patent Citations (3)
Title |
---|
Quality assessment metric of stereo images considering cyclopean integration and visual saliency; Jiachen Yang et al.; Information Sciences; 2 September 2016; full text *
Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase; Yancong Lin et al.; IEEE Journal of Selected Topics in Signal Processing; February 2017; vol. 11, no. 1; full text *
Stereo image quality evaluation method applying a deep extreme learning machine; Zhang Boyang et al.; Journal of Chinese Computer Systems; December 2017; vol. 38, no. 11; full text *
Also Published As
Publication number | Publication date |
---|---|
CN108648207A (en) | 2018-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108428227B (en) | No-reference image quality evaluation method based on full convolution neural network | |
CN102170581B (en) | Human-visual-system (HVS)-based structural similarity (SSIM) and characteristic matching three-dimensional image quality evaluation method | |
CN103996192B (en) | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model | |
CN106447646A (en) | Quality blind evaluation method for unmanned aerial vehicle image | |
CN108090902A (en) | A kind of non-reference picture assessment method for encoding quality based on multiple dimensioned generation confrontation network | |
CN110060236B (en) | Stereoscopic image quality evaluation method based on depth convolution neural network | |
CN107635136B (en) | View-based access control model perception and binocular competition are without reference stereo image quality evaluation method | |
CN109831664B (en) | Rapid compressed stereo video quality evaluation method based on deep learning | |
CN110033446A (en) | Enhancing image quality evaluating method based on twin network | |
CN110378232B (en) | Improved test room examinee position rapid detection method of SSD dual-network | |
CN107396095B (en) | A kind of no reference three-dimensional image quality evaluation method | |
CN110400293B (en) | No-reference image quality evaluation method based on deep forest classification | |
CN108389192A (en) | Stereo-picture Comfort Evaluation method based on convolutional neural networks | |
CN105741328A (en) | Shot image quality evaluation method based on visual perception | |
CN103841410B (en) | Based on half reference video QoE objective evaluation method of image feature information | |
Geng et al. | A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property | |
CN109167996A (en) | It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method | |
CN104954778A (en) | Objective stereo image quality assessment method based on perception feature set | |
CN109859166A (en) | It is a kind of based on multiple row convolutional neural networks without ginseng 3D rendering method for evaluating quality | |
CN105894507B (en) | Image quality evaluating method based on amount of image information natural scene statistical nature | |
CN102722888A (en) | Stereoscopic image objective quality evaluation method based on physiological and psychological stereoscopic vision | |
CN109829905A (en) | It is a kind of face beautification perceived quality without reference evaluation method | |
CN111882516B (en) | Image quality evaluation method based on visual saliency and deep neural network | |
CN109754390A (en) | A kind of non-reference picture quality appraisement method based on mixing visual signature | |
CN107909565A (en) | Stereo-picture Comfort Evaluation method based on convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||