CN106341677A - Virtual viewpoint video quality evaluation method - Google Patents

Virtual viewpoint video quality evaluation method

Info

Publication number
CN106341677A
CN106341677A
Authority
CN
China
Prior art keywords
time domain
distortion
video
space
pixel
Prior art date
Legal status
Granted
Application number
CN201510395100.3A
Other languages
Chinese (zh)
Other versions
CN106341677B (en)
Inventor
张云 (Zhang Yun)
刘祥凯 (Liu Xiangkai)
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201510395100.3A priority Critical patent/CN106341677B/en
Publication of CN106341677A publication Critical patent/CN106341677A/en
Application granted granted Critical
Publication of CN106341677B publication Critical patent/CN106341677B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a virtual viewpoint video quality evaluation method. The temporal flicker distortion of a virtual viewpoint video is computed over all pixels of a space-time domain unit taken together, which avoids the misestimation of perceived distortion that a pixel-by-pixel temporal flicker computation would introduce. When computing the temporal flicker distortion, the method considers both the distortion caused by depth-map errors and the distortion introduced by the left and right viewpoint texture images. The method can effectively assess the temporal flicker distortion that strongly affects the subjective quality of virtual viewpoint video, so that the evaluation result better matches human subjective perception and the quality assessment of virtual viewpoint video becomes more accurate and comprehensive.

Description

Virtual viewpoint video quality evaluation method
Technical field
The present invention relates to video quality evaluation technology, and more particularly to an accurate and comprehensive virtual viewpoint video quality evaluation method.
Background technology
With the development of 3D video technology, more and more films and TV programmes are shot in 3D, and various 3D displays and 3D TVs are gradually becoming popular; the foreseeable trend is that 3D video will become mainstream.
The international standardization bodies MPEG (Moving Picture Experts Group) and ITU-T VCEG (International Telecommunication Union Telecommunication Standardization Sector Video Coding Experts Group) are currently jointly developing a depth-based 3D video coding standard. In the 3D video solution of this standard, the encoder only needs to encode and transmit two to three colour texture videos and the corresponding depth-map videos; using the received texture videos and depth maps, the decoder can generate a virtual viewpoint video at any intermediate viewpoint between two viewpoints, and can finally produce videos of eight, nine or even more viewpoints to meet the playback requirements of glasses-free multi-view 3D displays. The quality of the virtual viewpoint video generated from the depth maps therefore has a decisive impact on the final 3D video quality, and how the quality of 3D virtual viewpoint video is evaluated determines how the whole 3D video processing and coding system can be optimized.
Video quality evaluation methods fall into two broad classes: subjective quality evaluation and objective quality evaluation. Subjective quality evaluation mainly organizes viewers to watch videos and score their quality; although it yields accurate quality scores, it is time-consuming and labour-intensive and cannot be applied in real-time video processing systems. In contrast, objective quality evaluation estimates video quality automatically by algorithm, which saves manpower and can run in real time.
For the objective quality evaluation of traditional 2D video there are already many research results at home and abroad, which can estimate noise distortion, transmission distortion, blur and so on reasonably well. Research on the quality evaluation of 3D virtual viewpoint video is still scarce. Existing work on the objective quality evaluation of virtual viewpoint video is mostly image-based: the distortion of each frame is computed separately and the average is taken as the distortion of the whole sequence.
One scheme counts the temporal noise in static background regions of the virtual viewpoint video. When a pixel that should belong to the static background changes in luminance, and the change exceeds the just noticeable distortion (JND) threshold of the human eye, the change is recorded as temporal noise, and the quality of the virtual viewpoint video is evaluated accordingly.
Another scheme measures the distortion at object contour edges in the virtual viewpoint video. Edge detection is applied to the image content of both the original reference video and the virtual viewpoint video to extract edge contour maps; the two maps are then compared, and the proportion of distorted edge pixels serves as the basis for evaluating the distortion of the virtual viewpoint video. A variant first classifies each image block before computing the edge contour distortion: according to the percentage of edge pixels within a block, blocks are classified as flat, edge or texture blocks, and each block type receives a different weight in the distortion computation.
Yet another scheme first decomposes the original reference video and the virtual viewpoint video into the wavelet domain and then horizontally aligns the two videos there. The purpose of the alignment is that a global horizontal shift of objects in a virtual viewpoint video does not affect the perceived quality, yet it lowers the objective quality score; computing the objective distortion after horizontal alignment therefore yields results closer to subjective perception. Similarly, some methods first compensate the horizontal displacement of objects in the virtual viewpoint video before computing the distortion: the horizontal displacement errors caused by depth-map distortion do not noticeably degrade the perceived quality, because a globally shifted object has no structural distortion, so compensating the shift first prevents it from being counted as distortion and lets the computed value reflect the perceived quality of the virtual viewpoint video more realistically.
The above virtual viewpoint video quality evaluation schemes share the following shortcomings. First, they ignore the temporal flicker distortion in virtual viewpoint video. Depth maps provide the depth information needed to generate the virtual viewpoint video, but during compression, transmission and processing the depth maps inevitably acquire distortion, which causes geometric distortion in the virtual viewpoint images. When the images are played consecutively, the erratic geometric distortion forms temporal flicker distortion.
Second, the pixel-to-pixel distortion computation pattern is unsuitable for evaluating the quality of virtual viewpoint video. The most common image and video distortion criteria at present are PSNR (peak signal-to-noise ratio) and MSE (mean squared error). Both compute distortion pixel by pixel and average over all pixels. This is unsuitable for virtual viewpoint video because temporally constant geometric distortion in a virtual viewpoint video is hard for the human eye to detect and causes no perceptual distortion, yet it severely lowers the PSNR and MSE scores. Moreover, a virtual viewpoint video is generated from the videos of the left and right reference viewpoints, and those two videos themselves contain random camera sensor noise; such noise does not attract the attention of the human visual system, but it too lowers the PSNR and MSE scores.
Third, a virtual viewpoint video is rendered from the texture images of the left and right viewpoints according to the depth information provided by the depth maps, so its distortion sources include both depth-map distortion and texture-image distortion. Existing virtual viewpoint video quality evaluation methods are often incomplete in this respect: some only consider the influence of depth-map distortion, while others only consider the distortion introduced by the rendering process itself.
Summary of the invention
Accordingly, it is necessary to provide an accurate and comprehensive virtual viewpoint video quality evaluation method.
A virtual viewpoint video quality evaluation method comprises the following steps:
dividing the original reference video and the video to be evaluated into space-time domain units respectively;
computing the temporal flicker distortion and the space-time domain texture distortion of each space-time domain unit;
computing a first-class distortion according to the temporal flicker features of the original reference video and the video to be evaluated, and computing a second-class distortion according to their space-time domain texture features; integrating the first-class distortion and the second-class distortion into a total distortion to judge the quality of the video to be evaluated.
In one embodiment, the step of dividing the original reference video and the video to be evaluated into space-time domain units comprises:
dividing the original reference video and the video to be evaluated into image groups, each consisting of several temporally consecutive frames;
dividing each image in an image group into image blocks, temporally consecutive image blocks forming a space-time domain unit.
In one embodiment, a space-time domain unit consists of temporally consecutive image blocks at the same spatial position; or
it consists of temporally consecutive image blocks at different spatial positions that describe the motion trajectory of the same object.
In one embodiment, the step of computing the temporal flicker distortion of each space-time domain unit comprises:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated;
judging from the first temporal gradient and the second temporal gradient whether each pixel exhibits temporal flicker distortion, and if so, computing the temporal flicker distortion strength.
In one embodiment, the step of computing the temporal flicker distortion of each space-time domain unit further comprises: computing the temporal flicker distortion from the temporal gradient information of the pixels, wherein the temporal flicker distortion is proportional to the frequency and the amplitude of the direction changes of the pixel temporal gradients.
In one embodiment, the step of computing the temporal flicker distortion of each space-time domain unit further comprises: computing the temporal flicker distortion of each group of temporally adjacent pixels in a space-time domain unit of the video to be evaluated;
detecting with a first function whether the group of temporally adjacent pixels exhibits temporal flicker distortion;
if so, measuring the temporal flicker distortion strength of the group of temporally adjacent pixels with a second function.
In one embodiment, the step of computing the space-time domain texture distortion of each space-time domain unit comprises:
computing the horizontal gradient and the vertical gradient of each pixel in the space-time domain unit;
computing the spatial gradient of each pixel in the space-time domain unit from the horizontal gradient and the vertical gradient;
computing the space-time domain texture distortion of the video to be evaluated from the difference between the spatial gradients of the space-time domain units of the original reference video and of the video to be evaluated.
In one embodiment, the method further comprises: computing pixel feature statistics within each space-time domain unit, the statistics including the mean, variance and standard deviation of the pixel gradient information within the unit; the pixel gradient information includes the horizontal spatial gradient, vertical spatial gradient, spatial gradient, temporal gradient and space-time domain gradient of the pixels.
In one embodiment, the method further comprises: setting a minimum perception threshold for the pixel feature statistics;
when a pixel feature statistic is smaller than the minimum perception threshold, replacing it with the minimum perception threshold;
when a pixel feature statistic is larger than the minimum perception threshold, leaving it unchanged.
In one embodiment, the steps of computing the first-class distortion from the temporal flicker features and the second-class distortion from the space-time domain texture features comprise: taking the distortion of the series of space-time domain units with the largest temporal flicker distortion as the first-class distortion;
and taking the distortion of the series of space-time domain units with the largest space-time domain texture distortion as the second-class distortion.
The above virtual viewpoint video quality evaluation method computes the temporal flicker distortion of the virtual viewpoint video over all pixels of a space-time domain unit taken together, avoiding the misestimation of perceived distortion that the pixel-to-pixel temporal flicker computation pattern would introduce. When computing the temporal flicker distortion of the virtual viewpoint video, both the distortion caused by depth-map errors and the distortion introduced by the left and right viewpoint texture images are considered. The method can effectively assess the temporal flicker distortion that strongly affects the subjective quality of virtual viewpoint video, so the evaluation better matches human subjective perception and the quality assessment of virtual viewpoint video becomes more accurate and comprehensive.
Brief description
Fig. 1 is the flow chart of the virtual viewpoint video quality evaluation method;
Fig. 2 is a schematic diagram of the first-class space-time domain unit division;
Fig. 3 is a schematic diagram of the second-class space-time domain unit division;
Fig. 4 is a template diagram for computing the horizontal spatial gradient of a pixel;
Fig. 5 is a template diagram for computing the vertical spatial gradient of a pixel;
Fig. 6 is the flow chart of the temporal flicker distortion computation;
Fig. 7 is the flow chart of the space-time domain texture distortion computation.
Specific embodiments
To facilitate understanding, the present invention is described more fully below with reference to the accompanying drawings, in which preferred embodiments are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; these embodiments are provided so that the disclosure will be understood thoroughly and completely.
It should be noted that when an element is said to be "fixed on" another element, it may be directly on the other element or intervening elements may be present. When an element is said to be "connected to" another element, it may be directly connected or intervening elements may be present. The terms "vertical", "horizontal", "left", "right" and similar expressions used herein are for the purpose of illustration only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which the invention belongs. The terms used in the description are for describing specific embodiments only and are not intended to limit the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Suppose texture videos of a left viewpoint and a right viewpoint with their corresponding depth images are given, together with an original video at an intermediate viewpoint between them. The texture videos and the depth videos of the left and right viewpoints may all contain distortion. Using depth-image-based rendering (DIBR), the texture videos of the left and right viewpoints and their respective depth images can generate a virtual viewpoint video at the intermediate viewpoint. Compared with the original intermediate-viewpoint video used as reference, the generated virtual viewpoint video contains various distortions.
The virtual viewpoint video quality evaluation method takes the original intermediate-viewpoint video as reference, computes the distortion of the virtual viewpoint video, and aims to make the computed distortion value consistent with the distortion perceived by the human visual system.
Fig. 1 shows the flow chart of the virtual viewpoint video quality evaluation method.
Since there are two classes of space-time domain unit division, an embodiment is given below for each class.
Embodiment 1:
A virtual viewpoint video quality evaluation method comprises the following steps.
Step s110: divide the original reference video and the video to be evaluated into space-time domain units.
Step s110 comprises:
1. dividing the original reference video and the video to be evaluated into image groups, each consisting of several temporally consecutive frames;
2. dividing each image in an image group into image blocks, temporally consecutive image blocks forming a space-time domain unit.
In this embodiment, a space-time domain unit consists of temporally consecutive image blocks at the same spatial position.
Specifically, the original reference video and the video to be evaluated must first be divided into space-time domain units; the process is illustrated in Fig. 2. The video sequence (both the original reference video and the video to be evaluated) is divided into image groups, each formed by several temporally consecutive frames. The choice of the image group length depends on the temporal fixation period of the human eye and on the frame rate of the video.
Suppose the temporal fixation period of the human eye is t seconds and the frame rate is n frames per second; the image group length is then n × t frames, i.e. n × t temporally adjacent frames form one image group. The image groups can be formed in sliding-window fashion, with overlap between groups, or without overlap between groups.
Assume an image group length of n × t frames. If every image is divided into blocks of width w and height h, a temporally consecutive group of image blocks within an image group constitutes a space-time domain unit.
As shown in Fig. 2, a first-class space-time domain unit is constructed from temporally consecutive image blocks at the same spatial position; each unit is a pixel cuboid of dimensions w, h and n × t. Suppose the video is 200 frames long, with a frame rate of 25 frames per second and a resolution of 1920x1080. The sequence is divided into 40 image groups, each consisting of 5 temporally consecutive frames: assuming a fixation period of 0.2 seconds and a frame rate of 25 frames per second, the image group length is 5 frames, i.e. 5 temporally adjacent frames form one image group.
For such an image group of 5 frames, with image width 1920 and height 1080, the pixels of the group form a space-time pixel cuboid of dimensions 1920, 1080 and 5. If each image is divided into blocks of width and height 16, the whole image group is partitioned into a series of space-time domain units, each a pixel cuboid of dimensions 16, 16 and 5. The space-time domain unit is the basic unit for computing the distortion of the virtual viewpoint video: the distortion computed for each unit is eventually integrated into the distortion of its image group, and the distortions of the image groups are finally integrated into the distortion of the whole video sequence.
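As an illustration of this partitioning, the following Python sketch divides a grayscale video into first-class space-time domain units; the (frames, height, width) array layout and the function name are assumptions of the sketch, not the patent's notation.

```python
import numpy as np

def split_into_cubes(video, gop_len=5, blk=16):
    """Partition a grayscale video into first-class space-time domain units:
    blk x blk x gop_len pixel cuboids taken at identical spatial positions in
    gop_len temporally consecutive frames (non-overlapping image groups)."""
    n_frames, h, w = video.shape
    cubes = []
    for t0 in range(0, n_frames - gop_len + 1, gop_len):   # one pass per image group
        for y0 in range(0, h - blk + 1, blk):
            for x0 in range(0, w - blk + 1, blk):          # edge remainders dropped
                cubes.append(video[t0:t0 + gop_len, y0:y0 + blk, x0:x0 + blk])
    return cubes

# Small demo: a 10-frame 64x64 video yields 2 image groups x 4 x 4 = 32 units.
units = split_into_cubes(np.zeros((10, 64, 64), dtype=np.uint8))
```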
Step s120: compute the first-class distortion according to the temporal flicker features of the original reference video and the video to be evaluated.
In this embodiment, the method comprises:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated;
judging from the first temporal gradient and the second temporal gradient whether each pixel exhibits temporal flicker distortion, and if so, computing the temporal flicker distortion strength.
Concretely, the temporal flicker distortion is computed from the sign difference and the absolute-value difference of the first temporal gradient and the second temporal gradient.
Specifically, the temporal flicker distortion is computed for each space-time domain unit separately; the flow chart is shown in Fig. 6. The space-time domain units at corresponding positions of the original reference video and the video to be evaluated (the virtual viewpoint video) are taken as input. Within a unit, each pixel $i(x,y,t)$ of the original reference video and each pixel $\tilde i(x,y,t)$ of the video to be evaluated has a three-dimensional coordinate $(x,y,t)$. For $i(x,y,t)$ and $\tilde i(x,y,t)$ the temporal gradients $\nabla i^{temporal}_{x,y,t}$ and $\nabla \tilde i^{temporal}_{x,y,t}$ are computed respectively. For the group of temporally adjacent pixels at spatial coordinate $(x,y)$ within the unit, the corresponding temporal flicker distortion $df_{x,y}$ is computed by Formula 1.
$df_{x,y} = \dfrac{\sum_{t=1}^{T} \varphi(x,y,t) \cdot \delta(x,y,t)}{T-1}$   (Formula 1)
Here $df_{x,y}$ denotes the temporal flicker distortion of the group of temporally adjacent pixels at spatial coordinate $(x,y)$, and $T$ is the length of the image group and of the space-time domain unit.
In this embodiment, the method further comprises: computing the temporal flicker distortion of each group of temporally adjacent pixels in a space-time domain unit of the video to be evaluated;
detecting with a first function whether the group of temporally adjacent pixels exhibits temporal flicker distortion;
if so, measuring the temporal flicker distortion strength of the group with a second function.
Specifically, the first function $\varphi(\cdot)$ detects whether the pixel at position $(x,y,t)$ exhibits temporal flicker distortion, and the second function $\delta(\cdot)$ measures the temporal flicker distortion strength of the pixel at $(x,y,t)$.
The first function $\varphi(\cdot)$ is given by Formula 2:
$\varphi(x,y,t) = \begin{cases} 1, & \text{if } \nabla i^{temporal}_{x,y,t} \times \nabla \tilde i^{temporal}_{x,y,t} < 0 \ \text{and} \ |i(x,y,t) - \tilde i(x,y,t)| > \rho \\ 0, & \text{otherwise} \end{cases}$   (Formula 2)
Here $\nabla i^{temporal}_{x,y,t} \times \nabla \tilde i^{temporal}_{x,y,t} < 0$ means that the temporal gradient of point $i(x,y,t)$ in the original reference video and that of point $\tilde i(x,y,t)$ in the video to be evaluated (the virtual viewpoint video) have opposite directions; a point satisfying this condition is considered to exhibit temporal flicker distortion. $\rho$ is a visual perception threshold, which can take the just noticeable distortion (JND) value of the pixel. Traditional pixel-domain JND threshold models cannot distinguish edge pixels from texture pixels. To keep the threshold sensitive to edge distortion while reducing its sensitivity to texture distortion, the edge and texture regions of the image are distinguished and the JND thresholds of the texture regions are subjected to a reduction process: an edge map is extracted from the original video image and divided into blocks of equal size, and when the number of edge pixels in a block exceeds a certain limit the block is considered a texture block.
In this embodiment, edge detection is first performed with the Canny operator to obtain the edge map, which is divided into 8x8 blocks. When a block contains more than 48 edge pixels, the edge pixels detected in that block are considered texture pixels. When computing the JND threshold, the threshold of a texture pixel is multiplied by 0.1.
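A minimal sketch of this masking step, assuming an 8-bit grayscale frame and a precomputed base JND map (the underlying JND model is not reproduced in the text above); the Canny thresholds of (100, 200) are illustrative.

```python
import cv2
import numpy as np

def texture_aware_jnd(frame_gray, base_jnd, blk=8, edge_thr=48, scale=0.1):
    """Scale down the JND threshold of texture pixels: Canny edge pixels that
    fall inside an 8x8 block containing more than 48 edge pixels are treated
    as texture pixels and their threshold is multiplied by 0.1."""
    edges = cv2.Canny(frame_gray, 100, 200) > 0
    jnd = base_jnd.astype(np.float64).copy()
    h, w = frame_gray.shape
    for y in range(0, h - blk + 1, blk):
        for x in range(0, w - blk + 1, blk):
            block = edges[y:y + blk, x:x + blk]
            if block.sum() > edge_thr:           # mostly "edges": a texture block
                jnd[y:y + blk, x:x + blk][block] *= scale
    return jnd
```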
The second function $\delta(\cdot)$ measures the flicker distortion strength of the pixel at position $(x,y,t)$ and is given by Formula 3:
$\delta_t(x,y) = \left( \dfrac{\nabla \tilde i^{temporal}_{x,y,t} - \nabla i^{temporal}_{x,y,t}}{\nabla \tilde i^{temporal}_{x,y,t}} \right)^2$   (Formula 3)
Here $\nabla \tilde i^{temporal}_{x,y,t} - \nabla i^{temporal}_{x,y,t}$ reflects the magnitude of the temporal gradient distortion, and the division by $\nabla \tilde i^{temporal}_{x,y,t}$ accounts for the masking effect. Since the precondition for $\varphi(\cdot)$ to be nonzero is that $\nabla i^{temporal}_{x,y,t}$ and $\nabla \tilde i^{temporal}_{x,y,t}$ have opposite directions, $\delta_t(x,y)$ is necessarily greater than 1 and proportional to the strength of the temporal flicker distortion. Let $df^{cube}$ denote the temporal flicker distortion of the whole space-time domain unit, where "cube" refers to the pixel cuboid forming the unit; $df^{cube}$ is computed by Formula 4.
$df^{cube} = \dfrac{\sum_{x=1}^{w} \sum_{y=1}^{h} df_{x,y}}{w \times h}$   (Formula 4)
Here $w$ and $h$ are the width and height of the space-time domain unit. After the temporal flicker distortion $df^{cube}$ of every unit in an image group has been obtained, the temporal flicker distortion $df^{gop}$ of the image group ("gop" meaning group of pictures) is obtained by integrating the $df^{cube}$ values according to Formula 5.
$df^{gop} = \dfrac{1}{n_w} \sum_{k \in W} df_k^{cube}$   (Formula 5)
Here $W$ is the set of the w% space-time domain units with the most severe temporal flicker distortion in the image group, and $n_w$ is the number of units in $W$. The purpose of this integration rule is that the most distorted partial regions of a video image dominate the human visual system's quality judgement of the whole image. In this embodiment, w% is 1%.
The step of computing the first-class distortion from the temporal flicker features comprises: taking the distortion of the series of space-time domain units with the largest temporal flicker distortion as the first-class distortion.
Finally, the temporal flicker distortion of the whole video sequence is obtained by integrating the image-group distortions $df^{gop}$: either by the worst-case integration criterion, or by averaging as in Formula 6.
$df^{seq} = \dfrac{1}{K} \sum_{m=1}^{K} df_m^{gop}$   (Formula 6)
Here $K$ is the number of image groups in the video sequence (40 in this embodiment), and "seq" stands for sequence.
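Formulas 1 to 5 can be sketched as below; the (T, h, w) array layout, the per-pixel threshold array standing in for $\rho$, and the guard against zero gradients are assumptions of the sketch rather than details fixed by the patent.

```python
import numpy as np

def flicker_distortion_cube(ref_cube, dis_cube, jnd_cube, eps=1e-6):
    """Formulas 1-4 for one first-class unit; cubes are (T, h, w) floats."""
    g_ref = np.diff(ref_cube, axis=0)              # temporal gradients (T-1 values)
    g_dis = np.diff(dis_cube, axis=0)
    # Formula 2: opposite gradient directions and a super-threshold difference.
    phi = (g_ref * g_dis < 0) & (np.abs(dis_cube - ref_cube)[1:] > jnd_cube[1:])
    # Formula 3: squared relative gradient error; the denominator models masking.
    safe = np.where(np.abs(g_dis) > eps, g_dis, eps)
    delta = ((g_dis - g_ref) / safe) ** 2
    t = ref_cube.shape[0]
    df_xy = (phi * delta).sum(axis=0) / (t - 1)    # Formula 1, per pixel column
    return df_xy.mean()                            # Formula 4: df_cube

def pool_worst(values, worst_pct=0.01):
    """Formula 5: mean over the worst w% of the units (w% = 1% above)."""
    ordered = np.sort(np.asarray(values))[::-1]
    n = max(1, int(np.ceil(worst_pct * len(ordered))))
    return ordered[:n].mean()

# df_gop = pool_worst([flicker_distortion_cube(r, d, j) for r, d, j in unit_triples])
# df_seq = np.mean(df_gops)   # Formula 6
```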
Step s130: compute the second-class distortion according to the space-time domain texture features of the original reference video and the video to be evaluated.
In this embodiment, the method further comprises:
computing the horizontal gradient and the vertical gradient of each pixel in the space-time domain unit;
computing the spatial gradient of each pixel in the space-time domain unit from the horizontal gradient and the vertical gradient;
computing the space-time domain texture distortion of the video to be evaluated from the difference between the spatial gradients of the original reference video and of the video to be evaluated.
Specifically, when computing the space-time domain texture distortion of the video to be evaluated (the virtual viewpoint video), the space-time domain unit is again the basic distortion computation unit; the flow is shown in Fig. 7. First, for each pixel $i(x,y,t)$ in the unit, its spatial gradient $\nabla i^{spatial}_{x,y,t}$ is computed. This first requires computing the horizontal and vertical gradients $\nabla i^{spatial\text{-}h}_{x,y,t}$ and $\nabla i^{spatial\text{-}v}_{x,y,t}$; template examples for the horizontal and vertical gradients are shown in Figs. 4 and 5. From the horizontal and vertical gradients, the spatial gradient is obtained by Formula 7.
$\nabla i^{spatial}_{x,y,t} = \sqrt{\left| \nabla i^{spatial\text{-}h}_{x,y,t} \right|^2 + \left| \nabla i^{spatial\text{-}v}_{x,y,t} \right|^2}$   (Formula 7)
After the spatial gradient of every pixel in the space-time domain unit has been computed, its mean $\overline{\nabla i^{spatial}_{cube}}$ and standard deviation $\sigma_{cube}$ are computed by Formulas 8 and 9 respectively.
$\overline{\nabla i^{spatial}_{cube}} = \dfrac{\sum_{x=1}^{w} \sum_{y=1}^{h} \sum_{t=1}^{l} \nabla i^{spatial}_{x,y,t}}{w \times h \times l}$   (Formula 8)
$\sigma_{cube} = \sqrt{\dfrac{1}{w \times h \times l} \sum_{x=1}^{w} \sum_{y=1}^{h} \sum_{t=1}^{l} \left( \nabla i^{spatial}_{x,y,t} - \overline{\nabla i^{spatial}_{cube}} \right)^2}$   (Formula 9)
Here $w$, $h$ and $l$ are the width, height and length of the space-time domain unit. Because flat-region pixels in a unit have no significant spatial gradient, the computed spatial gradient standard deviation may fail to accurately reflect the texture characteristics of the unit. A perception threshold $\varepsilon$ is therefore introduced: when $\sigma_{cube}$ is smaller than $\varepsilon$, $\sigma_{cube}$ is set to $\varepsilon$.
In this embodiment, the perception threshold $\varepsilon$ is linearly related to the sum $\eta$ of the positive values in the gradient computation template: $\varepsilon = \alpha \times \eta$. For the gradient template of Fig. 4, $\eta$ is 32; with the linear coefficient $\alpha$ = 5.62, $\varepsilon$ is 179.84.
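A sketch of Formulas 7 to 9 with the perception floor $\varepsilon$ follows. The gradient templates of Figs. 4 and 5 are not reproduced in this text, so 3x3 Sobel kernels stand in for them as an assumption; since the positive coefficients of Sobel sum to 4 rather than $\eta$ = 32, the floor $\varepsilon = \alpha \times \eta$ is rescaled accordingly in the sketch.

```python
import numpy as np
from scipy.signal import convolve2d

def spatial_gradient_std(cube, alpha=5.62, eta=4.0):
    """Formulas 7-9 for one (l, h, w) unit, with sigma floored at
    eps = alpha * eta (eta = positive-coefficient sum of the template;
    Sobel is a stand-in for the patent's Figs. 4-5 templates)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    grads = []
    for frame in cube.astype(np.float64):
        gh = convolve2d(frame, kx, mode="same")    # horizontal gradient
        gv = convolve2d(frame, kx.T, mode="same")  # vertical gradient
        grads.append(np.sqrt(gh ** 2 + gv ** 2))   # Formula 7
    sigma = np.stack(grads).std()                  # Formulas 8-9 combined
    return max(sigma, alpha * eta)                 # perception floor eps
```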
After the spatial gradient standard deviations $\sigma_{cube}$ and $\tilde\sigma_{cube}$ of the space-time domain units of the original reference video and of the video to be evaluated (the virtual viewpoint video) have been computed, their difference can be compared in various ways to measure the space-time domain texture distortion $da^{cube}$ of the video to be evaluated, for example by Formula 10.
$da^{cube} = \left| \log_{10} \left( \dfrac{\tilde\sigma_{cube}}{\sigma_{cube}} \right) \right|$   (Formula 10)
After the texture distortions $da^{cube}$ of all space-time domain units in an image group have been computed, they can be integrated into the texture distortion $da^{gop}$ of the image group; the integration criterion can again be that the worst regions determine the overall perceived quality, as in Formula 11:
$da^{gop} = \dfrac{1}{n_z} \sum_{k \in Z} da_k^{cube}$   (Formula 11)
Here $Z$ is the set formed by the worst z% of the $da^{cube}$ values in the image group, and $n_z$ is the number of units in $Z$.
The step of computing the second-class distortion from the space-time domain texture features comprises:
taking the distortion of the series of space-time domain units with the largest space-time domain texture distortion as the second-class distortion.
Finally, the space-time domain texture distortion of the whole sequence of the video to be evaluated (the virtual viewpoint video) can be obtained by averaging the texture distortions of the image groups, as in Formula 12, or by other integration criteria.
$da^{seq} = \dfrac{1}{K} \sum_{m=1}^{K} da_m^{gop}$   (Formula 12)
Step s140: integrate the first-class distortion and the second-class distortion into a total distortion and judge the quality of the video to be evaluated.
After the temporal flicker distortion and the space-time domain texture distortion of the video to be evaluated (the virtual viewpoint video) have been computed, they can be integrated into a total distortion. The integration rule is not restricted; for example, the total distortion $d$ can be obtained by Formula 13.
$d = da \times \log_{10}(1 + df)$   (Formula 13)
In this way the first-class distortion and the second-class distortion are integrated into the total distortion.
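A sketch of the texture-distortion and fusion formulas (Formulas 10 and 13); the numeric example in the comment is purely illustrative.

```python
import numpy as np

def texture_distortion_cube(sigma_ref, sigma_dis):
    """Formula 10: absolute log-ratio of the spatial-gradient standard
    deviations of the evaluated and reference space-time domain units."""
    return abs(np.log10(sigma_dis / sigma_ref))

def total_distortion(df_seq, da_seq):
    """Formula 13: fuse the flicker term df and the texture term da.
    The patent leaves the fusion rule open; this is one stated example."""
    return da_seq * np.log10(1.0 + df_seq)

# e.g. da_seq = 0.12, df_seq = 2.4  ->  d = 0.12 * log10(3.4) ~= 0.064
```

Since $\delta_t$ exceeds 1 wherever flicker is detected, $df$ can grow quickly on flickering content; one reading of the $\log_{10}(1+df)$ term is that it compresses this growth so the texture term sets the scale of the final score.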
Embodiment 2:
Step s110: divide the original reference video and the video to be evaluated into space-time domain units.
Step s110 comprises:
1. dividing the original reference video and the video to be evaluated into image groups, each consisting of several temporally consecutive frames;
2. dividing each image in an image group into image blocks, temporally consecutive image blocks forming a space-time domain unit.
In this embodiment, a space-time domain unit consists of temporally consecutive image blocks at different spatial positions that describe the motion trajectory of the same object.
Specifically, the original reference video and the video to be evaluated must first be divided into space-time domain units; the process is illustrated in Fig. 3. The video sequence (both the original reference video and the video to be evaluated) is divided into image groups, each formed by several temporally consecutive frames. The choice of the image group length depends on the temporal fixation period of the human eye and on the frame rate of the video.
Suppose the temporal fixation period of the human eye is t seconds and the frame rate is n frames per second; the image group length is then n × t frames, i.e. n × t temporally adjacent frames form one image group. The image groups can be formed in sliding-window fashion, with overlap between groups, or without overlap between groups.
Assume an image group length of n × t frames. If every image is divided into blocks of width w and height h, a temporally consecutive group of image blocks within an image group constitutes a space-time domain unit.
As shown in Fig. 3, a second-class space-time domain unit is constructed from temporally consecutive image blocks at different spatial positions along the motion trajectory of the same object; each unit comprises n × t blocks of width w and height h. The space-time domain unit is the basic unit for computing the distortion of the virtual viewpoint video: the distortion computed for each unit is eventually integrated into the distortion of its image group, and the distortions of the image groups are finally integrated into the distortion of the whole video sequence.
A second-class space-time domain unit is composed of image blocks along the same motion trajectory. To obtain the motion trajectory of an image block, block-based motion estimation algorithms can be used, such as full search or three-step search, with an example sketch given below. In addition, because the sequence may contain global motion produced by camera movement, a global motion vector needs to be estimated to correct the motion trajectory.
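For illustration, a minimal full-search block-matching routine of the kind referred to above could look as follows; the block size, search range and SAD matching criterion are assumptions of the sketch.

```python
import numpy as np

def full_search(cur, ref, y0, x0, blk=16, rng=8):
    """Find the displacement (dy, dx) that best matches the block at (y0, x0)
    of frame `cur` inside frame `ref`, minimising the sum of absolute
    differences (SAD) over a +/-rng search window."""
    block = cur[y0:y0 + blk, x0:x0 + blk].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + blk > ref.shape[0] or x + blk > ref.shape[1]:
                continue                       # candidate block leaves the frame
            sad = np.abs(block - ref[y:y + blk, x:x + blk].astype(np.int64)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```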
Camera motion includes rotation, panning and zooming, which can be described by an affine transform model, for example:
$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \theta_1 x + \theta_2 y + \theta_3 \\ \theta_4 x + \theta_5 y + \theta_6 \end{pmatrix}$   (Formula 14)
Here $(x,y)$ is a pixel position in the current image, $(x',y')$ is the estimated pixel position in the target image, and the vector $\theta = [\theta_1, \ldots, \theta_6]$ holds the affine transform model parameters. Subtracting $(x,y)$ from $(x',y')$ yields the global motion vector between the two images. The global motion vector is used to detect whether an image block being tracked has moved out of the picture, and thus to decide whether to include that block in the space-time domain unit formed along the motion trajectory.
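A sketch of Formula 14 in use: fitting $\theta$ by least squares from block correspondences is one plausible way to estimate the global motion, not a procedure mandated by the patent.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of theta_1..theta_6 of Formula 14 from (n, 2)
    arrays of corresponding (x, y) positions, e.g. block motion vectors."""
    x, y = src_pts[:, 0].astype(float), src_pts[:, 1].astype(float)
    a = np.stack([x, y, np.ones_like(x)], axis=1)
    tx, _, _, _ = np.linalg.lstsq(a, dst_pts[:, 0].astype(float), rcond=None)
    ty, _, _, _ = np.linalg.lstsq(a, dst_pts[:, 1].astype(float), rcond=None)
    return np.concatenate([tx, ty])            # [theta_1, ..., theta_6]

def global_motion_vector(theta, x, y):
    """(x', y') - (x, y): the global motion at (x, y) under Formula 14, used
    to correct trajectories and drop blocks that have left the picture."""
    xp = theta[0] * x + theta[1] * y + theta[2]
    yp = theta[3] * x + theta[4] * y + theta[5]
    return xp - x, yp - y
```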
Step s120: compute the first-class distortion according to the temporal flicker features of the original reference video and the video to be evaluated.
In this embodiment, the method comprises:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated;
judging from the first temporal gradient and the second temporal gradient whether each pixel exhibits temporal flicker distortion, and if so, computing the temporal flicker distortion strength.
Concretely, the temporal flicker distortion is computed from the sign difference and the absolute-value difference of the first temporal gradient and the second temporal gradient.
Specifically, the temporal flicker distortion is computed for each space-time domain unit separately; the flow chart is shown in Fig. 6. The space-time domain units at corresponding positions of the original reference video and the video to be evaluated (the virtual viewpoint video) are taken as input.
Suppose the coordinate of a pixel in the middle frame (frame $i$) of the image group in Fig. 6 is $(x_i, y_i)$. Within the space-time domain unit containing this pixel, the pixel coordinates along its motion trajectory are $[(x_{i-N}, y_{i-N}), \ldots, (x_i, y_i), \ldots, (x_{i+N}, y_{i+N})]$. The temporal flicker distortion $df_{x_i,y_i}$ at the frame-$i$ pixel position $(x_i, y_i)$ is computed by Formula 15.
$df_{x_i,y_i} = \dfrac{\sum_{n=i-N+1}^{i+N} \varphi(x_n, y_n, n) \cdot \delta(x_n, y_n, n)}{2N}$   (Formula 15)
Here $df_{x_i,y_i}$ is the temporal flicker distortion of the group of temporally adjacent pixels along the motion trajectory passing through the pixel at spatial coordinate $(x_i, y_i)$, and $2N+1$ is the length of the image group and of the space-time domain unit.
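Formula 15 for one tracked pixel can be sketched as follows; the trajectory representation, the scalar stand-in for the threshold $\rho$, and the zero-gradient guard are assumptions of this sketch.

```python
import numpy as np

def flicker_along_trajectory(ref, dis, traj, rho=3.0, eps=1e-6):
    """Formula 15: traj lists the (x, y) position of the tracked pixel in each
    of the 2N+1 frames of the unit; ref/dis are the (2N+1, h, w) frame stacks."""
    vals_r = np.array([ref[n, y, x] for n, (x, y) in enumerate(traj)], float)
    vals_d = np.array([dis[n, y, x] for n, (x, y) in enumerate(traj)], float)
    g_r, g_d = np.diff(vals_r), np.diff(vals_d)       # temporal gradients
    phi = (g_r * g_d < 0) & (np.abs(vals_d - vals_r)[1:] > rho)   # Formula 16
    safe = np.where(np.abs(g_d) > eps, g_d, eps)
    delta = ((g_d - g_r) / safe) ** 2                 # Formula 17
    return (phi * delta).sum() / (len(traj) - 1)      # divide by 2N
```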
In this embodiment, the method further comprises: computing the temporal flicker distortion of each group of temporally adjacent pixels in a space-time domain unit of the video to be evaluated;
detecting with a first function whether the group of temporally adjacent pixels exhibits temporal flicker distortion;
if so, measuring the temporal flicker distortion strength of the group with a second function.
Specifically, the first function $\varphi(\cdot)$ detects whether the pixel at position $(x,y,n)$ exhibits temporal flicker distortion, and the second function $\delta(\cdot)$ measures its temporal flicker distortion strength.
The first function $\varphi(\cdot)$ is given by Formula 16:
$\varphi(x,y,n) = \begin{cases} 1, & \text{if } \nabla i^{temporal}_{x,y,n} \times \nabla \tilde i^{temporal}_{x,y,n} < 0 \ \text{and} \ |i(x,y,n) - \tilde i(x,y,n)| > \rho \\ 0, & \text{otherwise} \end{cases}$   (Formula 16)
Here $\nabla i^{temporal}_{x,y,n} \times \nabla \tilde i^{temporal}_{x,y,n} < 0$ means that the temporal gradient of point $i(x,y,n)$ in the original reference video and that of point $\tilde i(x,y,n)$ in the video to be evaluated (the virtual viewpoint video) have opposite directions; a point satisfying this condition is considered to exhibit temporal flicker distortion. $\rho$ is the visual perception threshold; its value is chosen as in Embodiment 1.
The second function $\delta(\cdot)$ measures the flicker distortion strength of the pixel at position $(x,y,n)$ and is given by Formula 17:
$\delta_t(x,y,n) = \left( \dfrac{\nabla \tilde i^{temporal}_{x,y,n} - \nabla i^{temporal}_{x,y,n}}{\nabla \tilde i^{temporal}_{x,y,n}} \right)^2$   (Formula 17)
Here $\nabla \tilde i^{temporal}_{x,y,n} - \nabla i^{temporal}_{x,y,n}$ reflects the magnitude of the temporal gradient distortion, and the division by $\nabla \tilde i^{temporal}_{x,y,n}$ accounts for the masking effect. Since the precondition for $\varphi(\cdot)$ to be nonzero is that the two gradients have opposite directions, $\delta_t(x,y,n)$ is necessarily greater than 1 and proportional to the strength of the temporal flicker distortion. Let $df^{tube}$ denote the temporal flicker distortion of the whole space-time domain unit, where "tube" refers to the pipe-shaped unit formed by the pixels; $df^{tube}$ is computed by Formula 18.
$df^{tube} = \dfrac{\sum_{x=1}^{w} \sum_{y=1}^{h} df_{x,y}}{w \times h}$   (Formula 18)
Here $w$ and $h$ are the width and height of the space-time domain unit. After the temporal flicker distortion $df^{tube}$ of every unit in an image group has been obtained, the temporal flicker distortion $df^{gop}$ of the image group ("gop" meaning group of pictures) is obtained by integrating the $df^{tube}$ values according to Formula 19.
$df^{gop} = \dfrac{1}{n_w} \sum_{k \in W} df_k^{tube}$   (Formula 19)
Here $W$ is the set of the w% space-time domain units with the most severe temporal flicker distortion in the image group, and $n_w$ is the number of units in $W$. The purpose of this integration rule is that the most distorted partial regions of a video image dominate the human visual system's quality judgement of the whole image. In this embodiment, w% is 1%.
The step of computing the first-class distortion from the temporal flicker features comprises:
taking the distortion of the series of space-time domain units with the largest temporal flicker distortion as the first-class distortion.
Finally, the temporal flicker distortion of the whole video sequence is obtained by integrating the image-group distortions $df^{gop}$: either by the worst-case integration criterion, or by averaging as in Formula 20.
$df^{seq} = \dfrac{1}{K} \sum_{m=1}^{K} df_m^{gop}$   (Formula 20)
Here $K$ is the number of image groups in the video sequence (40 in this embodiment), and "seq" stands for sequence.
Step s130: compute the second-class distortion according to the space-time domain texture features of the original reference video and the video to be evaluated.
In this embodiment, the method further comprises:
computing the horizontal gradient and the vertical gradient of each pixel in the space-time domain unit;
computing the spatial gradient of each pixel in the space-time domain unit from the horizontal gradient and the vertical gradient;
computing the space-time domain texture distortion of the video to be evaluated from the difference between the spatial gradients of the original reference video and of the video to be evaluated.
Specifically, when computing the space-time domain texture distortion of the video to be evaluated (the virtual viewpoint video), the space-time domain unit is again the basic distortion computation unit; the flow is shown in Fig. 7. First, for each pixel $i(x,y)$ of frame $n$ in the unit, its spatial gradient $\nabla i^{spatial}_{x,y,n}$ is computed. This first requires computing the horizontal and vertical gradients $\nabla i^{spatial\text{-}h}_{x,y,n}$ and $\nabla i^{spatial\text{-}v}_{x,y,n}$; template examples are shown in Figs. 4 and 5. From the horizontal and vertical gradients, the spatial gradient is obtained by Formula 21.
$\nabla i^{spatial}_{x,y,n} = \sqrt{\left| \nabla i^{spatial\text{-}h}_{x,y,n} \right|^2 + \left| \nabla i^{spatial\text{-}v}_{x,y,n} \right|^2}$   (Formula 21)
As in Fig. 3, suppose the top-left pixel coordinate of one space-time domain unit in the middle frame (frame $i$) of the image group is $(x_i, y_i)$, and the pixel coordinates along the corresponding motion trajectory in the unit are $[(x_{i-N}, y_{i-N}), \ldots, (x_i, y_i), \ldots, (x_{i+N}, y_{i+N})]$. After the spatial gradient of every pixel in the unit has been computed, its mean $\overline{\nabla i^{spatial}_{tube}}$ and standard deviation $\sigma_{tube}$ are computed by Formulas 22 and 23 respectively.
$\overline{\nabla i^{spatial}_{tube}} = \dfrac{\sum_{n=i-N}^{i+N} \sum_{y=y_n}^{y_n+h} \sum_{x=x_n}^{x_n+w} \nabla i^{spatial}_{x,y,n}}{w \times h \times (2N+1)}$   (Formula 22)
$\sigma_{tube} = \sqrt{\dfrac{\sum_{n=i-N}^{i+N} \sum_{y=y_n}^{y_n+h} \sum_{x=x_n}^{x_n+w} \left( \nabla i^{spatial}_{x,y,n} - \overline{\nabla i^{spatial}_{tube}} \right)^2}{w \times h \times (2N+1)}}$   (Formula 23)
Here $w$ and $h$ are the width and height of the image blocks in the space-time domain unit, and $2N+1$ is the length of the image group and of the unit. Because flat-region pixels in a unit have no significant spatial gradient, the computed spatial gradient standard deviation may fail to accurately reflect the texture characteristics of the unit. A perception threshold $\varepsilon$ is therefore introduced: when $\sigma_{tube}$ is smaller than $\varepsilon$, $\sigma_{tube}$ is set to $\varepsilon$. The value of $\varepsilon$ is chosen as in Embodiment 1.
After the spatial gradient standard deviations $\sigma_{tube}$ and $\tilde\sigma_{tube}$ of the space-time domain units of the original reference video and of the video to be evaluated (the virtual viewpoint video) have been computed, their difference can be compared in various ways to measure the space-time domain texture distortion $da^{tube}$ of the video to be evaluated, for example by Formula 24.
$da^{tube} = \left| \log_{10} \left( \dfrac{\tilde\sigma_{tube}}{\sigma_{tube}} \right) \right|$   (Formula 24)
After the texture distortions $da^{tube}$ of all space-time domain units in an image group have been computed, they can be integrated into the texture distortion $da^{gop}$ of the image group; the integration criterion can again be that the worst regions determine the overall perceived quality, as in Formula 25:
$da^{gop} = \dfrac{1}{n_z} \sum_{k \in Z} da_k^{tube}$   (Formula 25)
Here $Z$ is the set formed by the worst z% of the $da^{tube}$ values in the image group.
The step of computing the second-class distortion from the space-time domain texture features comprises: taking the distortion of the series of space-time domain units with the largest space-time domain texture distortion as the second-class distortion.
Finally, the space-time domain texture distortion of the whole sequence of the video to be evaluated (the virtual viewpoint video) can be obtained by averaging the texture distortions of the image groups, as in Formula 26, or by other integration criteria.
$da^{seq} = \dfrac{1}{K} \sum_{m=1}^{K} da_m^{gop}$   (Formula 26)
Step s140: integrate the first-class distortion and the second-class distortion into a total distortion and judge the quality of the video to be evaluated.
After the temporal flicker distortion and the space-time domain texture distortion of the video to be evaluated (the virtual viewpoint video) have been computed, they can be integrated into a total distortion. The integration rule is not restricted; for example, the total distortion $d$ can be obtained by Formula 27.
$d = da \times \log_{10}(1 + df)$   (Formula 27)
In this way the first-class distortion and the second-class distortion are integrated into the total distortion.
Based on all of the above embodiments, the method further comprises: computing the temporal flicker distortion from the temporal gradient information of the pixels, wherein the temporal flicker distortion is proportional to the frequency and the amplitude of the direction changes of the pixel temporal gradients.
Based on all of the above embodiments, the method further comprises: computing space-time domain pixel feature statistics within each space-time domain unit, the statistics including the mean, variance and standard deviation of the pixel gradient information within the unit; the pixel gradient information includes the horizontal spatial gradient, vertical spatial gradient, spatial gradient, temporal gradient and space-time domain gradient of the pixels.
The method further comprises: setting a minimum perception threshold for the pixel feature statistics;
when a pixel feature statistic is smaller than the minimum perception threshold, replacing it with the minimum perception threshold;
when a pixel feature statistic is larger than the minimum perception threshold, leaving it unchanged.
Based on all of the above embodiments, the distortion of the virtual viewpoint video is computed over all pixels of a space-time domain unit taken together. The original reference video and the video to be evaluated (the virtual viewpoint video) are first divided into image groups, each consisting of several temporally consecutive frames, and each image group is further divided into space-time domain units. One class of unit consists of image blocks at the same spatial position; the other consists of image blocks at different spatial positions along the same motion trajectory.
Two classes of distortion are computed per space-time domain unit: one is the temporal flicker distortion of the virtual viewpoint video, the other is the space-time domain texture distortion introduced by the left and right viewpoint texture images. The main basis for computing the temporal flicker distortion is the temporal change of pixel luminance: when the direction of the temporal luminance gradient of a pixel in the video to be evaluated (the virtual viewpoint video) is opposite to that of the corresponding pixel in the original reference video, the pixel is considered to exhibit temporal flicker distortion, with a magnitude proportional to the temporal gradient error.
On the other hand, the change of the standard deviation of the pixel spatial gradients measures the space-time domain texture distortion introduced by the texture images of the virtual viewpoint video. Finally, the temporal flicker distortion and the space-time domain texture distortion of the units are integrated into the distortion of each image group, under the principle that the units of worst quality determine the perceived distortion of the whole image group, and the image-group distortions are integrated into the distortion of the whole video sequence. A virtual viewpoint video data set was built from the 3D video sequences provided by MPEG (Moving Picture Experts Group), viewers were organized to give subjective quality scores, and the proposed quality evaluation method was used to compute the distortion of the virtual viewpoint videos in the data set. The results reach a Spearman rank-order correlation coefficient (SROCC) of 0.867 and a Pearson linear correlation coefficient (PLCC) of 0.785 with the subjective scores, clearly higher than the prior art.
The method can be used in applications such as video coding algorithm optimization, 3D video content generation and post-processing.
By using the space-time domain unit as the basic unit of distortion computation, the method avoids the overestimation of virtual viewpoint video distortion caused by pixel-to-pixel distortion computation. Distortions such as constant displacement or camera random noise in a virtual viewpoint video are hard for the human eye to perceive, yet pixel-based quality criteria score them as severe degradation. The temporal flicker distortion and the space-time domain texture distortion are essentially computed from feature statistics over all pixels inside a unit, which makes them robust to constant displacement distortion and reduces the influence of factors such as camera random noise. Using the direction changes and amplitudes of the pixel temporal gradients as the effective features describing temporal flicker distortion therefore reflects the flicker phenomena in virtual viewpoint video accurately.
The simulation results obtained by evaluating the videos under test with the virtual viewpoint video quality evaluation method are as follows:
First, a virtual viewpoint video data set was established using the 10 video sequences provided by the Moving Picture Experts Group (MPEG). For each video sequence, a pair of left and right viewpoint videos was selected, and the texture images and depth images of the left and right viewpoint videos were compression-coded so as to introduce compression distortion. Each sequence generated 14 virtual viewpoint videos with different degrees of distortion in total, yielding a data set of 10 original reference videos and 140 virtual viewpoint videos. The second step of the experiment was to organize viewers to watch the videos in the data set and give subjective quality scores; 56 viewers in total took part in the subjective scoring.
Next, the virtual viewpoint video quality evaluation method was used to compute quality scores for the distorted videos in the data set, and the computed video quality scores were compared with the mean of the 56 viewers' subjective scores, mainly in terms of the Spearman rank-order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (PLCC). The higher the correlation coefficients, the more consistent the computed video quality scores are with the subjective quality scores given by the human visual system.
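Under the assumption that the scores are stored as plain arrays, the two correlation coefficients can be computed with SciPy as in this sketch:

```python
from scipy.stats import pearsonr, spearmanr

def agreement_with_subjective(objective_scores, mean_opinion_scores):
    """SROCC and PLCC between computed scores and viewers' mean scores."""
    srocc, _ = spearmanr(objective_scores, mean_opinion_scores)
    plcc, _ = pearsonr(objective_scores, mean_opinion_scores)
    return srocc, plcc
```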
Technical scheme 1 of the present application uses the first kind of space-time domain unit structure, and technical scheme 2 uses the second kind. The test results show that, over the whole data set, the final SROCC of technical scheme 1 is 0.845 and its PLCC is 0.773, while the final SROCC of technical scheme 2 is 0.867 and its PLCC is 0.785. For the virtual viewpoint videos in the data set with depth distortion only, technical scheme 1 achieves an SROCC of 0.790 and a PLCC of 0.763, and technical scheme 2 achieves an SROCC of 0.810 and a PLCC of 0.801. For the virtual viewpoint videos with texture distortion only, technical scheme 1 achieves an SROCC of 0.785 and a PLCC of 0.667, and technical scheme 2 achieves an SROCC of 0.828 and a PLCC of 0.673. For the virtual viewpoint videos with both texture distortion and depth distortion, technical scheme 1 achieves an SROCC of 0.854 and a PLCC of 0.808, and technical scheme 2 achieves an SROCC of 0.868 and a PLCC of 0.815. For comparison with the current state of the art, the following methods were also used to compute distortion and evaluate quality on the same virtual viewpoint video data set: PSNR (peak signal-to-noise ratio), SSIM (structural similarity index measurement), VQM (video quality model) and MOVIE (motion-based video integrity evaluation index).
The comparison of effectiveness between the virtual viewpoint video quality evaluation method and the currently existing mainstream techniques is shown in the table below:
As can be seen from the above table, the virtual viewpoint video quality evaluation method obtains results consistent with the subjective quality scores of the human visual system, and outperforms the existing technical schemes.
Based on all of the above embodiments, the size, shape and scope of the space-time domain unit are not fixed. Any set of adjacent pixels in the temporal, spatial or spatio-temporal domain may form a space-time domain unit, and all such variants fall within the scope of protection of the present invention.
The motion estimation method is not fixed either; it may be block-based motion estimation or pixel-based motion estimation. The global motion model is likewise not fixed, and may be a two-parameter translation model, a four-parameter geometric model, a six-parameter affine model, an eight-parameter perspective model, and so on.
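As a sketch of what such a global motion model looks like, the six-parameter affine case (with the two-parameter translation model as a special case) can be written as follows; the coordinate layout is an assumption of the example:

```python
import numpy as np

def affine_warp(points, params):
    """Six-parameter affine global motion model.
    points: (N, 2) array of (x, y); params: (a1, a2, a3, a4, a5, a6).
    x' = a1*x + a2*y + a3 ;  y' = a4*x + a5*y + a6
    The two-parameter translation model is the case a1 = a5 = 1, a2 = a4 = 0."""
    a1, a2, a3, a4, a5, a6 = params
    x, y = points[:, 0], points[:, 1]
    return np.stack([a1 * x + a2 * y + a3, a4 * x + a5 * y + a6], axis=1)
```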
The temporal flicker distortion takes the change of the temporal features of pixels as the basis for measuring temporal distortion. The temporal features are not limited to the temporal gradient; they may also be the mean brightness of pixels at the same position over a period of time, the variance of brightness, the distribution of brightness, and various other features.
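A sketch of such alternative temporal features for a single pixel position, assuming 8-bit brightness values and a (T, H, W) clip:

```python
import numpy as np

def temporal_features(clip, y, x):
    """Alternative temporal features of one pixel position over a clip (T, H, W)."""
    trace = clip[:, y, x].astype(np.float64)          # brightness over time
    return {
        "mean": trace.mean(),                         # mean brightness in the window
        "variance": trace.var(),                      # brightness variance
        "distribution": np.histogram(trace, bins=16, range=(0, 256))[0],
    }
```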
The computation method for the pixel temporal gradient in the virtual viewpoint video quality evaluation method is not fixed either. All methods that compute how a single pixel or a local group of pixels changes over time, whether between two consecutive frames or across multiple frames, belong to pixel temporal gradient computation and fall within the scope of the present invention.
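Two of the admissible variants, sketched with NumPy: a two-frame backward difference and a multi-frame central difference:

```python
import numpy as np

def temporal_gradient_two_frame(video):
    """Backward difference between consecutive frames; video: (T, H, W)."""
    return np.diff(video.astype(np.float64), axis=0)

def temporal_gradient_multi_frame(video):
    """Central difference over the previous and next frame (multi-frame variant)."""
    return np.gradient(video.astype(np.float64), axis=0)
```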
The space-time domain texture distortion in the virtual viewpoint video quality evaluation method computes statistical information from the features of the pixels within a space-time domain unit. The pixel features are not limited to pixel gradients; using edge detection information, gradient direction information, brightness values, color values, chrominance differences and other features also falls within the scope of the present invention.
Likewise, the statistical information computed from these pixel features is not limited to variance or standard deviation; using mean values, probability distribution coefficients and other statistics also falls within the scope of the present invention.
The computation templates for the pixel horizontal gradient and vertical gradient in the virtual viewpoint video quality evaluation method are not limited to those of Fig. 4 and Fig. 5; all templates that compute pixel horizontal and vertical gradients are applicable to the horizontal and vertical gradient computation in the technical scheme of the present invention.
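Since Fig. 4 and Fig. 5 are not reproduced here, the sketch below uses the well-known Sobel templates purely as stand-ins; any horizontal/vertical gradient template would serve the scheme equally well:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel templates stand in for the unreproduced Fig. 4 / Fig. 5 templates.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # horizontal
KY = KX.T                                                              # vertical

def spatial_gradient_magnitude(frame):
    gx = convolve(frame.astype(np.float64), KX)   # horizontal gradient
    gy = convolve(frame.astype(np.float64), KY)   # vertical gradient
    return np.hypot(gx, gy)                       # per-pixel spatial gradient
```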
The above virtual viewpoint video quality evaluation method counts the temporal flicker distortion of the virtual viewpoint video in units of all the pixels within a space-time domain unit, thereby avoiding the erroneous estimation of the human-perceived distortion of the virtual viewpoint video caused by a pixel-to-pixel temporal flicker distortion computation pattern. When computing the temporal flicker distortion of the virtual viewpoint video, both the distortion brought about by depth map errors and the distortion introduced by the left and right viewpoint texture images are considered. The above method can effectively assess the temporal flicker distortion that strongly affects the subjective quality of the virtual viewpoint video, so that the evaluation of virtual viewpoint video quality better conforms to the result of human subjective perception, making virtual viewpoint video quality evaluation more accurate and comprehensive.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combination of these technical features contains no contradiction, it should be considered as within the scope recorded in this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, several variations and improvements may also be made without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention shall be defined by the appended claims.

Claims (10)

1. A virtual viewpoint video quality evaluation method, comprising the following steps:
dividing an original reference video and a video to be evaluated into space-time domain units respectively;
computing a first kind of distortion according to temporal flicker features of the original reference video and the video to be evaluated;
computing a second kind of distortion according to space-time domain texture features of the original reference video and the video to be evaluated; and
integrating the first kind of distortion with the second kind of distortion to obtain a total distortion, and judging the quality of the video to be evaluated according to the total distortion.
2. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of dividing the original reference video and the video to be evaluated into space-time domain units respectively comprises:
dividing the original reference video and the video to be evaluated respectively into image groups each composed of several temporally consecutive frames; and
dividing each image in an image group into several image blocks, several temporally consecutive image blocks composing a space-time domain unit.
3. The virtual viewpoint video quality evaluation method according to claim 2, wherein the space-time domain unit is composed of several temporally consecutive image blocks at the same spatial position; or
is composed of several temporally consecutive image blocks at different spatial positions that describe the same motion trajectory of an object.
4. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of computing the temporal flicker distortion of each space-time domain unit comprises:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated; and
judging whether each pixel exhibits temporal flicker distortion according to the first temporal gradient and the second temporal gradient, and if so, computing the temporal flicker distortion intensity.
5. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of computing the temporal flicker distortion of each space-time domain unit further comprises: computing the temporal flicker distortion according to pixel temporal gradient information, wherein the temporal flicker distortion is proportional to the direction change frequency and the change magnitude of the pixel temporal gradient.
6. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of computing the temporal flicker distortion of each space-time domain unit further comprises: computing the temporal flicker distortion corresponding to a cluster of temporally adjacent pixels in a space-time domain unit of the video to be evaluated;
detecting, according to a first function, whether temporal flicker distortion exists in the cluster of temporally adjacent pixels; and
if so, detecting the temporal flicker distortion intensity in the cluster of temporally adjacent pixels according to a second function.
7. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of computing the space-time domain texture distortion of each space-time domain unit comprises:
computing the horizontal gradient and the vertical gradient of each pixel in the space-time domain unit;
computing the spatial gradient of each pixel in the space-time domain unit according to the horizontal gradient and the vertical gradient; and
computing the space-time domain texture distortion of the video to be evaluated according to the spatial gradient difference between the space-time domain units of the original reference video and the video to be evaluated.
8. The virtual viewpoint video quality evaluation method according to claim 1, further comprising the step of: computing pixel feature statistical information within the space-time domain unit, wherein the pixel feature statistical information comprises the mean, variance and standard deviation of the pixel gradient information within the space-time domain unit, and the pixel gradient information comprises the spatial horizontal gradient, spatial vertical gradient, spatial gradient, temporal gradient and space-time domain gradient of the pixels.
9. The virtual viewpoint video quality evaluation method according to claim 8, further comprising the steps of: setting a minimum perception threshold for the pixel feature statistical information;
replacing the pixel feature statistical value with the minimum perception threshold when the pixel feature statistical value is less than the minimum perception threshold; and
keeping the pixel feature statistical information unchanged when the pixel feature statistical value is greater than the minimum perception threshold.
10. The virtual viewpoint video quality evaluation method according to claim 1, wherein the steps of computing the first kind of distortion according to the temporal flicker features and computing the second kind of distortion according to the space-time domain texture features comprise: taking the distortions of a series of space-time domain units with the largest temporal flicker distortion among the space-time domain units as the first kind of distortion; and
taking the distortions of a series of space-time domain units with the largest space-time domain texture distortion among the space-time domain units as the second kind of distortion.
CN201510395100.3A 2015-07-07 2015-07-07 Virtual viewpoint video quality evaluation method Active CN106341677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510395100.3A CN106341677B (en) 2015-07-07 2015-07-07 Virtual viewpoint video quality evaluation method

Publications (2)

Publication Number Publication Date
CN106341677A true CN106341677A (en) 2017-01-18
CN106341677B CN106341677B (en) 2018-04-20

Family

ID=57826441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510395100.3A Active CN106341677B (en) Virtual viewpoint video quality evaluation method

Country Status (1)

Country Link
CN (1) CN106341677B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108600745B (en) * 2018-08-06 2020-02-18 北京理工大学 Video quality evaluation method based on time-space domain slice multi-map configuration

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742355A (en) * 2009-12-24 2010-06-16 厦门大学 Method for partial reference evaluation of wireless videos based on space-time domain feature extraction
CN103391450A (en) * 2013-07-12 2013-11-13 福州大学 Spatio-temporal union reference-free video quality detecting method
CN104023225A (en) * 2014-05-28 2014-09-03 北京邮电大学 No-reference video quality evaluation method based on space-time domain natural scene statistics characteristics
CN104023226A (en) * 2014-05-28 2014-09-03 北京邮电大学 HVS-based novel video quality evaluation method
CN104023227A (en) * 2014-05-28 2014-09-03 宁波大学 Objective video quality evaluation method based on space domain and time domain structural similarities
CN104243970A (en) * 2013-11-14 2014-12-24 同济大学 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN104394403A (en) * 2014-11-04 2015-03-04 宁波大学 A compression-distortion-oriented stereoscopic video quality objective evaluating method
CN104754322A (en) * 2013-12-27 2015-07-01 华为技术有限公司 Stereoscopic video comfort evaluation method and device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106973281A (en) * 2017-01-19 2017-07-21 宁波大学 A kind of virtual view video quality Forecasting Methodology
CN106973281B (en) * 2017-01-19 2018-12-07 宁波大学 A kind of virtual view video quality prediction technique
CN107147906A (en) * 2017-06-12 2017-09-08 中国矿业大学 A kind of virtual perspective synthetic video quality without referring to evaluation method
CN107147906B (en) * 2017-06-12 2019-04-02 中国矿业大学 A kind of virtual perspective synthetic video quality without reference evaluation method
CN108156451A (en) * 2017-12-11 2018-06-12 江苏东大金智信息系统有限公司 A kind of 3-D view/video without reference mass appraisal procedure
CN110401832A (en) * 2019-07-19 2019-11-01 南京航空航天大学 A kind of panoramic video objective quality assessment method based on space-time model building for pipeline
CN110401832B (en) * 2019-07-19 2020-11-03 南京航空航天大学 Panoramic video objective quality assessment method based on space-time pipeline modeling
US11310475B2 (en) 2019-08-05 2022-04-19 City University Of Hong Kong Video quality determination system and method
CN110636282A (en) * 2019-09-24 2019-12-31 宁波大学 No-reference asymmetric virtual viewpoint three-dimensional video quality evaluation method
CN113014918A (en) * 2021-03-03 2021-06-22 重庆理工大学 Virtual viewpoint image quality evaluation method based on skewness and structural features
CN113014918B (en) * 2021-03-03 2022-09-02 重庆理工大学 Virtual viewpoint image quality evaluation method based on skewness and structural features
CN113793307A (en) * 2021-08-23 2021-12-14 上海派影医疗科技有限公司 Automatic labeling method and system suitable for multi-type pathological images

Also Published As

Publication number Publication date
CN106341677B (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN106341677A (en) Virtual viewpoint video quality evaluation method
CN100559881C (en) A kind of method for evaluating video quality based on artificial neural net
CN104079925B (en) Ultra high-definition video image quality method for objectively evaluating based on vision perception characteristic
CN102333233B (en) Stereo image quality objective evaluation method based on visual perception
CN103763552B (en) Stereoscopic image non-reference quality evaluation method based on visual perception characteristics
CN102523477B (en) Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model
CN102075786B (en) Method for objectively evaluating image quality
CN105338343A (en) No-reference stereo image quality evaluation method based on binocular perception
CN104869421B (en) Saliency detection method based on overall motion estimation
CN104394403B (en) A kind of stereoscopic video quality method for objectively evaluating towards compression artefacts
CN102595185A (en) Stereo image quality objective evaluation method
CN101146226A (en) A highly-clear video image quality evaluation method and device based on self-adapted ST area
CN108109147A (en) A kind of reference-free quality evaluation method of blurred picture
CN104754322A (en) Stereoscopic video comfort evaluation method and device
CN102263985A (en) Quality evaluation method, device and system of stereographic projection device
CN104574424B (en) Based on the nothing reference image blur evaluation method of multiresolution DCT edge gradient statistics
Tsai et al. Quality assessment of 3D synthesized views with depth map distortion
CN102843572A (en) Phase-based stereo image quality objective evaluation method
CN104767993A (en) Stereoscopic video objective quality evaluation method based on quality lowering time domain weighting
Jin et al. Validation of a new full reference metric for quality assessment of mobile 3DTV content
CN108513132A (en) A kind of method for evaluating video quality and device
CN108848365B (en) A kind of reorientation stereo image quality evaluation method
CN102685547B (en) Low-bit-rate video quality detection method based on blocking effects and noises
CN105809691A (en) Full-reference screen image quality evaluation method
CN105430397B (en) A kind of 3D rendering Quality of experience Forecasting Methodology and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant