CN103841410B - Half-reference video QoE objective evaluation method based on image feature information - Google Patents


Info

Publication number
CN103841410B
CN103841410B (application CN201410079834.6A)
Authority
CN
China
Prior art keywords
video
saliency
impaired
texture information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410079834.6A
Other languages
Chinese (zh)
Other versions
CN103841410A (en)
Inventor
李文璟
喻鹏
罗千
耿杨
嵇华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201410079834.6A priority Critical patent/CN103841410B/en
Publication of CN103841410A publication Critical patent/CN103841410A/en
Application granted granted Critical
Publication of CN103841410B publication Critical patent/CN103841410B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a half-reference video QoE objective evaluation method based on image feature information. The method comprises: the operator side extracts the saliency information map and the texture information map of each frame image of the original video, and compresses the saliency information maps and texture information maps to obtain the half-reference data of the original video; the user side receives the half-reference data of the original video transmitted by the operator side, together with the impaired video, extracts the saliency information map and texture information map of each frame image of the impaired video to obtain the half-reference data of the impaired video, computes the impairment of the impaired video from the half-reference data of the original and impaired videos, and uses a pre-trained neural network algorithm to assess the subjective quality MOS.

Description

Half-reference video QoE objective evaluation method based on image feature information
Technical field
The present invention relates to the field of communication technology, and in particular to a half-reference video QoE objective evaluation method based on image feature information.
Background technology
With the spread of wireless networks and high-speed broadband access, real-time video services are developing rapidly. QoE (Quality of Experience) indices reflect the service quality of real-time video traffic. An objective method for assessing the QoE quality of real-time video traffic (also called an objective QoE evaluation method) estimates the subjective score from specific objective service-quality indices. Objective QoE evaluation methods can be divided into three classes according to their use of the original video data: full-reference (requiring all of the original data), half-reference (requiring part of the original data) and no-reference (requiring none of the original data).
Most existing objective QoE evaluation methods are full-reference or no-reference. Full-reference methods obtain the most accurate assessment results but are difficult to deploy; no-reference methods are easy to deploy but are usually applicable only to specific impairment scenarios; half-reference methods can strike a better balance between the two, but mature schemes are lacking.
The problem with the prior art is that both full-reference and no-reference methods have practical limitations, and the prior art does not provide tested lateral comparisons of assessment accuracy.
Summary of the invention
The technical problem to be solved by the present invention is that the prior art has practical limitations and lacks tested lateral comparisons of assessment accuracy.
To this end, the present invention proposes a half-reference video QoE objective evaluation method based on image feature information. The method comprises:
the operator side extracts the saliency information map and the texture information map of each frame image of the original video, and compresses the saliency information maps and texture information maps to obtain the half-reference data of the original video;
the user side receives the half-reference data of the original video transmitted by the operator side, together with the impaired video; extracts the saliency information map and texture information map of each frame image of the impaired video to obtain the half-reference data of the impaired video; computes the impairment of the impaired video from the half-reference data of the original and impaired videos; and uses a pre-trained neural network algorithm to assess the subjective quality MOS, wherein the impaired video is the original video of the operator side transmitted over a lossy channel.
Wherein, the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
Wherein, the saliency information map comprises weighted intensity, color, orientation and skin-color components.
Wherein, the texture information map comprises a temporal texture information map and a spatial texture information map.
Wherein, the extraction of the texture information map of each frame image comprises: edge extraction, morphological dilation and overlaying;
wherein the edge extraction comprises: extracting the edge information image of the current frame image;
the morphological dilation comprises: performing morphological dilation on the edge information image of the current frame image to obtain the processed edge information image;
the overlaying comprises: overlaying the processed edge information image with the current frame image to obtain the texture information map of the current frame image.
Wherein, the texture information map of each frame image comprises: the texture information map of each frame image of the original video at the operator side, and the texture information map of each frame image of the impaired video at the user side.
Wherein, the compression comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency sub-bands;
computing the histogram of every high-frequency sub-band;
fitting the histograms of all the high-frequency sub-bands with a generalized Gaussian distribution (GGD) and computing the fitting error.
Wherein, the half-reference data comprises: the spatial saliency impairment, the temporal saliency impairment, the spatial texture impairment and the temporal texture impairment.
Wherein, computing the impairment of the impaired video from the half-reference data of the original and impaired videos comprises: computing, by relative entropy, the impairment of the impaired video from the half-reference data of the original and impaired videos.
Compared with the prior art, the beneficial effects of the method provided by the present invention are: the objective QoE quality assessment method for real-time video traffic provided by the present invention is insensitive to the type of video impairment, and fairly accurate assessment results can be obtained even for impaired videos with different causes; the present invention is insensitive to the underlying transport network and can perform objective quality assessment of real-time video traffic in many practical scenarios (including LAN, WAN and wireless environments); and the present invention is easy to deploy and implement, since all module functions can be realized in software, while hardware implementations can be considered to speed up processing if there are special requirements.
Brief description of the drawings
To explain the technical schemes of the embodiments of the present invention and of the prior art more clearly, the drawings needed in their description are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative work.
Fig. 1 shows the flow chart of the half-reference video QoE objective evaluation method based on image feature information;
Fig. 2 shows the results of assessing the LVQD database in Embodiment 3.
Detailed description of the invention
To make the objects, technical schemes and advantages of the embodiments of the present invention clearer, the technical schemes in the embodiments are described clearly below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Embodiment 1:
This embodiment discloses a half-reference video QoE objective evaluation method based on image feature information. The method comprises:
the operator side extracts the saliency information map and the texture information map of each frame image of the original video, and compresses the saliency information maps and texture information maps to obtain the half-reference data of the original video;
the user side receives the half-reference data of the original video transmitted by the operator side, together with the impaired video; extracts the saliency information map and texture information map of each frame image of the impaired video to obtain the half-reference data of the impaired video; computes the impairment of the impaired video from the half-reference data of the original and impaired videos; and uses a pre-trained neural network algorithm to assess the subjective quality MOS, wherein the impaired video is the original video of the operator side transmitted over a lossy channel.
Wherein, the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
Wherein, the saliency information map comprises weighted intensity, color, orientation and skin-color components; the skin-color component weight is set to 2 and the remaining component weights to 1, and the weights may also be adjusted according to the actual situation.
Wherein, the texture information map comprises a temporal texture information map and a spatial texture information map.
Wherein, the extraction of the texture information map of each frame image comprises: edge extraction, morphological dilation and overlaying;
wherein the edge extraction comprises: extracting the edge information image of the current frame image;
the morphological dilation comprises: performing morphological dilation on the edge information image of the current frame image to obtain the processed edge information image;
the overlaying comprises: overlaying the processed edge information image with the current frame image to obtain the texture information map of the current frame image.
Wherein, the texture information map of each frame image comprises: the texture information map of each frame image of the original video at the operator side, and the texture information map of each frame image of the impaired video at the user side.
At the operator side, the compression comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency sub-bands;
computing the histogram of every high-frequency sub-band;
fitting the histograms of all the high-frequency sub-bands with a generalized Gaussian distribution (GGD) and computing the fitting error.
Wherein, the half-reference data comprises: the spatial saliency impairment, the temporal saliency impairment, the spatial texture impairment and the temporal texture impairment.
Wherein, computing the impairment of the impaired video from the half-reference data of the original and impaired videos comprises: computing, by relative entropy, the impairment of the impaired video from the half-reference data of the original and impaired videos.
Embodiment 2:
This embodiment discloses a half-reference video QoE objective evaluation method based on image feature information. As shown in Fig. 1, the half-reference objective QoE quality assessment of real-time video traffic according to the present invention is divided into 11 steps, split between the operator side and the user side: the operator side comprises 5 steps and the user side comprises 6 steps. The overall flow chart is shown in Fig. 1, and each step is introduced below.
Operator side:
101) Extract the saliency information of each frame of the original video. Saliency describes the regions of a picture that attract attention most strongly. First, saliency components are built from the four aspects of intensity, color, orientation and skin color; the four components are then merged into a single saliency map according to different weights. Before the skin-color saliency component is computed, face recognition is applied to detect whether a portrait is actually present. The resulting saliency map assigns a saliency value to every pixel of the original image; the higher the value, the more the pixel's region attracts the human eye.
The concrete computation is as follows. First, a Gaussian pyramid with 9 scales is built for each input frame; let the center scale be c ∈ {2, 3, 4} and the surround scale s = c + δ with δ ∈ {3, 4}. The across-scale difference ⊖ between two images of different scales is defined as: interpolate the coarse-scale image to the fine scale, then subtract pixel by pixel. Let r, g and b be the three primary color components of the original image; the intensity image I is I = (r + g + b)/3. In addition, define the broadly tuned color channels R = r − (g + b)/2, G = g − (r + b)/2, B = b − (r + g)/2 and Y = (r + g)/2 − |r − g|/2 − b. The I, R, G, B and Y above are all multi-scale. The intensity feature images are defined as:

I(c, s) = |I(c) ⊖ I(s)|    (1)

The color feature images are defined as:

RG(c, s) = |(R(c) − G(c)) ⊖ (G(s) − R(s))|    (2)
BY(c, s) = |(B(c) − Y(c)) ⊖ (Y(s) − B(s))|    (3)
Next, Gabor filters in the orientations θ ∈ {0°, 45°, 90°, 135°} are applied to the intensity image I of each scale, giving the Gabor pyramids O(σ, θ), and the orientation feature images are defined as:

O(c, s, θ) = |O(c, θ) ⊖ O(s, θ)|    (4)
To obtain each saliency component, the feature images above must be merged. The merge operation ⊕ is defined as: for input images of two different scales, scale both to scale 4 and add them pixel by pixel. The saliency components of the three aspects of intensity, color and orientation are then computed by formulas (5) to (7) respectively, where N(·) is the normalization operator:

Ī = ⊕_c ⊕_s N(I(c, s))    (5)
C̄ = ⊕_c ⊕_s [N(RG(c, s)) + N(BY(c, s))]    (6)
Ō = Σ_θ N(⊕_c ⊕_s N(O(c, s, θ)))    (7)
In addition, if a face is detected, the skin-color saliency component must additionally be computed, by assigning each pixel a weight according to the closeness of its value to skin color in the original image. Finally, the saliency components are merged according to the predefined weights W_i, giving the single-frame saliency image:

S = Σ_i W_i · S̄_i    (8)
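The channel definitions and the weighted merge above can be sketched in Python (an illustrative sketch only, not part of the claimed method: the pyramid, Gabor and normalization steps are omitted, and the function names are ours):

```python
import numpy as np

def broad_tuned_channels(img):
    """Intensity and broadly tuned color channels of one RGB frame
    (float array, shape (H, W, 3)), as defined in the description:
    I = (r+g+b)/3, R = r-(g+b)/2, G = g-(r+b)/2, B = b-(r+g)/2,
    Y = (r+g)/2 - |r-g|/2 - b."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    I = (r + g + b) / 3.0
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b
    return I, R, G, B, Y

def merge_components(components, weights):
    """Weighted merge of per-aspect saliency components into one saliency
    map (the skin-color weight defaults to 2 elsewhere in the text)."""
    total = sum(w * c for w, c in zip(weights, components))
    return total / float(sum(weights))
```

A full implementation would compute the components on the 9-scale pyramid and normalize each map with N(·) before merging.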
The saliency is computed for each frame of the video, and the original frame is weighted with the obtained saliency image, giving the weighted saliency information of the image. This step is computed by formula (9), where P denotes the original video, S_P(i) denotes the saliency image extracted from frame i of the original video, the multiplication is performed pixel by pixel over the two images, i denotes a frame, F denotes the set of frames, and SWS denotes the spatial saliency-weighted image, the letters standing for Saliency, Weighted and Spatial.

SWS_P = { SWS_P(i) | SWS_P(i) = S_P(i) · Original_Video(i), i ∈ F }    (9)
102) Extract the texture information of each frame of the original video. Texture information assesses the sensitivity of each region to impairment according to the edges in the image. This rests on the following assumption: an impairment appearing in a texture-rich region of the image is harder to perceive than one appearing in a smooth, evenly toned region. In this scheme, a Laplacian-of-Gaussian filter first extracts the edge information from the original image; the edge image is then morphologically dilated and every pixel value is inverted to obtain the texture map; finally the original frame is likewise weighted with this texture map, giving the texture information of the image. The texture-rich regions are covered by the dilated edges, so that impairments in the smooth-toned parts of the image receive heavier weight in the assessment.
The concrete computation is as follows. First, the Laplacian-of-Gaussian filter used for edge extraction applies Gaussian filtering to the image and then the Laplace operator to obtain the edge image. The Gaussian filtering is given by formula (10).

L(x, y; t) = g(x, y; t) * f(x, y)    (10)

where L is the filtering result, f(x, y) is the pixel value of a video frame at (x, y), and g(x, y; t) is the Gaussian function described by formula (11), in which t is the filter scale.

g(x, y; t) = (1 / (2πt)) · e^(−(x² + y²)/(2t))    (11)
The Laplace operator is given by formula (12).

∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)    (12)
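A minimal sketch of the discrete Laplace operator of formula (12), with zero padding assumed at the image borders (the function name is ours):

```python
import numpy as np

def discrete_laplacian(f):
    """Discrete Laplace operator of formula (12), zero-padded borders:
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)."""
    p = np.pad(np.asarray(f, dtype=float), 1)
    return (p[2:, 1:-1] + p[:-2, 1:-1] +
            p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1])
```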
The morphological dilation is given by formula (13), where A is the input image and B is a rectangular structuring element of width and height 8.

dilation(A, B) = { a + b | a ∈ A, b ∈ B }    (13)
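On a binary edge image, formula (13) amounts to a running maximum over the footprint of the structuring element; a sketch with a configurable square element (the patent uses an 8×8 rectangle; the function name is ours):

```python
import numpy as np

def dilate(mask, k=8):
    """Morphological dilation of a binary image with a k-by-k square
    structuring element (formula (13)), implemented as a running
    maximum over all k*k shifts of the padded image."""
    mask = np.asarray(mask)
    H, W = mask.shape
    pad = (k // 2, k - 1 - k // 2)
    p = np.pad(mask, (pad, pad))
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + H, dx:dx + W])
    return out
```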
In summary, the computation of the single-frame texture image is given by formula (14).

The texture is computed for each frame of the video, and the original frame is weighted with the obtained texture image, giving the weighted texture information of the image: the regions most critical for impairment assessment are all highlighted, while the remaining regions appear in subdued gray tones. This step is computed by formula (15), where T_P(i) denotes the texture image extracted from each frame of the original video and TWS denotes the spatial texture-weighted image, the three letters standing for Texture, Weighted and Spatial.
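The inversion-and-weighting described in 102) can be sketched as follows (an illustrative stand-in for formulas (14) and (15); the edge mask is assumed to be in [0, 1], and the patent's exact normalization is not reproduced):

```python
import numpy as np

def texture_weight(frame, edge_mask):
    """Invert the dilated edge mask (texture-rich areas get low weight)
    and multiply the frame by it pixel-wise, so that smooth-toned
    regions dominate the impairment assessment."""
    w = 1.0 - np.asarray(edge_mask, dtype=float)   # pixel-value inversion
    return np.asarray(frame, dtype=float) * w
```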
103) This step takes the per-frame saliency and texture information computed in 101) and 102) as input. When assessing the QoE of a video service, both spatial and temporal impairments matter greatly to subjective perception. This scheme therefore computes the corresponding spatial and temporal impairment values for both the saliency information and the texture information. Specifically, the spatial assessment directly uses the per-frame saliency and texture results of 101) and 102), while the temporal assessment uses the saliency and texture differences between consecutive frames. For each video, the impairment therefore needs to be assessed from 4 aspects in total. This step is computed by formulas (16) and (17), where SWT denotes the temporal saliency-weighted image (Saliency, Weighted, Temporal) and TWT denotes the temporal texture-weighted image (Texture, Weighted, Temporal).

SWT_P = { SWT_P(i) | SWT_P(i) = abs(SWS_P(i) − SWS_P(i−1)), i ∈ F }    (16)

TWT_P = { TWT_P(i) | TWT_P(i) = abs(TWS_P(i) − TWS_P(i−1)), i ∈ F }    (17)
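Formulas (16) and (17) are the same operation applied to the two spatial map sequences; a minimal sketch (the function name is ours):

```python
import numpy as np

def temporal_maps(spatial_maps):
    """Formulas (16)-(17): the temporal map at frame i is the absolute
    pixel-wise difference between the spatial maps of frames i and i-1."""
    return [np.abs(spatial_maps[i] - spatial_maps[i - 1])
            for i in range(1, len(spatial_maps))]
```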
104) The spatial and temporal saliency and texture data obtained in 103) are too voluminous to transmit directly over the network and must therefore be compressed. Step 104) describes the wavelet transform. The information of the 4 aspects obtained for each frame of the original video is decomposed with a Steerable Pyramid with scale parameter 3 and 2 orientations, giving a total of 6 different high-frequency sub-bands for assessing image impairment. Subsequently, the histogram of each high-frequency sub-band is computed, giving the description P of the original video. This step is computed by formula (18), where Wavelet denotes the aforementioned wavelet transform and Hist the computation of the histogram of each high-frequency sub-band.

P = Hist(Wavelet(SWS_P, TWS_P, SWT_P, TWT_P))    (18)
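The decomposition-plus-histogram flow of formula (18) can be illustrated with a toy stand-in for the Steerable Pyramid (simple first-difference high-pass bands; a real implementation would use the 3-scale, 2-orientation pyramid named above, and the function names are ours):

```python
import numpy as np

def highpass_subbands(img):
    """Toy stand-in for the Steerable Pyramid of step 104): two
    first-difference high-pass bands of a single scale, just to show
    the data flow from map to sub-band coefficients."""
    img = np.asarray(img, dtype=float)
    return [np.diff(img, axis=0), np.diff(img, axis=1)]

def subband_hist(band, bins=16, lo=-1.0, hi=1.0):
    """Normalized coefficient histogram of one sub-band (the description P)."""
    h, _ = np.histogram(band, bins=bins, range=(lo, hi))
    return h / max(h.sum(), 1)
```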
105) This step fits each high-frequency sub-band histogram with a generalized Gaussian distribution (Generalized Gaussian Distribution, GGD). The GGD function determines the curve shape with only the two parameters α and β, and fits high-frequency sub-band histograms well; its definition is given by formulas (19) and (20). Meanwhile, the relative entropy (also called KL divergence) of formula (22) is used to compute the error ε produced by fitting the high-frequency sub-band histogram. Specifically, formula (21) first gives the approximate histogram description P_m obtained from the GGD fit, and formula (22) then computes the relative entropy ε of P_m with respect to P. Finally, the transform parameters for each high-frequency sub-band comprise α, β and ε, where α takes an 11-bit float (8 mantissa bits, 3 exponent bits), β takes 8 bits and ε takes 8 bits, 27 bits in total. For each video frame, the impairments of the 4 aspects each correspond to 6 high-frequency sub-bands, so 27 × 4 × 6 = 648 bits must be transmitted per frame; if the FPS of the video is 30, the total bandwidth occupancy is 648 × 30 / 8 = 2.43 KB/s, which is entirely acceptable for half-reference video QoE objective quality assessment. The half-reference data of the original video must be transmitted over a lossless auxiliary channel.
p(x) = β / (2αΓ(1/β)) · e^(−(|x|/α)^β)    (19)

Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt    (20)

P_m = GGD_Fitting(P)    (21)

ε = D_KL(P_m‖P) = Σ_i ln(P_m(i)/P(i)) · P_m(i)    (22)
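Formulas (19) and (22) can be sketched directly (the `tiny` offset guarding empty histogram bins is our addition; the function names are ours):

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density of formula (19):
    p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    return (beta / (2.0 * alpha * math.gamma(1.0 / beta))
            * math.exp(-((abs(x) / alpha) ** beta)))

def relative_entropy(pm, p, tiny=1e-12):
    """Fitting error of formula (22): eps = sum_i ln(Pm(i)/P(i)) * Pm(i)."""
    return sum(a * math.log((a + tiny) / (b + tiny))
               for a, b in zip(pm, p) if a > 0.0)
```

With β = 2 the GGD reduces to a Gaussian, which is a quick sanity check on the normalization.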
User side:
106) Identical to 101), except that the input is the impaired video, from which its saliency images are extracted. This step is computed by formula (23), where Q denotes the impaired video and S_Q(i) denotes the saliency information extracted from each frame of the impaired video.

107) Identical to 102), except that the input is the impaired video, from which its texture images are extracted. This step is computed by formula (24), where T_Q(i) denotes the texture information extracted from each frame of the impaired video.

108) Identical to 103), except that the input is the data of 106) and 107). This step is computed by formulas (25) and (26).

109) Identical to 104), except that the input is the data of 108). This step is computed by formula (27).
110) This step takes the data of 109) and 105) as input. First, the histograms of the high-frequency sub-bands of each frame are rebuilt from the α and β values in the received half-reference data. Then the concrete impairment values are computed by formula (28), where P denotes each histogram of the original video, Q denotes each histogram of the impaired video, P_m denotes each approximate histogram of the GGD fit, and ε = D_KL(P_m‖P). Finally, 4 impairment values are computed for each video, i.e. the Distortion of formula (28), corresponding respectively to the spatial saliency-weighted impairment, the spatial texture-weighted impairment, the temporal saliency-weighted impairment and the temporal texture-weighted impairment.
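Formula (28), Distortion(P, Q) = D_KL(P_m‖Q) − ε = Σ_i ln(P(i)/Q(i)) · P_m(i), reduces to a single sum once ε is folded in; a sketch over normalized histograms (the `tiny` offset guarding empty bins is our addition):

```python
import math

def distortion(p, q, pm, tiny=1e-12):
    """Impairment value of formula (28): sum_i ln(P(i)/Q(i)) * Pm(i),
    i.e. D_KL(Pm||Q) minus the fitting error eps = D_KL(Pm||P) that was
    sent in the half-reference side channel."""
    return sum(m * math.log((a + tiny) / (b + tiny))
               for a, b, m in zip(p, q, pm) if m > 0.0)
```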
111) Finally, the pre-trained neural network algorithm assesses the subjective quality (MOS) from the video impairment values computed in 110). This step is computed by formula (29).
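The patent does not disclose the architecture or trained weights of the neural network of step 111); the placeholder one-hidden-layer MLP below (random weights standing in for trained ones) shows only the data flow from the 4 impairment values to a 0-100 MOS estimate:

```python
import math
import numpy as np

# Placeholder weights; a real deployment would load the trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict_mos(distortions):
    """Forward pass mapping the 4 impairment values to a MOS estimate,
    squashed into the 0-100 scoring range used in Embodiment 3."""
    h = np.tanh(np.asarray(distortions, dtype=float) @ W1 + b1)
    z = float((h @ W2 + b2)[0])
    return 100.0 / (1.0 + math.exp(-z))
```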
The beneficial effects of the embodiments of the present invention are:

When watching a video, the human eye attends only to some regions of the picture rather than to the whole, so video impairments occurring in key regions especially affect subjective perception. For this point, this scheme draws on a visual attention model and derives the saliency information of the image from the four aspects of color, intensity, orientation and skin color. When deciding whether to consider skin color, face recognition is first applied to judge whether a portrait is present in the image, and the skin-color saliency component is additionally computed only when a portrait exists; this guarantees the accuracy of the saliency computation and in turn improves the accuracy of the impairment assessment.

The edges present in an image can be used to judge its sensitivity to impairment. Specifically, where the edges of a region are dense, i.e. where the texture is complex, an impairment is hard for the human eye to perceive; conversely, an impairment appearing in a smooth-toned region is easily caught by the eye. Accordingly, this scheme extracts the edge information of the image and then uses morphological dilation to mask the pixels near the edges, so that impairments occurring in smooth pixel regions receive heavier weight, improving the accuracy of the impairment assessment.

To make the assessment method half-reference, data compression is realized by a wavelet transform combined with probability-distribution fitting, and the concrete impairment values are computed by relative entropy. This method occupies only a small amount of additional transmission bandwidth while guaranteeing assessment accuracy.
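The bandwidth claim above can be checked against the numbers given in Embodiment 2, step 105):

```python
# 27 bits per sub-band (11-bit alpha, 8-bit beta, 8-bit epsilon),
# 6 sub-bands per aspect, 4 aspects per frame, 30 frames per second.
bits_per_subband = 11 + 8 + 8
bits_per_frame = bits_per_subband * 6 * 4
bytes_per_second = bits_per_frame * 30 / 8   # 2430 B/s, i.e. 2.43 KB/s
```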
Embodiment 3:
The accuracy of the assessment of the present invention is tested on the public subjective video quality database LIVE Video Quality Database (LVQD); the test results are shown in Fig. 2. LVQD comprises 10 groups of videos; each group comprises 1 original (lossless) video and 15 impaired videos built from the angles of H.264 compression impairment, MPEG-2 compression impairment, IP transmission impairment and wireless transmission impairment, so the whole database comprises 150 impaired videos in total. Each impaired video was scored subjectively as prescribed by the ITU-R BT.500-11 standard. The subjective scoring used a single-stimulus procedure, with raters giving scores on a continuous range from 0 to 100. 38 raters were invited in total, of whom 9 sets of scores were judged invalid according to the standard and rejected. After the remaining 29 sets of scores were processed according to the standard, the MOS and score variance corresponding to the 150 impaired videos were obtained.

This method is used to perform objective QoE quality assessment on the impaired videos of LVQD; the results are shown in Fig. 2. The results of comparison with other mainstream full-reference objective QoE quality assessment methods are shown in Tables 1 and 2, where Table 1 lists the Pearson correlation coefficients and Table 2 lists the Spearman correlation coefficients. Although this patent proposes a half-reference method, which is at a disadvantage when compared with full-reference methods that can use all of the original video data, the actual comparison shows that this method remains strongly competitive in assessment accuracy.
Table 1. LVQD assessment results: Pearson correlation coefficient
Table 2. LVQD assessment results: Spearman correlation coefficient
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations all fall within the scope defined by the appended claims.

Claims (7)

1. the half reference video QoE objective evaluation method based on image feature information, is characterized in that, the method comprises:
Operator's end extracts conspicuousness hum pattern and the texture information figure of each two field picture in original video, and compression is processed described aobviousShow property hum pattern and texture information figure, obtain half reference data of original video;
Half reference data and the impaired video of the original video transmitting held by user side reception operator, extracts each in impaired videoThe conspicuousness hum pattern of two field picture and texture information figure, obtain half reference data of impaired video, according to original video and impairedHalf reference data of video, calculates the degree of impairment of impaired video, uses the good neural network algorithm of training in advance to feel subjectivityAssessed by mass M OS, wherein, described impaired video is the original video through operator's end of Erasure channel transmission;
The extraction of the texture information map of each frame image comprises edge extraction, morphological dilation and superposition;
wherein said edge extraction comprises: extracting the edge information image of the current frame image;
said morphological dilation comprises: performing morphological dilation on the edge information image of the current frame image to obtain the processed edge information image;
said superposition comprises: superposing the processed edge information image onto the current frame image to obtain the texture information map of the current frame image;
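The three texture-extraction steps above can be sketched as follows. The claim fixes only the pipeline (edge extraction, morphological dilation, superposition), not the operators; the Sobel gradients, the 0.2 edge threshold and the 3x3 dilation window below are illustrative assumptions.

```python
import numpy as np

def extract_texture_map(frame, edge_thresh=0.2):
    """Sketch of the claimed texture-map extraction for one frame:
    edge extraction -> morphological dilation -> superposition.
    `frame` is a 2-D grayscale image with values in [0, 1]."""
    h, w = frame.shape

    # 1) Edge extraction with Sobel gradients (an assumption; the
    #    patent does not name a particular edge detector).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(frame, 1, mode="edge")
    gx = np.zeros_like(frame)
    gy = np.zeros_like(frame)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    edges = (np.hypot(gx, gy) > edge_thresh).astype(float)

    # 2) Morphological dilation with a 3x3 structuring element,
    #    implemented as a local maximum filter over the edge image.
    pad_e = np.pad(edges, 1, mode="constant")
    dilated = np.zeros_like(edges)
    for i in range(h):
        for j in range(w):
            dilated[i, j] = pad_e[i:i + 3, j:j + 3].max()

    # 3) Superposition: add the dilated edge image onto the frame
    #    to obtain the texture information map.
    return np.clip(frame + dilated, 0.0, 1.0)
```

On a frame containing a vertical step edge, the resulting map equals the frame away from the edge and saturates in a dilated band around it.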
Said computing of the degree of impairment of the impaired video from the half-reference data of the original video and the impaired video comprises:
reconstructing, from each pair of α and β values in the received half-reference data, the histogram of the high-frequency sub-band corresponding to each frame;
computing the concrete impairment value by formula (28), where P denotes each histogram corresponding to the original video, Q denotes each histogram corresponding to the impaired video, P_m denotes each approximate histogram obtained by GGD fitting, and ε = D_KL(P_m ∥ P) = Σ_i ln(P_m(i) / P(i)) · P_m(i);
for each video, 4 impairment values are computed, i.e. the Distortion of formula (28), corresponding respectively to the spatial-domain saliency-weighted impairment, the spatial-domain texture-weighted impairment, the temporal-domain saliency-weighted impairment and the temporal-domain texture-weighted impairment;
Distortion(P, Q) = D_KL(P_m ∥ Q) − ε = Σ_i ln(P(i) / Q(i)) · P_m(i)    (28)
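A minimal numeric sketch of formula (28): ε is the GGD fitting error D_KL(P_m ∥ P), and the reported impairment is D_KL(P_m ∥ Q) − ε, which collapses to Σ_i ln(P(i)/Q(i)) · P_m(i). The three toy histograms below are illustrative values only, not data from the patent.

```python
import numpy as np

def distortion(P, Q, Pm):
    """Impairment value of formula (28).

    P  : histogram of a high-frequency sub-band of the original video
    Q  : histogram of the same sub-band of the impaired video
    Pm : GGD-fitted approximation of P (reconstructed from alpha, beta)
    All three are 1-D arrays of strictly positive bin probabilities.
    """
    eps = float(np.sum(np.log(Pm / P) * Pm))    # D_KL(Pm || P), fitting error
    dkl_q = float(np.sum(np.log(Pm / Q) * Pm))  # D_KL(Pm || Q)
    return dkl_q - eps                          # == sum_i ln(P_i / Q_i) * Pm_i

# Toy sub-band histograms (illustrative only).
P = np.array([0.20, 0.30, 0.50])
Q = np.array([0.25, 0.25, 0.50])
Pm = np.array([0.22, 0.28, 0.50])

d = distortion(P, Q, Pm)
```

Subtracting ε means the sender's own approximation error does not count against the impaired video: when Q equals P, the impairment is exactly zero regardless of how well the GGD fit P_m matches P.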
2. The method according to claim 1, characterized in that said saliency information map comprises a temporal-domain saliency information map and a spatial-domain saliency information map.
3. The method according to claim 1, characterized in that said saliency information map comprises a weighted intensity component, a color component, a direction component and a skin-color component.
4. The method according to claim 1, characterized in that said texture information map comprises a temporal-domain texture information map and a spatial-domain texture information map.
5. The method according to claim 1, characterized in that the texture information map of each frame image comprises: the texture information map of each frame image in the original video at the operator side, and the texture information map of each frame image in the impaired video at the user side.
6. The method according to claim 1 or 2, characterized in that said compression processing comprises:
decomposing the saliency information maps and the texture information maps of both the spatial domain and the temporal domain by wavelet transform to obtain the different high-frequency sub-bands;
building the histogram of every high-frequency sub-band;
fitting the histograms of all high-frequency sub-bands with the generalized Gaussian distribution (GGD) and computing the fitting error.
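The compression of claim 6 can be sketched as follows, under stated assumptions: a one-level 2-D Haar transform stands in for the unspecified wavelet, and the GGD scale α and shape β are fitted by the classical moment-ratio method, which may differ from the fitting procedure the patent actually uses. The receiver can then rebuild the approximate histogram P_m of claim 1 from the transmitted (α, β) pair via the GGD density.

```python
import math
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform; returns the three high-frequency
    sub-bands (LH, HL, HH). Image sides must be even."""
    a = img[0::2, :] + img[1::2, :]   # vertical sums
    d = img[0::2, :] - img[1::2, :]   # vertical differences
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return lh, hl, hh

def fit_ggd(coeffs):
    """Moment-ratio GGD fit: returns scale alpha and shape beta of
    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    x = np.asarray(coeffs, dtype=float).ravel()
    m1 = float(np.mean(np.abs(x)))
    m2 = float(np.mean(x ** 2))
    rho = m1 * m1 / m2  # (E|x|)^2 / E[x^2], monotone increasing in beta

    def ratio(b):
        return math.gamma(2.0 / b) ** 2 / (
            math.gamma(1.0 / b) * math.gamma(3.0 / b))

    lo, hi = 0.1, 10.0                 # bisection bracket for beta
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ratio(mid) < rho:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    alpha = m1 * math.gamma(1.0 / beta) / math.gamma(2.0 / beta)
    return alpha, beta

def ggd_pdf(x, alpha, beta):
    """GGD density; used at the receiver to rebuild an approximate
    sub-band histogram P_m from the transmitted (alpha, beta)."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * np.exp(-(np.abs(x) / alpha) ** beta)
```

For a Gaussian sample the fit recovers β ≈ 2, and per the claim only (α, β) plus the fitting error need to be sent as half-reference data, instead of the full sub-band histograms.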
7. The method according to claim 1, characterized in that said half-reference data comprises: the spatial-domain impairment of the saliency information, the temporal-domain impairment of the saliency information, the spatial-domain impairment of the texture information and the temporal-domain impairment of the texture information.
CN201410079834.6A 2014-03-05 2014-03-05 Based on half reference video QoE objective evaluation method of image feature information Expired - Fee Related CN103841410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410079834.6A CN103841410B (en) 2014-03-05 2014-03-05 Based on half reference video QoE objective evaluation method of image feature information


Publications (2)

Publication Number Publication Date
CN103841410A CN103841410A (en) 2014-06-04
CN103841410B true CN103841410B (en) 2016-05-04

Family

ID=50804492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410079834.6A Expired - Fee Related CN103841410B (en) 2014-03-05 2014-03-05 Based on half reference video QoE objective evaluation method of image feature information

Country Status (1)

Country Link
CN (1) CN103841410B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104113788B * 2014-07-09 2017-09-19 Beijing University of Posts and Telecommunications QoE training and assessment method and system for TCP video streaming traffic
CN107657251A * 2016-07-26 2018-02-02 Alibaba Group Holding Ltd. Device and method for determining the display surface of an identity document, and image recognition method
CN106651829B * 2016-09-23 2019-10-08 Communication University of China No-reference image objective quality assessment method based on energy and texture analysis
CN109801266A * 2018-12-27 2019-05-24 Southwest Institute of Technical Physics Image quality assessment system for a wireless image data link
CN110324613B * 2019-07-30 2021-06-01 South China University of Technology Deep learning image evaluation method for video transmission quality
CN110599468A * 2019-08-30 2019-12-20 China Academy of Information and Communications Technology No-reference video quality evaluation method and device
CN111242936A * 2020-01-17 2020-06-05 Suzhou Lingtu Intelligent Technology Co., Ltd. Image-based non-contact palm herpes detection device and method
CN113011270A * 2021-02-23 2021-06-22 China University of Mining and Technology Coal mining machine cutting state identification method based on vibration signals

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282481A * 2008-05-09 2008-10-08 Communication University of China Method for evaluating video quality based on an artificial neural network
CN101426150A * 2008-12-08 2009-05-06 Qingdao Hisense Electronics Industry Holdings Co., Ltd. Video image quality evaluation method and system
CN101448175A * 2008-12-25 2009-06-03 East China Normal University Method for evaluating the quality of streaming video without reference
CN102496162A * 2011-12-21 2012-06-13 Zhejiang University Reduced-reference image quality evaluation method based on non-tensor-product wavelet filters
CN103281555A * 2013-04-24 2013-09-04 Beijing University of Posts and Telecommunications Half-reference-based quality of experience (QoE) objective assessment method for video streaming services

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006099743A1 * 2005-03-25 2006-09-28 Algolith Inc. Apparatus and method for objective assessment of DCT-coded video quality with or without an original video sequence
KR100731358B1 * 2005-11-09 2007-06-21 Samsung Electronics Co., Ltd. Method and system for measuring the video quality
KR20080029371A * 2006-09-29 2008-04-03 Kwangwoon University Industry-Academic Collaboration Foundation Method of image quality evaluation, and system thereof
KR101033296B1 * 2009-03-30 2011-05-09 Electronics and Telecommunications Research Institute Apparatus and method for extraction and decision-making of spatio-temporal features in broadcasting and communication systems
JP2011186715A * 2010-03-08 2011-09-22 Nk Works Kk Photographic image evaluation method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Reduced-Reference Image Quality Assessment Using a Wavelet-Domain Natural Image Statistic Model; Zhou Wang, Eero P. Simoncelli; Human Vision and Electronic Imaging X, Proc. SPIE; 20050120; vol. 5666; sections 1-2 *
Research on user-experience (QoE) methods for wireless video quality based on multiple feature types; Yang Yan; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20130115; pages 1-2, 5, 7, 20-22 *
Research on objective quality assessment methods for network-packet-loss-impaired images and videos based on visual saliency; Feng Xin; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20111215; pages 9, 19-22 *

Also Published As

Publication number Publication date
CN103841410A (en) 2014-06-04

Similar Documents

Publication Publication Date Title
CN103841410B (en) Based on half reference video QoE objective evaluation method of image feature information
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
CN104023230B (en) A kind of non-reference picture quality appraisement method based on gradient relevance
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN109118470B (en) Image quality evaluation method and device, terminal and server
CN108010024B (en) Blind reference tone mapping image quality evaluation method
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
CN108830823B (en) Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis
CN103426173B (en) Objective evaluation method for stereo image quality
CN105654142B (en) Based on natural scene statistics without reference stereo image quality evaluation method
CN103116763A (en) Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN105744256A (en) Three-dimensional image quality objective evaluation method based on graph-based visual saliency
CN103473564B (en) A kind of obverse face detection method based on sensitizing range
CN110111347B (en) Image sign extraction method, device and storage medium
CN103517065B (en) Method for objectively evaluating quality of degraded reference three-dimensional picture
JP2021531571A (en) Certificate image extraction method and terminal equipment
CN111723714B (en) Method, device and medium for identifying authenticity of face image
CN106127234B (en) Non-reference picture quality appraisement method based on characteristics dictionary
CN106934770B (en) A kind of method and apparatus for evaluating haze image defog effect
CN109191428A (en) Full-reference image quality evaluating method based on masking textural characteristics
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN102867295A (en) Color correction method for color image
CN107679469A (en) A kind of non-maxima suppression method based on deep learning
CN114202491B (en) Method and system for enhancing optical image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160504