CN102883179B - Objective evaluation method of video quality - Google Patents
Objective evaluation method of video quality
- Publication number
- CN102883179B CN102883179B CN201110194206.9A CN201110194206A CN102883179B CN 102883179 B CN102883179 B CN 102883179B CN 201110194206 A CN201110194206 A CN 201110194206A CN 102883179 B CN102883179 B CN 102883179B
- Authority
- CN
- China
- Prior art keywords
- video
- quality score
- frame
- block
- measured
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention provides an objective video quality evaluation method comprising the following steps: 10) cutting a source video and a video under test at the same time points to obtain video segments; 20) extracting video blocks from the video frames of a segment in the source video and the video under test respectively, and computing the similarity of corresponding video blocks using a spatio-temporal texture feature, wherein the spatio-temporal texture feature captures the differences between pixels; 30) computing the quality score of each video frame of the video under test from the similarity of the corresponding video blocks; 40) computing the quality value of each video segment of the video under test from the frame quality scores, and then computing the quality score of the video under test. With this method, the resulting quality scores agree better with human subjective perception.
Description
Technical field
The present invention relates to the field of information engineering, and in particular to image and video analysis and processing.
Background technology
With the arrival of the digital age, media products such as images and video play an increasingly important role in daily life, and quality evaluation is needed in every area of video processing: acquisition, display, storage, transmission, compression, and so on. Research on quality evaluation has become one of the important basic research problems in information engineering, with both theoretical significance and a wide range of applications.
Video quality evaluation methods are usually divided into subjective evaluation and objective evaluation. In subjective evaluation, the quality of a video is determined by the mean score given by human observers. This is undoubtedly the most accurate way to evaluate video quality, but it is also time-consuming and labor-intensive. Objective evaluation methods have therefore risen in recent years; they fall into three classes: full-reference, reduced-reference and no-reference evaluation. This patent concerns full-reference evaluation, for which the methods currently in common use are:
1. Traditional methods based on full-pixel distortion statistics, such as PSNR (peak signal-to-noise ratio) and MSE (mean squared error).
2. Methods based on the human visual system (HVS), such as MPQM (Moving Pictures Quality Metric), PDM (Perceptual Distortion Metric) and the Sarnoff JND vision model.
3. Methods based on image structural similarity (SSIM): natural image signals have specific structure, with strong dependencies between pixels, and these dependencies carry a large amount of the important structural information in a visual scene. On this basis a new objective video quality evaluation method was proposed: the structural-distortion-based image and video quality evaluation method known as Structural Similarity (SSIM, Structural Similarity Index Metric).
However, the methods above are all objective methods that do not fully and truly measure human subjective perception. That is, there is always a gap between the video quality score given by an objective model and human subjective perception.
Summary of the invention
The present invention addresses the problem that prior-art objective video quality evaluation methods cannot fully and truly measure human subjective perception, so that the evaluations they give differ from human subjective perception.
According to an aspect of the present invention, an objective video quality evaluation method is provided, comprising:
10) cutting the source video and the video under test at the same time points to obtain video segments;
20) extracting video blocks from the video frames of a segment in the source video and the video under test respectively, and computing the similarity of corresponding video blocks using a spatio-temporal texture feature, wherein the spatio-temporal texture feature captures the differences between pixels;
30) computing the quality score of each video frame of the video under test from the similarity of the corresponding video blocks;
40) computing the quality value of each video segment of the video under test from the quality scores of its video frames, and then computing the quality score of the video under test.
In the above method, computing the similarity of corresponding video blocks using the spatio-temporal texture feature in step 20) further comprises:
203) performing local binarization on the pixels of corresponding video blocks in the three-dimensional volume of spatio-temporal information;
204) building histograms whose bins count the non-uniform patterns, part of the uniform patterns, and the rotated variants of the remaining uniform patterns;
205) computing the similarity of the video blocks from the difference between the histograms.
In the above method, computing the similarity of corresponding video blocks using the spatio-temporal texture feature in step 20) further comprises:
201) computing the luminance, contrast and structure of the corresponding video blocks;
202) computing the similarity of the corresponding video blocks from the spatio-temporal texture feature together with the luminance, contrast and structure.
In the above method, step 10) is followed by: 11) selecting, according to the richness of the video content, part of the video segments obtained by the cutting for subsequent processing.
In the above method, step 11) is followed by: 12) according to the video length and the richness of the video content, also using the first or last video segment for subsequent processing.
In the above method, the richness of the video content is measured by information entropy.
In the above method, the quality score of the video under test in step 40) is computed as a weighted average of the quality values of the video segments of the video under test, wherein the weight of the first or last video segment is 2 times the weight of the other video segments.
In the above method, step 30) is followed by a step of revising the quality score of the current frame according to the quality score of the previous frame.
In the revision of the above method, the revision value of the current frame's quality score is a decreasing function of the previous frame's quality score.
The revision of the above method is carried out according to a revision curve obtained by fitting the quality scores produced by the above method to the quality scores of the video under test obtained by a subjective video quality evaluation method.
Compared with the prior art, the above method of the present invention narrows the gap between the evaluation it gives and human subjective perception, so that the result of objective video quality evaluation agrees better with human subjective perception.
Accompanying drawing explanation
Fig. 1 is a block diagram of the objective video quality evaluation method according to a preferred embodiment of the invention;
Fig. 2 is a flow chart of computing block similarity with the spatio-temporal texture feature according to a preferred embodiment of the invention;
Fig. 3a is a schematic diagram of the 9 uniform patterns of an 8-neighborhood, and Fig. 3b is a schematic diagram of the 8 rotation-sensitive patterns of uniform pattern 1;
Fig. 4a and Fig. 4b are schematic diagrams of the contrast effect;
Fig. 5 is a schematic diagram of quality correction according to a preferred embodiment of the invention.
Embodiment
To make the objects, technical solution and advantages of the present invention clearer, the objective video quality evaluation method according to an embodiment of the invention is further described below with reference to the accompanying drawings. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
The method is described in detail below with reference to the block diagram of the objective video quality evaluation method according to the preferred embodiment of the invention shown in Fig. 1. It mainly comprises the following steps.
First the video is structured, that is, the whole video is divided into video segments. A method such as shot boundary detection can be used to divide the source video into a series of video segments. Since the source video and the video under test are closely aligned in time and space, the video under test is also cut at the same time points to obtain a corresponding series of segments.
Preferably, to reduce computation while preserving the richness of the video content as much as possible, the information entropy of each video segment is computed, and the top k% of segments by entropy are taken as representative segments that characterize the video content. One of ordinary skill in the art will appreciate that information entropy serves only as an example here; other physical features from information theory could also be used to characterize the richness of the video content.
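As an illustration only, entropy-based selection of representative segments could be sketched as follows. The patent does not fix the exact histogram used for the entropy or the value of k; a pooled gray-level histogram per segment is assumed here, and `segment_entropy` and `pick_representative` are hypothetical helper names:

```python
import numpy as np

def segment_entropy(frames):
    """Shannon entropy of the gray-level histogram pooled over a segment's
    frames. `frames` is a list of 2-D uint8 arrays. One plausible reading of
    the patent's 'information entropy of each video segment'."""
    hist = np.zeros(256)
    for f in frames:
        hist += np.bincount(f.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0*log2(0) = 0
    return float(-(p * np.log2(p)).sum())

def pick_representative(segments, k_percent):
    """Return the indices of the top k% of segments by entropy."""
    ent = [segment_entropy(s) for s in segments]
    k = max(1, round(len(segments) * k_percent / 100))
    return sorted(np.argsort(ent)[::-1][:k].tolist())
```

A flat (constant) segment has zero entropy and is dropped in favor of segments with richer content.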
Preferably, the invention also adjusts the chosen representative segments according to frame position.
In video quality evaluation, video frames of equal content can differ in importance depending on their position in the video sequence. Two common phenomena are known as the primacy effect (earlier frames are more important) and the recency effect (later frames are more important).
Because video quality evaluation adopts the EoS pattern, in which the evaluation result is given only after the whole video has been played, earlier frames dominate when the video is short and its content fairly simple; later frames dominate when the video is short but its content relatively rich; and when the video is long, earlier frames dominate again because of visual fatigue. One of ordinary skill in the art will appreciate that the video length can be determined from the number of frames it contains, and the richness of the video content from the information entropy. A compensation policy is therefore designed for the representative-segment selection above: according to the video length and content richness, if the first or last segment was not chosen as a representative segment, it still needs to be covered. Preferably, when the quality score of the whole video is computed by weighted averaging, this first or last segment is given a larger weight; more preferably, its weight is 2 times that of the other segments.
Once the representative segments are obtained, the distortion level of the video under test can be obtained by measuring the similarity between the representative segments of the source video and those of the video under test. Since a representative segment is a set of video frames, measuring the similarity of video frames is a prerequisite. According to a preferred embodiment of the invention, measuring the similarity of video frames comprises the following steps.
First, a series of video blocks is extracted from the video frame; for example, the frame is divided equally into 9 video blocks.
Then, the spatio-temporal texture feature is used to obtain the similarity between two corresponding video blocks, one from a frame of the source video and one from the corresponding frame of the video under test.
The spatio-temporal texture feature captures the differences between pixels; in the preferred embodiment it captures the differences between neighboring pixels. Because the spatio-temporal texture feature is sensitive to rotation, matching the perception of the human eye, it is well suited to a video quality evaluation system. The detailed procedure for computing block similarity with the spatio-temporal texture feature according to the preferred embodiment is described below with reference to Fig. 2.
Local binarization is applied to the pixels in a video block: each point in the neighborhood of a center pixel is compared with the center pixel, and set to 1 if brighter and 0 if darker. As shown in Fig. 3a, if the 8-neighborhood pattern of a pixel contains at most 2 bit transitions, it is called a uniform pattern; such patterns account for more than 90% of local image texture. For example, 00000000₂ and 11111111₂ contain 0 transitions, while 11100011₂ and 11000001₂ contain 2 transitions. For an 8-neighborhood there are thus 9 kinds of uniform patterns. This local binarization is described in T. Ojala, M. Pietikäinen and T. Mäenpää, "Multiresolution Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, July 2002.
However, because the human eye is sensitive to rotation, each of the uniform patterns 1-7 shown in Fig. 3a corresponds to 8 rotation-sensitive patterns (Fig. 3b shows the 8 rotation-sensitive patterns of uniform pattern 1), while uniform patterns 0 and 8 have no rotated variants. This yields a histogram with 2 + 7*8 = 58 bins, one per pattern, where each bin counts the number of pixels with that pattern. Adding the non-uniform patterns gives 59 bins in total.
Furthermore, since a video can be viewed as a three-dimensional volume of spatio-temporal information, the invention computes statistics separately over the three planes XY, XT and YT, obtaining an XY (59-bin), an XT (59-bin) and a YT (59-bin) histogram, and concatenates them into a 177-bin histogram, where X and Y denote the spatial domain and T the temporal domain.
One of ordinary skill in the art will appreciate that the above counts all rotated patterns; alternatively, only part of them may be counted, with the remaining part represented by their uniform patterns themselves.
The similarity between the spatio-temporal textures of two video blocks, used to compute the block quality score, is obtained from the chi-square distance between the corresponding histograms of the two blocks:

t(Block1, Block2) = exp{-χ²(RSH1, RSH2)}    (1)

where Block1 and Block2 denote the video block from the source video and the video block from the video under test respectively, and RSH1 and RSH2 denote the histograms obtained from the video block of the source video and the video block of the video under test respectively.
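Equation (1) can be sketched directly. The exact chi-square variant is not specified in the text, so the common unhalved form sum((a-b)²/(a+b)) is assumed here, with a small `eps` added (not in the patent) to guard empty bins:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two histograms (unhalved variant)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def texture_similarity(rsh1, rsh2):
    """Equation (1): t = exp(-chi2(RSH1, RSH2)).
    Identical histograms give 1.0; the score decays toward 0 as they diverge."""
    return float(np.exp(-chi_square(rsh1, rsh2)))
```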
One of ordinary skill in the art will appreciate that the chi-square distance is only an example in the preferred embodiment; other measures, such as the Euclidean distance, can of course be used to compute the similarity between the spatio-temporal textures of two video blocks.
According to a preferred embodiment of the invention, besides the spatio-temporal texture term t(Block1, Block2), the following static features are also fused when computing block similarity: luminance l(Block1, Block2), contrast c(Block1, Block2) and structure s(Block1, Block2), defined respectively as:

l(Block1, Block2) = (2μ1μ2 + C1) / (μ1² + μ2² + C1)    (2)

c(Block1, Block2) = (2σ1σ2 + C2) / (σ1² + σ2² + C2)    (3)

s(Block1, Block2) = (σ1,2 + C3) / (σ1σ2 + C3)    (4)

where μ1 and μ2 denote the pixel means of the two video blocks, σ1 and σ2 their pixel standard deviations, σ1,2 their pixel covariance, and C1, C2, C3 small constants that stabilize the divisions.

Thus, fusing the static features with the spatio-temporal texture feature, the similarity between two video blocks is:

QW(Block1, Block2) = l(Block1, Block2) × c(Block1, Block2) × s(Block1, Block2) × t(Block1, Block2)    (5)
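Equations (2)-(5) could be sketched as follows. The stabilizing constants C1, C2, C3 below are the usual SSIM defaults for 8-bit data and are an assumption (the patent does not print them), as is passing the texture term `t` in as a precomputed number from equation (1):

```python
import numpy as np

# assumed SSIM-style constants for 8-bit pixel data: (0.01*255)^2, (0.03*255)^2, C2/2
C1, C2, C3 = 6.5025, 58.5225, 29.26125

def static_similarity(b1, b2):
    """Luminance, contrast and structure terms (equations (2)-(4))
    for two co-located video blocks given as 2-D arrays."""
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    mu1, mu2 = b1.mean(), b2.mean()
    s1, s2 = b1.std(), b2.std()
    s12 = ((b1 - mu1) * (b2 - mu2)).mean()          # pixel covariance
    l = (2 * mu1 * mu2 + C1) / (mu1**2 + mu2**2 + C1)   # luminance (2)
    c = (2 * s1 * s2 + C2) / (s1**2 + s2**2 + C2)       # contrast (3)
    s = (s12 + C3) / (s1 * s2 + C3)                     # structure (4)
    return l, c, s

def qw_similarity(b1, b2, t):
    """Equation (5): fuse the static features with the texture similarity t."""
    l, c, s = static_similarity(b1, b2)
    return l * c * s * t
```

For identical blocks all three static terms equal 1, so QW reduces to the texture term alone.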
One of ordinary skill in the art will appreciate that, as described above, the preferred embodiment of the invention fuses the static features with the spatio-temporal texture feature, thereby taking the content of the video frame into account and improving the quality evaluation result; however, the similarity computed from the spatio-temporal texture feature alone may also be used as the block similarity, which likewise achieves the basic object of the invention.
The quality score of each frame of the video under test can be determined by averaging the similarities of all video blocks in the frame. One of ordinary skill in the art will appreciate that averaging is just one implementation; other implementations, such as a weighted sum, still achieve the basic object of the invention.
Preferably, the video quality evaluation method of the present invention also uses the context of a frame to revise the quality score determined above, according to the contrast effect.
As shown in Fig. 4a and Fig. 4b, because of the contrast effect, people usually judge the middle ball in Fig. 4a to be larger than the middle ball in Fig. 4b, although they are in fact the same size. The same effect exists in video quality evaluation: because of the memory effect of the human brain, if the frame presented to an observer has relatively good quality, the observer may underestimate the quality of the following frame; conversely, if the previous frame has relatively poor quality, the observer may overestimate the quality of the following frame. In short, the revision value of the current frame's quality score is a decreasing function of the previous frame's quality score. By fitting the scores of the method according to the preferred embodiment of the invention to scores obtained with a subjective video quality evaluation method, the revision curve shown in Fig. 5 is obtained, and the current quality score of a video frame can be revised according to this curve. Frames whose previous frame has a quality score below 0.7 need their quality score increased, and frames whose previous frame has a quality score above 0.9 need their quality score decreased. Preferably, when the previous frame's score is below 0.3, the current score is increased by a value between 0.05 and 0.23; when the previous frame's score is at least 0.3 but below 0.7, the current score is increased by a value between 0 and 0.05; and when the previous frame's score is above 0.9, the current score is decreased by a value between 0 and 0.1.
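The revision step can be sketched as a decreasing piecewise-linear curve that stays inside the ranges quoted above. In the patent the curve is fitted to subjective data (Fig. 5), so the slopes below are purely illustrative:

```python
def contrast_correction(prev_score):
    """Correction added to the current frame's score, a decreasing function
    of the previous frame's score. Illustrative piecewise-linear shape:
      prev < 0.3         -> +0.05 .. +0.23
      0.3 <= prev < 0.7  -> +0.00 .. +0.05
      0.7 <= prev <= 0.9 ->  0
      prev > 0.9         -> -0.10 .. 0
    """
    if prev_score < 0.3:
        return 0.23 - (0.23 - 0.05) * prev_score / 0.3
    if prev_score < 0.7:
        return 0.05 * (0.7 - prev_score) / 0.4
    if prev_score <= 0.9:
        return 0.0
    return -0.1 * (prev_score - 0.9) / 0.1

def revise(scores):
    """Revise each frame score using the previous frame's raw score,
    clamping the result to [0, 1]."""
    out = [scores[0]]
    for prev, cur in zip(scores, scores[1:]):
        out.append(min(1.0, max(0.0, cur + contrast_correction(prev))))
    return out
```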
A weighted or plain average of the revised frame quality scores gives the quality value of each representative segment.
Finally, the distortion level of the whole video is obtained by a weighted average of the quality values of the representative segments, where, as mentioned above, the weight of the first or last video segment is for example 2 times the weight of the other segments. One of ordinary skill in the art will appreciate that the weighted average is an example of an implementation, not a restriction.
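A minimal sketch of this final aggregation, assuming the force-included boundary segment is identified by its position in the list (`boundary_index` is a hypothetical parameter name):

```python
def video_score(segment_scores, boundary_index=None):
    """Weighted average of representative-segment quality values.
    If a first or last segment was force-included by the compensation policy,
    pass its index; it then gets twice the weight of the other segments,
    as the preferred embodiment suggests."""
    weights = [1.0] * len(segment_scores)
    if boundary_index is not None:
        weights[boundary_index] = 2.0
    total = sum(w * s for w, s in zip(weights, segment_scores))
    return total / sum(weights)
```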
It should be noted and understood that various modifications and improvements can be made to the invention described in detail above without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of the claimed technical solution is not limited by any particular exemplary teaching given.
Claims (8)
1. An objective video quality evaluation method, comprising the following steps:
10) cutting a source video and a video under test at the same time points to obtain video segments;
20) extracting video blocks from the video frames of a segment in the source video and the video under test respectively, and computing the similarity of corresponding video blocks using a spatio-temporal texture feature, wherein the spatio-temporal texture feature captures the differences between pixels; wherein computing the similarity of corresponding video blocks using the spatio-temporal texture feature further comprises:
performing local binarization on the pixels of corresponding video blocks in the three-dimensional volume of spatio-temporal information;
building histograms whose bins count the non-uniform patterns, part of the uniform patterns, and the rotated variants of the remaining uniform patterns;
computing the similarity of the video blocks from the difference between the histograms;
30) computing the quality score of each video frame of the video under test from the similarity of the corresponding video blocks; and revising the quality score of the current frame according to the quality score of the previous frame, comprising:
revising according to a revision curve obtained by fitting the frame quality scores obtained from the similarity of corresponding video blocks to the quality scores of the video under test obtained by a subjective video quality evaluation method; wherein the abscissa of a point on the revision curve is the quality score of the previous frame obtained from the similarity of corresponding video blocks, and the ordinate is the fitted correction value; wherein, if the correction value on the revision curve corresponding to the previous frame's quality score is greater than 0, the quality score of the current frame is increased, and if that correction value is less than 0, the quality score of the current frame is decreased;
40) computing the quality value of each video segment of the video under test from the quality scores of the video frames of the video under test, and then computing the quality score of the video under test.
2. The method according to claim 1, characterized in that in step 20) the difference between the histograms is measured by the chi-square distance between the histograms.
3. The method according to claim 1, characterized in that computing the similarity of corresponding video blocks using the spatio-temporal texture feature in step 20) further comprises:
computing the luminance, contrast and structure of the corresponding video blocks;
computing the similarity of the corresponding video blocks from the spatio-temporal texture feature together with the luminance, contrast and structure.
4. The method according to any one of claims 1 to 3, characterized in that step 10) is followed by the step of:
11) selecting, according to the richness of the video content, part of the video segments obtained by the cutting for subsequent processing.
5. The method according to claim 4, characterized in that step 11) is followed by the step of:
12) according to the video length and the richness of the video content, also using the first or last video segment for subsequent processing.
6. The method according to claim 4, characterized in that the richness of the video content is measured by information entropy.
7. The method according to claim 5, characterized in that in step 40) the quality score of the video under test is computed as a weighted average of the quality values of the video segments of the video under test, wherein the weight of the first or last video segment is 2 times the weight of the other video segments.
8. The method according to claim 1, characterized in that in the revision the revision value of the current frame's quality score is a decreasing function of the previous frame's quality score.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110194206.9A CN102883179B (en) | 2011-07-12 | 2011-07-12 | Objective evaluation method of video quality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102883179A CN102883179A (en) | 2013-01-16 |
CN102883179B true CN102883179B (en) | 2015-05-27 |
Family
ID=47484292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110194206.9A Expired - Fee Related CN102883179B (en) | 2011-07-12 | 2011-07-12 | Objective evaluation method of video quality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102883179B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810467A (en) * | 2013-11-01 | 2014-05-21 | 中南民族大学 | Method for abnormal region detection based on self-similarity number encoding |
CN104504007B (en) * | 2014-12-10 | 2018-01-30 | 成都品果科技有限公司 | The acquisition methods and system of a kind of image similarity |
CN104504368A (en) * | 2014-12-10 | 2015-04-08 | 成都品果科技有限公司 | Image scene recognition method and image scene recognition system |
CN104796690B (en) * | 2015-04-17 | 2017-01-25 | 浙江理工大学 | Human brain memory model based non-reference video quality evaluation method |
CN106651934B (en) * | 2017-01-17 | 2019-06-25 | 湖南优象科技有限公司 | Automatically the method for texture block size is chosen in block textures synthesis |
CN107465914B (en) * | 2017-08-18 | 2019-03-12 | 电子科技大学 | Method for evaluating video quality based on Local textural feature and global brightness |
CN111866583B (en) * | 2019-04-24 | 2024-04-05 | 北京京东尚科信息技术有限公司 | Video monitoring resource adjusting method, device, medium and electronic equipment |
CN110401832B (en) * | 2019-07-19 | 2020-11-03 | 南京航空航天大学 | Panoramic video objective quality assessment method based on space-time pipeline modeling |
CN110381310B (en) * | 2019-07-23 | 2021-02-05 | 北京猎户星空科技有限公司 | Method and device for detecting health state of visual system |
CN110460874B (en) * | 2019-08-09 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Video playing parameter generation method and device, storage medium and electronic equipment |
CN110751649B (en) * | 2019-10-29 | 2021-11-02 | 腾讯科技(深圳)有限公司 | Video quality evaluation method and device, electronic equipment and storage medium |
CN113724182A (en) * | 2020-05-21 | 2021-11-30 | 无锡科美达医疗科技有限公司 | No-reference video quality evaluation method based on expansion convolution and attention mechanism |
CN111639235B (en) * | 2020-06-01 | 2023-08-25 | 重庆紫光华山智安科技有限公司 | Video recording quality detection method and device, storage medium and electronic equipment |
CN112468807A (en) * | 2020-11-16 | 2021-03-09 | 北京达佳互联信息技术有限公司 | Method and device for determining coding type |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1809838A (en) * | 2003-06-18 | 2006-07-26 | 英国电讯有限公司 | Method and system for video quality assessment |
CN101378519A (en) * | 2008-09-28 | 2009-03-04 | 宁波大学 | Method for evaluating quality-lose referrence image quality base on Contourlet transformation |
CN101605272A (en) * | 2009-07-09 | 2009-12-16 | 浙江大学 | A kind of method for evaluating objective quality of partial reference type image |
CN101621709A (en) * | 2009-08-10 | 2010-01-06 | 浙江大学 | Method for evaluating objective quality of full-reference image |
CN101853504A (en) * | 2010-05-07 | 2010-10-06 | 厦门大学 | Image quality evaluating method based on visual character and structural similarity (SSIM) |
CN101998137A (en) * | 2009-08-21 | 2011-03-30 | 华为技术有限公司 | Method and device for acquiring video quality parameters as well as electronic equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100505895C (en) * | 2005-01-17 | 2009-06-24 | 华为技术有限公司 | Video quality evaluation method |
CN1321390C (en) * | 2005-01-18 | 2007-06-13 | 中国电子科技集团公司第三十研究所 | Establishment of statistics concerned model of acounstic quality normalization |
CN101404778B (en) * | 2008-07-16 | 2011-01-19 | 河北师范大学 | Integrated non-reference video quality appraisement method |
CN101763440B (en) * | 2010-03-26 | 2011-07-20 | 上海交通大学 | Method for filtering searched images |
Non-Patent Citations (1)
Title |
---|
Multiresolution Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns; T. Ojala, M. Pietikäinen, T. Mäenpää; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2002-07-30; vol. 24, no. 7, pp. 971-987 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102883179B (en) | Objective evaluation method of video quality | |
CN108090902B (en) | Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network | |
CN111784602B (en) | Method for generating countermeasure network for image restoration | |
CN107483920B (en) | A kind of panoramic video appraisal procedure and system based on multi-layer quality factor | |
CN103578119B (en) | Target detection method in Codebook dynamic scene based on superpixels | |
CN106548160A (en) | A kind of face smile detection method | |
CN103313047B (en) | A kind of method for video coding and device | |
CN111179187B (en) | Single image rain removing method based on cyclic generation countermeasure network | |
CN104811691B (en) | A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation | |
Tian et al. | Quality assessment of DIBR-synthesized views: An overview | |
CN107240084A (en) | A kind of removing rain based on single image method and device | |
CN109635822B (en) | Stereoscopic image visual saliency extraction method based on deep learning coding and decoding network | |
CN103152600A (en) | Three-dimensional video quality evaluation method | |
CN106341677B (en) | Virtual view method for evaluating video quality | |
CN101976444A (en) | Pixel type based objective assessment method of image quality by utilizing structural similarity | |
CN106127234B (en) | Non-reference picture quality appraisement method based on characteristics dictionary | |
CN104992419A (en) | Super pixel Gaussian filtering pre-processing method based on JND factor | |
CN107545570A (en) | A kind of reconstructed image quality evaluation method of half reference chart | |
CN105376563A (en) | No-reference three-dimensional image quality evaluation method based on binocular fusion feature similarity | |
Yang et al. | No-reference quality evaluation of stereoscopic video based on spatio-temporal texture | |
CN110910365A (en) | Quality evaluation method for multi-exposure fusion image of dynamic scene and static scene simultaneously | |
CN101329762A (en) | Method for evaluating adjustable fidelity based on content relevant image dimension | |
Tu et al. | V-PCC projection based blind point cloud quality assessment for compression distortion | |
CN103108209B (en) | Stereo image objective quality evaluation method based on integration of visual threshold value and passage | |
CN111311584B (en) | Video quality evaluation method and device, electronic equipment and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20150527; Termination date: 20200712 |