CN104506852A - Objective quality assessment method facing video conference encoding - Google Patents


Info

Publication number
CN104506852A
CN104506852A
Authority
CN
China
Prior art keywords
face
area
mouth
eye
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410826849.4A
Other languages
Chinese (zh)
Other versions
CN104506852B (en
Inventor
徐迈
马源
张京泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201410826849.4A priority Critical patent/CN104506852B/en
Publication of CN104506852A publication Critical patent/CN104506852A/en
Application granted granted Critical
Publication of CN104506852B publication Critical patent/CN104506852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an objective quality assessment method for video conference encoding. The method comprises a training part and an assessment part. The training part includes the steps of: first, extracting the face and facial feature regions; second, acquiring the attention degree of a single pixel; third, calibrating and normalizing the face region; fourth, obtaining a Gaussian mixture model. The assessment part includes the steps of: first, for a group of videos, automatically extracting the pixel counts of the background, face, left-eye, right-eye, mouth, and nose regions; second, calibrating and normalizing the face region; third, obtaining a weight map; fourth, calculating the peak signal-to-noise ratio based on the Gaussian mixture model and assessing the encoded image quality of the video conferencing system. The method overcomes the defect that traditional methods ignore video content: by assigning greater weight to faces in the video image, it improves the accuracy of image quality assessment so that the objective result better reflects subjective quality.

Description

An objective quality assessment method for video conference coding
Technical field
The present invention relates to an objective quality assessment method for video conference coding, belonging to the field of perceptual visual quality assessment of video conference coding.
Background technology
When assessing the efficiency of different video coding schemes, an index of visual quality is indispensable. Perceptual visual quality assessment for video coding falls into two classes: subjective evaluation and objective evaluation. Because humans are the ultimate recipients of video, subjective visual quality assessment is the most accurate and reliable way to evaluate video coding. However, its inefficiency and high cost have driven the development of objective visual quality metrics. The goal of objective evaluation is to improve its correlation with subjective visual quality, so as to measure visual quality accurately. The most widely used objective metrics include peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual signal-to-noise ratio (VSNR), video quality metrics (VQM), and motion-based video integrity evaluation (MOVIE).
Video conferencing has been widely studied in perceptual video coding, because the face is a natural region of interest (ROI) in video conference content. However, no objective visual quality assessment method has yet been developed specifically for video conferencing.
Summary of the invention
The object of the invention is to overcome the deficiencies of existing objective video quality evaluation methods by providing an objective metric for video conference coding, aiming to improve the correlation with the viewer's subjective perceptual quality.
An objective quality assessment method for video conference coding comprises a training part and an assessment part;
The training part comprises the following steps:
Step 1: extract the face and facial feature regions;
Step 2: conduct an eye-tracker experiment to obtain, for each video frame, the fixation-point coordinates of the testers while watching the video, yielding the attention degree of a single pixel;
Step 3: calibrate and normalize the face region;
Step 4: obtain the Gaussian mixture model;
The assessment part comprises the following steps:
Step 1: for a group of videos, repeat step 1 of the training part to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth, and nose regions;
Step 2: repeat step 3 of the training part to calibrate and normalize the face region;
Step 3: on the basis of the Gaussian mixture model obtained in the training stage, calculate the Gaussian distribution weights over the right-eye, left-eye, mouth, and nose regions, and the weights of the remaining face region and the background region, obtaining the weight map;
Step 4: on the basis of the weight map, calculate the Gaussian-mixture-model-based peak signal-to-noise ratio and assess the encoded image quality of the video conferencing system.
The advantages of the invention are:
(1) The invention targets image quality assessment after video conferencing system encoding. It avoids the deficiency of conventional methods that ignore video content: by assigning greater weight to faces in the video image, it improves the accuracy of image quality assessment and better reflects the result of subjective quality assessment;
(2) Building on the extraction of each facial region (such as the nose and mouth), the invention assigns larger weights to key facial regions, thus matching the trend of ever-increasing resolution and display size in current and future video conferencing systems;
(3) By introducing eye-tracker experimental data combined with statistical learning tools, the invention uncovers the rules of human visual attention during video conferences and applies them to post-encoding image quality assessment, significantly improving the correlation with subjective quality assessment.
Accompanying drawing explanation
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 shows the automatic facial feature calibration algorithm;
Fig. 3 shows the automatic extraction of key facial regions;
Fig. 4 shows the calibration and normalization method;
Fig. 5 shows the construction of the weight map;
Fig. 6 illustrates the calculation of GMM-PSNR.
Embodiment
The present invention is described in further detail below with reference to the drawings and embodiments.
The invention adopts a real-time automatic facial feature calibration method to track the key feature points of the face. After face detection, a point distribution model (PDM) of the key feature points is generated in the video frame by combining local detection (texture information) with global optimization (facial structure). The invention uses a 66-point PDM to extract the contours of the face and the facial features. The 66-point PDM samples the key points of the face and facial features well, so these points can be connected into the contours and regions of the face and of the accurately extracted facial features. The 66-point PDM is therefore used in the method to extract the face and the key facial features. Finally, the face and the key facial regions are extracted according to their contours.
Experiments on conversational-scene videos show that the face attracts the overwhelming majority of viewers' attention. Accordingly, the unequal importance of the background, the face, and the facial features is quantified according to the difference in viewer attention, improving the accuracy of objective quality assessment for video conferencing. To obtain these unequal importance values, eye-tracker experiments were conducted on conference-related videos.
In the experiments, an eye tracker recorded the eye fixation points falling within each video frame while observers watched the videos. Fixation points represent the observers' focus of attention, so the eye-tracking results can be used to build a subjective attention model. After the experiment, the numbers of fixation points belonging to the right eye, left eye, mouth, nose, the remaining face region, and the background were recorded. Based on the number of fixation points falling in each region, a new concept is introduced, eye fixation points per pixel (EFP/P), to reflect the attention each region receives at the pixel level.
After the eye-tracker results are obtained, they are used to train the GMM that produces the importance weight map for each video frame; GMM-PSNR is then computed using the corresponding weight map. Before training, the fixation points obtained above are preprocessed by calibration and normalization. The GMM is then trained on the calibrated and normalized fixation points with the expectation-maximization (EM) algorithm, running EM iterations until convergence. Given the parameters of the trained GMM, the weight map can be computed and the objective metric GMM-PSNR established.
The present invention is an objective quality assessment method for video conference coding; the flow is shown in Fig. 1 and comprises a training part and an assessment part;
The training part comprises the following steps:
Step 1: extract the face and facial feature regions;
The automatic facial feature calibration algorithm is used to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth, and nose regions from a given video conference sequence.
Specifically: first, the face-region key points in each frame of the video conference sequence are obtained by the automatic facial feature calibration algorithm. Second, mean-shift is used on the extracted face region to locally search for the key points of the left-eye, right-eye, mouth, and nose regions in the face-region image, and these key points are matched against the point distribution model (PDM) in the database to optimize the left-eye, right-eye, mouth, and nose key points. Third, the optimized key points of the face, left-eye, right-eye, mouth, and nose regions in each frame are obtained, 66 key points in total, as shown in Fig. 2. Fourth, the key points of the face, left eye, right eye, mouth, and nose are connected to obtain the contours of the face, left eye, right eye, mouth, and nose, as shown in Fig. 3. Fifth, the pixel counts of the face, left-eye, right-eye, mouth, and nose regions are obtained; the face pixel count is subtracted from the total image pixel count to obtain the background pixel count, completing the automatic extraction of the key facial regions.
The point distribution model is obtained by mean-shift training on a set of standard test images.
It can extract the key points of the face, left-eye, right-eye, mouth, and nose regions from different face images.
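The fifth sub-step above reduces to counting the pixels enclosed by each region's contour. Below is a minimal sketch of that counting under the assumption that the 66 key points have already been connected into a closed polygon; the even-odd ray-casting rule here stands in for whatever rasterization the patent actually uses.

```python
import numpy as np

def region_pixel_count(polygon, width, height):
    """Count pixels whose centres fall inside a closed polygon given as
    a (K, 2) sequence of contour key points, using even-odd ray casting."""
    poly = np.asarray(polygon, dtype=float)
    xs, ys = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    inside = np.zeros(xs.shape, dtype=bool)
    x1, y1 = poly[-1]
    for x2, y2 in poly:
        with np.errstate(divide="ignore", invalid="ignore"):
            # does a horizontal ray from the pixel centre cross this edge?
            crosses = ((y1 > ys) != (y2 > ys)) & \
                      (xs < (x2 - x1) * (ys - y1) / (y2 - y1) + x1)
        inside ^= crosses
        x1, y1 = x2, y2
    return int(inside.sum())

# background pixels = total pixels minus face pixels, as in the text
face_pixels = region_pixel_count([(2, 2), (8, 2), (8, 8), (2, 8)], 10, 10)
background_pixels = 10 * 10 - face_pixels
```

With the toy square contour above, `face_pixels` is 36 and `background_pixels` 64; real contours would come from the connected key points of Fig. 3.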
Step 2: conduct an eye-tracker experiment to obtain, for each video frame, the fixation-point coordinates of the testers while watching the video, yielding the attention degree of a single pixel;
Define the attention degree of a single region (left eye, right eye, mouth, nose, remaining face region, background) as the number of eye fixation points divided by the number of pixels in that region (efp/p):
c_r = f_r / p_r, c_l = f_l / p_l, c_m = f_m / p_m, c_n = f_n / p_n, c_o = f_o / p_o, c_b = f_b / p_b
where c_r, c_l, c_m, c_n, c_o, c_b are the per-pixel attention degrees of the right-eye, left-eye, mouth, nose, remaining face, and background regions respectively; f_r, f_l, f_m, f_n, f_o, f_b are the numbers of fixation points falling on those regions in the eye-tracker experiment; and p_r, p_l, p_m, p_n, p_o, p_b are the pixel counts of those regions;
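A toy computation of the per-region attention degrees c = f/p follows; the fixation and pixel counts are made-up illustrative numbers, not data from the patent's experiment.

```python
# fixation counts f and pixel counts p per region (illustrative values only)
fixations = {"right_eye": 420, "left_eye": 390, "mouth": 310,
             "nose": 95, "face_other": 130, "background": 55}
pixels = {"right_eye": 700, "left_eye": 700, "mouth": 1100,
          "nose": 900, "face_other": 16000, "background": 83000}

# c = f / p for each region (efp/p)
attention = {region: fixations[region] / pixels[region] for region in fixations}

# regions ranked by per-pixel attention, most attended first
ranking = sorted(attention, key=attention.get, reverse=True)
```

With these numbers the eye regions rank first and the background last, matching the qualitative finding that faces attract most of the viewers' attention.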
Step 3: calibrate and normalize the face region;
Calibration removes the uncertainty caused by the face appearing at different positions in the image, and normalization adapts the method to face regions with different pixel counts across video conferences.
The specific method is:
As shown in Fig. 4(a), a frame is selected at random and the leftmost point among its face-region key points is taken as the calibration origin B. For every other frame, the leftmost point A among its face-region key points is obtained, the coordinate transformation between A and B is computed, and the fixation points in that frame are transformed accordingly, completing the calibration.
As shown in Fig. 4(b), a frame is selected at random and the horizontal extent of the subject's right eye (the distance between the points on the right and left sides of the right eye among the 66 points) is taken as the normalization unit; the fixation points in the other frames are then normalized by this unit.
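The calibration and normalization of Fig. 4 amount to a translation followed by a uniform scaling. Below is a sketch under the assumption that the leftmost face key point of each frame and the right-eye width have already been located; the function name and arguments are illustrative, not the patent's.

```python
import numpy as np

def calibrate_and_normalize(points, leftmost, ref_leftmost, eye_width):
    """Translate this frame's fixation points so its leftmost face key
    point coincides with the reference frame's (calibration), then divide
    by the right-eye width so faces of different sizes become comparable
    (normalization)."""
    pts = np.asarray(points, dtype=float)
    offset = np.asarray(ref_leftmost, dtype=float) - np.asarray(leftmost, dtype=float)
    return (pts + offset) / float(eye_width)

calibrated = calibrate_and_normalize(points=[[10.0, 20.0]],
                                     leftmost=(4.0, 6.0),
                                     ref_leftmost=(2.0, 3.0),
                                     eye_width=2.0)
```

Here the point (10, 20) is shifted by (−2, −3) and scaled by 1/2, giving (4, 8.5).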
Step 4: obtain the Gaussian mixture model;
Assume the eye fixation points obey a Gaussian mixture model. On the basis of the calibrated and normalized eye-tracker data, the Gaussian mixture model is written as a linear superposition of Gaussian components:
p(x*) = Σ_{k=1..K} π_k N_k(x*)
N_k(x*) = (1 / 2π) · |Σ_k|^(−1/2) · exp{−(1/2)(x* − μ_k)^T Σ_k^(−1) (x* − μ_k)}
where N_k(x*) denotes the k-th Gaussian component; π_k, μ_k, and Σ_k are the mixing coefficient, mean, and covariance of the k-th Gaussian component; and x* denotes an eye fixation point after two-dimensional calibration and normalization. K is the number of Gaussian components in the GMM. Because far fewer fixation points fall on the nose than on the eyes and mouth, K is set to 3 here, the components corresponding to the right eye, left eye, and mouth respectively. Meanwhile, μ_k is set to the normalized centroid of each facial feature.
The above steps are performed offline: for a group of training videos, the eye-tracker experiment and its data analysis yield the Gaussian mixture model used to assess the objective quality of the video conferencing system.
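The mixture density defined above can be written out directly. A numpy-only sketch of p(x*) = Σ_k π_k N_k(x*) follows; the parameter values are placeholders, not trained values (the patent fits them with EM).

```python
import numpy as np

def gaussian_2d(x, mu, cov):
    """N_k(x) = (1 / 2π) |Σ|^(−1/2) exp(−½ (x−μ)ᵀ Σ⁻¹ (x−μ))."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def gmm_density(x, weights, means, covs):
    """p(x*) = Σ_k π_k N_k(x*), a K-component Gaussian mixture."""
    return sum(w * gaussian_2d(x, m, c)
               for w, m, c in zip(weights, means, covs))

# K = 3 components for right eye, left eye, mouth (placeholder parameters)
pis = [0.4, 0.4, 0.2]
mus = [np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.0, -1.0])]
covs = [0.2 * np.eye(2)] * 3
```

A standard EM fitter (e.g. scikit-learn's `GaussianMixture` with `means_init` set to the facial-feature centroids) could replace the placeholder parameters with trained ones.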
(2) The assessment part comprises the following steps:
Step 1: identical to step 1 of the training process, automatically extract the background, face, left-eye, right-eye, mouth, and nose regions.
Step 2: identical to step 3 of the training process, calibrate and normalize the face region of the video; see Fig. 4 for details.
Step 3: on the basis of the Gaussian mixture model obtained in the training stage, calculate the Gaussian distribution weights over the right-eye, left-eye, mouth, and nose regions, and the weights of the remaining face region and the background region; see Fig. 5 for details.
Fig. 5 shows how the weight map is constructed. In this embodiment, the weight map quantifies the importance of each face and background pixel in the video conferencing system. The input is one frame of a video conference. First, the face and the key facial regions are extracted automatically by the method of Fig. 3. Second, the key points in the video are calibrated and normalized by the method of Fig. 4. Finally, with the GMM parameters obtained in steps 2 and 4 of the training part of Fig. 1, the weight of each pixel is computed by the formula below according to the region the pixel belongs to (background, face, left eye, right eye, nose, or mouth), and the weight map of the video conference image is output; the weight values set the importance of each image pixel during quality assessment.
where:
g(x) = max_k π_k N_k(x) / ( Σ_{x ∈ others} max_k π_k N_k(x) · p_o )
The invention is not limited to setting the image-pixel weights in this manner.
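A simplified sketch of the weight-map construction: each pixel gets max_k π_k N_k(x) evaluated at its centre, floored at a flat background weight. The flat floor is a deliberate simplification standing in for the patent's g(x) background term, not its exact formula.

```python
import numpy as np

def gaussian_2d(x, mu, cov):
    """Bivariate Gaussian density evaluated at point x."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def weight_map(height, width, pis, mus, covs, background_weight=1e-3):
    """Per-pixel importance: max_k π_k N_k(x) at each pixel centre,
    with a flat floor for background pixels (simplified g(x))."""
    wmap = np.empty((height, width))
    for i in range(height):
        for j in range(width):
            x = np.array([j + 0.5, i + 0.5])
            peak = max(p * gaussian_2d(x, m, c)
                       for p, m, c in zip(pis, mus, covs))
            wmap[i, j] = max(peak, background_weight)
    return wmap

# one-component toy map: the weight peaks at the component mean
wmap = weight_map(8, 8, pis=[1.0], mus=[np.array([4.0, 4.0])],
                  covs=[np.eye(2)])
```

Pixels near the component mean receive the largest weights, while far-away pixels fall back to the flat background weight, mirroring how the facial features dominate the map.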
Step 4: on the basis of the weight map, calculate the Gaussian-mixture-model-based peak signal-to-noise ratio (GMM-PSNR) and assess the encoded image quality of the video conferencing system; see Fig. 6 for details.
Fig. 6 illustrates the calculation of GMM-PSNR. In this embodiment, the GMM-PSNR calculation outputs a measure of the encoded image quality of the video conferencing system. First, as in traditional metrics (such as PSNR), the residual between the images before and after encoding is obtained by computing the root-mean-square error between the original video image and the video image under assessment. Then, weighting the error by the weight map yields the value of GMM-MSE. Finally, GMM-PSNR is obtained by taking the logarithm; the exact formulas are given below. The invention is not limited to this improvement of traditional PSNR: other metrics, such as structural similarity (SSIM), can likewise be improved by weighting with the weight map.
The specific calculation formulas are:
MSE_GMM = Σ_{i=1..M} Σ_{j=1..N} ( ω_x · (I'_x − I_x) )² / Σ_{i=1..M} Σ_{j=1..N} ω_x²
PSNR_GMM = 10 · log( (2^n − 1)² / MSE_GMM )
where I'_x and I_x are the values of pixel x in the processed and original video frames respectively, M and N are the pixel counts in the vertical and horizontal directions, and n (= 8) is the bit depth.
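The two formulas above in code (the logarithm is taken base 10, the usual convention for PSNR in dB; `wmap` is any per-pixel weight map such as the one of Fig. 5):

```python
import numpy as np

def gmm_psnr(original, processed, wmap, bit_depth=8):
    """MSE_GMM = Σ(ω_x (I'_x − I_x))² / Σ ω_x²;
    PSNR_GMM = 10 · log10((2ⁿ − 1)² / MSE_GMM), in dB."""
    I = np.asarray(original, dtype=float)
    Ip = np.asarray(processed, dtype=float)
    w = np.asarray(wmap, dtype=float)
    mse = np.sum((w * (Ip - I)) ** 2) / np.sum(w ** 2)
    return 10.0 * np.log10((2 ** bit_depth - 1) ** 2 / mse)

# with a uniform weight map, GMM-PSNR reduces to ordinary PSNR
ref = np.zeros((4, 4))
dist = ref + 16.0
uniform = gmm_psnr(ref, dist, np.ones((4, 4)))
```

With uniform weights the weighted MSE equals the plain MSE (256 here), so `uniform` equals the conventional PSNR of the same pair of frames.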
Finally, the invention outputs the Gaussian-mixture-model-based peak signal-to-noise ratio (GMM-PSNR) after video conferencing system encoding, which measures the degradation of image quality caused by video encoding. Like conventional PSNR, GMM-PSNR is measured in dB. However, because viewers attend to different image regions to different degrees, GMM-PSNR assigns weights of different sizes to face regions of unequal importance in the video conferencing system, significantly improving its correlation with subjective quality assessment.
The invention provides a more effective evaluation method for the quality of video transmission in video conferencing. Tests show that, relative to traditional objective video assessment methods such as VQM, MOVIE, and PSNR, GMM-PSNR significantly improves the correlation with subjective testing standards such as MOS and DMOS, indicating that GMM-PSNR can serve as a more effective objective metric for video conference coding. This benefits video processing, compression, and video communication in video conferencing: it can monitor the performance of a video system and provide feedback for adjusting codec or channel parameters, keeping video quality within an acceptable range. The video quality assessment standard can also be used to design, evaluate, and optimize codec performance, and to design and optimize digital video systems that match visual models.
The invention relates to an objective quality assessment method for video sequences, for the perceptual visual quality assessment of video conference coding. It employs eye-tracker experiments and real-time extraction of the face and facial features. In the experiments, the importance of the background, face, and facial-feature regions is determined by the observers' attention to each part. The eye fixation points collected by the eye tracker are assumed to follow a Gaussian mixture model, from which an importance weight map is generated that captures observers' attention to each region of the video. Based on this weight map, different weights can be assigned to each pixel in the video frame, improving existing objective quality assessment methods. More specifically, the invention relates to perceptual video quality assessment for video conference coding built on existing video quality evaluation methods.

Claims (3)

1. An objective quality assessment method for video conference coding, comprising a training part and an assessment part;
The training part comprises the following steps:
Step 1: extract the face and facial feature regions;
The automatic facial feature calibration algorithm is used to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth, and nose regions from a given video conference sequence;
Step 2: conduct an eye-tracker experiment to obtain, for each video frame, the fixation-point coordinates of the testers while watching the video, yielding the attention degree of a single pixel;
Define the attention degree of a single region as the number of eye fixation points divided by the number of pixels in that region (efp/p), where a single region is the left eye, right eye, mouth, nose, remaining face region, or background; then:
c_r = f_r / p_r, c_l = f_l / p_l, c_m = f_m / p_m, c_n = f_n / p_n, c_o = f_o / p_o, c_b = f_b / p_b
where c_r, c_l, c_m, c_n, c_o, c_b are the per-pixel attention degrees of the right-eye, left-eye, mouth, nose, remaining face, and background regions respectively; f_r, f_l, f_m, f_n, f_o, f_b are the numbers of fixation points falling on those regions in the eye-tracker experiment; and p_r, p_l, p_m, p_n, p_o, p_b are the pixel counts of those regions;
Step 3: calibrate and normalize the face region;
The specific method is:
Select a frame at random and take the leftmost point among its face-region key points as the calibration origin B; for every other frame, obtain the leftmost point A among its face-region key points, compute the coordinate transformation between A and B, and transform the fixation points in that frame accordingly, completing the calibration;
Select a frame at random and take the horizontal extent of the subject's right eye in the image as the normalization unit; normalize the fixation points in the other frames by this unit;
Step 4: obtain the Gaussian mixture model;
Assume the eye fixation points obey a Gaussian mixture model; on the basis of the calibrated and normalized eye-tracker data, write the Gaussian mixture model as a linear superposition of Gaussian components:
p(x*) = Σ_{k=1..K} π_k N_k(x*)
N_k(x*) = (1 / 2π) · |Σ_k|^(−1/2) · exp{−(1/2)(x* − μ_k)^T Σ_k^(−1) (x* − μ_k)}
where N_k(x*) denotes the k-th Gaussian component; π_k, μ_k, and Σ_k are the mixing coefficient, mean, and covariance of the k-th Gaussian component; x* denotes an eye fixation point after two-dimensional calibration and normalization; and K is the number of Gaussian components of the GMM;
The above steps are performed offline: for a group of training videos, the Gaussian mixture model used to assess the objective quality of the video conferencing system is obtained;
The assessment part comprises the following steps:
Step 1: for a group of videos, repeat step 1 of the training part to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth, and nose regions;
Step 2: repeat step 3 of the training process to calibrate and normalize the face region;
Step 3: on the basis of the Gaussian mixture model obtained in the training stage, calculate the Gaussian distribution weights over the right-eye, left-eye, mouth, and nose regions, and the weights of the remaining face region and the background region, obtaining the weight map;
Step 4: on the basis of the weight map, calculate the Gaussian-mixture-model-based peak signal-to-noise ratio and assess the encoded image quality of the video conferencing system.
2. The objective quality assessment method for video conference coding according to claim 1, wherein step 1 of the training part is specifically:
First, obtain the face-region key points in each frame of the video conference sequence by the automatic facial feature calibration algorithm;
Second, use mean-shift on the extracted face region to locally search for the key points of the left-eye, right-eye, mouth, and nose regions in the face-region image, and match these key points against the point distribution model in the database to optimize the left-eye, right-eye, mouth, and nose key points;
Third, obtain the optimized key points of the face, left-eye, right-eye, mouth, and nose regions in each frame;
Fourth, connect the key points of the face, left eye, right eye, mouth, and nose respectively to obtain the contours of the face, left eye, right eye, mouth, and nose;
Fifth, obtain the pixel counts of the face, left-eye, right-eye, mouth, and nose regions respectively, subtract the face pixel count from the total image pixel count to obtain the background pixel count, completing the automatic extraction of the key facial regions.
3. The objective quality assessment method for video conference coding according to claim 1, wherein in step 4 of the training part, K = 3, the components corresponding to the right eye, left eye, and mouth respectively, and μ_k is set to the normalized centroid of each facial feature.
CN201410826849.4A 2014-12-25 2014-12-25 A kind of objective quality assessment method towards video conference coding Active CN104506852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410826849.4A CN104506852B (en) 2014-12-25 2014-12-25 A kind of objective quality assessment method towards video conference coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410826849.4A CN104506852B (en) 2014-12-25 2014-12-25 A kind of objective quality assessment method towards video conference coding

Publications (2)

Publication Number Publication Date
CN104506852A true CN104506852A (en) 2015-04-08
CN104506852B CN104506852B (en) 2016-08-24

Family

ID=52948564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410826849.4A Active CN104506852B (en) 2014-12-25 2014-12-25 A kind of objective quality assessment method towards video conference coding

Country Status (1)

Country Link
CN (1) CN104506852B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376645A (en) * 2018-10-18 2019-02-22 深圳英飞拓科技股份有限公司 A kind of face image data preferred method, device and terminal device
CN110365966A (en) * 2019-06-11 2019-10-22 北京航空航天大学 A kind of method for evaluating video quality and device based on form
US10860858B2 (en) * 2018-06-15 2020-12-08 Adobe Inc. Utilizing a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices
CN113506260A (en) * 2021-07-05 2021-10-15 北京房江湖科技有限公司 Face image quality evaluation method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110069138A1 (en) * 2009-09-24 2011-03-24 Microsoft Corporation Mimicking human visual system in detecting blockiness artifacts in compressed video streams
CN102170552A (en) * 2010-02-25 2011-08-31 株式会社理光 Video conference system and processing method used therein
CN102984540A (en) * 2012-12-07 2013-03-20 浙江大学 Video quality assessment method estimated on basis of macroblock domain distortion degree
WO2013056123A2 (en) * 2011-10-14 2013-04-18 T-Mobile USA, Inc Quality of user experience testing for video transmissions
CN104243994A (en) * 2014-09-26 2014-12-24 厦门亿联网络技术股份有限公司 Method for real-time motion sensing of image enhancement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
余良强 (Yu Liangqiang): "No-reference video quality assessment method in video conferencing systems", CNKI, 9 June 2014 (2014-06-09) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860858B2 (en) * 2018-06-15 2020-12-08 Adobe Inc. Utilizing a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices
CN109376645A (en) * 2018-10-18 2019-02-22 深圳英飞拓科技股份有限公司 A kind of face image data preferred method, device and terminal device
CN109376645B (en) * 2018-10-18 2021-03-26 深圳英飞拓科技股份有限公司 Face image data optimization method and device and terminal equipment
CN110365966A (en) * 2019-06-11 2019-10-22 北京航空航天大学 A kind of method for evaluating video quality and device based on form
CN113506260A (en) * 2021-07-05 2021-10-15 北京房江湖科技有限公司 Face image quality evaluation method and device, electronic equipment and storage medium
CN113506260B (en) * 2021-07-05 2023-08-29 贝壳找房(北京)科技有限公司 Face image quality assessment method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104506852B (en) 2016-08-24

Similar Documents

Publication Publication Date Title
CN104079925B (en) Ultra high-definition video image quality method for objectively evaluating based on vision perception characteristic
WO2018023734A1 (en) Significance testing method for 3d image
CN103152600B (en) Three-dimensional video quality evaluation method
CN102421007B (en) Image quality evaluating method based on multi-scale structure similarity weighted aggregate
CN101976444B (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN104811691B (en) A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN105160678A (en) Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN101950422B (en) Singular value decomposition(SVD)-based image quality evaluation method
CN107396095B (en) A kind of no reference three-dimensional image quality evaluation method
CN102663747B (en) Stereo image objectivity quality evaluation method based on visual perception
CN105338343A (en) No-reference stereo image quality evaluation method based on binocular perception
CN104202594B (en) A kind of method for evaluating video quality based on 3 D wavelet transformation
CN106920232A (en) Gradient similarity graph image quality evaluation method and system based on conspicuousness detection
CN104506852B (en) A kind of objective quality assessment method towards video conference coding
CN103096122A (en) Stereoscopic vision comfort level evaluation method based on motion features inside area of interest
Yang et al. Blind assessment for stereo images considering binocular characteristics and deep perception map based on deep belief network
CN107743225B (en) A method of it is characterized using multilayer depth and carries out non-reference picture prediction of quality
CN106791822B (en) It is a kind of based on single binocular feature learning without reference stereo image quality evaluation method
US20230025527A1 (en) Quantitative analysis method and system for attention based on line-of-sight estimation neural network
CN108259893B (en) Virtual reality video quality evaluation method based on double-current convolutional neural network
CN106447695A (en) Same object determining method and device in multi-object tracking
CN105894507B (en) Image quality evaluating method based on amount of image information natural scene statistical nature
CN106993188A (en) A kind of HEVC compaction coding methods based on plurality of human faces saliency
CN102708568B (en) Stereoscopic image objective quality evaluation method on basis of structural distortion
CN104144339B (en) A kind of matter based on Human Perception is fallen with reference to objective evaluation method for quality of stereo images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant