CN103596007B - A kind of method adjusting video encoding quality according to viewing environment change - Google Patents


Info

Publication number
CN103596007B
CN103596007B CN201310546122.6A
Authority
CN
China
Prior art keywords
viewing environment
psnr
viewing
frame
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310546122.6A
Other languages
Chinese (zh)
Other versions
CN103596007A (en
Inventor
陈加忠
熊端
李榕
朱鹏飞
王冼
舒琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201310546122.6A priority Critical patent/CN103596007B/en
Publication of CN103596007A publication Critical patent/CN103596007A/en
Application granted granted Critical
Publication of CN103596007B publication Critical patent/CN103596007B/en

Abstract

The invention discloses a method for adjusting video encoding quality according to changes in the viewing environment. A mobile device obtains environment parameters from its built-in sensors and sends them to a video server over a network transport protocol. The video server analyzes the received parameters, estimates the current viewing-environment state of the mobile device, and dynamically changes the video encoding quality according to the estimated state. By dynamically adjusting the quality of the video stream, the invention matches the current viewing scene, improves visual quality and quality of experience, and raises the utilization of bandwidth.

Description

A method for adjusting video encoding quality according to viewing environment changes
Technical field
The present invention relates to video encoding methods, and in particular to a method for adjusting video encoding quality according to viewing environment changes.
Background technology
Traditional video codecs achieve compression by eliminating statistical redundancy in space and time. To raise the compression ratio further without sacrificing the quality the visual system can perceive, the visual redundancy in video content should be exploited more fully. This visual redundancy exists because of the nonlinear characteristics of the human visual system (HVS). Video applications on mobile devices differ from traditional video viewing in two respects. First, in screen size and brightness, mobile displays differ from televisions and computer monitors, and even among mobile devices themselves: a tablet generally has a larger display than a phone, and different phones have different resolutions. Second, a mobile device may be used in many different environments while video is being watched. Close viewing distance, ambient light, and shaking (movement) of the viewer's body all strongly affect the viewer's perceived quality. Because the viewing device and environment differ greatly from the traditional indoor setting, JND (just-noticeable difference, or distortion) methods that depend on traditional viewing conditions can no longer deliver high visual quality or efficient bandwidth usage. Video coding and transmission for mobile devices should therefore differ from that for traditional equipment, so as to achieve the best quality of experience and bandwidth utilization in mobile viewing environments.
Summary of the invention
In view of this, the object of the present invention is to propose a method for adjusting video encoding quality according to viewing environment changes. The method uses the sensors configured on a mobile phone to collect environment parameters while video is being watched, and uses these parameters as the basis for changing the server's video encoding quality and the quality of the video stream on the network, thereby dynamically adjusting the video stream quality (bit rate and peak signal-to-noise ratio, PSNR) to match the current viewing scene well.
To achieve the above object, the present invention adopts the following technical solution:
A method for adjusting video encoding quality according to viewing environment changes, comprising the following steps:
(1) a video server receives, over a network transport protocol, the environment parameters collected by the sensors carried by a mobile device;
(2) the video server analyzes the received environment parameters and estimates the current viewing-environment state;
(3) the video server dynamically changes the video encoding quality according to the estimated viewing-environment state.
The technical effect of the present invention is that, by dynamically adjusting the video stream quality (bit rate and peak signal-to-noise ratio, PSNR), the current viewing scene can be matched well, the utilization of bandwidth is raised, and visual quality and quality of experience are improved.
Accompanying drawing explanation
Fig. 1: workflow of the video server;
Fig. 2: coding-quality adjustment strategy;
Fig. 3: quantization-parameter prediction for the current frame;
Fig. 4: adjusting the quantization parameter according to the PSNR value and the viewing-environment level.
Detailed description of the invention
To make the object, technical solution, and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and do not limit it.
As shown in Fig. 1, the video server is started to acquire a video source, which may be a local file or real-time video content obtained from a camera, and encodes that source.
After the video server obtains the environment parameters collected by the mobile phone, it must dynamically change the video encoding quality according to the parameter changes, so as to optimize the trade-off between quality and traffic: the more stable the environment, the better the video quality and the higher the bit rate. When the viewing-environment conditions are poor, the viewer's demands on video quality are lower, so the video quality can be reduced and traffic saved.
Specifically, the present invention includes following link:
1. Determining the viewing-environment level from the sensor parameters
As shown in Fig. 2, the sensor parameters can serve as a reference for the quantization of the video data during encoding, dynamically changing the coding quality so that the video quality varies with the viewing environment.
The mobile device obtains the needed sensor parameters (i.e., the environment parameters around the device): it obtains a control instance from the sensor system service and registers for the sensor types it wants, after which it can read the values of the three required sensors: illumination, distance, and gravitational acceleration. Here "distance" generally means the distance between the sensor and the nearest object facing it; while video is being watched, the object facing the sensor is usually the human eye. Because the light sensor and the distance sensor each produce a scalar, their values can be transmitted directly without processing and analyzed on the server; the accelerometer produces a vector containing three directional components, so a simple weighted sum of absolute values is computed first. The server therefore receives the weighted sum of absolute acceleration increments, the light-intensity value, and the distance value. The sensor parameters are transmitted over a socket connection: the server creates a passive socket, binds a transient port, enters the Listen state, and waits for connection requests from clients; when a client's connection request arrives, the server calls Read after Accept to read the client's sensor parameters, analyzes and quantizes them, and obtains the viewing-environment level of the mobile platform. The client creates a socket, binds the server's IP address and port information, and sends its data to the server once the connection request succeeds. Because the parameter transfer is connection-oriented and must be reliable, the TCP transport protocol is chosen rather than UDP.
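The socket exchange described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the JSON payload, function names, and ephemeral-port handling are assumptions; the text only specifies that the three values travel over a TCP socket from client to server.

```python
import json
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Server side: passive socket in the Listen state; accept one
    client, then read and parse its sensor parameters."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 picks a free transient port
    srv.listen(1)
    received = {}

    def _run():
        conn, _ = srv.accept()      # Accept, then Read
        chunks = []
        while True:
            buf = conn.recv(4096)
            if not buf:
                break
            chunks.append(buf)
        received.update(json.loads(b"".join(chunks).decode()))
        conn.close()
        srv.close()

    t = threading.Thread(target=_run)
    t.start()
    return srv.getsockname()[1], t, received

def send_parameters(port, accel_sum, light, distance, host="127.0.0.1"):
    """Client side: connect to the server's address and port over TCP
    and send the three environment parameters, then close."""
    payload = {"accel": accel_sum, "light": light, "distance": distance}
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    cli.sendall(json.dumps(payload).encode())
    cli.close()

port, thread, received = serve_once()
send_parameters(port, accel_sum=1.2, light=320.0, distance=4.0)
thread.join()
print(received)  # {'accel': 1.2, 'light': 320.0, 'distance': 4.0}
```

On the real system the server would keep the socket open and re-read parameters continuously; a single round trip is shown here only to keep the sketch self-contained.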
Actual tests show that, among the parameters, the three acceleration components have a large influence on video viewing, and the component perpendicular to the ground affects quality the most; that is, up-and-down shaking disturbs viewing the most. The increments of the three acceleration components are therefore combined into a weighted sum of absolute values. Let the increments of the three components of the accelerometer vector be Δx, Δy, Δz, where Δz is the increment of the component perpendicular to the ground; preferably, the weighted absolute sum is computed as 0.25 × |Δx| + 0.25 × |Δy| + 0.5 × |Δz|. The accelerometer is sampled every 0.1 seconds; the current sample and the previous sample are subtracted to obtain the three increments, and the weighted sum of their absolute values is sent to the video server.
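The weighted absolute sum can be written directly from the formula above; the two sample tuples in the usage line are hypothetical readings.

```python
def accel_increment_sum(prev, curr):
    """Weighted absolute sum of acceleration increments.

    `prev` and `curr` are (x, y, z) samples taken 0.1 s apart; the
    0.25/0.25/0.5 weights come from the text, with z perpendicular
    to the ground (weighted highest)."""
    dx, dy, dz = (c - p for p, c in zip(prev, curr))
    return 0.25 * abs(dx) + 0.25 * abs(dy) + 0.5 * abs(dz)

# Device roughly at rest, then jolted mostly vertically:
print(accel_increment_sum((0.1, 0.0, 9.8), (0.3, -0.1, 11.0)))  # ≈ 0.675
```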
The range of each sensor's environment parameter is divided into intervals, and a flag bit indicates which interval the parameter falls into; the flag takes values such as 0, 1, or 2. The sum of the flag bits of all the sensors represents the viewing-environment level, and according to this sum the estimated environment quality is divided into 5 levels. Computing the current parameters against the divided intervals gives a fairly accurate estimate of the current environment quality.
If the light value falls in the interval 0–600 (less than 600), the light flag is set to 0, otherwise to 1. If the distance value falls in 0–6 (less than 6), the distance flag is set to 0, otherwise to 1. If the acceleration value falls in 0–5 (less than 5), the acceleration flag is set to 0; if it falls in 5–10 (less than 10), the acceleration flag is set to 1; otherwise the acceleration flag is set to 2.
As shown in Table 1, after the flag bit of each environment parameter is obtained, the viewing environment is divided into 5 levels according to the sum of the flag bits.
Table 1: the 5 viewing-environment levels corresponding to the environment parameters
The server decides according to the 3 flag bits: (1) if all three flags are 0, the viewing-environment level is set to level = 1; (2) if the three flags sum to 1, the level is set to level = 2; (3) if they sum to 2, the level is set to level = 3; (4) if they sum to 3, the level is set to level = 4; (5) if they sum to 4, the level is set to level = 5. The correspondence between environment parameters, flag bits, and viewing-environment levels is shown in Table 1.
It should be pointed out that the above interval divisions, flag values, and level settings are merely examples; those skilled in the art will appreciate that they can be adapted to actual conditions.
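The flag-and-sum classification above can be sketched as a single function; the interval boundaries are the illustrative ones from the text (light < 600, distance < 6, acceleration < 5 / < 10).

```python
def environment_level(light, distance, accel_sum):
    """Map raw sensor readings to a viewing-environment level (1..5).

    Each parameter gets a flag for the interval it falls into; the
    flag sum (0..4) selects the level, as in Table 1. The boundaries
    are the example values from the description."""
    light_flag = 0 if light < 600 else 1
    dist_flag = 0 if distance < 6 else 1
    if accel_sum < 5:
        accel_flag = 0
    elif accel_sum < 10:
        accel_flag = 1
    else:
        accel_flag = 2
    return 1 + light_flag + dist_flag + accel_flag

print(environment_level(light=320, distance=4.0, accel_sum=1.2))    # 1
print(environment_level(light=1280, distance=9.0, accel_sum=12.0))  # 5
```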
2. Adjusting the encoder's quantization parameter according to the viewing-environment level
When the viewing-environment level is unchanged, the encoder's quantization parameter is not changed; when the viewing-environment level changes, a different encoder quantization parameter is selected according to the new level. As the environment quality worsens, a larger distortion can be chosen. In this embodiment, the peak signal-to-noise ratios allowed by the viewing-environment levels form an arithmetic progression with a common difference of 2 dB. The key to adjusting the coding strategy by environment level is choosing a suitable quantization step according to the current JND; the key steps are shown in Fig. 3.
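As a sketch of the 2 dB arithmetic progression: assuming level 1 anchors at the 40 dB upper bound mentioned in the experiment below (the text does not fix the anchor, so `best_psnr=40.0` is an assumption), the allowed PSNR per level would be:

```python
def allowed_psnr(level, best_psnr=40.0, step_db=2.0):
    """Allowed PSNR for a viewing-environment level (1 = best).

    Arithmetic progression with a 2 dB common difference: the worse
    the environment, the lower the allowed PSNR. The 40 dB anchor is
    an assumption, not specified per level in the text."""
    return best_psnr - step_db * (level - 1)

print([allowed_psnr(lv) for lv in range(1, 6)])  # [40.0, 38.0, 36.0, 34.0, 32.0]
```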
Here, the quantization step and quantization parameter of the current frame are determined by a rate-distortion criterion from the current viewing-environment level, the standard deviations of the transform coefficients of the previous and current frames in the transform domain, and the peak signal-to-noise ratio and quantization step of the previous frame. The current frame is the video frame that is currently being encoded but whose encoding is not yet finished. The method for choosing the quantization step is designed from the signal-to-noise ratio and the distortion under a given quantization parameter.
The peak signal-to-noise ratio is expressed as:

$\mathrm{PSNR} = 10 \times \log_{10}\dfrac{255^{2}}{\mathrm{MSE}}$  (1)

where MSE is the mean squared error, i.e., the distortion. The transform coefficients to be uniformly quantized are assumed to follow an independent, identically distributed zero-mean Laplacian source with standard deviation $\sigma$ and quantization step $Q_{step}$, where $\theta_1 = \sqrt{2}\,Q_{step}/\sigma$. The quantizing distortion can be estimated as:
$D_{Q_k} = \sigma^{2}\,\dfrac{\theta_1 e^{\theta_2\theta_1}\left(2+\theta_1-2\theta_2\theta_1\right)+2-2e^{\theta_1}}{2\left(1-e^{\theta_1}\right)}$  (2-1)

where $\theta_2$ is the quantization offset, with range 0–1; the present invention takes $\theta_2 = 0.5$. Formula (2-1) is then rewritten as formula (2-2):
$\dfrac{\sigma^{2}}{D_{Q_k}-\sigma^{2}} = \dfrac{1}{\theta_1}\left(e^{-0.5\theta_1}-e^{0.5\theta_1}\right)$  (2-2)
The quantizing distortion therefore depends only on the quantization parameter and the variance of the current frame. Rearranging formula (2-2) gives formula (3):
$\dfrac{\sigma^{2}}{D_{Q_k}-\sigma^{2}} = f(\theta_1)$  (3)
That is, if the estimate $D_{Q_k}$ of the quantizing distortion and the variance $\sigma^{2}$ of the current frame in the transform domain are known, the quantization step of the current frame can be computed from formula (4):
$\theta_1 = f^{-1}\!\left(\dfrac{\sigma^{2}}{D_{Q_k}-\sigma^{2}}\right), \quad Q_{step}^{k} = \dfrac{\theta_1\sigma}{\sqrt{2}}$  (4)
In implementation, to reduce computational complexity, after the value of $f(\theta_1)$ is computed from formula (3), a lookup table mapping $f(\theta_1)$ to $\theta_1$ can be searched for the $\theta_1$ corresponding to this $f(\theta_1)$ value, and the estimate of the quantization step is then computed from formula (4). A typical correspondence between $f(\theta_1)$ and $\theta_1$ is shown in Table 2.
Table 2: correspondence between $f(\theta_1)$ and $\theta_1$
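Table 2's entries are not reproduced in this text, so as a sketch a numerical inversion can stand in for the lookup. This assumes the reconstructed form of $f(\theta_1)$ from formula (2-2), which is monotonically decreasing, so bisection suffices:

```python
import math

def f(theta1):
    """Reconstructed f(θ1) = (e^{-0.5 θ1} - e^{0.5 θ1}) / θ1 from
    formula (2-2); decreases from -1 (θ1 → 0) toward -∞."""
    return (math.exp(-0.5 * theta1) - math.exp(0.5 * theta1)) / theta1

def f_inverse(target, lo=1e-9, hi=50.0, iters=100):
    """Invert f by bisection, standing in for the f(θ1) ↔ θ1 lookup
    table (Table 2), whose entries are not available here."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:   # f decreasing: value too high → θ1 too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta1 = f_inverse(-2.0)
print(round(f(theta1), 6))  # -2.0
```

A production encoder would prefer the table for speed, exactly as the text argues; the bisection is only to keep the sketch self-contained.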
Specifically, the steps for obtaining the quantization parameter of the current frame are as follows:
1. Given the known peak signal-to-noise ratio PSNR1 of the previous frame, compute the distortion D1 of the previous frame from formula (1);
2. According to the relation between viewing-environment level and allowed peak signal-to-noise ratio, determine the adjustment of the peak signal-to-noise ratio allowed under the current viewing-environment level;
3. From the result of step 2 and the previous frame's PSNR1, compute the peak signal-to-noise ratio PSNR2 of the current frame;
4. Substitute PSNR2 into formula (1) to compute the estimate D2 of the quantizing distortion of the current frame, i.e., $D2 = 255^{2}/10^{\mathrm{PSNR2}/10}$;
5. Substitute the standard deviations of the transform coefficients of the previous and current frames, together with D1 and D2, into formula (3) to compute $f(\theta_1)$ for the previous frame and the current frame; obtain the $\theta_1$ of the previous and current frames by table lookup; compute from formula (4) the estimates of the quantization steps of the previous and current frames, and their difference Δ;
6. From the real quantization step QP1 of the previous frame and the Δ obtained in step 5, compute the quantization step QP2 to be used by the current frame, convert it into a quantization parameter, and input it to the encoder, so that the encoder bit rate adapts itself to the environment conditions.
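The six steps above can be sketched end to end. The 2 dB-per-level PSNR adjustment, the reconstructed $f(\theta_1)$, and all numeric inputs are assumptions for illustration; the final conversion of QP2 into an encoder quantization parameter is codec-specific and omitted:

```python
import math

def psnr_to_mse(psnr):
    """Invert formula (1): MSE = 255^2 / 10^(PSNR/10)."""
    return 255.0 ** 2 / 10.0 ** (psnr / 10.0)

def f(theta1):
    """Reconstructed f(θ1) from (2-2): (e^{-0.5 θ1} - e^{0.5 θ1}) / θ1."""
    return (math.exp(-0.5 * theta1) - math.exp(0.5 * theta1)) / theta1

def f_inverse(target, lo=1e-9, hi=50.0):
    for _ in range(100):              # bisection stands in for Table 2
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

def qstep_estimate(sigma, distortion):
    """Formula (4): θ1 = f^{-1}(σ²/(D - σ²)), Q_step = θ1 σ / √2."""
    theta1 = f_inverse(sigma ** 2 / (distortion - sigma ** 2))
    return theta1 * sigma / math.sqrt(2.0)

def next_qstep(psnr1, qp1, sigma_prev, sigma_curr, level_change, step_db=2.0):
    """Steps 1-6: shift the previous frame's real step QP1 by the
    difference Δ of the two model-estimated steps. A worsening
    environment (level_change > 0) lowers the allowed PSNR by
    2 dB per level (assumed sign convention)."""
    d1 = psnr_to_mse(psnr1)                   # step 1
    psnr2 = psnr1 - step_db * level_change    # steps 2-3
    d2 = psnr_to_mse(psnr2)                   # step 4
    delta = qstep_estimate(sigma_curr, d2) - qstep_estimate(sigma_prev, d1)
    return qp1 + delta                        # step 6

# A one-level worsening should coarsen the step (larger Q_step):
print(next_qstep(psnr1=38.0, qp1=20.0, sigma_prev=30.0,
                 sigma_curr=30.0, level_change=1) > 20.0)  # True
```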
The smaller the quantization parameter, the better the subjective and objective video quality and the higher the resulting bit rate. The server receives the mobile terminal's environment parameters in real time and uses them to change the quantization parameter at the encoding of every frame.
The mobile terminal receives the video stream information over HTTP, accessing the application layer through a network address and port. The server distributes video over HTTP to the mobile client, and the client plays the video stream from the server.
The validity of the technical solution of the present invention is verified with an experiment:
A smartphone is used as the terminal mobile device. It has a 4-inch TFT screen with a resolution of 854 × 480 pixels, and carries a light sensor, a distance sensor, and an acceleration sensor. The light sensor reports the values 10, 225, 320, 640, and 1280. The distance sensor reports two values, 9.0 and 0.0. The acceleration sensor reports three parameters for the x, y, and z axis directions. The sensor sampling interval is 0.1 seconds, and the absolute increments in each direction are computed. According to these value ranges, the light intervals are set to 0–600 and above 600; the distance intervals to 0–5 and above 5; and the three acceleration intervals to 0–5, 5–10, and above 10.
From the viewing-environment level of Table 1, following Fig. 3, the currently just-perceivable distortion, i.e., D2, is estimated, and from it the quantization parameter $Q_{step}$ used in video encoding; the value of $Q_{step}$ ranges over 0–255. Fig. 4 mainly illustrates the situation at the two quality bounds of 30 dB and 40 dB; for PSNR values between 30 dB and 40 dB, finer decisions are likewise made following the steps of Fig. 4.

Claims (7)

1. A method for adjusting video encoding quality according to viewing environment changes, comprising the following steps:
(1) a video server receives, over a network transport protocol, the environment parameters collected by the sensors carried by a mobile device;
(2) the video server analyzes the received environment parameters and estimates the current viewing-environment state, specifically comprising: dividing each environment parameter into intervals and setting a flag bit for each environment parameter, the different values of the flag bit representing the different intervals of that parameter, and dividing the estimated viewing-environment state into different viewing-environment levels according to the sum of the flag values;
(3) the video server dynamically changes the video encoding quality according to the estimated viewing-environment state, specifically comprising: when the viewing-environment level is unchanged, not changing the encoder's quantization parameter; when the viewing-environment level changes, selecting a different encoder quantization parameter according to the new level, and determining the quantization parameter of the current frame by a rate-distortion criterion from the current viewing-environment level, the standard deviations of the transform coefficients of the previous and current frames in the transform domain, and the peak signal-to-noise ratio and quantization step of the previous frame.
2. The method according to claim 1, wherein the environment parameters include the parameters collected by a light sensor, a distance sensor, and an acceleration sensor.
3. The method according to claim 2, wherein the acceleration parameter received by the video server is the weighted sum of the absolute values of the increments of the acceleration components along the x, y, and z coordinate axes, the z direction being perpendicular to the ground, and the three increments being denoted Δx, Δy, Δz.
4. The method according to claim 3, wherein the weighted absolute sum of the increments is preferably 0.25 × |Δx| + 0.25 × |Δy| + 0.5 × |Δz|.
5. The method according to claim 1, wherein the network transport protocol is the TCP protocol and the data are sent over a socket.
6. The method according to claim 1, wherein the peak signal-to-noise ratios allowed by the viewing-environment levels form an arithmetic progression: the worse the viewing-environment quality, the smaller the allowed peak signal-to-noise ratio.
7. The method according to claim 1, wherein determining the quantization parameter of the current frame specifically comprises:
(3a) from the known peak signal-to-noise ratio PSNR1 of the previous frame, computing the mean squared error MSE of the previous frame, i.e., the distortion D1, from formula (1):

$\mathrm{PSNR} = 10 \times \log_{10}\dfrac{255^{2}}{\mathrm{MSE}}$  (1)

(3b) according to the relation between viewing-environment level and allowed peak signal-to-noise ratio, determining the adjustment of the peak signal-to-noise ratio allowed under the current viewing-environment level;
(3c) from the result of step (3b) and the previous frame's PSNR1, computing the peak signal-to-noise ratio PSNR2 of the current frame;
(3d) substituting PSNR2 into formula (1) to compute the estimate D2 of the quantizing distortion of the current frame;
(3e) substituting the standard deviations of the transform coefficients of the previous and current frames, together with D1 and D2, into formula (2) to compute $f(\theta_1)$ for the previous frame and the current frame; obtaining the $\theta_1$ of the previous and current frames by table lookup; computing from formula (3) the estimates of the quantization steps of the previous and current frames, and their difference Δ:

$\dfrac{\sigma^{2}}{D_{Q_k}-\sigma^{2}} = f(\theta_1)$  (2)

$Q_{step}^{k} = \dfrac{\theta_1\sigma}{\sqrt{2}}$  (3)

(3f) from the real quantization step QP1 of the previous frame and the Δ obtained in step (3e), computing the quantization step QP2 to be used by the current frame, and converting it into a quantization parameter.
CN201310546122.6A 2013-11-06 2013-11-06 A kind of method adjusting video encoding quality according to viewing environment change Expired - Fee Related CN103596007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310546122.6A CN103596007B (en) 2013-11-06 2013-11-06 A kind of method adjusting video encoding quality according to viewing environment change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310546122.6A CN103596007B (en) 2013-11-06 2013-11-06 A kind of method adjusting video encoding quality according to viewing environment change

Publications (2)

Publication Number Publication Date
CN103596007A CN103596007A (en) 2014-02-19
CN103596007B true CN103596007B (en) 2016-09-07

Family

ID=50085965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310546122.6A Expired - Fee Related CN103596007B (en) 2013-11-06 2013-11-06 A kind of method adjusting video encoding quality according to viewing environment change

Country Status (1)

Country Link
CN (1) CN103596007B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257633B (en) * 2018-09-28 2020-07-28 西安交通大学 Environment-aware HTTP adaptive streaming media QoE (quality of experience) optimization method
CN112351254A (en) * 2020-10-30 2021-02-09 重庆中星微人工智能芯片技术有限公司 Monitoring video coding and decoding device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101765003A (en) * 2008-12-23 2010-06-30 上海茂碧信息科技有限公司 Method for transmitting audio and video under environment of network with different speeds
CN102855460A (en) * 2011-06-30 2013-01-02 陕西省公安厅 Research on video server pattern recognition algorithm parameters based on automatic modification of linear sensors
CN103120003A (en) * 2010-09-23 2013-05-22 捷讯研究有限公司 System and method for dynamic coordination of radio resources usage in a wireless network environment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8428759B2 (en) * 2010-03-26 2013-04-23 Google Inc. Predictive pre-recording of audio for voice input


Also Published As

Publication number Publication date
CN103596007A (en) 2014-02-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160907

Termination date: 20171106
