CN110944200A - Method for evaluating immersive video transcoding scheme - Google Patents
Method for evaluating immersive video transcoding scheme
- Publication number
- CN110944200A (application CN201911257216.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- quality
- field
- view
- frame rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g 3D video
Abstract
The invention discloses a method for effectively evaluating an immersive video transcoding scheme. The method comprises the following specific steps: (1) segmenting a complete 360-degree panoramic video into a number of regular rectangular blocks; (2) applying sobel filtering to each block of each frame of the partitioned video to extract edge feature information, computing the residuals of consecutive frames in the gradient domain, and taking the mean and variance of the residuals; (3) according to the position of the user's field of view, combining the block features to perform feature conversion and compute the model parameters for the in-view content; (4) inputting the parameters to be applied in the current video compression encoding, including resolution, frame rate and quantization step size, and outputting the predicted change in subjective perceptual quality of the encoding scheme relative to the original, non-transcoded video. With this method, fast and accurate quality prediction can be performed according to the viewing behavior of an immersive user, and the prediction results are close to subjective quality data from real users.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a method for evaluating an immersive video transcoding scheme.
Background
During video acquisition, transmission and storage, video must be compressed and encoded to meet storage and transmission constraints. In compression encoding, the parameters with the greatest influence on the subjective quality of the encoded video are three: resolution, frame rate and quantization step size. Previous research has found that the subjective-quality impact of each of these three parameters depends on the video content; for example, for a scene containing fast-moving objects, a change in frame rate affects subjective quality far more visibly than it does for a slowly changing natural landscape. Moreover, because of differences in encoding parameters, the same content at the same bitrate can differ greatly in subjective quality, which easily wastes transmission and storage resources. How to evaluate the quality of an encoding scheme during compression for different video contents is therefore a critical problem, and one that the development of video quality evaluation technology has continually explored.
Video quality evaluation technology aims to assess video quality after lossy processing such as compression and transmission. Existing algorithms can be divided into subjective and objective evaluation. Objective quality assessment falls into three categories according to how much reference-video information is required: full-reference, semi-reference (reduced-reference) and no-reference methods.
Full-reference methods require both the original (lossless) video and the lossy video; they are easy to implement and apply, and mainly compare the information content and the similarity of certain features between two videos with the same content. No-reference methods rely only on the lossy video and are harder to implement; common approaches use specific algorithms to detect particular distortion types, such as blur detection, noise detection and edge analysis. Semi-reference methods need only partial information of the original (reference) video, or some extracted features, as the reference for evaluating the lossy video. They provide a solution when the full reference video cannot be obtained; in practical systems they give more stable and accurate results than no-reference methods while avoiding the storage and transmission overhead of full-reference methods. Based on the above discussion, for the scenario considered here — evaluating encoding schemes against an available high-quality original — first encoding to obtain the lossy video and then evaluating it not only wastes computing resources but also adds processing time, making ultra-low-latency response impossible. Establishing a direct link between encoding parameters and video quality, following the idea of semi-reference quality evaluation, is therefore a reasonable way to solve the aforementioned problems.
With the development of software and hardware technology, immersive media content such as VR (virtual reality) and AR (augmented reality) has gradually entered the consumer market and plays an increasingly important role in education, medicine, entertainment and other fields. Immersive interaction differs greatly from conventional flat-video interaction in both viewing environment and user freedom. When watching traditional flat video, the playback screen covers only a local area at the center of the user's visual field, and the user has no freedom over the viewed content. In an immersive viewing environment, the video content generally covers the user's entire field of view, isolating most unwanted external visual interference. At the same time, 360-degree content gives the user a higher degree of freedom: at any moment the field of view covers only the part of the video the user has chosen, the user can change direction and position at will while watching, and attention is usually focused on the center of the current field of view.
Given these changes, conventional quality evaluation methods for flat video cannot meet the requirements of immersive viewing. On the one hand, the changed viewing environment alters the user's subjective quality perception, so quality models designed for traditional flat video can produce large errors; on the other hand, directly evaluating the quality of the complete panoramic video cannot accommodate the locality of the user's focus in an immersive viewing environment.
However, no highly practical immersive video quality evaluation model designed specifically for these changes has yet been proposed. How to design and optimize quality evaluation to suit the changes that immersive viewing brings, and how to link video encoding parameters directly to subjective video quality so as to evaluate encoding schemes, have therefore become very important subjects.
Disclosure of Invention
In view of the above changes in the prior art and their characteristics, the object of the present invention is to propose a method of evaluating an immersive video transcoding scheme.
The invention uses a semi-reference quality evaluation model: it takes the encoding parameters and a small set of features of the original video as input, and with a simple calculation outputs the quality loss of those encoding parameters relative to the original high-quality video. The technical scheme specifically adopted is as follows:
a method of evaluating an immersive video transcoding scheme, comprising the steps of:
step 1, dividing the complete high-bit-rate panoramic video into a plurality of regular rectangles, wherein the horizontal and vertical size of each rectangle is smaller than one half of the corresponding size of the user's field of view;
step 2, conducting sobel filtering on each block of each frame of the partitioned video to extract edge feature information and obtain the corresponding gradient-domain information, then taking the difference between corresponding blocks of consecutive frames, and taking the mean σ_mean and the standard deviation σ_std of the resulting residuals;
step 3, after the position information of the user's field of view is obtained, calculating the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:
wherein α_s is a coefficient describing the change in video quality as the video resolution decreases, α_q is a coefficient describing the change in video quality as the compression quantization step size increases, and α_t is a coefficient describing the change in video quality as the frame rate decreases;
n denotes the number of blocks covered by the current field of view, s_k represents the area of each block of video covered by the field of view, s_FoV represents the total area covered by the field of view, σ_mean^k and σ_std^k represent the feature results of each block of video, and the corresponding field-of-view features are the sums of the features of all the blocked videos covered by the field of view;
step 4, calculating a quality evaluation model, wherein a specific formula is as follows:
wherein Q(s, q, t) is the video quality assessment prediction result after encoding with the given parameters; the arguments s, t and q correspond to resolution, frame rate and quantization step size; S_tar, T_tar and Q_tar are the three coding parameters representing the actual resolution, frame rate and quantization step size at transcoding; S_ori, T_ori and Q_ori represent the resolution, frame rate and quantization step size of the original high-quality video; and α_s is a value dependent on the input Q_tar;
and step 5, evaluating each coding scheme by the methods of steps 1 to 4 to obtain the corresponding quality evaluation prediction result, and selecting the combination of resolution, frame rate and quantization step size for which the value of Q(s, q, t) is maximum as the final coding scheme.
The invention provides a semi-reference video quality evaluation method adapted to immersive viewing environments. By combining feature-dependent model parameters with the encoding parameters as the two key inputs, a direct mapping between encoding parameters and video quality can be established. In addition, a block-feature prediction module is added which exploits user behavior in the immersive viewing environment to improve response speed when the actual system is deployed and running. The method not only establishes a direct relation between compression encoding parameters and video quality, but also adapts to the video content through the feature dependence of its model parameters, yielding a practical and highly accurate immersive video quality evaluation method that can be used to evaluate different encoding schemes and select the optimal one according to the results.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is a schematic view of video blocking.
Detailed Description
Referring to fig. 1, the method by which a server evaluates an immersive video transcoding scheme according to the invention specifically comprises the following steps:
Step 1: as shown in fig. 2, during panoramic viewing only the image inside the field of view is visible to the user, and the position of that field of view is chosen by the user at random. To accelerate subsequent processing, the complete high-bit-rate panoramic video is therefore partitioned first, so that the later block-feature calculation steps can handle a user field of view appearing at any position. The complete panoramic video is divided into a number of regular rectangles whose horizontal and vertical sizes are each smaller than one half of the corresponding size of the field of view; this guarantees the accuracy of the subsequent in-view content-feature calculation. A suitable block-feature calculation strategy is then introduced to adapt to the randomly appearing field-of-view position, optimizing calculation speed while preserving accuracy.
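As an illustration only (not reference code from the patent), the tiling rule of step 1 can be sketched as follows; the equirectangular 360×180-degree mapping, the default 90-degree field of view and all function names are assumptions made for this example:

```python
import math

def tile_grid(width_px, height_px, fov_h_deg=90.0, fov_v_deg=90.0):
    """Return (cols, rows, tile_w, tile_h) for an equirectangular panorama.

    Each tile spans at most half the field of view per axis, as step 1
    requires (add one extra row/column if a strictly smaller size is needed).
    """
    fov_w_px = width_px * fov_h_deg / 360.0   # FoV width on the frame
    fov_h_px = height_px * fov_v_deg / 180.0  # FoV height on the frame
    cols = math.ceil(width_px / (fov_w_px / 2.0))
    rows = math.ceil(height_px / (fov_h_px / 2.0))
    return cols, rows, width_px // cols, height_px // rows

cols, rows, tile_w, tile_h = tile_grid(3840, 1920)  # 4K panorama, 90-degree FoV
```

For a 3840×1920 panorama and a 90-degree field of view this yields an 8×4 grid of 480×480 tiles, i.e. exactly half the field-of-view span per axis.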
Step 2: after partitioning, extract the features of each block of video content. To ensure accurate results while keeping the amount of computation small, the required features are the following:
the method is characterized in that: after each frame in the video is subjected to sobel filtering to extract edge features, the difference is made between two continuous frames, the residual error is averaged, N-1 numerical values are finally generated by the content of the N frames of video, the average value of the N-1 numerical values is recorded asWhere n is the corresponding block number.
Feature two: after sobel filtering is applied to each frame of the video to extract edge features, the difference between every two consecutive frames is taken and the standard deviation of each residual is computed; the content of N frames of video thus produces N-1 values, whose average is recorded as σ_std^n, where n is the corresponding block number.
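The two block features above can be sketched as below. This is an illustrative reading, not the patent's reference code: the grayscale input and the combination of the two Sobel directions into a gradient magnitude are assumptions.

```python
import numpy as np
from scipy import ndimage

def block_features(frames):
    """Features of one video block n: the averages, over the N-1
    consecutive-frame residuals in the Sobel gradient domain, of the
    residual mean and residual standard deviation."""
    frames = np.asarray(frames, dtype=float)  # N grayscale frames
    # Gradient magnitude of the two Sobel directions (assumed combination).
    grads = [np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
             for f in frames]
    means, stds = [], []
    for prev, cur in zip(grads, grads[1:]):   # N frames -> N-1 residuals
        resid = cur - prev
        means.append(resid.mean())
        stds.append(resid.std())
    # sigma_mean^n and sigma_std^n for this block.
    return float(np.mean(means)), float(np.mean(stds))
```

A static block produces zero residuals in the gradient domain, so both features vanish; rapid motion inflates them.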
Step 3: after the position information of the user's field of view is obtained, calculate the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:
wherein σ_mean and σ_std are the video gradient-domain features calculated in the previous steps, α_s is a coefficient describing the change in video quality as the video resolution decreases, α_q is a coefficient describing the change in video quality as the compression quantization step size increases, and α_t is a coefficient describing the change in video quality as the frame rate decreases;
n denotes the number of blocks covered by the current field of view, s_k represents the area of each block of video covered by the field of view, s_FoV represents the total area covered by the field of view, σ_mean^k and σ_std^k represent the feature results of each block of video, and the corresponding field-of-view features are the sums of the features of all the blocked videos covered by the field of view.
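The aggregation formulas themselves are not reproduced in this text, so the following sketch assumes a simple area-weighted sum of the block features over the field of view — an assumption for illustration, not the patent's formula:

```python
def fov_features(block_feats, coverage, fov_area):
    """Aggregate per-block features over the current field of view.

    block_feats: {block_id: (sigma_mean, sigma_std)} from feature extraction.
    coverage:    {block_id: area s_k of the block covered by the field of view}.
    fov_area:    total field-of-view area s_FoV.
    Each block's features are weighted by its covered fraction s_k / s_FoV
    and summed over the n covered blocks (assumed weighting scheme).
    """
    mean_fov = sum(coverage[k] / fov_area * block_feats[k][0] for k in coverage)
    std_fov = sum(coverage[k] / fov_area * block_feats[k][1] for k in coverage)
    return mean_fov, std_fov
```

Because the weights s_k / s_FoV sum to one over the covered blocks, the result stays on the same scale as the per-block features.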
Step 4: perform quality evaluation on the alternative coding schemes that meet the requirements. When several different combinations of resolution, frame rate and quantization step size can satisfy the given storage or transmission constraint, substitute the specific parameters of each combination into the calculation to obtain the quality evaluation result of each coding scheme, the specific formula being as follows:
wherein α_s is a value dependent on the input Q_tar, and Q(s, q, t) is the video quality assessment prediction result after encoding with the given parameters; S_tar, T_tar and Q_tar are the three coding parameters representing the actual resolution, frame rate and quantization step size at transcoding, and S_ori, T_ori and Q_ori represent the parameters of the original high-quality video.
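The model formula is likewise not reproduced in this text. Purely as a placeholder, the sketch below uses a generic multiplicative exponential form of the kind common in the quality-modeling literature; the factor shapes, parameter names and normalization (each factor equals 1 when the transcoded parameter matches the original) are all assumptions, not the patent's formula:

```python
import math

def quality(s, t, q, alpha_s, alpha_t, alpha_q):
    """Placeholder Q(s, q, t): separable exponential factors for resolution,
    frame rate and quantization step, normalized so that the score is 1
    when the transcoded parameters equal the originals (s = t = q = 1)."""
    f_s = (1 - math.exp(-alpha_s * s)) / (1 - math.exp(-alpha_s))
    f_t = (1 - math.exp(-alpha_t * t)) / (1 - math.exp(-alpha_t))
    f_q = math.exp(-alpha_q * (q - 1))  # quality falls as the step grows
    return f_s * f_t * f_q
```

Any concrete model would keep the same interface: the coefficients α_s, α_t, α_q come from the field-of-view feature aggregation of step 3, and s, t, q describe the candidate scheme relative to the original.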
Step 5: after each candidate coding scheme has been evaluated, the corresponding quality evaluation prediction results are obtained, and the combination of resolution, frame rate and quantization step size for which the value of Q(s, q, t) is maximum is selected as the final coding scheme.
Based on the above steps, a quality evaluation result is obtained for every combination of coding parameters that meets the transcoding requirement, and the scheme of optimal quality is selected. The method can evaluate the quality loss that given coding parameters will inflict on the original video from the coding parameters and the original high-quality video features alone, without performing the encoding; at the same time, the block-based feature extraction and calculation strategy enables fast computation for user fields of view that appear at random in practical applications, making the method well suited to high-freedom immersive viewing and high-quality panoramic video transmission scenarios.
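The selection over candidate schemes in step 5 can be sketched end to end as below. The scoring function is a generic placeholder (an assumption, not the patent's formula), and the candidate tuples are assumed to have already been filtered by the bitrate or storage constraint:

```python
import math

def best_scheme(candidates, original, alphas):
    """Return the (S_tar, T_tar, Q_tar) candidate with the highest predicted
    quality. `original` is (S_ori, T_ori, Q_ori); `alphas` holds the
    content-dependent coefficients (alpha_s, alpha_t, alpha_q)."""
    s_ori, t_ori, q_ori = original
    a_s, a_t, a_q = alphas

    def predicted_q(cand):
        # Normalize against the original, then apply a placeholder score
        # that equals 1 when the candidate matches the original.
        s, t, q = cand[0] / s_ori, cand[1] / t_ori, cand[2] / q_ori
        f_s = (1 - math.exp(-a_s * s)) / (1 - math.exp(-a_s))
        f_t = (1 - math.exp(-a_t * t)) / (1 - math.exp(-a_t))
        return f_s * f_t * math.exp(-a_q * (q - 1))

    return max(candidates, key=predicted_q)
```

Because the score never needs the encoded video itself, all candidates can be ranked before any transcoding is run, which is the point of the semi-reference design.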
Claims (1)
1. A method of evaluating an immersive video transcoding scheme comprising the steps of:
step 1, dividing the complete high-bit-rate panoramic video into a plurality of regular rectangles, wherein the horizontal and vertical size of each rectangle is smaller than one half of the corresponding size of the user's field of view;
step 2, conducting sobel filtering on each block of each frame of the partitioned video to extract edge feature information and obtain the corresponding gradient-domain information, then taking the difference between corresponding blocks of consecutive frames, and taking the mean σ_mean and the standard deviation σ_std of the resulting residuals;
step 3, after the position information of the user's field of view is obtained, calculating the parameters of the quality evaluation model according to the coverage of each block of video by the field of view:
wherein α_s is a coefficient describing the change in video quality as the video resolution decreases, α_q is a coefficient describing the change in video quality as the compression quantization step size increases, and α_t is a coefficient describing the change in video quality as the frame rate decreases;
n denotes the number of blocks covered by the current field of view, s_k represents the area of each block of video covered by the field of view, s_FoV represents the total area covered by the field of view, σ_mean^k and σ_std^k represent the feature results of each block of video, and the corresponding field-of-view features are the sums of the features of all the blocked videos covered by the field of view;
step 4, calculating a quality evaluation model, wherein a specific formula is as follows:
wherein Q(s, q, t) is the video quality assessment prediction result after encoding with the given parameters; the arguments s, t and q correspond to resolution, frame rate and quantization step size; S_tar, T_tar and Q_tar are the three coding parameters representing the actual resolution, frame rate and quantization step size at transcoding; S_ori, T_ori and Q_ori represent the resolution, frame rate and quantization step size of the original high-quality video; and α_s is a value dependent on the input Q_tar;
and step 5, evaluating each coding scheme by the methods of steps 1 to 4 to obtain the corresponding quality evaluation prediction result, and selecting the combination of resolution, frame rate and quantization step size for which the value of Q(s, q, t) is maximum as the final coding scheme.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911257216.5A CN110944200B (en) | 2019-12-10 | 2019-12-10 | Method for evaluating immersive video transcoding scheme |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911257216.5A CN110944200B (en) | 2019-12-10 | 2019-12-10 | Method for evaluating immersive video transcoding scheme |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110944200A true CN110944200A (en) | 2020-03-31 |
CN110944200B CN110944200B (en) | 2022-03-15 |
Family
ID=69910006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911257216.5A Active CN110944200B (en) | 2019-12-10 | 2019-12-10 | Method for evaluating immersive video transcoding scheme |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110944200B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111696081A (en) * | 2020-05-18 | 2020-09-22 | 南京大学 | Method for reasoning panoramic video quality according to visual field video quality |
CN112653892A (en) * | 2020-12-18 | 2021-04-13 | 杭州当虹科技股份有限公司 | Method for realizing transcoding test evaluation by using video characteristics |
CN113497932A (en) * | 2020-04-07 | 2021-10-12 | 上海交通大学 | Method, system and medium for measuring video transmission time delay |
WO2022088033A1 (en) * | 2020-10-30 | 2022-05-05 | 深圳市大疆创新科技有限公司 | Data processing method and apparatus, image signal processor, and mobile platform |
CN114760506A (en) * | 2022-04-11 | 2022-07-15 | 北京字跳网络技术有限公司 | Video transcoding evaluation method, device, equipment and storage medium |
CN115225961A (en) * | 2022-04-22 | 2022-10-21 | 上海赛连信息科技有限公司 | No-reference network video quality evaluation method and device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9615098B1 (en) * | 2009-11-30 | 2017-04-04 | Google Inc. | Adaptive resolution transcoding for optimal visual quality |
CN106973281A (en) * | 2017-01-19 | 2017-07-21 | 宁波大学 | A kind of virtual view video quality Forecasting Methodology |
CN107040783A (en) * | 2015-10-22 | 2017-08-11 | 联发科技股份有限公司 | Video coding, coding/decoding method and the device of the non-splicing picture of video coding system |
CN107040771A (en) * | 2017-03-28 | 2017-08-11 | 北京航空航天大学 | A kind of Encoding Optimization for panoramic video |
WO2018136301A1 (en) * | 2017-01-20 | 2018-07-26 | Pcms Holdings, Inc. | Field-of-view prediction method based on contextual information for 360-degree vr video |
CN108513119A (en) * | 2017-02-27 | 2018-09-07 | 阿里巴巴集团控股有限公司 | Mapping, processing method, device and the machine readable media of image |
CN108833880A (en) * | 2018-04-26 | 2018-11-16 | 北京大学 | Using across user behavior pattern carry out view prediction and realize that virtual reality video optimizes the method and apparatus transmitted |
CN108924554A (en) * | 2018-07-13 | 2018-11-30 | 宁波大学 | A kind of panorama video code Rate-distortion optimization method of spherical shape weighting structures similarity |
CN109769104A (en) * | 2018-10-26 | 2019-05-17 | 西安科锐盛创新科技有限公司 | Unmanned plane panoramic picture transmission method and device |
US20190320193A1 (en) * | 2017-10-10 | 2019-10-17 | Tencent Technology (Shenzhen) Company Limited | Video transcoding method, computer device, and storage medium |
- 2019-12-10: CN application CN201911257216.5A, granted as patent CN110944200B (active)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9615098B1 (en) * | 2009-11-30 | 2017-04-04 | Google Inc. | Adaptive resolution transcoding for optimal visual quality |
CN107040783A (en) * | 2015-10-22 | 2017-08-11 | 联发科技股份有限公司 | Video coding, coding/decoding method and the device of the non-splicing picture of video coding system |
CN106973281A (en) * | 2017-01-19 | 2017-07-21 | 宁波大学 | A kind of virtual view video quality Forecasting Methodology |
WO2018136301A1 (en) * | 2017-01-20 | 2018-07-26 | Pcms Holdings, Inc. | Field-of-view prediction method based on contextual information for 360-degree vr video |
CN108513119A (en) * | 2017-02-27 | 2018-09-07 | 阿里巴巴集团控股有限公司 | Mapping, processing method, device and the machine readable media of image |
CN107040771A (en) * | 2017-03-28 | 2017-08-11 | 北京航空航天大学 | A kind of Encoding Optimization for panoramic video |
US20190320193A1 (en) * | 2017-10-10 | 2019-10-17 | Tencent Technology (Shenzhen) Company Limited | Video transcoding method, computer device, and storage medium |
CN108833880A (en) * | 2018-04-26 | 2018-11-16 | 北京大学 | Using across user behavior pattern carry out view prediction and realize that virtual reality video optimizes the method and apparatus transmitted |
CN108924554A (en) * | 2018-07-13 | 2018-11-30 | 宁波大学 | A kind of panorama video code Rate-distortion optimization method of spherical shape weighting structures similarity |
CN109769104A (en) * | 2018-10-26 | 2019-05-17 | 西安科锐盛创新科技有限公司 | Unmanned plane panoramic picture transmission method and device |
Non-Patent Citations (1)
Title |
---|
Yang Chao, "Research on 3D Video Coding Oriented to Virtual View Quality," China Doctoral Dissertations Full-text Database, Information Science and Technology series * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113497932A (en) * | 2020-04-07 | 2021-10-12 | 上海交通大学 | Method, system and medium for measuring video transmission time delay |
CN111696081A (en) * | 2020-05-18 | 2020-09-22 | 南京大学 | Method for reasoning panoramic video quality according to visual field video quality |
CN111696081B (en) * | 2020-05-18 | 2024-04-09 | 南京大学 | Method for reasoning panoramic video quality from visual field video quality |
WO2022088033A1 (en) * | 2020-10-30 | 2022-05-05 | 深圳市大疆创新科技有限公司 | Data processing method and apparatus, image signal processor, and mobile platform |
CN112653892A (en) * | 2020-12-18 | 2021-04-13 | 杭州当虹科技股份有限公司 | Method for realizing transcoding test evaluation by using video characteristics |
CN112653892B (en) * | 2020-12-18 | 2024-04-23 | 杭州当虹科技股份有限公司 | Method for realizing transcoding test evaluation by utilizing video features |
CN114760506A (en) * | 2022-04-11 | 2022-07-15 | 北京字跳网络技术有限公司 | Video transcoding evaluation method, device, equipment and storage medium |
CN114760506B (en) * | 2022-04-11 | 2024-02-09 | 北京字跳网络技术有限公司 | Video transcoding evaluation method, device, equipment and storage medium |
CN115225961A (en) * | 2022-04-22 | 2022-10-21 | 上海赛连信息科技有限公司 | No-reference network video quality evaluation method and device |
CN115225961B (en) * | 2022-04-22 | 2024-01-16 | 上海赛连信息科技有限公司 | No-reference network video quality evaluation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN110944200B (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110944200B (en) | Method for evaluating immersive video transcoding scheme | |
Yang et al. | Predicting the perceptual quality of point cloud: A 3d-to-2d projection-based exploration | |
Moorthy et al. | Visual quality assessment algorithms: what does the future hold? | |
Gu et al. | Multiscale natural scene statistical analysis for no-reference quality evaluation of DIBR-synthesized views | |
TWI805784B (en) | A method for enhancing quality of media | |
JP6283108B2 (en) | Image processing method and apparatus | |
Jakhetiya et al. | A highly efficient blind image quality assessment metric of 3-D synthesized images using outlier detection | |
Ma et al. | Reduced-reference video quality assessment of compressed video sequences | |
CN110751649B (en) | Video quality evaluation method and device, electronic equipment and storage medium | |
Jakhetiya et al. | A prediction backed model for quality assessment of screen content and 3-D synthesized images | |
CN110139147B (en) | Video processing method, system, mobile terminal, server and storage medium | |
Li et al. | Subjective and objective quality assessment of compressed screen content videos | |
Shao et al. | No-reference view synthesis quality prediction for 3-D videos based on color–depth interactions | |
CN114363623A (en) | Image processing method, image processing apparatus, image processing medium, and electronic device | |
CN112102212A (en) | Video restoration method, device, equipment and storage medium | |
Gao et al. | Quality assessment for omnidirectional video: A spatio-temporal distortion modeling approach | |
Wang et al. | Reference-free DIBR-synthesized video quality metric in spatial and temporal domains | |
Tandon et al. | CAMBI: Contrast-aware multiscale banding index | |
Le Callet et al. | No reference and reduced reference video quality metrics for end to end QoS monitoring | |
CN115131229A (en) | Image noise reduction and filtering data processing method and device and computer equipment | |
Yuan et al. | Object shape approximation and contour adaptive depth image coding for virtual view synthesis | |
CN116980604A (en) | Video encoding method, video decoding method and related equipment | |
Xintao et al. | Hide the image in fc-densenets to another image | |
CN116471262A (en) | Video quality evaluation method, apparatus, device, storage medium, and program product | |
CN113628121B (en) | Method and device for processing and training multimedia data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||