CN110650342A - Quick coding method based on multi-feature analysis of coding unit - Google Patents


Info

Publication number
CN110650342A
CN110650342A (application CN201910820042.2A)
Authority
CN
China
Prior art keywords
coding unit
depth
coding
complexity
current
Prior art date
Legal status
Pending
Application number
CN201910820042.2A
Other languages
Chinese (zh)
Inventor
刘欣刚
朱超
吴立帅
汪卫彬
代成
李辰琦
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910820042.2A priority Critical patent/CN110650342A/en
Publication of CN110650342A publication Critical patent/CN110650342A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast coding method based on multi-feature analysis of the coding unit, belonging to the technical field of video coding. The method comprises the following steps: extracting texture, edge and structural features of the coding unit; feeding the extracted features into an SVM classifier for offline learning to obtain an SVM classification model for each depth; searching, through the extracted features, for the coding unit with the maximum relevance to the current one and using it for a depth-0 pre-judgment; classifying coding units as simple, medium or complex according to the features; and, at each depth, terminating the depth decision for coding units classified as simple, skipping the current depth and proceeding to the next depth decision for those classified as complex, and deciding the current depth according to the original flow for those classified as medium. By partitioning coding units rapidly according to the complexity of the video image, the method greatly reduces the computational complexity of the coding-unit depth decision and saves coding time while preserving video quality.

Description

Quick coding method based on multi-feature analysis of coding unit
Technical Field
The invention relates to the technical field of video coding, in particular to a high-efficiency video coding method for coding unit depth division based on video image complexity and multiple characteristics.
Background
A video is composed of a sequence of image frames, but the raw data volume of original video is far too large for everyday storage and transmission, so it must be compressed. The International Telecommunication Union (ITU-T) and the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) again cooperated to establish the Joint Collaborative Team on Video Coding (JCT-VC), which published a new-generation video coding standard, High Efficiency Video Coding (HEVC/H.265), in 2013. HEVC retains the hybrid coding framework of the previous-generation H.264 while introducing a variety of advanced coding techniques; for the same video sequence, at unchanged coding quality, the HEVC standard saves about 50% of the coding bit rate compared with H.264. Although HEVC greatly improves coding efficiency, its computational complexity is very high: its encoding time is almost twice that of the H.264 standard, which substantially hinders the adoption of the HEVC standard in daily applications.
HEVC employs a flexible block-partitioning scheme comprising Coding Units (CU), Prediction Units (PU) and Transform Units (TU). At the CU layer, HEVC divides a coded image by quadtree recursion into blocks of four pixel sizes, 64x64, 32x32, 16x16 and 8x8, represented by the four depths 0, 1, 2 and 3 respectively; the 64x64 coding unit is called a Coding Tree Unit (CTU). The final CU partition combination is determined by recursively splitting down to the maximum depth 3 and then comparing rate-distortion costs from the bottom up to depth 0. Since this exhaustive partition-and-compare process for CUs is computationally expensive, reducing unnecessary computation is the key to speeding up HEVC encoding.
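The depth-to-size mapping and the four-way split described above can be sketched as follows (an illustrative aid, not part of the patent):

```python
# Illustrative sketch: the HEVC CU quadtree maps depths 0..3 to
# square CU sizes of 64, 32, 16 and 8 pixels, and each split
# produces four equally sized sub-CUs.
def cu_size(depth: int) -> int:
    """Side length in pixels of a CU at the given quadtree depth."""
    if not 0 <= depth <= 3:
        raise ValueError("HEVC CU depth must be in 0..3")
    return 64 >> depth  # 64, 32, 16, 8

def split(x: int, y: int, depth: int):
    """Origins of the four sub-CUs obtained by splitting the CU at (x, y)."""
    half = cu_size(depth) // 2
    return [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]

print([cu_size(d) for d in range(4)])  # [64, 32, 16, 8]
print(split(0, 0, 0))                  # [(0, 0), (32, 0), (0, 32), (32, 32)]
```

Enumerating all partitions of one CTU in this way is exactly the exhaustive recursion whose cost the patented method aims to avoid.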
The complexity characteristics of an image are closely related to its final partition. In general, regions with simple image texture are coded with larger coding blocks, while regions with complex texture are coded with more small blocks. To address the redundancy of the quadtree recursion at the CU layer, most traditional methods fit a binary classification curve for the coding unit from a single statistical feature and set a threshold from the curve result to make the CU-layer partition decision. Because a single feature cannot accurately measure image complexity, and a single threshold cannot meet adaptive requirements, the prediction efficiency of such coding is very low.
Disclosure of Invention
The invention aims to provide a fast coding method based on multi-feature analysis of the coding unit, addressing the technical problems of high computational complexity, reliance on a single feature and use of a single partition threshold in traditional high-efficiency video coding methods.
The invention comprises the following steps:
s1: extracting features of the coding units under all depths to obtain texture, edge and structural features of the coding units under all depths;
s2: and (3) offline learning of classifier features:
inputting the multi-features extracted in the step S1 at different depths into an SVM classifier for off-line learning to obtain SVM classification models at each depth, wherein each SVM classification model is used for determining the complexity classification of the coding units at each depth;
s3: determining the CTU with the maximum neighborhood relevance based on the multi-feature relevance;
s4: and (3) judging the maximum association CTU depth 0:
judging the current CTU in advance according to the depth of the CTU with the maximum neighborhood relevance; on the premise that the current coding depth is 0, if the final partition depth of the CTU with the maximum association degree with the current coding unit is 0, terminating the quad-tree partition of the current CTU; otherwise, continuing to execute step S5;
s5: and (3) complexity prediction:
when coding the CU at each depth, inputting the extracted texture, edge and structural features into the SVM classification model of the corresponding depth, and classifying the complexity of the coding unit as simple, medium or complex according to the output result;
s6: and executing corresponding division judgment according to the classification result of the complexity prediction:
if the coding unit image is classified as simple, terminating the quadtree division of the current depth;
if the coding unit image is classified as complex, directly performing current quadtree division (namely, directly performing quadtree division on the current coding unit at the current depth), and performing complexity prediction and division judgment of the next depth;
if the coding unit image is classified as medium, the coding is performed according to the HEVC standard.
Further, in step S1, the texture, edge, and structural features are specifically:
texture feature: the mean square error between each pixel of the coding unit and its neighbourhood mean, extracted as the feature measuring the Texture Complexity (TC) of the image;
edge feature: the pixel Sobel gradient values of each coding unit, extracted as the feature measuring the Edge Complexity (EC) of the image;
structural feature: the variance of the prediction-residual variances of the coding unit's four sub-blocks, extracted as the feature measuring the Structure Complexity (SC) of the image.
Further, the step S2 includes the following steps:
s21: inputting TC, EC and SC of all coding units and corresponding depth information into an SVM classifier for off-line training;
s22: for the training model under each depth, obtaining a threshold value corresponding to the complexity division condition of the coding unit based on the trained classifier parameters, namely corresponding to the classifier model under each depth;
s23: and respectively determining the optimal complexity prediction parameters of the coding units under each depth according to the accuracy of the offline training classification.
Further, the step S5 includes the following steps:
s51: inputting TC, EC and SC of the coding unit under the current depth into an SVM classifier for complexity calculation;
s52: if the output of the SVM classifier predicting that the coding unit is not split is less than 0, the image complexity of the coding unit is classified as simple;
s53: if the output of the SVM classifier predicting that the coding unit is split directly is greater than 0, the image complexity of the coding unit is classified as complex;
s54: if the SVM classification outputs other results, the image complexity of the coding unit is classified as medium.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that: the invention extracts a plurality of characteristics of the coded image, and more accurately measures the complexity condition of the coded image from a plurality of angles. And depth 0 judgment is terminated in advance according to the relevance of the neighborhood images, and the depth 0 judgment is accelerated by utilizing the time-space domain and the depth information of the coded images, so that the coding time is greatly accelerated. A multi-classifier prediction model is obtained by a characteristic off-line learning method, and prediction of multiple classifiers and multiple thresholds is more accurate and flexible.
Drawings
FIG. 1: the invention relates to a quick coding flow chart of multi-feature analysis of a coding unit.
FIG. 2: the invention extracts the horizontal and vertical Sobel gradient template schematic diagrams of the image edge complexity.
FIG. 3: the coding unit quadtree is divided into schematic diagrams.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention discloses a fast coding method based on multi-feature analysis of the coding unit, namely a High Efficiency Video Coding (HEVC) method that accelerates the depth partition of coding units according to the complexity characteristics of the video image. First, coding-unit features are extracted at each depth, including texture, edge and structural features. Next, the extracted features are fed to an SVM classifier for offline learning, yielding an SVM classification model for each depth. The relevance between adjacent coding units in the space-time domain is then analysed, and the coding unit with the maximum relevance to the current one is found through the extracted features and used for a depth-0 pre-judgment. In the complexity decision, coding units are classified as simple, medium or complex according to the features. Finally, in the depth decision, the depth decision is terminated at each depth for coding units classified as simple, the current depth is skipped and the next depth is judged for those classified as complex, and the current depth is judged according to the original flow for those classified as medium. By partitioning coding units rapidly according to video-image complexity, the method greatly reduces the computational complexity of the depth decision and saves coding time while preserving video quality.
Referring to fig. 1, the specific implementation process is as follows:
s1: and extracting the features of the coding units at each depth.
S11: extracting the pixel-neighbourhood mean square error of each coding unit as the feature measuring Texture Complexity (TC); the TC expression is given by formulas (1) and (2):

$$ TC = \frac{1}{N}\sum_{i,j}\left(f(i,j) - \bar{f}(i,j)\right)^{2} \qquad (1) $$

$$ \bar{f}(i,j) = \frac{1}{8}\sum_{(m,n)\in\Omega_{8}(i,j)} f(m,n) \qquad (2) $$

wherein N is the number of pixels of the current coding block, f(i,j) is the pixel value of the current coding unit at coordinate (i,j), and $\bar{f}(i,j)$ is the mean of the eight neighbouring pixels $\Omega_{8}(i,j)$ of the current image block around the pixel at coordinate (i,j);
s12: extracting the pixel Sobel gradient values of each coding unit as the feature measuring image Edge Complexity (EC); the EC expressions are given by formulas (3) and (4):

$$ E(i,j) = \sqrt{E_{hor}(i,j)^{2} + E_{vec}(i,j)^{2}} \qquad (3) $$

$$ EC = \frac{1}{N}\sum_{i,j} E(i,j) \qquad (4) $$

wherein N is the number of pixels of the current coding block, E(i,j) is the Sobel gradient magnitude of the current coding unit at coordinate (i,j), and $E_{hor}(i,j)$ and $E_{vec}(i,j)$ are its horizontal and vertical Sobel gradient values respectively. In this embodiment, the horizontal Sobel gradient template ($S_{hor}$) and the vertical Sobel gradient template ($S_{vec}$) are shown in fig. 2;
s13: extracting the variance of the prediction-residual variances of each coding unit's four sub-blocks as the feature measuring image Structure Complexity (SC), given by formula (5):

$$ SC = \frac{1}{4}\sum_{i=1}^{4}\left(var_{i} - \overline{var}\right)^{2} \qquad (5) $$

wherein $var_{i}$ denotes the prediction-residual variance of the sub-block numbered i, and $\overline{var}$ denotes the mean of the four prediction-residual variances;
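The three features of steps S11-S13 can be sketched in plain Python. This is an illustrative reconstruction, not the patent's implementation: border pixels are handled by edge replication (an assumption), and for SC the plain pixel variance of each sub-block stands in for the prediction-residual variance, which only a real encoder could supply.

```python
# Sketch of the TC, EC and SC complexity features of steps S11-S13.
import math

def _px(block, i, j):
    """Pixel access with border clamping (edge replication, an assumption)."""
    i = min(max(i, 0), len(block) - 1)
    j = min(max(j, 0), len(block[0]) - 1)
    return block[i][j]

def texture_complexity(block):
    """TC: mean squared deviation of each pixel from its 8-neighbour mean."""
    n, total = 0, 0.0
    for i in range(len(block)):
        for j in range(len(block[0])):
            neigh = [_px(block, i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            total += (block[i][j] - sum(neigh) / 8.0) ** 2
            n += 1
    return total / n

S_HOR = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel template
S_VEC = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel template

def edge_complexity(block):
    """EC: mean Sobel gradient magnitude over the block."""
    n, total = 0, 0.0
    for i in range(len(block)):
        for j in range(len(block[0])):
            gh = sum(S_HOR[di + 1][dj + 1] * _px(block, i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1))
            gv = sum(S_VEC[di + 1][dj + 1] * _px(block, i + di, j + dj)
                     for di in (-1, 0, 1) for dj in (-1, 0, 1))
            total += math.hypot(gh, gv)
            n += 1
    return total / n

def _variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def structure_complexity(block):
    """SC: variance of the four sub-blocks' (stand-in residual) variances."""
    h, w = len(block), len(block[0])
    quads = [[block[i][j] for i in ri for j in rj]
             for ri in (range(0, h // 2), range(h // 2, h))
             for rj in (range(0, w // 2), range(w // 2, w))]
    return _variance([_variance(q) for q in quads])
```

A flat block yields TC = EC = SC = 0, while textured or edged blocks yield larger values, matching the intent of the features.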
s2: the classifier feature offline learning method comprises the following steps of inputting extracted multiple features into an SVM classifier for offline learning to obtain classification models of SVM under various depths, and comprises the following specific steps:
s21: inputting TC, EC and SC of all coding units and corresponding depth information into an SVM classifier for off-line training;
s22: for the training model at each depth, selecting appropriate classifier parameters to obtain a threshold corresponding to the division condition of the coding unit, that is, the classifier model at each depth can be simply represented by formula (6):
$$ y = \operatorname{sign}\left(\omega^{T}x + b\right) \qquad (6) $$

in the formula, ω and b are the parameters obtained from classifier training, x is the input feature vector (TC, EC, SC), and sign is the activation function: an output of −1 predicts that the coding unit is not split, an output of 1 predicts that it is split further, and an output of 0 means the prediction is uncertain;
s23: according to the accuracy of the offline training classification, respectively determining the optimal prediction parameters of the coding units under each depth, which is shown in formula (7):
$$ y_{1} = \operatorname{sign}\left(\omega_{1}^{T}x + b_{1}\right), \qquad y_{2} = \operatorname{sign}\left(\omega_{2}^{T}x + b_{2}\right) \qquad (7) $$

in the formula, $\omega_{1}$, $b_{1}$ are the parameters of the classifier predicting that the coding unit is not split, and $\omega_{2}$, $b_{2}$ are the parameters of the classifier predicting that it is split directly;
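The three-way decision of formulas (6)-(7) can be sketched as two linear decision functions, one for "not split" and one for "split directly". The weights below are invented placeholders, not values learned by the patent's offline SVM training:

```python
# Sketch of formulas (6)-(7): two linear decision functions drive a
# three-way (simple / medium / complex) complexity label.
def svm_output(w, b, x):
    """sign(w . x + b) as in formula (6): returns -1, 0 or +1."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return (s > 0) - (s < 0)

def classify_complexity(x, w1, b1, w2, b2):
    """Map a feature vector x = (TC, EC, SC) to 'simple'/'medium'/'complex'."""
    if svm_output(w1, b1, x) < 0:    # "not split" model: y1 < 0 -> simple
        return "simple"
    if svm_output(w2, b2, x) > 0:    # "split directly" model: y2 > 0 -> complex
        return "complex"
    return "medium"

# Hypothetical parameters for illustration only:
w1, b1 = (1.0, 1.0, 1.0), -10.0    # low total complexity -> simple
w2, b2 = (1.0, 1.0, 1.0), -50.0    # high total complexity -> complex
print(classify_complexity((1.0, 2.0, 1.0), w1, b1, w2, b2))     # simple
print(classify_complexity((30.0, 20.0, 15.0), w1, b1, w2, b2))  # complex
print(classify_complexity((5.0, 4.0, 3.0), w1, b1, w2, b2))     # medium
```

The "medium" band between the two hyperplanes is what lets uncertain coding units fall back to the full HEVC check.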
s3: determining the CTU with the maximum neighborhood relevance based on multi-feature relevance analysis;
s31: performing image-feature relevance analysis on the CTUs adjacent to the current coding unit in the space-time domain according to TC, EC and SC, and determining the CTU with the maximum relevance (also called the maximum-relevance CTU); the relevance is measured by the difference degree of formula (8):

$$ R_{cur-i} = \left|TC_{cur} - TC_{i}\right| + \left|EC_{cur} - EC_{i}\right| + \left|SC_{cur} - SC_{i}\right| \qquad (8) $$

wherein $TC_{cur}$, $EC_{cur}$ and $SC_{cur}$ are respectively the texture, edge and structure complexity of the current coding-unit image, and i ranges over the CTUs of the adjacent space-time domain: the co-located CTU of the previous frame and the left, upper-left, upper and upper-right CTUs of the current frame;

the neighbouring CTU with the smallest difference degree $R_{cur-i}$ is taken as the maximum-relevance CTU of the current coding unit.
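The neighbourhood search of step S3 can be sketched directly: the neighbouring CTU whose feature difference degree is smallest is the maximum-relevance CTU. The feature values below are invented for illustration:

```python
# Sketch of step S3 / formula (8): pick the neighbouring CTU with the
# smallest sum of absolute feature differences (TC, EC, SC).
def difference_degree(cur, other):
    """Sum of absolute differences over the (TC, EC, SC) feature triple."""
    return sum(abs(a - b) for a, b in zip(cur, other))

def max_relevance_ctu(cur, neighbours):
    """Key of the neighbouring CTU with the smallest difference degree."""
    return min(neighbours, key=lambda k: difference_degree(cur, neighbours[k]))

cur = (12.0, 8.0, 3.0)  # (TC, EC, SC) of the current CTU (hypothetical)
neighbours = {
    "co-located":  (11.5, 8.2, 2.9),
    "left":        (20.0, 15.0, 7.0),
    "upper-left":  (5.0, 3.0, 1.0),
    "upper":       (13.0, 9.5, 3.5),
    "upper-right": (30.0, 25.0, 10.0),
}
print(max_relevance_ctu(cur, neighbours))  # co-located
```

If the chosen CTU's final depth was 0, step S4 terminates the quadtree split of the current CTU immediately.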
S4: judging the depth 0 of the maximum correlation CTU, and judging the current CTU in advance according to the depth of the neighborhood maximum correlation CTU; on the premise that the current coding depth is 0, if the final partition depth of the CTU with the maximum association degree with the current coding unit is 0, the quadtree partition of the current CTU is terminated, and a schematic diagram of the quadtree partition of the coding unit is shown in fig. 3. Otherwise, continuing to execute the following steps;
s5: complexity prediction: when coding the CU at each depth, the extracted features are input into the classification model, the coding unit is classified as simple, medium or complex according to the output result, and the corresponding processing is performed:
s51: inputting TC, EC and SC of a coding unit under the current depth into a classifier to carry out complexity calculation;
s52: if the output $y_{1}$ of the classifier predicting "not split" is less than 0, the image complexity of the coding unit is classified as simple;
s53: if the output $y_{2}$ of the classifier predicting "split directly" is greater than 0, the image complexity of the coding unit is classified as complex;
s54: if other results are output, the image complexity of the coding unit is classified as medium;
s6: and (4) processing the classification result, and executing corresponding division judgment according to the classification result:
s61: if the coding unit image is classified as simple, terminating the quadtree division of the current depth;
s62: if the coding unit image is classified as complex, directly performing current quadtree division and performing next depth judgment;
s63: if the coding unit image is classified as medium, coding according to the original HEVC standard;
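The per-depth flow of steps S4-S6 can be sketched as a loop over depths driven by the three-way label. This is a simplified illustration: `classify` stands in for the per-depth SVM models, and the full rate-distortion check of the "medium" branch is reduced to a marker:

```python
# Sketch of the overall fast-partition flow of steps S4-S6.
def decide_partition(depth, classify, max_depth=3):
    """Return the (depth, action) decisions for one CU subtree."""
    visited = []
    d = depth
    while d <= max_depth:
        label = classify(d)
        if label == "simple":        # terminate splitting at this depth
            visited.append((d, "stop"))
            break
        elif label == "complex":     # skip the current-depth check, split directly
            visited.append((d, "split"))
            d += 1
        else:                        # "medium": full HEVC check at this depth
            visited.append((d, "full-check"))
            d += 1
    return visited

# Example labels per depth (hypothetical): complex at 0, medium at 1, simple at 2.
labels = {0: "complex", 1: "medium", 2: "simple"}
print(decide_partition(0, lambda d: labels.get(d, "simple")))
# [(0, 'split'), (1, 'full-check'), (2, 'stop')]
```

The speed-up comes from the "stop" and "split" branches: both avoid the exhaustive cost evaluation that the standard encoder would perform at that depth.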
The fast coding method extracts multiple features of the image and uses a classifier trained offline on those features to judge the partition at each depth of the coding unit. This reduces the number of cost comparisons needed to traverse CUs at all depths, thereby greatly reducing encoding complexity. Analysis of experimental simulation results shows that, with comparable and acceptable quality loss, the invention reduces coding time by 52.97%, versus 46.5% for current comparable methods. The invention therefore effectively reduces coding complexity while leaving coding performance almost unaffected.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (5)

1. The quick coding method based on the multi-feature analysis of the coding unit is characterized by comprising the following steps of:
s1: extracting features of the coding units under all depths to obtain texture, edge and structural features of the coding units under all depths;
s2: and (3) offline learning of classifier features:
inputting the multi-features extracted in the step S1 at different depths into an SVM classifier for off-line learning to obtain SVM classification models at each depth, wherein each SVM classification model is used for determining the complexity classification of the coding units at each depth;
s3: determining the CTU with the maximum neighborhood relevance based on the multi-feature relevance;
s4: and (3) judging the maximum association CTU depth 0:
judging the current CTU in advance according to the depth of the CTU with the maximum neighborhood relevance; on the premise that the current coding depth is 0, if the final partition depth of the CTU with the maximum association degree with the current coding unit is 0, terminating the quad-tree partition of the current CTU; otherwise, continuing to execute step S5;
s5: and (3) complexity prediction:
when coding the CU at each depth, inputting the extracted texture, edge and structural features into the SVM classification model of the corresponding depth, and classifying the complexity of the coding unit as simple, medium or complex according to the output result;
s6: and executing corresponding division judgment according to the classification result of the complexity prediction:
if the coding unit image is classified as simple, terminating the quadtree division of the current depth;
if the coding unit image is classified as complex, directly performing quadtree division on the current coding unit at the current depth, and performing complexity prediction and division judgment of the next depth;
if the coding unit image is classified as medium, the coding is performed according to the HEVC standard.
2. The method according to claim 1, wherein in step S1, the texture, edge and structural features are specifically:
texture characteristics: the pixel neighborhood mean square error of each coding unit;
edge characteristics: pixel Sobel gradient values of each coding unit;
structural feature: the variance of the prediction-residual variances of the coding unit's four sub-blocks.
3. The method of claim 1, wherein the step S2 includes the steps of:
s21: inputting the texture, edge and structure characteristics of all coding units and corresponding depth information into an SVM classifier for off-line training;
s22: for the training model under each depth, obtaining a classifier model under each depth based on the trained classifier parameters;
s23: and respectively determining the optimal complexity prediction parameters of the coding units under each depth according to the accuracy of the offline training classification.
4. The method according to claim 1, wherein in step S3, the step of determining the CTU with the largest neighborhood relevance includes:
based on texture features, edge features and structural features of the coding unit, determining the CTU with the maximum neighborhood relevance degree from the adjacent space-time domain of the current coding unit:
calculating the difference degree $R_{cur-i}$ between the current coding unit and each CTU of the adjacent time-space domain:

$$ R_{cur-i} = \left|A_{cur} - A_{i}\right| + \left|B_{cur} - B_{i}\right| + \left|C_{cur} - C_{i}\right| $$

wherein $A_{cur}$, $B_{cur}$ and $C_{cur}$ respectively represent the texture, edge and structural features of the current coding unit, and $A_{i}$, $B_{i}$ and $C_{i}$ respectively represent the texture, edge and structural features of CTU i in the adjacent time-space domain;

and taking the CTU of the adjacent time-space domain with the minimum difference degree $R_{cur-i}$ as the CTU with the maximum neighbourhood relevance of the current coding unit.
5. The method of claim 1, wherein the step S5 includes the steps of:
s51: inputting texture, edge and structural characteristics of the coding unit at the current depth into an SVM classifier for complexity calculation;
s52: if the output of the SVM classifier predicting that the coding unit is not split is less than 0, the image complexity of the coding unit is classified as simple;
s53: if the output of the SVM classifier predicting that the coding unit is split directly is greater than 0, the image complexity of the coding unit is classified as complex;
s54: if the SVM classification outputs other results, the image complexity of the coding unit is classified as medium.
CN201910820042.2A 2019-08-31 2019-08-31 Quick coding method based on multi-feature analysis of coding unit Pending CN110650342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910820042.2A CN110650342A (en) 2019-08-31 2019-08-31 Quick coding method based on multi-feature analysis of coding unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910820042.2A CN110650342A (en) 2019-08-31 2019-08-31 Quick coding method based on multi-feature analysis of coding unit

Publications (1)

Publication Number Publication Date
CN110650342A true CN110650342A (en) 2020-01-03

Family

ID=68991418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910820042.2A Pending CN110650342A (en) 2019-08-31 2019-08-31 Quick coding method based on multi-feature analysis of coding unit

Country Status (1)

Country Link
CN (1) CN110650342A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385585A (en) * 2020-03-18 2020-07-07 北京工业大学 3D-HEVC depth map coding unit division fast decision method based on machine learning
CN112437310A (en) * 2020-12-18 2021-03-02 重庆邮电大学 VVC intra-frame coding rapid CU partition decision method based on random forest
CN114584771A (en) * 2022-05-06 2022-06-03 宁波康达凯能医疗科技有限公司 Method and system for dividing intra-frame image coding unit based on content self-adaption

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959611A (en) * 2016-07-14 2016-09-21 同观科技(深圳)有限公司 Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus
US20170337711A1 (en) * 2011-03-29 2017-11-23 Lyrical Labs Video Compression Technology, LLC Video processing and encoding
CN108174208A (en) * 2018-02-12 2018-06-15 杭州电子科技大学 A kind of efficient video coding method of feature based classification
WO2018187622A1 (en) * 2017-04-05 2018-10-11 Lyrical Labs Holdings, Llc Video processing and encoding
CN108769696A (en) * 2018-06-06 2018-11-06 四川大学 A kind of DVC-HEVC video transcoding methods based on Fisher discriminates
CN109302610A (en) * 2018-10-26 2019-02-01 重庆邮电大学 A kind of screen content coding interframe fast algorithm based on rate distortion costs
CN110087087A (en) * 2019-04-09 2019-08-02 同济大学 VVC interframe encode unit prediction mode shifts to an earlier date decision and block divides and shifts to an earlier date terminating method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337711A1 (en) * 2011-03-29 2017-11-23 Lyrical Labs Video Compression Technology, LLC Video processing and encoding
CN105959611A (en) * 2016-07-14 2016-09-21 同观科技(深圳)有限公司 Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus
WO2018187622A1 (en) * 2017-04-05 2018-10-11 Lyrical Labs Holdings, Llc Video processing and encoding
CN108174208A (en) * 2018-02-12 2018-06-15 杭州电子科技大学 A kind of efficient video coding method of feature based classification
CN108769696A (en) * 2018-06-06 2018-11-06 四川大学 A kind of DVC-HEVC video transcoding methods based on Fisher discriminates
CN109302610A (en) * 2018-10-26 2019-02-01 重庆邮电大学 A kind of screen content coding interframe fast algorithm based on rate distortion costs
CN110087087A (en) * 2019-04-09 2019-08-02 同济大学 VVC interframe encode unit prediction mode shifts to an earlier date decision and block divides and shifts to an earlier date terminating method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO Hong et al., "Fast HEVC coding-unit partition algorithm based on depth prediction", Computer Applications and Software *
JIN Zhipeng et al., "Fast intra coding-unit partition algorithm for HEVC", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385585A (en) * 2020-03-18 2020-07-07 北京工业大学 3D-HEVC depth map coding unit division fast decision method based on machine learning
CN112437310A (en) * 2020-12-18 2021-03-02 重庆邮电大学 VVC intra-frame coding rapid CU partition decision method based on random forest
CN114584771A (en) * 2022-05-06 2022-06-03 宁波康达凯能医疗科技有限公司 Method and system for dividing intra-frame image coding unit based on content self-adaption
CN114584771B (en) * 2022-05-06 2022-09-06 宁波康达凯能医疗科技有限公司 Method and system for dividing intra-frame image coding unit based on content self-adaption

Similar Documents

Publication Publication Date Title
CN110650342A (en) Quick coding method based on multi-feature analysis of coding unit
US11070803B2 (en) Method and apparatus for determining coding cost of coding unit and computer-readable storage medium
CN107071416B (en) HEVC intra-frame prediction mode rapid selection method
CN111462261B (en) Fast CU partitioning and intra-frame decision method for H.266/VVC
CN107371022B (en) Inter-frame coding unit rapid dividing method applied to HEVC medical image lossless coding
CN109040764B (en) HEVC screen content intra-frame rapid coding algorithm based on decision tree
CN108712648B (en) Rapid intra-frame coding method for depth video
CN104125473B (en) A kind of 3D video depths image method for choosing frame inner forecast mode and system
CN111355956B (en) Deep learning-based rate distortion optimization rapid decision system and method in HEVC intra-frame coding
CN108737841A (en) Coding unit depth determination method and device
CN110446052B (en) 3D-HEVC intra-frame depth map rapid CU depth selection method
CN104038760B (en) A kind of wedge shape Fractionation regimen system of selection of 3D video depths image frame in and system
CN105430391B (en) The intraframe coding unit fast selecting method of logic-based recurrence classifier
CN109446967B (en) Face detection method and system based on compressed information
CN108174208B (en) Efficient video coding method based on feature classification
CN106682094A (en) Human face video retrieval method and system
CN108712647A (en) A kind of CU division methods for HEVC
Mu et al. Fast coding unit depth decision for HEVC
Shi et al. Asymmetric-kernel CNN based fast CTU partition for HEVC intra coding
CN111414938A (en) Target detection method for bubbles in plate heat exchanger
CN112770120B (en) 3D video depth map intra-frame rapid coding method based on depth neural network
CN103702131A (en) Pattern-preprocessing-based intraframe coding optimization method and system
Xue et al. Fast coding unit decision for intra screen content coding based on ensemble learning
CN109040756B (en) HEVC image content complexity-based rapid motion estimation method
CN106878754A (en) A kind of 3D video depths image method for choosing frame inner forecast mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200103