CN108322745A - Fast intra-frame selection method based on non-separable secondary transform modes - Google Patents

Fast intra-frame selection method based on non-separable secondary transform modes

Info

Publication number
CN108322745A
Authority
CN
China
Prior art keywords
cunsstidx
ref
coding unit
nsst
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810167292.6A
Other languages
Chinese (zh)
Other versions
CN108322745B (en)
Inventor
张昊
王塞博
雷诗哲
牟凡
符婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201810167292.6A priority Critical patent/CN108322745B/en
Publication of CN108322745A publication Critical patent/CN108322745A/en
Application granted granted Critical
Publication of CN108322745B publication Critical patent/CN108322745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast intra-frame selection method based on non-separable secondary transform (NSST) modes. By exploiting the temporal correlation, within the video sequence, of co-located coding units (CUs) at different depths, the index value of the optimal NSST mode is predicted in advance, the unnecessary index loop passes are skipped, and the time-consuming NSST mode selection process in the coding flow is avoided. With negligible degradation of subjective video quality, the computational complexity of the encoder is reduced, the encoding time is shortened, and the coding efficiency is improved. Moreover, the proposed scheme is simple and practical, which favors the industrial adoption of the next-generation video coding standard.

Description

Fast intra-frame selection method based on non-separable secondary transform modes
Technical field
The invention belongs to the field of video coding and decoding, and more particularly relates to a fast intra-frame selection method based on non-separable secondary transform modes.
Background art
NSST (Non-Separable Secondary Transform) is a non-separable secondary transform. In early video coding, encoders performed video compression with the DCT (Discrete Cosine Transform). However, the conventional DCT is a suboptimal transform: when the residual signal contains strong diagonal components, the DCT cannot compact the signal energy effectively. In the next-generation video coding standard, NSST makes up for this weakness of the DCT of the earlier standard by applying a secondary transform after the primary transform, so that the signal energy is compressed effectively. The drafting of the next-generation video coding standard has recently introduced a large number of novel coding tools, and NSST is one of them.
In recent years, high-definition and ultra-high-definition video (with resolutions up to 4K × 2K and 8K × 4K) has gradually come into people's view, posing huge challenges to video compression technology, and the video compression coding standards have also developed rapidly. In addition, all kinds of video applications continue to emerge along with the development of network and storage technologies. Nowadays digital video broadcasting, mobile wireless video, remote sensing, medical imaging, portable photography and the like have all entered people's lives, and the public's requirements for video quality keep rising. Therefore, the diversification and high-definition trend of video applications place stronger demands on a next-generation video coding standard with higher coding efficiency than H.265/HEVC. It is against this background that the VCEG (Video Coding Experts Group) of ITU-T and the MPEG (Moving Picture Experts Group) of ISO/IEC established the Joint Video Exploration Team JVET (Joint Video Exploration Team) in 2016, aiming to explore the research, development and formulation of the next-generation video coding standard.
The next-generation video coding standard still adopts the hybrid coding framework, including modules such as transform, quantization, entropy coding, intra prediction, inter prediction and in-loop filtering. However, in order to improve the video compression ratio, the standard adopts the QTBT (quadtree plus binary tree) partition structure in place of the quadtree partitioning of HEVC. Under the QTBT structure, the separate notions of the CU, PU and TU partition types are removed, and more flexible CU partition shapes are supported so as to better match the local characteristics of the video data; the partition depth of a CU is no longer expressed by a single depth value Depth, but is described jointly by the quadtree partition depth QTDepth and the binary-tree partition depth BTDepth. At the same time, a series of relatively time-consuming novel coding tools are introduced into the individual modules. These improvements raise the compression ratio but greatly increase the computational complexity of the encoder, which is unfavorable to the industrial adoption of the next-generation video coding standard. Therefore, optimizing the encoder and reducing the encoding time while keeping the degradation of subjective video quality negligible is one of the problems urgently to be studied and solved in the video coding field.
In the next-generation video coding standard, when intra prediction is performed, an index is transmitted for each CU (coding unit) after its coefficient transform is completed. An index value of 0 indicates that the current CU has no non-zero coefficient; an index value of 2 indicates that the current CU is coded with Planar (the planar prediction mode) or DC (the DC prediction mode); an index value of 3 indicates that the current CU is coded with an intra angular prediction mode. The NSST index value is transmitted only when the number of non-zero coefficients of the current CU is not 0; otherwise it is not transmitted and defaults to 0. NSST has 4 index values in total, namely 0, 1, 2 and 3, and each index value corresponds to a different secondary transform mode. An NSST index value of 0 indicates that the current CU does not use the secondary transform; index values 1 to 3 indicate that the secondary transform is enabled for the current CU.
Test analysis of JEM, the reference software of the next-generation video coding standard, shows that under the All Intra configuration the encoding time spent in the NSST index loop accounts for about 30% of the total encoding time. Therefore, if unnecessary secondary-transform mode computations can be avoided by judging relevant information and narrowing the NSST index range in advance, the intra coding efficiency of the next-generation video coding standard can be improved effectively.
NSST index loop: at the encoder side, the current CU loops over the NSST index values, 4 passes in total, and the optimal NSST mode of the current coding block, i.e. the index value of the optimal mode, is selected by comparing the rate-distortion cost RD Cost.
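For reference, the exhaustive index loop described above can be pictured with the following C++ sketch. The helper rdCost stands in for encoding the current CU with a given NSST index and returning its rate-distortion cost; it is an assumed placeholder, not the actual JEM interface.

#include <functional>
#include <limits>

// Exhaustive baseline: try all 4 NSST index values and keep the cheapest one.
// rdCost is a caller-supplied stand-in for encoding the current CU with the given
// NSST index and returning its rate-distortion cost.
int selectBestNSSTIndexExhaustive(const std::function<double(int)>& rdCost)
{
    int    bestIdx  = 0;
    double bestCost = std::numeric_limits<double>::max();
    for (int nsstIdx = 0; nsstIdx <= 3; ++nsstIdx)   // the 4-pass index loop
    {
        double cost = rdCost(nsstIdx);
        if (cost < bestCost) { bestCost = cost; bestIdx = nsstIdx; }
    }
    return bestIdx;   // BestNSSTidx of the current CU
}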
Summary of the invention
In view of the low intra coding efficiency of the next-generation video coding standard in the prior art, the present invention proposes a fast intra-frame selection method based on non-separable secondary transform modes. By predicting the index value of the optimal NSST mode in advance and skipping the unnecessary index loop passes, the computational complexity of the encoder is reduced, the encoding time is shortened and the coding efficiency is improved, while the degradation of subjective video quality remains negligible.
A fast intra-frame selection method based on non-separable secondary transform modes includes the following steps:
Step 1: using the quadtree partition depth QTDepth and the binary-tree partition depth BTDepth of the coding unit CU at the current depth, judge whether entry condition 1 is satisfied; if not, go to step 2; otherwise, go to step 4;
Condition 1: QTDepth ≥ 2 && BTDepth ≥ 1
Satisfying condition 1 shows that the depth of the current CU is high, so the NSST index loop at the current depth is skipped and the method goes directly to step 4;
A CU of low depth enters step 2, i.e. the complete NSST index loop is carried out at the current depth;
Step 2: perform the complete NSST index loop for the coding unit CU at the current depth, choose the optimal NSST mode of the coding unit CU at the current depth, and obtain the index value BestNSSTidx of the optimal NSST mode;
Compute the total depth DepthCount of the current-layer coding unit CU, and store the index value BestNSSTidx of the current-layer coding unit, the binary-tree partition depth BTDepth and the partition-direction flag BTSplitFlag in the first to third columns of the array Ref_CUNSSTidx, in the row whose index equals the value of DepthCount;
that is, they are stored in Ref_CUNSSTidx[DepthCount][0], Ref_CUNSSTidx[DepthCount][1] and Ref_CUNSSTidx[DepthCount][2], respectively;
where Ref_CUNSSTidx is a two-dimensional array kept for each coding unit CU, its size is 100*3, and the elements of the array are initialized to -1; DepthCount = QTDepth + BTDepth; BTDepth is determined by the encoder configuration file and its value range is [0,3]; BTSplitFlag takes the value 0 or 1, where 0 indicates horizontal partitioning and 1 indicates vertical partitioning;
Step 3: judge whether the coding unit CU at the current depth satisfies condition 2; if so, encode the coding unit CU at the next depth and return to step 1; if not, the current coding unit CU has completed all NSST index loops, so return to step 1 and judge the next coding unit CU;
Condition 2: QTDepth ≤ 4 && BTDepth ≤ 3
Step 4: count the numbers of entries in the first column Ref_CUNSSTidx[i][0] whose element values equal 0, 1, 2 and 3, denoted n0, n1, n2 and n3 respectively; if n0, n1, n2 and n3 are all different from one another, select the maximum among n0, n1, n2, n3 and assign the corresponding index value to Nfin1, the first optimal predicted NSST index value, then go to step 5; otherwise return to step 2;
When any two of n0, n1, n2, n3 are equal, the CU at the current depth needs to perform the complete NSST index loop;
Step 5: in the array Ref_CUNSSTidx, for the rows whose entries are not -1 (i.e. whose DepthCount row has been filled) and whose BTDepth is not 0, count separately the numbers Num1 and Num2 of rows with BTSplitFlag equal to 0 and to 1, and the sums Total1 and Total2 of the NSSTidx values stored in those rows;
Step 6: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 0
If no row satisfies it, set N1 = Total1/Num1 and go to step 7; if the current row satisfies it, set Num1 = Num1+1, Total1 = Total1 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N1 = Total1/Num1 and go to step 7;
Step 7: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 1
If no row satisfies it, set N2 = Total2/Num2 and go to step 8; if the current row satisfies it, set Num2 = Num2+1, Total2 = Total2 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N2 = Total2/Num2 and go to step 8;
Step 8: combining the partition-direction flag of the coding unit CU at the current depth with N1 and N2, compute Nfin2, the second optimal predicted NSST index value;
where Round() denotes rounding to the nearest integer, and λ denotes the correlation parameter relating N1, N2 and Nfin2, with value range 0-1;
Step 9: enter the NSST index loop at the current depth; the index value takes min(Nfin1, Nfin2) and then max(Nfin1, Nfin2) in turn, the RDCost computation is executed for the corresponding index values, the NSST index loop passes for the remaining index values are skipped, and the NSST index loop of the coding unit CU is thereby completed;
where min() is the minimum function and max() is the maximum function.
Experiments show that, according to the partition-direction flag of the current coding unit CU, assigning different weights to N1 (the average of the optimal NSST modes of the rows in Ref_CUNSSTidx whose partition-direction flag is 0) and N2 (the average of those whose partition-direction flag is 1) and computing Nfin2 from them achieves an accuracy of over 90%.
Further, the optimal NSST mode of the coding unit CU at the current depth refers to the NSST mode with the minimum rate-distortion cost RDCost, obtained by comparing the RDCost of the coding unit CU at the current depth under the different NSST modes.
Further, λ is set to 0.3.
The coding unit CU at the current depth refers to the coding unit CU currently being encoded; the coding unit CU at the next depth refers to the coding unit obtained with QTDepth+1 or BTDepth+1; after the encoding of the coding unit CU at the current depth is finished, the coding unit CU enters the encoding process of the next depth.
Advantageous effect
The present invention provides a fast intra-frame selection method based on non-separable secondary transform modes. By exploiting the temporal correlation, within the video sequence, of co-located coding units CU at different depths, the index value of the optimal NSST mode is predicted in advance, the unnecessary index loop passes are skipped, and the time-consuming NSST mode selection process in the coding flow is avoided. With negligible degradation of subjective video quality, the computational complexity of the encoder is reduced, the encoding time is shortened and the coding efficiency is improved. Moreover, the proposed scheme is simple and practical, which favors the industrial adoption of the next-generation video coding standard.
Description of the drawings
Fig. 1 is the flow diagram of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and embodiments.
As shown in Fig. 1, which is the schematic flowchart of the present invention, a fast intra-frame selection method based on non-separable secondary transform modes includes the following steps:
Step 1: using the quadtree partition depth QTDepth and the binary-tree partition depth BTDepth of the coding unit CU at the current depth, judge whether entry condition 1 is satisfied; if not, go to step 2; otherwise, go to step 4;
Condition 1: QTDepth ≥ 2 && BTDepth ≥ 1
Satisfying condition 1 shows that the depth of the current CU is high, so the NSST index loop at the current depth is skipped and the method goes directly to step 4;
A CU of low depth enters step 2, i.e. the complete NSST index loop is carried out at the current depth;
Step 2: perform the complete NSST index loop for the coding unit CU at the current depth, choose the optimal NSST mode of the coding unit CU at the current depth, and obtain the index value BestNSSTidx of the optimal NSST mode;
The optimal NSST mode of the coding unit CU at the current depth refers to the NSST mode with the minimum rate-distortion cost RDCost, obtained by comparing the RDCost of the coding unit CU at the current depth under the different NSST modes.
Compute the total depth DepthCount of the current-layer coding unit CU, and store the index value BestNSSTidx of the current-layer coding unit, the binary-tree partition depth BTDepth and the partition-direction flag BTSplitFlag in the first to third columns of the array Ref_CUNSSTidx, in the row whose index equals the value of DepthCount;
that is, they are stored in Ref_CUNSSTidx[DepthCount][0], Ref_CUNSSTidx[DepthCount][1] and Ref_CUNSSTidx[DepthCount][2], respectively;
where Ref_CUNSSTidx is a two-dimensional array kept for each coding unit CU, its size is 100*3, and the elements of the array are initialized to -1; DepthCount = QTDepth + BTDepth; BTDepth is determined by the encoder configuration file and its value range is [0,3]; BTSplitFlag takes the value 0 or 1, where 0 indicates horizontal partitioning and 1 indicates vertical partitioning;
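A minimal sketch of how step 2 might record its results follows; the array layout matches the description above (100 rows by 3 columns, initialized to -1), while the function names are illustrative rather than part of the actual encoder.

// Illustrative per-CU reference table: 100 rows, 3 columns, initialized to -1.
int Ref_CUNSSTidx[100][3];

void initRefTable()
{
    for (int r = 0; r < 100; ++r)
        for (int c = 0; c < 3; ++c)
            Ref_CUNSSTidx[r][c] = -1;
}

// Step 2: after the full NSST index loop, store the result of the current-layer CU
// in the row indexed by its total depth DepthCount = QTDepth + BTDepth.
void recordCUResult(int qtDepth, int btDepth, int btSplitFlag, int bestNSSTidx)
{
    int depthCount = qtDepth + btDepth;          // DepthCount
    Ref_CUNSSTidx[depthCount][0] = bestNSSTidx;  // optimal NSST index value
    Ref_CUNSSTidx[depthCount][1] = btDepth;      // binary-tree partition depth
    Ref_CUNSSTidx[depthCount][2] = btSplitFlag;  // 0 = horizontal, 1 = vertical split
}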
Step 3: judge whether the coding unit CU at the current depth satisfies condition 2; if so, encode the coding unit CU at the next depth and return to step 1; if not, the current coding unit CU has completed all NSST index loops, so return to step 1 and judge the next coding unit CU;
Condition 2: QTDepth ≤ 4 && BTDepth ≤ 3
Step 4: count the numbers of entries in the first column Ref_CUNSSTidx[i][0] whose element values equal 0, 1, 2 and 3, denoted n0, n1, n2 and n3 respectively; if n0, n1, n2 and n3 are all different from one another, select the maximum among n0, n1, n2, n3 and assign the corresponding index value to Nfin1, the first optimal predicted NSST index value, then go to step 5; otherwise return to step 2;
When any two of n0, n1, n2, n3 are equal, the CU at the current depth needs to perform the complete NSST index loop;
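The counting in step 4 could look like the sketch below. Returning -1 here signals the "counts not mutually distinct" case that falls back to the complete loop of step 2; this return convention is an assumption made purely for illustration.

// Step 4: count how often each index value 0..3 appears in column 0 of the
// Ref_CUNSSTidx table (100 rows x 3 columns). Returns the index value with the
// largest count (Nfin1), or -1 when any two of the counts are equal.
int predictNfin1(const int refTable[100][3])
{
    int n[4] = {0, 0, 0, 0};                        // n0, n1, n2, n3
    for (int r = 0; r < 100; ++r)
    {
        int v = refTable[r][0];
        if (v >= 0 && v <= 3)
            ++n[v];
    }
    for (int a = 0; a < 4; ++a)                     // require n0..n3 mutually distinct
        for (int b = a + 1; b < 4; ++b)
            if (n[a] == n[b])
                return -1;                          // fall back to the full loop (step 2)
    int nfin1 = 0;
    for (int v = 1; v < 4; ++v)
        if (n[v] > n[nfin1])
            nfin1 = v;                              // index value with the maximum count
    return nfin1;
}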
Step 5: in the array Ref_CUNSSTidx, for the rows whose entries are not -1 (i.e. whose DepthCount row has been filled) and whose BTDepth is not 0, count separately the numbers Num1 and Num2 of rows with BTSplitFlag equal to 0 and to 1, and the sums Total1 and Total2 of the NSSTidx values stored in those rows;
Step 6: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 0
If no row satisfies it, set N1 = Total1/Num1 and go to step 7; if the current row satisfies it, set Num1 = Num1+1, Total1 = Total1 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N1 = Total1/Num1 and go to step 7;
Step 7: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 1
If no row satisfies it, set N2 = Total2/Num2 and go to step 8; if the current row satisfies it, set Num2 = Num2+1, Total2 = Total2 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N2 = Total2/Num2 and go to step 8;
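Steps 5 to 7 can be condensed into one scan, sketched below. The starting counts from step 5 are passed in as parameters, and a division-by-zero guard that the text does not spell out is added for safety; the function name and signature are illustrative.

// Steps 5-7: average the recorded optimal NSST indices separately over the rows of
// Ref_CUNSSTidx whose split-direction flag is 0 (horizontal) and 1 (vertical),
// using only rows that satisfy condition 3 (column 0 is not -1 and the BTDepth in
// column 1 is not 0). num1/total1 and num2/total2 carry the counts from step 5.
void averageBySplitDirection(const int refTable[100][3],
                             int num1, int total1, int num2, int total2,
                             double& N1, double& N2)
{
    for (int i = 0; i < 100; ++i)
    {
        if (refTable[i][0] == -1 || refTable[i][1] == 0)
            continue;                                   // row fails condition 3
        if (refTable[i][2] == 0) { ++num1; total1 += refTable[i][0]; }
        else                     { ++num2; total2 += refTable[i][0]; }
    }
    N1 = (num1 > 0) ? static_cast<double>(total1) / num1 : 0.0;   // guard against empty sets
    N2 = (num2 > 0) ? static_cast<double>(total2) / num2 : 0.0;
}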
Step 8: combining the partition-direction flag of the coding unit CU at the current depth with N1 and N2, compute Nfin2, the second optimal predicted NSST index value;
where Round() denotes rounding to the nearest integer, and λ denotes the correlation parameter relating N1, N2 and Nfin2, with value range 0-1; the best timing result is obtained when λ is 0.3.
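The exact weighting equation of step 8 appears only as a formula image in the original filing and is not reproduced in this text. The sketch below therefore shows one plausible form consistent with the surrounding description (λ weights N1 and N2 depending on the split direction, Round() rounds to the nearest integer); it is an assumed reconstruction, not the patent's verbatim formula.

#include <cmath>

// Step 8 (illustrative form only): combine N1 and N2 into the second predicted
// index Nfin2, weighting them by lambda according to the current CU's split
// direction. The weighting below is an assumed reconstruction of the missing
// formula; lambda lies in [0,1], with 0.3 reported as the best value.
int predictNfin2(int btSplitFlag, double N1, double N2, double lambda = 0.3)
{
    double weighted = (btSplitFlag == 0)
                          ? lambda * N1 + (1.0 - lambda) * N2   // assumed weighting, horizontal split
                          : lambda * N2 + (1.0 - lambda) * N1;  // assumed weighting, vertical split
    return static_cast<int>(std::lround(weighted));             // Round(): nearest integer
}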
Step 9: enter the NSST index loop at the current depth; the index value takes min(Nfin1, Nfin2) and then max(Nfin1, Nfin2) in turn, the RDCost computation is executed for the corresponding index values, the NSST index loop passes for the remaining index values are skipped, and the NSST index loop of the coding unit CU is thereby completed;
where min() is the minimum function and max() is the maximum function.
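Step 9 restricts the RD search to the two predicted candidates, which could be sketched as follows; rdCost is the same caller-supplied stand-in for the RDCost computation used in the earlier baseline sketch.

#include <algorithm>
#include <functional>

// Step 9: evaluate only min(Nfin1, Nfin2) and max(Nfin1, Nfin2) and skip the
// remaining NSST index values.
int selectNSSTIndexFast(int nfin1, int nfin2, const std::function<double(int)>& rdCost)
{
    int lo = std::min(nfin1, nfin2);
    int hi = std::max(nfin1, nfin2);
    if (lo == hi)
        return lo;                                // both predictions agree: one RDCost pass
    return (rdCost(lo) <= rdCost(hi)) ? lo : hi;  // two RDCost passes at most
}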
In order to verify the correctness and effectiveness of the above algorithm, the present invention implements the algorithm on Visual Studio 2015 based on the reference software JEM7.0. For all experiments, the encoding parameters are taken from the JEM standard configuration file encoder_intra_jvet10.cfg and the standard configuration files of the corresponding test sequences.
To assess the quality of the algorithm, two indices are used here: BDBR (Bjontegaard Delta Bit Rate) and ΔT. BDBR measures the influence of the algorithm on video quality; a larger BDBR indicates a larger impact on video quality, i.e. worse algorithm performance. ΔT reflects the improvement the algorithm brings to encoder efficiency and is computed as follows:
ΔT = (Torg - Tnew) / Torg × 100%
where Torg denotes the encoding time of the original encoder without any fast algorithm, Tnew denotes the encoding time after the fast algorithm is applied, and ΔT denotes the percentage efficiency improvement of the encoder after the fast algorithm is applied.
According to the simulation results in Table 1, the encoding time is reduced by 12% while the BDBR increase is only 0.92. The experimental results show that the present invention effectively improves coding efficiency while preserving subjective video quality, thereby achieving the purpose of the invention.

Claims (3)

1. A fast intra-frame selection method based on non-separable secondary transform modes, characterized by comprising the following steps:
Step 1: using the quadtree partition depth QTDepth and the binary-tree partition depth BTDepth of the coding unit CU at the current depth, judge whether entry condition 1 is satisfied; if not, go to step 2; otherwise, go to step 4;
Condition 1: QTDepth ≥ 2 && BTDepth ≥ 1
Step 2: perform the complete NSST index loop for the coding unit CU at the current depth, choose the optimal NSST mode of the coding unit CU at the current depth, and obtain the index value BestNSSTidx of the optimal NSST mode;
compute the total depth DepthCount of the current-layer coding unit CU, and store the index value BestNSSTidx of the current-layer coding unit, the binary-tree partition depth BTDepth and the partition-direction flag BTSplitFlag in the first to third columns of the array Ref_CUNSSTidx, in the row whose index equals the value of DepthCount;
wherein Ref_CUNSSTidx is a two-dimensional array kept for each coding unit CU, its size is 100*3, and the elements of the array are initialized to -1; DepthCount = QTDepth + BTDepth; BTDepth is determined by the encoder configuration file and its value range is [0,3]; BTSplitFlag takes the value 0 or 1, where 0 indicates horizontal partitioning and 1 indicates vertical partitioning;
Step 3: judge whether the coding unit CU at the current depth satisfies condition 2; if so, encode the coding unit CU at the next depth and return to step 1; if not, the current coding unit CU has completed all NSST index loops, so return to step 1 and judge the next coding unit CU;
Condition 2: QTDepth ≤ 4 && BTDepth ≤ 3
Step 4: count the numbers of entries in the first column Ref_CUNSSTidx[i][0] whose element values equal 0, 1, 2 and 3, denoted n0, n1, n2 and n3 respectively; if n0, n1, n2 and n3 are all different from one another, select the maximum among n0, n1, n2, n3 and assign the corresponding index value to Nfin1, the first optimal predicted NSST index value, then go to step 5; otherwise return to step 2;
Step 5: in the array Ref_CUNSSTidx, for the rows whose entries are not -1 (i.e. whose DepthCount row has been filled) and whose BTDepth is not 0, count separately the numbers Num1 and Num2 of rows with BTSplitFlag equal to 0 and to 1, and the sums Total1 and Total2 of the NSSTidx values stored in those rows;
Step 6: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 0
if no row satisfies it, set N1 = Total1/Num1 and go to step 7; if the current row satisfies it, set Num1 = Num1+1, Total1 = Total1 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N1 = Total1/Num1 and go to step 7;
Step 7: set the initial value of i to 0 and scan the rows of the array Ref_CUNSSTidx one by one, judging whether the elements of each row satisfy condition 3:
Condition 3: Ref_CUNSSTidx[i][0] ≠ -1 && Ref_CUNSSTidx[i][1] ≠ 0 && Ref_CUNSSTidx[i][2] = 1
if no row satisfies it, set N2 = Total2/Num2 and go to step 8; if the current row satisfies it, set Num2 = Num2+1, Total2 = Total2 + Ref_CUNSSTidx[i][0], i = i+1, and continue scanning the next row of the array until all rows have been scanned, then set N2 = Total2/Num2 and go to step 8;
Step 8: combining the partition-direction flag of the coding unit CU at the current depth with N1 and N2, compute Nfin2, the second optimal predicted NSST index value;
wherein Round() denotes rounding to the nearest integer, and λ denotes the correlation parameter relating N1, N2 and Nfin2, with value range 0-1;
Step 9: enter the NSST index loop at the current depth; the index value takes min(Nfin1, Nfin2) and then max(Nfin1, Nfin2) in turn, the RDCost computation is executed for the corresponding index values, the NSST index loop passes for the remaining index values are skipped, and the NSST index loop of the coding unit CU is thereby completed;
wherein min() is the minimum function and max() is the maximum function.
2. The method according to claim 1, characterized in that the optimal NSST mode of the coding unit CU at the current depth refers to the NSST mode with the minimum rate-distortion cost RDCost, obtained by comparing the RDCost of the coding unit CU at the current depth under the different NSST modes.
3. The method according to claim 1 or 2, characterized in that λ is set to 0.3.
CN201810167292.6A 2018-02-28 2018-02-28 Fast intra-frame selection method based on non-separable secondary transform modes Active CN108322745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810167292.6A CN108322745B (en) 2018-02-28 2018-02-28 Fast intra-frame selection method based on non-separable secondary transform modes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810167292.6A CN108322745B (en) 2018-02-28 2018-02-28 Fast intra-frame selection method based on non-separable secondary transform modes

Publications (2)

Publication Number Publication Date
CN108322745A true CN108322745A (en) 2018-07-24
CN108322745B CN108322745B (en) 2019-12-03

Family

ID=62900841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810167292.6A Active CN108322745B (en) Fast intra-frame selection method based on non-separable secondary transform modes

Country Status (1)

Country Link
CN (1) CN108322745B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020103932A1 (en) * 2018-11-22 2020-05-28 Beijing Bytedance Network Technology Co., Ltd. Using reference lines for intra mode video processing
WO2020216303A1 (en) * 2019-04-23 2020-10-29 Beijing Bytedance Network Technology Co., Ltd. Selective use of secondary transform in coded video
WO2020228672A1 (en) * 2019-05-10 2020-11-19 Beijing Bytedance Network Technology Co., Ltd. Context modeling of reduced secondary transforms in video
CN114223207A (en) * 2019-04-16 2022-03-22 联发科技股份有限公司 Method and apparatus for encoding and decoding video data using secondary transform
US20220150518A1 (en) * 2020-11-11 2022-05-12 Tencent America LLC Method and apparatus for video coding
CN115037934A (en) * 2018-09-02 2022-09-09 Lg电子株式会社 Method for encoding and decoding image signal and computer-readable recording medium
US11575901B2 (en) 2019-08-17 2023-02-07 Beijing Bytedance Network Technology Co., Ltd. Context modeling of side information for reduced secondary transforms in video
US11638008B2 (en) 2019-08-03 2023-04-25 Beijing Bytedance Network Technology Co., Ltd. Selection of matrices for reduced secondary transform in video coding
US11924469B2 (en) 2019-06-07 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Conditional signaling of reduced secondary transform in video bitstreams
US11943476B2 (en) 2019-04-16 2024-03-26 Hfi Innovation Inc. Methods and apparatuses for coding video data with adaptive secondary transform signaling

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160117368A1 (en) * 2014-05-06 2016-04-28 Snap-On Incorporated Methods and systems for providing an auto-generated repair-hint to a vehicle repair tool
US20170094313A1 (en) * 2015-09-29 2017-03-30 Qualcomm Incorporated Non-separable secondary transform for video coding
CN107005718A (en) * 2014-12-10 2017-08-01 MediaTek Singapore Pte. Ltd. Method of video coding using binary-tree block partitioning
CN107197253A (en) * 2017-04-10 2017-09-22 Sun Yat-sen University QTBT fast decision method and system based on KB filters
US20170347128A1 (en) * 2016-05-25 2017-11-30 Arris Enterprises Llc Binary ternary quad tree partitioning for jvet
WO2018018486A1 (en) * 2016-07-28 2018-02-01 Mediatek Inc. Methods of reference quantization parameter derivation for signaling of quantization parameter in quad-tree plus binary tree structure
US20180048889A1 (en) * 2016-08-15 2018-02-15 Qualcomm Incorporated Intra video coding using a decoupled tree structure

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160117368A1 (en) * 2014-05-06 2016-04-28 Snap-On Incorporated Methods and systems for providing an auto-generated repair-hint to a vehicle repair tool
CN107005718A (en) * 2014-12-10 2017-08-01 MediaTek Singapore Pte. Ltd. Method of video coding using binary-tree block partitioning
US20170094313A1 (en) * 2015-09-29 2017-03-30 Qualcomm Incorporated Non-separable secondary transform for video coding
US20170347128A1 (en) * 2016-05-25 2017-11-30 Arris Enterprises Llc Binary ternary quad tree partitioning for jvet
WO2018018486A1 (en) * 2016-07-28 2018-02-01 Mediatek Inc. Methods of reference quantization parameter derivation for signaling of quantization parameter in quad-tree plus binary tree structure
US20180048889A1 (en) * 2016-08-15 2018-02-15 Qualcomm Incorporated Intra video coding using a decoupled tree structure
CN107197253A (en) * 2017-04-10 2017-09-22 Sun Yat-sen University QTBT fast decision method and system based on KB filters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIN ZHAO et al.: "NSST: Non-Separable Secondary Transforms for Next Generation Video Coding", 2016 Picture Coding Symposium (PCS) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037934B (en) * 2018-09-02 2024-04-26 Lg电子株式会社 Method for encoding and decoding image signal and computer readable recording medium
CN115037934A (en) * 2018-09-02 2022-09-09 Lg电子株式会社 Method for encoding and decoding image signal and computer-readable recording medium
US11743453B2 (en) 2018-11-22 2023-08-29 Beijing Bytedance Network Technology Co., Ltd Pulse code modulation technique in video processing
US11095882B2 (en) 2018-11-22 2021-08-17 Beijing Bytedance Network Technology Co., Ltd. Pulse code modulation technique in video processing
WO2020103932A1 (en) * 2018-11-22 2020-05-28 Beijing Bytedance Network Technology Co., Ltd. Using reference lines for intra mode video processing
US11943476B2 (en) 2019-04-16 2024-03-26 Hfi Innovation Inc. Methods and apparatuses for coding video data with adaptive secondary transform signaling
CN114223207B (en) * 2019-04-16 2023-09-22 寰发股份有限公司 Method and apparatus for encoding and decoding video data using secondary transforms
CN114223207A (en) * 2019-04-16 2022-03-22 联发科技股份有限公司 Method and apparatus for encoding and decoding video data using secondary transform
US11991393B2 (en) 2019-04-16 2024-05-21 Hfi Innovation Inc. Methods and apparatuses for coding video data with secondary transform
US11956469B2 (en) 2019-04-16 2024-04-09 Hfi Innovation Inc. Methods and apparatuses for coding video data with adaptive secondary transform signaling depending on TB level syntax
US11546636B2 (en) 2019-04-23 2023-01-03 Beijing Bytedance Network Technology Co., Ltd. Clipping operation in secondary transform based video processing
US11647229B2 (en) 2019-04-23 2023-05-09 Beijing Bytedance Network Technology Co., Ltd Use of secondary transform in coded video
WO2020216299A1 (en) * 2019-04-23 2020-10-29 Beijing Bytedance Network Technology Co., Ltd. Use of secondary transform in coded video
WO2020216303A1 (en) * 2019-04-23 2020-10-29 Beijing Bytedance Network Technology Co., Ltd. Selective use of secondary transform in coded video
US11611779B2 (en) 2019-05-10 2023-03-21 Beijing Bytedance Network Technology Co., Ltd. Multiple secondary transform matrices for video processing
US11622131B2 (en) 2019-05-10 2023-04-04 Beijing Bytedance Network Technology Co., Ltd. Luma based secondary transform matrix selection for video processing
US11575940B2 (en) 2019-05-10 2023-02-07 Beijing Bytedance Network Technology Co., Ltd. Context modeling of reduced secondary transforms in video
WO2020228672A1 (en) * 2019-05-10 2020-11-19 Beijing Bytedance Network Technology Co., Ltd. Context modeling of reduced secondary transforms in video
US11924469B2 (en) 2019-06-07 2024-03-05 Beijing Bytedance Network Technology Co., Ltd. Conditional signaling of reduced secondary transform in video bitstreams
US11882274B2 (en) 2019-08-03 2024-01-23 Beijing Bytedance Network Technology Co., Ltd Position based mode derivation in reduced secondary transforms for video
US11638008B2 (en) 2019-08-03 2023-04-25 Beijing Bytedance Network Technology Co., Ltd. Selection of matrices for reduced secondary transform in video coding
US11968367B2 (en) 2019-08-17 2024-04-23 Beijing Bytedance Network Technology Co., Ltd. Context modeling of side information for reduced secondary transforms in video
US11575901B2 (en) 2019-08-17 2023-02-07 Beijing Bytedance Network Technology Co., Ltd. Context modeling of side information for reduced secondary transforms in video
US20220150518A1 (en) * 2020-11-11 2022-05-12 Tencent America LLC Method and apparatus for video coding

Also Published As

Publication number Publication date
CN108322745B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN108322745B (en) Fast intra-frame selection method based on non-separable secondary transform modes
Choi et al. Near-lossless deep feature compression for collaborative intelligence
CN103024383B (en) Intra-frame lossless compression coding method based on the HEVC framework
Lei et al. Fast intra prediction based on content property analysis for low complexity HEVC-based screen content coding
CN104094602B (en) Method, apparatus and system for encoding and decoding the significance map of residual coefficients of a transform unit
US10003792B2 (en) Video encoder for images
CN101884219B (en) Method and apparatus for processing video signal
US10091526B2 (en) Method and apparatus for motion vector encoding/decoding using spatial division, and method and apparatus for image encoding/decoding using same
CN111355956B (en) Deep learning-based rate distortion optimization rapid decision system and method in HEVC intra-frame coding
CN107147911A (en) Fast inter-frame coding mode selection method and device based on local illumination compensation (LIC)
CN103067704B (en) Video coding method and system with early skipping based on the coding-unit level
CN107959852A (en) Method for decoding a video signal
CN102474566B (en) Wavelet transformation encoding/decoding method and device
CN106464870A (en) Template-matching-based method and apparatus for encoding and decoding intra picture
CN106028034B (en) Coefficient coding tuning in HEVC
US11558608B2 (en) On split prediction
CN110099191A (en) Method for removing deblocking artifacts
US20230297833A1 (en) Method and device for providing compression and transmission of training parameters in distributed processing environment
CN109874012A (en) Video coding method, encoder, electronic device and medium
CN116708816A (en) Encoding device, decoding device, and data transmitting device
CN115086672A (en) Point cloud attribute coding method and device, point cloud attribute decoding method and device and related equipment
EP3913917A1 (en) Methods for performing encoding and decoding, decoding end and encoding end
CN108605133A (en) Method and apparatus for selecting a scanning order
Chen et al. CNN-optimized image compression with uncertainty based resource allocation
CN103051896B (en) Mode-skipping-based video coding method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant