CN106507106A - Video inter-prediction encoding method based on reference plate
- Publication number: CN106507106A (application CN201610979281.9A)
- Authority: CN (China)
- Prior art keywords: reference plate, CTU, duplicate contents, encoded, frame
- Legal status: Granted
Classifications

- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
Abstract

The invention discloses a video inter-prediction encoding method based on reference plates. The method manages and optimizes reference content at the granularity of reference plates, so that more information can be collected under a limited DPB capacity and the current coding block has a higher probability of finding a matching block. The overall effect is that, at the cost of a slight increase in encoder and decoder complexity, the efficiency of video coding is effectively improved.
Description
Technical field
The present invention relates to the technical field of video coding, and in particular to a video inter-prediction encoding method based on reference plates.
Background technology
In recent years, with the rapid development of the Internet, the demand for video applications on the Internet has grown continually. Because the data volume of video is very large, the first problem that must be solved before transmission over a bandwidth-limited network is video compression coding.
The video coding standards formulated so far all belong to the hybrid video coding framework. So-called hybrid video coding is typically composed of the following parts: prediction (Prediction), transform (Transform), quantization (Quantization) and entropy coding (Entropy Coding). Prediction is generally divided into intra prediction and inter prediction. A video frame that can only use intra prediction is called an I frame; a video frame that can use both intra prediction and inter prediction is called a P frame or a B frame.

Inter prediction uses the pixels of other reconstructed frames as references to predict the current frame, thereby removing temporal redundancy. Compared with intra prediction, inter prediction is generally more accurate and plays a larger role in video compression. Further improving the efficiency of inter prediction is therefore a pressing demand in video coding.
At present, when performing inter prediction, multiple reconstructed frames can be selected as reference frames. Using multiple reference frames improves video coding performance for two main reasons. First, owing to factors such as camera noise, multiple reference frames and their combinations provide more signal forms, allowing the current coding block to find a better matching block. Second, when occlusion and reappearance occur in a video sequence, the current coding block has a higher probability of finding a matching block among multiple reference frames. The multi-reference-frame scheme of the latest general video coding standard, High Efficiency Video Coding (HEVC), is composed of the nearest frame and several high-quality neighboring frames, and multiple reference frames are managed through the Reference Picture Set (Reference Picture Set, RPS). The selection and management of reference frames is a core part of inter prediction technology and has a considerable impact on overall video coding efficiency.
Some existing work has proposed other multi-reference-frame selection and management methods, for example:

Adaptive reference frame selection based on video content (A. S. Dias, S. Schwarz, M. Siekmann, S. Bosse, H. Schwarz, D. Marpe, J. Zubrzycki and M. Mrak, "Perceptually optimised video compression," in IEEE International Conference on Multimedia and Expo (ICME 2015), Torino, 2015.)

Improved reference picture list sorting based on referenced content distribution (S. Schwarz and M. Mrak, "Improved reference picture list sorting in video coding," in IEEE International Conference on Systems, Signals and Image Processing (IWSSIP 2015), London, 2015.)
The shortcomings of the above methods are as follows:

1. Multi-reference-frame selection and management in HEVC applies the same processing scheme to different video contents and is not adaptive.

2. Adaptive reference frame selection based on video content requires pre-analysis of the video to be encoded, and is therefore difficult to apply in streaming coding scenarios.

3. Improved reference picture list sorting based on referenced content distribution is strongly affected by the video content to be encoded, and requires extra reference frame memory to assist processing.

In addition to the above shortcomings, existing reference content management and optimization schemes are all based on whole reference frames. When the capacity of the decoded picture buffer (Decoded Picture Buffer, DPB) at the decoding end is limited, content among the multiple reference frames is repeated and redundant, the buffer is not utilized efficiently, and further performance improvement of video coding is limited.
Content of the invention
It is an object of the present invention to provide a video inter-prediction encoding method based on reference plates which, by optimizing reference content, and in particular by removing the redundancy among multiple reference frames, makes full use of the decoding buffer capacity and effectively improves the compression efficiency of video coding.

The object of the present invention is achieved through the following technical solutions:
A video inter-prediction encoding method based on reference plates, including:

Generation and management of reference plates containing single content: when the current reference frame is about to be removed from the short-term reference frame list, reference plates containing single content are selected based on the distribution of intra-predicted blocks on the current reference frame and added to the reference plate buffer; if the reference plate buffer is full, a predefined reference plate priority calculation determines whether a selected reference plate containing single content is added to the reference plate buffer;

Generation and management of reference plates containing duplicate contents: static, dynamic and interval-repeated contents present in the video sequence are detected; blocks of duplicate contents are then coded with high quality according to the detection results, and reference plates containing duplicate contents are generated based on the distribution of the high-quality coded blocks; reference plates containing duplicate contents are added directly to the reference plate buffer and then managed by comparing reference plate priorities;

Reference plate retrieval and use: for the current CTU to be encoded, several candidate reference plates are coarsely selected from the reference plate buffer by histogram matching, and one reference plate is then chosen by fast motion estimation; the chosen reference plate is processed in a predetermined way and placed alongside the original reference frames in video coding for reference by the coding blocks in the current CTU to be encoded; during coding, a flag bit indicating whether each coding block in the current CTU to be encoded references a reference plate and, if it is referenced, the index of the reference plate in the reference plate buffer, are written into the bitstream.
The step of selecting reference plates containing single content and adding them to the reference plate buffer, based on the distribution of intra-predicted blocks on the current reference frame when the current reference frame is about to be removed from the short-term reference frame list, includes:

When the current reference frame is about to be removed from the short-term reference frame list, a sliding window of the specified reference plate size is constructed and the current reference frame is scanned from top to bottom and from left to right to find the position where the area of intra-predicted blocks inside the window reaches its maximum. If this maximum exceeds a specified threshold, a reference plate containing single content is constructed from the reconstructed image region in the corresponding window. The obtained reference plate records not only the pixel values of the reconstructed image region but also the following information: the position of the window, the average quantization step of all blocks in the reference plate, the order of the current reference frame in the video sequence, the number of times the reference plate has been referenced, and the order of the reference plate.

After a reference plate containing single content is selected, the intra-predicted blocks inside the window at its position are re-marked as inter-predicted, and the above scanning process is repeated until no further reference plate containing single content that meets the requirements can be selected.
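The scan-and-re-mark loop above can be sketched as follows. This is a minimal illustration, not part of the claimed method: `intra_map` is assumed to be a 0/1 grid of intra-predicted blocks, and the window size and threshold are illustrative parameters.

```python
def best_window(intra_map, win_h, win_w):
    """Scan top-to-bottom, left-to-right; return (max intra area, (row, col))."""
    rows, cols = len(intra_map), len(intra_map[0])
    best_area, best_pos = -1, None
    for r in range(rows - win_h + 1):
        for c in range(cols - win_w + 1):
            area = sum(intra_map[r + i][c + j]
                       for i in range(win_h) for j in range(win_w))
            if area > best_area:
                best_area, best_pos = area, (r, c)
    return best_area, best_pos

def extract_single_content_plates(intra_map, win_h, win_w, threshold):
    """Repeatedly pick the best window; after each pick, re-mark its intra
    blocks as inter-predicted (set to 0) and rescan, until the maximum no
    longer exceeds the threshold."""
    plates = []
    while True:
        area, pos = best_window(intra_map, win_h, win_w)
        if pos is None or area <= threshold:
            break
        plates.append({"pos": pos, "intra_area": area})
        r, c = pos
        for i in range(win_h):
            for j in range(win_w):
                intra_map[r + i][c + j] = 0  # re-mark as inter-predicted
    return plates
```

In practice the recorded plate would also carry the reconstructed pixels and the side information listed above (window position, average QP, POC, reference count, order).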
If the reference plate buffer is full, determining according to the predefined reference plate priority calculation whether the selected reference plate containing single content is added to the reference plate buffer includes:

Each time a reference plate containing single content is generated, it is added to the reference plate buffer. If the reference plate buffer is full, the priorities of the reference plates in the buffer and of the newly selected reference plate containing single content are computed according to the predefined reference plate priority calculation. The computing formula is:

P_c = (QP_0 - QP_c) × N_QP + I_c + Tr_c × N_r

In the above formula, P_c is the priority of the reference plate, QP_0 is a set constant, QP_c is the average QP value of the reference plate, N_QP and N_r are specified constants, I_c is the order of the reference plate, and Tr_c is the number of times the reference plate has been referenced by coding blocks.

If the priority of the selected reference plate containing single content is greater than the priority of the lowest-priority reference plate in the reference plate buffer, the lowest-priority reference plate in the buffer is replaced with the selected reference plate containing single content; otherwise, the selected reference plate containing single content is discarded.
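The priority formula and the buffer-full replacement rule can be sketched together. The constants QP_0, N_QP and N_r below are illustrative values chosen for the sketch, and the plate representation is an assumption.

```python
QP0, N_QP, N_R = 51, 1.0, 2.0  # illustrative constants, not from the patent

def plate_priority(plate):
    """P_c = (QP_0 - QP_c) * N_QP + I_c + Tr_c * N_r."""
    return (QP0 - plate["avg_qp"]) * N_QP + plate["order"] + plate["ref_count"] * N_R

def try_add_plate(buffer, plate, capacity):
    """Add `plate`; if the buffer is full, replace the lowest-priority plate
    only when the new plate's priority is higher, otherwise discard it."""
    if len(buffer) < capacity:
        buffer.append(plate)
        return True
    lowest = min(buffer, key=plate_priority)
    if plate_priority(plate) > plate_priority(lowest):
        buffer[buffer.index(lowest)] = plate
        return True
    return False
```

Note how the formula rewards low-QP (high-quality) plates, later plates, and frequently referenced plates, matching the three terms of P_c.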
Detecting the static, dynamic and interval-repeated contents present in the video sequence includes:

The detection unit for the different kinds of duplicate contents is the coding tree unit (CTU). After a CTU containing duplicate contents is detected, the number of repetitions of the duplicate contents in the corresponding CTU is predicted. The detailed process is as follows:
Static duplicate contents detection: the mean of the internal variance of each CTU in the image is computed. If the mean of the current CTU is smaller than a set threshold, the current CTU is marked as a static repetition, and the number of repetitions of the static duplicate contents is estimated as L_A = S × frame rate. For the following S seconds, dynamic and interval-repeated contents are not detected at the CTUs containing static duplicate contents.
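The static check reduces to a variance threshold plus the L_A = S × frame rate estimate, as in this minimal sketch (the sub-block representation, S and the threshold are illustrative assumptions):

```python
def block_variance(pixels):
    """Plain population variance of one sub-block's pixel list."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def detect_static_ctu(blocks, var_threshold, s_seconds, frame_rate):
    """`blocks` is a list of pixel lists (the sub-blocks of one CTU).
    Returns (is_static, L_A), with L_A = S * frame rate when static."""
    mean_var = sum(block_variance(b) for b in blocks) / len(blocks)
    if mean_var < var_threshold:
        return True, s_seconds * frame_rate
    return False, 0
```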
Dynamic duplicate contents detection: for a CTU that is not a static repetition, motion estimation is performed between the original pixels of the CTU and the original pixels of its reference frames, yielding the corresponding motion vectors and motion estimation residuals. If all residual values are smaller than a set threshold, it is inferred that the corresponding CTU contains dynamic duplicate contents, and the dynamic duplicate contents are assumed to move uniformly over a period of time. The average motion MV_mean is:

MV_mean = (1/N) × Σ_{i=1..N} (MV_i / d_i)

Here, MV_i is the motion vector estimated between the CTU and one of its reference frames, d_i is the time interval between the frame containing the CTU and that reference frame, and N is the number of reference frames. After the average motion is obtained, the number of repetitions L_B of the dynamic duplicate contents is estimated as:

L_B = min{L_x, L_y}

where L_x and L_y are the lifetimes estimated in the horizontal and vertical directions respectively, W and H are the width and height of the video sequence, C_x and C_y are the horizontal and vertical coordinates of the current CTU, and MVX_mean and MVY_mean are the x and y components of MV_mean. After a CTU containing dynamic duplicate contents is detected, no new dynamic duplicate contents are detected within the estimated lifetime of the dynamic duplicate contents.
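The expressions for L_x and L_y are not preserved in this copy of the text, so the sketch below uses one plausible reading consistent with the uniform-motion assumption: MV_mean averages each motion vector after normalizing by its time interval, and the lifetime in each direction is the distance to the frame border divided by the per-frame motion. This is an assumption, not necessarily the patent's exact formula.

```python
def mv_mean(mvs, intervals):
    """mvs: list of (x, y) motion vectors MV_i; intervals: frame distances d_i.
    Averages the per-frame-interval motion (uniform-motion assumption)."""
    n = len(mvs)
    mx = sum(v[0] / d for v, d in zip(mvs, intervals)) / n
    my = sum(v[1] / d for v, d in zip(mvs, intervals)) / n
    return mx, my

def dynamic_lifetime(mvm, ctu_xy, frame_wh):
    """L_B = min{L_x, L_y}: frames until the content exits the picture.
    A zero component imposes no bound in that direction."""
    (mvx, mvy), (cx, cy), (w, h) = mvm, ctu_xy, frame_wh
    lx = (w - cx) / mvx if mvx > 0 else (cx / -mvx if mvx < 0 else float("inf"))
    ly = (h - cy) / mvy if mvy > 0 else (cy / -mvy if mvy < 0 else float("inf"))
    return min(lx, ly)
```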
Interval duplicate contents detection: when the current reference plate is referenced by a certain CTU, the total cost of the fast motion estimation of that CTU on the current reference plate is recorded; at the same time, the positions matched by the current reference plate in fast motion estimation are also recorded. If the current reference plate is referenced multiple times, it keeps the mean of the total fast motion estimation costs and the union of the matched positions. When frame M is being encoded, reference plate retrieval is carried out in the reference plate buffer for each CTU of frame M. If the total fast motion estimation cost obtained is smaller than a certain multiple of the recorded mean total cost of the current reference plate, and the matched position is contained in a certain multiple of the recorded union of matched positions of the current reference plate, it is judged that the content of the corresponding CTU repeats the content of the retrieved reference plate. If the retrieved reference plate was referenced l times before frame M was encoded, it is inferred that similar duplicate contents will also appear L_C times in the subsequent encoding process. L_C is computed as:

L_C = l × (P - M) / M

where P is the length of the video sequence.
Performing high-quality coding on the blocks of duplicate contents according to the detection results includes:

Assuming a certain duplicate content is estimated to repeat L times, the optimal coding parameter is computed for the corresponding duplicate content. The coding parameter is determined at the CTU level from the basic Lagrange parameter λ of the whole video sequence; λ_i denotes the coding parameter set at the i-th appearance of the CTU corresponding to the duplicate content. After the coding parameter λ_i is determined, the corresponding QP_i is obtained according to the following formula:

QP_i = 4.2005 × ln(λ_i) + 13.7122

The obtained QP_i is used as the quantization step when the corresponding CTU is coded. For a CTU that uses coding parameter λ_1, the CTU is marked as high-quality coded and its QP value is written into the bitstream.
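The λ-to-QP mapping above is the standard HEVC relation and can be transcribed directly; the inverse mapping shown with it is pure algebra on the same formula.

```python
import math

def qp_from_lambda(lam):
    """QP_i = 4.2005 * ln(lambda_i) + 13.7122 (the relation given above)."""
    return 4.2005 * math.log(lam) + 13.7122

def lambda_from_qp(qp):
    """Algebraic inverse of qp_from_lambda."""
    return math.exp((qp - 13.7122) / 4.2005)
```

For example, raising λ lowers the rate allocated to a block (larger QP), while the small λ_1 used at the first appearance of a duplicate content yields a small QP, i.e. high-quality coding.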
Generating reference plates containing duplicate contents based on the distribution of high-quality coded blocks, adding them directly to the reference plate buffer, and then managing them by comparing reference plate priorities includes:

A reference plate containing duplicate contents is generated based on the distribution of high-quality coded CTUs, as follows: after the current frame is coded, a sliding window of the specified reference plate size is constructed and the current reference frame is scanned from top to bottom and from left to right to find the position where the number of high-quality coded CTUs inside the window reaches its maximum. If this maximum exceeds a specified threshold, a reference plate is generated from the pixel values inside the window. The generated reference plate is a reference plate containing duplicate contents and also includes the following information: the pixel values, and the average QP value of all CTUs in the window. After a duplicate contents reference plate is generated, the high-quality coding marks at its position are changed to regular-quality coding, and the above scanning process continues until no further duplicate contents reference plate can be generated.

The generated reference plate containing duplicate contents is added directly to the reference plate buffer, as follows: if one or more CTUs in the reference plate containing duplicate contents have referenced existing reference plates, the reference plate containing duplicate contents replaces an existing reference plate in the buffer, namely the one referenced the most times by all CTUs in the current duplicate contents reference plate; otherwise, it replaces the lowest-priority reference plate in the reference plate buffer.

Afterwards, the reference plates containing duplicate contents in the reference plate buffer are managed by comparing reference plate priorities.
The step of coarsely selecting several candidate reference plates from the reference plate buffer by histogram matching and then choosing one reference plate by fast motion estimation includes:

Histogram matching: the histogram distances between the current CTU to be encoded and all reference plates in the reference plate buffer are computed first, and then the reference plates with the smaller histogram distances are selected in a predetermined way as the result of histogram matching. The histogram distance is computed as follows: 1) the histogram of the current CTU to be encoded is computed: the luma component of the current CTU is divided into four 32 × 32 blocks and an H-dimensional histogram is counted for each block; for the two chroma components U and V, one H-dimensional histogram each is counted; the six H-dimensional histograms are concatenated into a 6H-dimensional histogram; 2) for the 64 × 64 blocks in each reference plate in the reference plate buffer, samples are taken every few pixels in the horizontal and vertical directions, and histograms are computed for the sampled blocks; 3) the histogram of the current CTU to be encoded is compared one by one with the histograms of all sampled blocks of a reference plate, the distances between the histograms are computed, and the minimum distance among them is taken as the histogram distance between the current CTU to be encoded and that reference plate.
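The 6H-dimensional histogram construction can be sketched as below. The bin count and the distance metric (sum of absolute bin differences) are illustrative assumptions; the source does not name the metric.

```python
def histogram(values, bins, max_val=256):
    """Count an H-bin histogram over 8-bit sample values."""
    h = [0] * bins
    for v in values:
        h[v * bins // max_val] += 1
    return h

def ctu_histogram(luma, u, v, bins):
    """luma: 64x64 grid; u, v: chroma planes. Four H-bin histograms from the
    32x32 luma quadrants plus one each for U and V, concatenated (6H bins)."""
    out = []
    for r0, c0 in ((0, 0), (0, 32), (32, 0), (32, 32)):  # 4 luma quadrants
        quad = [luma[r][c] for r in range(r0, r0 + 32)
                            for c in range(c0, c0 + 32)]
        out += histogram(quad, bins)
    out += histogram([p for row in u for p in row], bins)
    out += histogram([p for row in v for p in row], bins)
    return out

def hist_distance(h1, h2):
    """Assumed metric: sum of absolute bin differences (L1)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

The per-reference-plate distance would then be the minimum of `hist_distance` over all sampled 64 × 64 blocks of that plate.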
Fast motion estimation: among the reference plates obtained from histogram matching, the reference plate most likely to be used is selected by fast motion estimation. The fast motion estimation between the current CTU to be encoded and each reference plate is computed as follows: the current CTU is divided into small blocks of fixed size R × R, and each R × R block searches the current reference plate for the block with the minimum cost, where the cost consists of the sum of absolute differences (SAD value) and the coding rate of the motion vector. The search first determines the starting point in the reference plate by a raster scan sampled every five points, and then performs an 8-point diamond search within a certain pixel range around the starting point. The costs of all R × R blocks are accumulated as the overall cost of the current CTU to be encoded on the current reference plate, and the reference plate with the minimum overall cost is selected.
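The per-CTU cost accumulation can be sketched as follows. For brevity the sketch uses SAD only, omitting the motion-vector rate term of the real cost, and replaces the diamond pattern with a repeated 8-neighbour refinement; both simplifications are assumptions.

```python
def sad(block, plate, r0, c0, R):
    """Sum of absolute differences of an R x R block at (r0, c0) in the plate."""
    return sum(abs(block[i][j] - plate[r0 + i][c0 + j])
               for i in range(R) for j in range(R))

def block_cost(block, plate, R):
    rows, cols = len(plate) - R, len(plate[0]) - R
    cost = lambda p: sad(block, plate, p[0], p[1], R)
    # coarse raster scan, sampled every 5th position, picks the starting point
    best = min(((r, c) for r in range(0, rows + 1, 5)
                        for c in range(0, cols + 1, 5)), key=cost)
    improved = True          # local 8-neighbour refinement around the start
    while improved:
        improved = False
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                r, c = best[0] + dr, best[1] + dc
                if 0 <= r <= rows and 0 <= c <= cols and cost((r, c)) < cost(best):
                    best, improved = (r, c), True
    return cost(best)

def ctu_cost(ctu, plate, R):
    """Overall cost of the CTU on one reference plate: sum over R x R blocks."""
    total = 0
    for r in range(0, len(ctu), R):
        for c in range(0, len(ctu[0]), R):
            blk = [row[c:c + R] for row in ctu[r:r + R]]
            total += block_cost(blk, plate, R)
    return total
```

The plate with the smallest `ctu_cost` over the histogram-matched candidates would be the one chosen.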
Processing the chosen reference plate in a predetermined way into a new reference frame and placing it alongside the original reference frames in video coding for reference by the current block to be encoded includes:

The chosen reference plate is placed, according to its position when it was selected from its reference frame, onto a blank frame of the same size as a reference frame; at the same time, the reference plates in the reference plate buffer that come from the same frame as the chosen reference plate are also placed onto the blank frame, and the blank frame is then used as a new reference frame.

The new reference frame is placed alongside the original reference frames in video coding for use when the current block to be encoded is coded. When the current block to be encoded performs inter-prediction coding on the new reference frame, the motion estimation search starting point is set to the best position found by histogram matching. The motion vector predictor (MVP) is chosen as follows: the blocks at five positions around the current prediction unit (PU) are checked for whether they use a reference plate; if and only if a block uses a reference plate, and its reference plate comes from the same reference frame as the reference plate used by the current PU, the motion vector MV at that position is used as a candidate MVP. Two MVPs are chosen in order to join the candidate MVP list; if fewer than two available MVPs are obtained from the spatial positions, the list is filled in turn with the MV pointing from the position of the current block to be encoded to the center of the reference plate, and with (0, 0), until two are filled.
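The candidate MVP list construction can be sketched as below; the neighbour representation is an illustrative assumption.

```python
def build_mvp_list(neighbours, cur_plate_frame, mv_to_plate_centre):
    """neighbours: up to five dicts with keys 'uses_plate', 'plate_frame',
    'mv'. A neighbour's MV qualifies only if the neighbour used a reference
    plate from the same source frame as the current PU. Pads with the
    block-to-plate-centre MV, then (0, 0), until two entries are filled."""
    mvps = []
    for nb in neighbours:
        if len(mvps) == 2:
            break
        if nb["uses_plate"] and nb["plate_frame"] == cur_plate_frame:
            mvps.append(nb["mv"])
    for filler in (mv_to_plate_centre, (0, 0)):  # pad in this order
        if len(mvps) < 2:
            mvps.append(filler)
    return mvps
```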
Writing into the bitstream the flag bit indicating whether a coding block in the current CTU to be encoded references a reference plate and, if it is referenced, the index of the reference plate in the reference plate buffer, includes:

The flag bit indicating whether a reference plate is referenced is a 0/1 flag bit coded with context-adaptive binary arithmetic coding. Three context models are used, determined by whether the CTUs to the left of and above the current CTU to be encoded reference a reference plate: if neither CTU uses reference plate prediction, the first context model is used; if one of them uses reference plate prediction, the second context model is used; if both use reference plates, the third context model is used.
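The context model choice reduces to counting how many of the two neighbouring CTUs used a reference plate, as this one-line sketch shows:

```python
def plate_flag_context(left_used, above_used):
    """Context model index (0, 1 or 2) for the CABAC-coded reference plate
    flag: neither neighbour used a plate -> 0, one did -> 1, both did -> 2."""
    return int(bool(left_used)) + int(bool(above_used))
```

The same neighbour-counting rule applies to the high-quality coding flag described further below.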
If the current CTU to be encoded references a reference plate, the index of the reference plate in the buffer is coded with a fixed-length code whose length depends on the number of reference plates in the reference plate buffer at the time the reference plate is referenced.

Afterwards, the corresponding flag bit and the index of the reference plate in the buffer are written into the bitstream.
The method also includes: whether the current CTU to be encoded has been coded with high quality and, if high-quality coding has been carried out, the adjustment of the high-quality coding QP relative to the normal QP, need to be written into the bitstream. Specifically:

The flag bit indicating whether high-quality coding is carried out is a 0/1 flag bit coded with context-adaptive binary arithmetic coding. Three context models are used, determined by whether the CTUs to the left of and above the current CTU to be encoded have been coded with high quality: if neither CTU has been coded with high quality, the first context model is used; if one of them has been coded with high quality, the second context model is used; if both have been coded with high quality, the third context model is used.

If high-quality coding is carried out, the adjustment of the high-quality coding QP relative to the normal QP is coded with a fixed-length code.
It can be seen from the above technical solution provided by the invention that the reference plate based reference content management and optimization scheme can collect more information under the condition of limited DPB capacity, so that the current coding block has a higher probability of finding a matching block. The overall effect is that, at the cost of a slight increase in encoder and decoder complexity, the efficiency of video coding is effectively improved.
Description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a flow chart of a video inter-prediction encoding method based on reference plates provided by an embodiment of the present invention;

Fig. 2 is a flow chart of generating a reference plate containing single content provided by an embodiment of the present invention;

Fig. 3 is a flow chart of generating a reference plate containing duplicate contents provided by an embodiment of the present invention;

Fig. 4 is a flow chart of the reference plate retrieval and use process provided by an embodiment of the present invention;

Fig. 5 is a schematic diagram of computing the histogram of the current CTU to be encoded provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are only part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the protection scope of the present invention.
Fig. 1 is a kind of flow chart of video interprediction encoding method based on reference plate provided in an embodiment of the present invention.
As shown in figure 1, which mainly comprises the steps:
Step 1, the reference plate comprising single content are generated and management:Short-term reference frame list will be removed in current reference frame
When, based on the distribution situation of intra-frame prediction block on current reference frame, select the reference plate comprising single content and be added to reference plate
Relief area;If reference plate relief area is full, according to the bag that the reference plate priority calculation of predefined determines to select
Whether the reference plate containing single content is added to reference plate relief area;
Step 2, the reference plate comprising duplicate contents are generated and management:Static, dynamic present in detection video sequence and
The content that interval is repeated, is then carried out high-quality coding according to testing result, then is encoded based on high-quality to the block of duplicate contents
The distribution situation of block generates the reference plate comprising duplicate contents;Reference plate comprising duplicate contents is directly added into reference plate buffering
Area, then according to reference plate priority ratio is compared with being managed;
Step 3, reference plate retrieval and utilization: for the current CTU (coding tree unit) to be encoded, several candidate reference plates are coarsely selected from the reference plate buffer by histogram matching, and one reference plate is then chosen by fast motion estimation. The chosen reference plate is processed in a predetermined manner into a new reference frame, which is placed alongside the original reference frames of the video encoder for reference by the encoding blocks in the current CTU. During encoding, a flag bit indicating whether the encoding blocks in the current CTU reference the reference plate and, if it is referenced, the index of the reference plate in the reference plate buffer are written into the bitstream.
For ease of understanding, these three processes are elaborated below.
First, generation and management of reference plates containing single content.
This process mainly comprises generating a reference plate containing single content and managing reference plates containing single content, as detailed below:
1. Generating a reference plate containing single content.
As shown in Fig. 2, the process of generating a reference plate containing single content is as follows: when the current reference frame is about to be removed from the short-term reference frame list, a sliding window of the specified reference plate size is constructed and the current reference frame is scanned from top to bottom and left to right to find the position where the area of intra-predicted blocks inside the window reaches its maximum. If this maximum exceeds a specified threshold, a reference plate containing single content is constructed from the reconstructed image region in the corresponding window. The obtained reference plate records not only the pixel values of the reconstructed image region but also the following information: the position of the window, the mean quantization parameter (QP) of all blocks in the reference plate, the order of the current reference frame in the video sequence (Picture Order Count, POC), the number of times the reference plate has been referenced, and the order of the reference plate.
After a reference plate containing single content is selected, the intra-predicted blocks inside the window at its position are marked as inter-predicted, and the above scanning process is repeated until no further reference plate containing single content that satisfies the requirement can be selected.
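The sliding-window scan above can be sketched as follows; this is a minimal illustration under assumed values (the scan stride and the area threshold are not specified in the text), not the patented implementation.

```python
# Sketch of the sliding-window scan that locates the window with the
# largest intra-predicted area (stride and threshold are assumed values).
def find_single_content_window(intra_mask, win, stride=16, threshold=0.5):
    """intra_mask: 2-D list of 0/1 flags, 1 where a sample belongs to an
    intra-predicted block. Returns (best_x, best_y, best_area), or None
    when no window's intra area exceeds threshold * win * win."""
    h, w = len(intra_mask), len(intra_mask[0])
    best = None
    for y in range(0, h - win + 1, stride):      # top to bottom
        for x in range(0, w - win + 1, stride):  # left to right
            area = sum(intra_mask[y + dy][x + dx]
                       for dy in range(win) for dx in range(win))
            if best is None or area > best[2]:
                best = (x, y, area)
    if best and best[2] > threshold * win * win:
        return best
    return None
```

After a window is selected, the intra flags inside it would be cleared (marked inter) and the scan repeated, mirroring the loop described in the text.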
As an example, the number and size of reference plates allocated for a current reference frame at different image resolutions are shown in Table 1.
Image resolution | Reference plate size | Number of reference plates
416×240 | 80×80 | 16
832×480 | 112×112 | 32
1024×768 | 112×112 | 64
1280×720 | 120×120 | 64
1920×1080 | 256×256 | 64
Table 1. Number and size of reference plates allocated at different image resolutions
2. Managing reference plates containing single content.
Each time a reference plate containing single content is generated, it is added to the reference plate buffer.
If there is an empty slot in the reference plate buffer, the reference plate containing single content selected by the preceding process is placed in directly.
If the reference plate buffer is full, the priorities of the reference plates in the buffer and of the selected reference plate containing single content must be computed according to the predefined reference plate priority calculation, the computing formula being:
P_c = (QP_0 - QP_c) × N_QP + I_c + Tr_c × N_r;
In the above formula, P_c is the priority of a reference plate, QP_0 is a preset constant, QP_c is the mean QP value of the reference plate, N_QP and N_r are specified constants, I_c is the order of the reference plate, and Tr_c is the number of times the reference plate has been referenced by encoding blocks. Note that when a reference plate is placed into the buffer, its order I_c is the number of reference plates already in the buffer plus 1 (i.e., the first reference plate placed into the buffer has order 1, and so on); if a reference plate is selected as a reference during encoding, its order is updated to the maximum order among the reference plates then in the buffer plus 1. In addition, when a reference plate is newly placed into the buffer, its reference count Tr_c is set to 0; thereafter, each time the reference plate is selected as a reference during encoding, its reference count is incremented by 1.
Afterwards, the priority of the lowest-priority reference plate in the buffer is compared with that of the new reference plate containing single content to decide whether the new reference plate containing single content is placed into the buffer. If the priority of the selected reference plate containing single content is higher than that of the lowest-priority reference plate in the buffer, the lowest-priority reference plate is replaced with the selected reference plate containing single content; otherwise, the selected reference plate containing single content is discarded.
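The priority formula and the replace-lowest policy can be sketched as below; QP_0, N_QP, and N_r are assumed illustrative constants, and the order assigned to a replacement patch (maximum order plus 1) is a simplification, since the text only specifies the order for insertion into an empty slot and for re-referencing.

```python
# Priority P_c = (QP0 - QPc) * N_QP + I_c + Tr_c * N_r and the
# replace-lowest-priority policy for a full buffer (constants assumed).
QP0, N_QP, N_r = 51, 2, 4  # assumed values for illustration

def priority(patch):
    return (QP0 - patch["qp"]) * N_QP + patch["order"] + patch["refs"] * N_r

def add_to_buffer(buffer, patch, capacity):
    patch.setdefault("refs", 0)                # Tr_c starts at 0
    if len(buffer) < capacity:
        patch["order"] = len(buffer) + 1       # first patch gets order 1
        buffer.append(patch)
        return True
    # order of a replacement patch: max order + 1 (simplifying assumption)
    patch["order"] = max(p["order"] for p in buffer) + 1
    worst = min(buffer, key=priority)
    if priority(patch) > priority(worst):      # replace lowest priority
        buffer[buffer.index(worst)] = patch
        return True
    return False                               # discard the new patch
```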
Second, generation and management of reference plates containing repeated content.
This process mainly comprises generating a reference plate containing repeated content and managing reference plates containing repeated content, as detailed below:
1. Generating a reference plate containing repeated content.
In general, a video sequence contains a large amount of repeated content. The present invention proposes to detect the repeated content in the video sequence, encode it at high quality, and store it in reference plates for subsequent encoding to reference. The flow of generating a reference plate containing repeated content is shown in Fig. 3 and mainly comprises three parts: 1) repeated content detection; 2) coding parameter determination; 3) generation of the reference plate containing repeated content. The details of these three parts are as follows:
1) Repeated content detection.
In the embodiment of the present invention, the repeated content in a video sequence is divided into three classes: static repeated content, dynamic repeated content, and interval repeated content. The detection schemes adopted for these different cases are described below. The detection unit employed in these schemes is the coding tree unit (CTU) of HEVC; after a CTU containing repeated content is detected, the repetition count of that CTU is also predicted.
A. Static repeated content detection: static repeated content in a video sequence, such as the background in a conference sequence, often repeats at a fixed spatial position for a period of time. During encoding, the variance at each pixel position is obtained from the original pixels of the current frame and the original pixels of its reference frames. As encoding proceeds, several groups of consecutive variances are collected, and the mean of the variances at the same pixel position is used as the basis for detecting static repeated content. The mean of the variances within each CTU of the image is computed; if the mean of the current CTU is below a set threshold, the current CTU is marked as static repeated, and the repetition count of the static repeated content is estimated as L_A = S × frame rate. For the following S seconds, dynamic and interval repeated content is not detected at the CTU containing static repeated content. As an example, S = 10 may be set here, i.e., a static repetition of 10 seconds is assumed.
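A minimal sketch of the static-repeat test described above: the per-pixel variances of a CTU are averaged and compared against a threshold, and L_A = S × frame rate. The threshold, S, and the frame rate used here are assumed values for illustration.

```python
def detect_static_ctu(variances, threshold=4.0, s_seconds=10, fps=30):
    """variances: flat list of per-pixel variance values inside one CTU,
    collected over several frames. Marks the CTU static when the mean
    variance is below the threshold (threshold/S/fps assumed)."""
    mean_var = sum(variances) / len(variances)
    if mean_var < threshold:
        repeat_count = s_seconds * fps   # L_A = S * frame rate
        return True, repeat_count
    return False, 0
```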
B. Dynamic repeated content detection: dynamic repeated content in a video sequence, such as a background that changes as the camera moves or a person moving within the video, is detected by motion estimation. For a CTU that is not marked as static repeated, motion estimation is performed between the original pixels of the CTU and the original pixels of its reference frames to obtain the corresponding motion vectors and motion estimation residuals. If all residual values are below a set threshold, it is inferred that dynamic repeated content exists in the corresponding CTU, and the dynamic repeated content is assumed to move uniformly for a period of time; its mean per-frame motion MV_mean is:
MV_mean = (1/N) × Σ_{i=1}^{N} MV_i / d_i;
where MV_i is the motion vector estimated between the CTU and its i-th reference frame, d_i is the time interval between the frame containing the CTU and that reference frame, and N is the number of reference frames. After the mean motion is obtained, the repetition count L_B of the dynamic repeated content is estimated as:
L_B = min{L_x, L_y};
where L_x and L_y are the survival times estimated in the horizontal and vertical directions, i.e., the number of frames before the content leaves the picture: L_x is the horizontal distance from the current CTU to the picture boundary ((W − C_x) or C_x, depending on the sign of MVX_mean) divided by |MVX_mean|, and L_y is defined analogously in the vertical direction. Here W and H are the width and height of the video sequence, C_x and C_y are the horizontal and vertical coordinates of the current CTU, and MVX_mean and MVY_mean are the x and y components of MV_mean. After a CTU containing dynamic repeated content is detected, no new dynamic repeated content is detected within the estimated survival time of the dynamic repeated content.
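The two estimates above can be sketched as follows. Note that reading MV_mean as the average per-frame motion MV_i/d_i, and L_x, L_y as boundary distance divided by motion component, is an interpretation of the text, not the verbatim formulas of the patent.

```python
def mean_motion(mvs, dists):
    """MV_mean as the average per-frame motion: mean of MV_i / d_i over
    the N reference frames (interpretation of the formula in the text)."""
    n = len(mvs)
    mvx = sum(mx / d for (mx, _), d in zip(mvs, dists)) / n
    mvy = sum(my / d for (_, my), d in zip(mvs, dists)) / n
    return mvx, mvy

def dynamic_repeat_count(w, h, cx, cy, mvx, mvy):
    """L_B = min(L_x, L_y): frames until the content reaches the picture
    boundary under uniform motion (boundary-distance reading assumed)."""
    inf = float("inf")
    lx = ((w - cx) / mvx if mvx > 0 else cx / -mvx) if mvx else inf
    ly = ((h - cy) / mvy if mvy > 0 else cy / -mvy) if mvy else inf
    return min(lx, ly)
```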
C. Interval repeated content detection: unlike static and dynamic repeated content, which persists continuously in the video sequence, interval repeated content usually cannot find a match within the range of its short-term reference frames, so the present invention detects interval repeated content based on reference plates. When the current reference plate is referenced by some CTU, the total cost of the fast motion estimation of that CTU on the current reference plate is recorded (the fast motion estimation process is introduced later); meanwhile, the positions matched on the current reference plate during fast motion estimation are also recorded. If the current reference plate is referenced several times, it keeps the mean of the fast motion estimation total costs and the union of the matched positions. When the M-th frame is being encoded, a reference plate retrieval is performed in the reference plate buffer for each CTU of the M-th frame; if the obtained fast motion estimation total cost is below a certain multiple of the mean total cost recorded for the current reference plate, and the matched position falls within a certain multiple of the recorded union of matched positions, the content of the corresponding CTU is judged to repeat the content of the retrieved reference plate. If the retrieved reference plate had been referenced l times before the M-th frame is encoded, it is inferred that similar repeated content will appear L_C more times in the subsequent encoding process, L_C being extrapolated from the l references observed over the first M frames to the remaining frames of the sequence, where P is the length of the video sequence.
2) Coding parameter determination.
Assuming that a certain piece of repeated content is estimated to repeat L times, an optimal coding parameter is computed for the corresponding repeated content; the coding parameter is likewise determined at the CTU level. Here λ is the basic Lagrange parameter of the whole video sequence and λ_i is the coding parameter set for the i-th occurrence of the CTU corresponding to the repeated content: λ_1, the coding parameter set for the first occurrence of the CTU, is relatively low (so that the first occurrence is encoded at high quality), while from the second occurrence onward λ_i is equal to the normal parameter. After the coding parameter λ_i is determined, the corresponding QP_i is obtained according to the following formula:
QP_i = 4.2005 × ln(λ_i) + 13.7122
The obtained QP_i is then used as the quantization parameter when the corresponding CTU is encoded. Meanwhile, a CTU that uses coding parameter λ_1 is marked as high-quality encoded, and its QP value is written into the bitstream.
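The λ-to-QP relation given above is easy to check in code; `lambda_from_qp` is an added inverse, not part of the text, included only to illustrate the round trip.

```python
import math

def qp_from_lambda(lmbda):
    """QP_i = 4.2005 * ln(lambda_i) + 13.7122, as given in the text."""
    return 4.2005 * math.log(lmbda) + 13.7122

def lambda_from_qp(qp):
    """Inverse of the above (added for illustration)."""
    return math.exp((qp - 13.7122) / 4.2005)
```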
3) Generating the reference plate containing repeated content.
In the embodiment of the present invention, the reference plate containing repeated content is generated based on the distribution of high-quality encoded CTUs; the generation process is similar to that of the reference plate containing single content explained above, as follows:
After the current frame has been encoded, a sliding window of the specified reference plate size is constructed and the current reference frame is scanned from top to bottom and left to right to find the position where the number of high-quality encoded CTUs inside the window reaches its maximum. If this maximum exceeds a specified threshold, a reference plate is generated from the pixel values inside the window. The generated reference plate is a reference plate containing repeated content; it carries the same information as a single-content reference plate, such as the pixel values and the mean QP value of all CTUs in the window. After a repeated-content reference plate is generated, the high-quality coding marks at its position are changed to regular-quality coding, and the above scanning process is repeated until no further reference plate containing repeated content can be generated.
The generated reference plate containing repeated content is added directly to the reference plate buffer, as follows: if one or more CTUs in the reference plate containing repeated content have referenced reference plates already in the buffer, the reference plate containing repeated content replaces an existing reference plate in the buffer, specifically the reference plate referenced the most times by all the CTUs of the current repeated-content reference plate; otherwise, the reference plate containing repeated content replaces the lowest-priority reference plate in the buffer.
2. Managing reference plates containing repeated content.
For a reference plate containing repeated content in the reference plate buffer, its priority is computed according to the reference plate priority calculation described above, and it is managed in the same way as single-content reference plates.
Third, reference plate retrieval and utilization.
Once there are several reference plates in the reference plate buffer, inter prediction using reference plates becomes possible when a block is encoded. The flow of reference plate retrieval and utilization is shown in Fig. 4 and mainly consists of three parts: reference plate retrieval, coding selection by rate-distortion optimization (RDO), and the syntax structure for reference plates, which are introduced in turn below.
1. Reference plate retrieval.
Since the number of reference plates in the buffer may be large, performing motion estimation against each of them would significantly increase encoder complexity. Therefore, in the embodiment of the present invention, the reference plate most likely to be useful is first retrieved using histogram matching and fast motion estimation, and further motion estimation is then performed only on the retrieved reference plate. The details of histogram matching and fast motion estimation are as follows.
Histogram matching: first, the histogram distances between the current CTU to be encoded and all reference plates in the buffer are computed; then several reference plates with smaller histogram distances are selected in a predetermined manner as the result of histogram matching. As an example, after all histogram distances are computed, they may be sorted in descending order and the several smallest histogram distances selected from the end of the sorted list as the result of histogram matching.
The histogram distance is computed as follows: 1) compute the histogram of the current CTU to be encoded: the luma component of the current CTU is divided into four 32×32 blocks and an H-bin histogram is computed for each block; for each of the two chroma components U and V, one H-bin histogram is computed; the six H-bin histograms are concatenated into a 6H-bin histogram (see Fig. 5 for this process); 2) for each reference plate in the buffer, blocks of size 64×64 are sampled at intervals of several pixels in the horizontal and vertical directions, and a histogram is computed for each sampled block; 3) the histogram of the current CTU is compared one by one with the histograms of all sampled blocks of a reference plate, the distances between histograms are computed, and the minimum distance is taken as the histogram distance between the current CTU and that reference plate. As an example, H may be set to 32.
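A sketch of the 6H-bin histogram construction and the minimum-distance rule described above, assuming 8-bit samples, H = 32, and an L1 histogram distance (the distance metric itself is not specified in the text).

```python
H = 32  # bins per histogram, as in the example above

def hist(samples, bins=H):
    """Histogram of 8-bit samples in [0, 255] with `bins` uniform bins."""
    h = [0] * bins
    for v in samples:
        h[v * bins // 256] += 1
    return h

def ctu_histogram(luma_blocks, u_samples, v_samples):
    """luma_blocks: the four 32x32 luma blocks as flat sample lists; one
    histogram per luma block plus one per chroma component, concatenated
    into a 6H-bin histogram."""
    out = []
    for blk in luma_blocks:
        out += hist(blk)
    out += hist(u_samples) + hist(v_samples)
    return out

def l1_distance(h1, h2):  # assumed metric; the text leaves it open
    return sum(abs(a - b) for a, b in zip(h1, h2))

def histogram_distance(ctu_hist, sampled_block_hists):
    """Minimum distance over all sampled 64x64 blocks of a reference plate."""
    return min(l1_distance(ctu_hist, h) for h in sampled_block_hists)
```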
Fast motion estimation: from the several reference plates obtained by histogram matching, the reference plate most likely to be useful is selected using fast motion estimation. Fast motion estimation between the current CTU and each reference plate is computed as follows: the current CTU is divided into small blocks of fixed size R×R, and for each R×R block the block with the minimum cost is searched in the current reference plate, where the cost consists of the sum of absolute differences (SAD) and the coding rate of the motion vector. The search first determines a starting point in the reference plate by a raster scan with a step of 5 samples, and then performs an 8-point diamond search within a certain pixel range around the starting point (for example, within 64 pixels). The costs of all R×R blocks are accumulated as the overall cost of the current CTU on the current reference plate, and the reference plate with the minimum overall cost is selected. Meanwhile, this cost is also recorded as the basis for interval repeated content detection. As an example, R may be set to 16.
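The two-stage search can be sketched as below: a raster scan every 5 samples picks a starting point, followed by a small refinement around it. For brevity the cost is SAD only (the MV rate term and the 64-pixel search window are omitted), so this is an illustration rather than the patented routine.

```python
def sad(block, plate, px, py, r):
    """SAD of an r x r block against the plate at position (px, py)."""
    return sum(abs(block[y][x] - plate[py + y][px + x])
               for y in range(r) for x in range(r))

def fast_me_block(block, plate, r):
    h, w = len(plate), len(plate[0])
    # stage 1: raster scan every 5 samples to find a starting point
    cost, bx, by = min((sad(block, plate, x, y, r), x, y)
                       for y in range(0, h - r + 1, 5)
                       for x in range(0, w - r + 1, 5))
    # stage 2: 8-point refinement around the fixed starting point
    cx0, cy0 = bx, by
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            x, y = cx0 + dx, cy0 + dy
            if 0 <= x <= w - r and 0 <= y <= h - r:
                c = sad(block, plate, x, y, r)
                if c < cost:
                    cost, bx, by = c, x, y
    return cost, (bx, by)
```

Per the text, the per-block costs would then be accumulated over the whole CTU and the reference plate with the minimum overall cost chosen.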
2. Coding selection by RDO.
After a reference plate is retrieved, the selected reference plate is placed, at the spatial position from which it was taken from its reference frame, onto a blank frame of the same size as the reference frames; meanwhile, the reference plates in the buffer that come from the same frame as the selected reference plate are also placed onto the blank frame. The blank frame is then used as a new reference frame, placed alongside the original reference frames of the video encoder, for use when the blocks in the current CTU are encoded.
When the current block to be encoded performs inter-prediction coding on an original reference frame of the video encoder, the original method is used. When inter-prediction coding is performed on the new reference frame, there are the following differences from the original reference frames: the starting point of the motion estimation search is set to the best position found by the histogram matching described above; the motion vector predictor (MVP) is selected as follows: it is checked whether the blocks at the five positions around the current prediction unit (PU) use a reference plate (single content or repeated content); the motion vector MV at a position is used as a candidate MVP if and only if that position uses a reference plate and its reference plate comes from the same reference frame as the reference plate used by the current PU. As in the original method, two MVPs are chosen from these in a specified order and added to the candidate MVP list; if fewer than two available MVPs are obtained from the spatial positions, the list is filled, in order, with the MV pointing from the position of the CTU containing the current block to the center of the reference plate and with (0, 0), until two MVPs are filled.
3. Syntax structure for reference plates.
This process mainly relates to encoding: whether a CTU to be encoded references the new reference frame and, if it does, the CTU to be encoded is considered to reference the corresponding reference plate of the new reference frame, and the index of that reference plate in the reference plate buffer must be written into the bitstream during encoding.
First, the flag indicating whether a reference plate is referenced is a single 0/1 flag bit coded with context-adaptive binary arithmetic coding (CABAC). Three context models are used, determined by whether the CTUs to the left of and above the current CTU reference a reference plate: if neither CTU uses reference plate prediction, the first context model is used; if one of them uses reference plate prediction, the second context model is used; if both use reference plate prediction, the third context model is used. If the current CTU references a reference plate, the index of the reference plate in the buffer is coded with a fixed-length code, whose length depends on the number of reference plates in the buffer when the reference plate is referenced; for example, 3 bits are used for 5-8 reference plates, 4 bits for 9-16 reference plates, and so on. Afterwards, the corresponding flag bit and the index of the reference plate in the buffer are written into the bitstream.
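The context choice and the fixed-length index code can be sketched as follows; reading the code length as ceil(log2 of the number of reference plates) is an interpretation consistent with the 3-bit/4-bit examples in the text.

```python
import math

def flag_context(left_uses_plate, above_uses_plate):
    """Returns 0, 1, or 2: the first/second/third context model, selected
    by how many of the left and above CTUs used reference plate prediction."""
    return int(left_uses_plate) + int(above_uses_plate)

def index_bits(num_plates):
    """Fixed-length code length for the buffer index: 3 bits for 5-8
    plates, 4 bits for 9-16, etc. (ceil(log2 n), our reading)."""
    return max(1, math.ceil(math.log2(num_plates)))
```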
It will be understood by those skilled in the art that a CTU can be decomposed into multiple blocks for encoding; histogram matching and fast motion estimation are performed on the CTU, so one reference plate can be found for each CTU. Then, when the multiple encoding blocks belonging to this CTU perform coding selection by RDO, they may use the original reference frames of the video encoder or the new reference frame generated from the reference plate. Finally, after all encoding blocks of this CTU have finished RDO, actual encoding is performed: if at least one encoding block in this CTU selects the new reference frame, the CTU is considered to reference the reference plate, a flag bit 1 is coded, and the index of the chosen reference plate is coded; otherwise, this CTU does not select a reference plate and a flag bit 0 is coded.
In addition, whether the current CTU has been high-quality encoded and, if so, the adjustment of the QP used in high-quality coding relative to the normal QP must be written into the bitstream, as follows: the flag indicating whether high-quality coding is performed is a single 0/1 flag bit coded with CABAC; three context models are used, determined by whether the CTUs to the left of and above the current CTU are high-quality encoded: if neither CTU is high-quality encoded, the first context model is used; if one of them is high-quality encoded, the second context model is used; if both are high-quality encoded, the third context model is used. If high-quality coding is performed, the adjustment of the high-quality QP relative to the normal QP is coded with a fixed-length code.
It will be understood by those skilled in the art that the context models mentioned above refer to the probability prediction models used for arithmetic coding, similar to the usual context models in video coding.
On the other hand, in order to illustrate the effect of the above scheme, tests were also carried out based on it. The test conditions include: inter configurations Low-delay B (LDB) and Low-delay P (LDP); basic quantization parameters (QP) set to {22, 27, 32, 37}; base software HM16.7; and the five classes B-F of the HEVC common test sequences. The baseline is HM16.7 with 5 reference frames; the compared scheme is HM with 4 reference frames plus a number of reference plates occupying the memory equivalent to 1 reference frame, where the equivalent number and size of reference plates per reference frame at different resolutions are given in Table 1 above. The experimental results are shown in Tables 2 and 3 below: Table 2 is the performance comparison under the LDB and LDP settings, and Table 3 is the encoder/decoder complexity comparison under the LDB and LDP settings. It can be seen that, relative to the baseline, the proposed scheme achieves bit-rate savings of 5.1% and 5.0% under the LDB and LDP modes respectively, while the complexity of the encoder and decoder does not increase significantly.
Table 2. Performance comparison under the LDB and LDP settings
Mode | LDB | LDP
Encoding time | 129% | 142%
Decoding time | 124% | 123%
Table 3. Encoder/decoder complexity comparison under the LDB and LDP settings
Through the above description of the embodiments, those skilled in the art can clearly understand that the above embodiments can be realized by software, or by software plus a necessary general hardware platform. Based on such an understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash disk, or a portable hard drive) and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are merely preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.
Claims (10)
1. A video inter-prediction encoding method based on reference plates, characterized by comprising:
generating and managing reference plates containing single content: when a current reference frame is about to be removed from a short-term reference frame list, selecting reference plates containing single content based on the distribution of intra-frame prediction blocks on the current reference frame and adding them to a reference plate buffer; if the reference plate buffer is full, determining according to a predefined reference plate priority calculation whether a selected reference plate containing single content is added to the reference plate buffer;
generating and managing reference plates containing repeated content: detecting static, dynamic, and interval repeated content present in the video sequence, then encoding the blocks of repeated content at high quality according to the detection result, and generating reference plates containing repeated content based on the distribution of the high-quality encoded blocks; adding a reference plate containing repeated content directly to the reference plate buffer, and then managing it by comparing reference plate priorities;
retrieving and utilizing reference plates: for a current CTU to be encoded, coarsely selecting a plurality of candidate reference plates from the reference plate buffer by histogram matching, and then choosing one reference plate by fast motion estimation; processing the chosen reference plate in a predetermined manner and placing it alongside the original reference frames of the video encoder for reference by the encoding blocks in the current CTU; and, during encoding, writing into the bitstream a flag bit indicating whether the encoding blocks in the current CTU reference the reference plate and, if the reference plate is referenced, the index of the reference plate in the reference plate buffer.
2. The video inter-prediction encoding method based on reference plates according to claim 1, characterized in that selecting reference plates containing single content based on the distribution of intra-frame prediction blocks on the current reference frame and adding them to the reference plate buffer when the current reference frame is about to be removed from the short-term reference frame list comprises:
when the current reference frame is about to be removed from the short-term reference frame list, constructing a sliding window according to a specified reference plate size and scanning the current reference frame from top to bottom and left to right to find the position where the area of intra-predicted blocks inside the window reaches its maximum; if the maximum exceeds a specified threshold, constructing a reference plate containing single content from the reconstructed image region in the corresponding window; the obtained reference plate recording not only the pixel values of the reconstructed image region but also the following information: the position of the window, the mean quantization parameter of all blocks in the reference plate, the order of the current reference frame in the video sequence, the number of times the reference plate has been referenced, and the order of the reference plate;
after a reference plate containing single content is selected, marking the intra-predicted blocks inside the window at its position as inter-predicted, and repeating the above scanning process until no further reference plate containing single content that satisfies the requirement can be selected.
3. The video inter-prediction encoding method based on reference plates according to claim 1 or 2, characterized in that, if the reference plate buffer is full, determining according to the predefined reference plate priority calculation whether the selected reference plate containing single content is added to the reference plate buffer comprises:
each time a reference plate containing single content is generated, adding it to the reference plate buffer; if the reference plate buffer is full, computing, according to the predefined reference plate priority calculation, the priorities of the reference plates in the buffer and of the selected reference plate containing single content, the computing formula being:
P_c = (QP_0 - QP_c) × N_QP + I_c + Tr_c × N_r;
in the above formula, P_c is the priority of a reference plate, QP_0 is a preset constant, QP_c is the mean QP value of the reference plate, N_QP and N_r are specified constants, I_c is the order of the reference plate, and Tr_c is the number of times the reference plate has been referenced by encoding blocks;
if the priority of the selected reference plate containing single content is higher than that of the lowest-priority reference plate in the reference plate buffer, replacing the lowest-priority reference plate in the buffer with the selected reference plate containing single content; otherwise, discarding the selected reference plate containing single content.
4. a kind of video interprediction encoding method based on reference plate according to claim 1, it is characterised in that described
The content that static, dynamic and interval are repeated present in detection video sequence includes:
The detection unit of different duplicate contents is code tree unit CTU, and after the CTU for detecting duplicate contents, prediction is corresponding
The number of repetition of duplicate contents in CTU;Detailed process is as follows:
Static duplicate contents detection:The average of each CTU internal variance in image is calculated, if the average of current CTU is less than setting
Threshold value, the current CTU of labelling is static repetition, and estimates number of repetition L of static duplicate contentsA=S × frame per second, thereafter S
In second, the content of dynamic and spaced repetition is not detected at the CTU comprising static duplicate contents;
dynamic repeated content detection: for a CTU that is not a static repetition, motion estimation is performed between the original pixels of the CTU and the original images of its reference frames, yielding the corresponding motion vectors and motion estimation residuals; if all residual values are below a set threshold, it is inferred that the corresponding CTU contains dynamic repeated content, and the dynamic repeated content is assumed to move uniformly over a period of time; the average motion MV_mean is:
MV_mean = (1/N) × Σ_{i=1..N} MV_i / d_i;
where MV_i is the motion vector estimated between the CTU and one of its reference frames, d_i is the time interval between the frame containing the CTU and that reference frame, and N is the number of reference frames; after the average motion is obtained, the number of repetitions L_B of the dynamic repeated content is estimated as:
L_B = min{L_x, L_y};
where L_x and L_y are the lifetimes estimated in the horizontal and vertical directions respectively, W and H are the width and height of the video sequence, C_x and C_y are the horizontal and vertical coordinates of the current CTU, and MVX_mean and MVY_mean are the x and y components of MV_mean; after a CTU containing dynamic repeated content is detected, no new dynamic repeated content is detected within the estimated lifetime of that dynamic repeated content;
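The averaging and lifetime estimation above can be sketched as follows; the claimed formulas for L_x and L_y are not reproduced in the text, so the "frames until the content drifts out of the picture" reading below is an assumption:

```python
def mv_mean(mvs, intervals):
    """Per-frame average motion, MV_mean = (1/N) * sum_i(MV_i / d_i),
    under the uniform-motion assumption. mvs: (x, y) vectors."""
    n = len(mvs)
    x = sum(vx / d for (vx, _), d in zip(mvs, intervals)) / n
    y = sum(vy / d for (_, vy), d in zip(mvs, intervals)) / n
    return (x, y)

def lifetime_1d(coord, size, speed):
    """Frames until the content leaves the picture along one axis."""
    if speed == 0:
        return float("inf")
    return (size - coord) / speed if speed > 0 else coord / -speed

def dynamic_repeat_count(cx, cy, w, h, mv):
    """L_B = min(L_x, L_y)."""
    return min(lifetime_1d(cx, w, mv[0]), lifetime_1d(cy, h, mv[1]))
```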
interval-repeated content detection: when the current reference plate is referenced by some CTU, the total cost of the fast motion estimation of that CTU on the current reference plate is recorded, together with the positions matched by the current reference plate in the fast motion estimation; if the current reference plate is referenced multiple times, the average of the recorded total fast-motion-estimation costs and the union of the matched positions are retained; when frame M is encoded, reference plate retrieval is carried out in the reference plate buffer area for each CTU of frame M; if the total fast-motion-estimation cost obtained is below a certain multiple of the recorded average total cost of the current reference plate, and the matched position lies within a certain multiple of the recorded union of matched positions, it is judged that the content of the corresponding CTU repeats the content of the retrieved reference plate; if the retrieved reference plate was referenced l times before frame M is encoded, it is inferred that similar repeated content will occur another L_C times in the subsequent encoding process; L_C is calculated from the number of prior references l, the frame index M, and the length P of the video sequence.
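The extrapolation step above can be sketched as follows; since the claimed L_C formula is not reproduced in the text, the proportional estimate below (past reference rate carried forward over the remaining frames) is an assumption:

```python
def interval_repeat_count(l_prior, m, p):
    """Assumed extrapolation L_C = l * (P - M) / M: the reference plate
    was referenced l_prior times in the first m frames, so scale that
    rate over the remaining (p - m) frames of a p-frame sequence."""
    return l_prior * (p - m) / m
```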
5. The video inter-prediction encoding method based on a reference plate according to claim 4, wherein said performing high-quality encoding on the blocks of repeated content according to the detection results comprises:
assuming that certain repeated content is estimated to repeat L times, an optimal coding parameter is calculated for the corresponding repeated content; the coding parameter is determined at the CTU level; λ is the basic Lagrange parameter of the whole video sequence, and λ_i is the coding parameter set for the i-th occurrence of the CTU corresponding to the repeated content; after the coding parameter λ_i is determined, the corresponding QP_i is obtained according to the following equation:
QP_i = 4.2005 × ln(λ_i) + 13.7122
the obtained QP_i is used as the quantization parameter when the corresponding CTU is encoded; meanwhile, a CTU using coding parameter λ_1 is marked as high-quality encoded, and its QP value is written into the bitstream.
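The λ-to-QP mapping given in the claim, together with an assumed per-occurrence λ allocation (the claim's own λ_i formula is not reproduced in the text, so dividing the base λ by the number of remaining repetitions is an assumption), can be sketched as:

```python
import math

def lambda_i(base_lambda, total_repeats, occurrence):
    """Assumed allocation: lambda_i = lambda / (L - i + 1), so the first
    occurrence of L-times-repeated content gets the smallest lambda
    (i.e. the highest quality)."""
    return base_lambda / (total_repeats - occurrence + 1)

def qp_from_lambda(lam):
    """QP_i = 4.2005 * ln(lambda_i) + 13.7122, as given in the claim."""
    return 4.2005 * math.log(lam) + 13.7122
```

Under this allocation, a lower λ_1 yields a lower QP_1, i.e. the first occurrence is encoded at higher quality than later ones.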
6. The video inter-prediction encoding method based on a reference plate according to claim 5, wherein said generating a reference plate containing repeated content based on the distribution of high-quality encoded blocks, adding the reference plate containing repeated content directly to the reference plate buffer area, and then managing the buffer area according to reference plate priority comparison comprises:
a reference plate containing repeated content is generated based on the distribution of high-quality encoded CTUs, as follows: after the current frame is encoded, a sliding window of the specified reference plate size is constructed, and the current reference frame is scanned from top to bottom and from left to right to find the position at which the number of high-quality encoded CTUs inside the window reaches its maximum; if this maximum exceeds a specified threshold, a reference plate is generated from the pixel values inside the window; the generated reference plate is a reference plate containing repeated content and also carries the following information: the pixel values and the average QP value of all CTUs in the window; after a reference plate of repeated content is generated, the high-quality coding flag at its position is changed to normal-quality coding, and the above scanning process continues until no further reference plate of repeated content can be generated;
the generated reference plate containing repeated content is added directly to the reference plate buffer area, as follows: if one or more CTUs within the reference plate containing repeated content have referenced reference plates, the reference plate containing repeated content replaces an existing reference plate in the buffer area; specifically, the replaced reference plate is the one referenced the most times by all CTUs within the current reference plate containing repeated content; otherwise, the reference plate containing repeated content replaces the lowest-priority reference plate in the buffer area;
afterwards, the reference plates containing repeated content in the reference plate buffer area are managed according to reference plate priority comparison.
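The sliding-window scan above can be sketched as follows; representing the frame as a 2-D grid of per-CTU high-quality flags, and measuring the window in CTU units, are illustrative assumptions:

```python
def best_hq_window(hq_flags, win_h, win_w):
    """Scan top-to-bottom, left-to-right for the window position covering
    the most high-quality-coded CTUs. hq_flags: 2-D list of 0/1 per CTU.
    Returns (max_count, (row, col)); earlier positions win ties."""
    rows, cols = len(hq_flags), len(hq_flags[0])
    best, best_pos = -1, None
    for r in range(rows - win_h + 1):
        for c in range(cols - win_w + 1):
            n = sum(hq_flags[rr][cc]
                    for rr in range(r, r + win_h)
                    for cc in range(c, c + win_w))
            if n > best:
                best, best_pos = n, (r, c)
    return best, best_pos
```

A reference plate would then be generated from the window at `best_pos` whenever `best` exceeds the specified threshold.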
7. The video inter-prediction encoding method based on a reference plate according to claim 1, wherein the step of roughly selecting multiple candidate reference plates from the reference plate buffer area by means of histogram matching and then choosing one reference plate by means of fast motion estimation comprises:
histogram matching: first, the histogram distances between the CTU currently to be encoded and all reference plates in the reference plate buffer area are calculated; then, in a predetermined way, several reference plates with smaller histogram distances are selected as the result of the histogram matching; the histogram distance is calculated as follows: 1) the histogram of the CTU currently to be encoded is calculated: the luminance component of the CTU currently to be encoded is divided into four 32×32 blocks, and an H-bin histogram is computed for each block; for each of the two chrominance components U and V, an H-bin histogram is computed; the six H-bin histograms are concatenated into a 6H-bin histogram; 2) each block of 64×64 size in each reference plate in the buffer area is sampled once every several pixels in the horizontal and vertical directions, and the histogram of the sampled block is calculated; 3) the histogram of the CTU currently to be encoded is compared one by one with the histograms of all sampled blocks of a reference plate, the distances between the histograms are calculated, and the minimum distance is selected as the histogram distance between the CTU currently to be encoded and that reference plate;
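The 6H-bin descriptor above can be sketched as follows; H = 16 bins and the L1 distance are illustrative assumptions, as the claim fixes neither the bin count nor the distance metric:

```python
def hist(samples, bins=16, vmax=256):
    """H-bin histogram of one block's samples (H = 16 is illustrative)."""
    h = [0] * bins
    for s in samples:
        h[s * bins // vmax] += 1
    return h

def ctu_descriptor(y, u, v, bins=16):
    """Concatenate six H-bin histograms: four 32x32 luma quadrants of a
    64x64 CTU (y: 2-D list) plus the U and V components (flat sample
    lists), giving a 6H-dimensional histogram."""
    quads = [[y[r][c] for r in range(ro, ro + 32) for c in range(co, co + 32)]
             for ro in (0, 32) for co in (0, 32)]
    parts = [hist(q, bins) for q in quads] + [hist(u, bins), hist(v, bins)]
    return [x for p in parts for x in p]

def hist_distance(d1, d2):
    """L1 distance between two descriptors (assumed metric)."""
    return sum(abs(a - b) for a, b in zip(d1, d2))
```

The per-plate distance would then be the minimum of `hist_distance` over all sampled 64×64 blocks of that reference plate.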
fast motion estimation: from the multiple reference plates obtained by histogram matching, the reference plate most likely to be used is selected by means of fast motion estimation; the fast motion estimation between the CTU currently to be encoded and each reference plate is performed as follows: the current CTU is divided into small blocks of fixed size R×R, and for each R×R block the minimum-cost block is searched for in the current reference plate, where the cost consists of the sum of absolute differences (SAD) and the bit rate of encoding the motion vector; the search first determines its starting point in the reference plate by a raster scan with a step of 5 pixels, and then performs an 8-point diamond search within a certain pixel range around that starting point; the costs of all R×R blocks are accumulated as the overall cost of the CTU currently to be encoded on the current reference plate, and the reference plate with the minimum overall cost is then selected.
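The two-stage search above can be sketched as follows; the iterative 8-neighbor refinement is a simplified stand-in for the claimed 8-point diamond search, and the ±7-pixel refinement range is an assumption:

```python
def raster_start(cost, width, height, step=5):
    """Coarse raster scan every `step` pixels to choose the search start."""
    candidates = [(x, y) for y in range(0, height, step)
                  for x in range(0, width, step)]
    return min(candidates, key=cost)

def refine_8pt(cost, start, max_range=7):
    """Repeatedly move to the best of the 8 neighbors until none improves,
    staying within +/-max_range of the start point."""
    best = start
    improved = True
    while improved:
        improved = False
        bx, by = best
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cand = (bx + dx, by + dy)
                if (abs(cand[0] - start[0]) > max_range
                        or abs(cand[1] - start[1]) > max_range):
                    continue
                if cost(cand) < cost(best):
                    best, improved = cand, True
    return best
```

Here `cost` would return SAD plus the motion-vector bit cost for a candidate position; the per-block minimum costs are then summed to rank the candidate reference plates.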
8. The video inter-prediction encoding method based on a reference plate according to claim 1 or 7, wherein said processing the selected reference plate into a new reference frame in a predetermined way and placing it alongside the original reference frames of the video encoder for reference by the current block to be encoded comprises:
the selected reference plate is placed onto a blank canvas of the same size as a reference frame, at the position from which it was taken from its reference frame; at the same time, the reference plates in the buffer area that come from the same frame as the selected reference plate are also placed onto the blank canvas, and the canvas is then used as a new reference frame;
the new reference frame is placed alongside the original reference frames of the video encoder and is used when the current block to be encoded is encoded; when the current block to be encoded performs inter-prediction encoding on the new reference frame, the search starting point of the motion estimation is set to the optimal position found by the histogram matching; the motion vector predictor (MVP) is chosen as follows: the blocks at five positions around the current prediction unit (PU) are checked for whether they used a reference plate; if and only if a block used a reference plate, and its reference plate and the reference plate used by the current PU come from the same reference frame, the motion vector (MV) at that position is used as a candidate MVP; two MVPs are chosen from these in order and added to the candidate MVP list; if fewer than two available MVPs are obtained from the spatial positions, the list is filled, in order, with the MV pointing from the position of the current block to be encoded to the center of the reference plate and then with (0, 0), until two MVPs are filled.
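The MVP selection rule above can be sketched as follows; representing each spatial neighbor as a `(used_plate, plate_frame, mv)` triple is an illustrative encoding of the claimed conditions:

```python
def build_mvp_list(neighbors, current_plate_frame, center_mv):
    """Scan the five spatial neighbor positions in order; a neighbor's MV
    qualifies only if that neighbor used a reference plate from the same
    frame as the current PU's plate. A shortfall below two candidates is
    padded with the vector to the plate center, then with (0, 0)."""
    mvps = [mv for used, frame, mv in neighbors
            if used and frame == current_plate_frame][:2]
    for pad in (center_mv, (0, 0)):
        if len(mvps) < 2:
            mvps.append(pad)
    return mvps
```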
9. The video inter-prediction encoding method based on a reference plate according to claim 1, wherein said writing into the bitstream a flag bit indicating whether the encoding blocks in the CTU currently to be encoded reference a reference plate and, when they do, the index of the reference plate in the reference plate buffer area comprises:
the flag bit for whether a reference plate is referenced is a 0/1 flag bit encoded with context-adaptive binary arithmetic coding; three context models are used, determined by whether the CTUs to the left of and above the CTU currently to be encoded referenced a reference plate: if neither CTU is predicted using a reference plate, the first context model is used; if one of them is predicted using a reference plate, the second context model is used; if both used a reference plate, the third context model is used;
if the CTU currently to be encoded references a reference plate, the index of the reference plate in the buffer area is coded with a fixed-length code whose length depends on the number of reference plates in the reference plate buffer area at the moment the reference plate is referenced;
afterwards, the corresponding flag bit and the index of the reference plate in the buffer area are written into the bitstream.
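The context selection and the index code length can be sketched as follows; mapping "none / one / both neighbors used a plate" to context indices 0–2 follows the claim, while `ceil(log2(n))` is an assumed reading of "length depends on the number of reference plates":

```python
import math

def flag_context(left_used, above_used):
    """Choose among three CABAC context models for the reference-plate
    flag: 0 if neither neighbor CTU used a plate, 1 if exactly one did,
    2 if both did."""
    return int(bool(left_used)) + int(bool(above_used))

def index_code_bits(buffer_count):
    """Assumed fixed-length code width for the plate index."""
    return max(1, math.ceil(math.log2(buffer_count)))
```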
10. The video inter-prediction encoding method based on a reference plate according to claim 9, wherein the method further comprises: writing into the bitstream whether the CTU currently to be encoded has been high-quality encoded and, if high-quality encoding was performed, the adjustment value of the high-quality-encoding QP relative to the normal QP; specifically:
the flag bit for whether high-quality encoding is performed is a 0/1 flag bit encoded with context-adaptive binary arithmetic coding; three context models are used, determined by whether the CTUs to the left of and above the CTU currently to be encoded were high-quality encoded: if neither CTU was high-quality encoded, the first context model is used; if one of them was high-quality encoded, the second context model is used; if both were high-quality encoded, the third context model is used;
if high-quality encoding is performed, the adjustment value of the QP used for high-quality encoding relative to the normal QP is coded with a fixed-length code.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610979281.9A CN106507106B (en) | 2016-11-08 | 2016-11-08 | Video interprediction encoding method based on reference plate |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106507106A true CN106507106A (en) | 2017-03-15 |
CN106507106B CN106507106B (en) | 2018-03-06 |
Family
ID=58323882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610979281.9A Active CN106507106B (en) | 2016-11-08 | 2016-11-08 | Video interprediction encoding method based on reference plate |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106507106B (en) |
- 2016-11-08: CN application CN201610979281.9A filed; granted as CN106507106B (en), status Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101461242A (en) * | 2006-03-30 | 2009-06-17 | LG Electronics Inc. | A method and apparatus for decoding/encoding a video signal |
CN102067608A (en) * | 2008-06-25 | 2011-05-18 | Qualcomm Incorporated | Fragmented reference in temporal compression for video coding |
CN102461171A (en) * | 2009-05-01 | 2012-05-16 | Thomson Licensing | Reference picture lists for 3DV |
US20140086322A1 (en) * | 2011-06-07 | 2014-03-27 | Sony Corporation | Image processing device and method |
US20140072038A1 (en) * | 2011-09-29 | 2014-03-13 | Telefonaktiebolaget LM Ericsson (publ) | Reference Picture List Handling |
CN104768017A (en) * | 2014-01-03 | 2015-07-08 | MediaTek Inc. | Video coding method |
CN104811729A (en) * | 2015-04-23 | 2015-07-29 | Hunan Damu Information Technology Co., Ltd. | Multi-reference-frame encoding method for videos |
Non-Patent Citations (6)
Title |
---|
B. LI et al.: "A unified framework of hash-based matching for screen content coding", 《IEEE》 * |
W. XIAO et al.: "Weighted rate-distortion optimization for screen content coding", 《IEEE》 * |
WOJCIECH et al.: "Steganography in modern smartphones and mitigation techniques", 《IEEE》 * |
ZHONGBO SHI et al.: "Spatially scalable video coding for HEVC", 《IEEE》 * |
WU FENG et al.: "Model-based coding", 《Chinese Journal of Computers》 * |
WU FENG et al.: "Progressive fine-granularity scalable video coding", 《Chinese Journal of Computers》 * |
Cited By (14)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN107295334B (en) * | 2017-08-15 | 2019-12-03 | University of Electronic Science and Technology of China | Adaptive reference picture selection method |
CN107295334A (en) * | 2017-08-15 | 2017-10-24 | University of Electronic Science and Technology of China | Adaptive reference picture selection method |
US11765378B2 (en) | 2018-05-16 | 2023-09-19 | Huawei Technologies Co., Ltd. | Video coding method and apparatus |
WO2019218286A1 (en) * | 2018-05-16 | 2019-11-21 | Huawei Technologies Co., Ltd. | Video encoding and decoding method and apparatus |
CN109640089A (en) * | 2018-11-02 | 2019-04-16 | Xi'an Wanxiang Electronics Technology Co., Ltd. | Image coding/decoding method and device |
WO2020135371A1 (en) * | 2018-12-24 | 2020-07-02 | Huawei Technologies Co., Ltd. | Flag bit context modeling method and device |
US11985303B2 (en) | 2018-12-24 | 2024-05-14 | Huawei Technologies Co., Ltd. | Context modeling method and apparatus for flag |
CN111464810A (en) * | 2020-04-09 | 2020-07-28 | Shanghai Eye Control Technology Co., Ltd. | Video prediction method, video prediction device, computer equipment and computer-readable storage medium |
CN111901597A (en) * | 2020-08-05 | 2020-11-06 | Hangzhou Arcvideo Technology Co., Ltd. | CU-level QP allocation algorithm based on video complexity |
CN111901597B (en) * | 2020-08-05 | 2022-03-25 | Hangzhou Arcvideo Technology Co., Ltd. | CU-level QP allocation algorithm based on video complexity |
CN114882390A (en) * | 2022-03-15 | 2022-08-09 | Beijing University of Technology | CTU-histogram-based video frame type decision method in the VVC coding standard |
CN114882390B (en) * | 2022-03-15 | 2024-05-28 | Beijing University of Technology | CTU-histogram-based video frame type decision method in the VVC coding standard |
CN117615129A (en) * | 2024-01-23 | 2024-02-27 | Tencent Technology (Shenzhen) Co., Ltd. | Inter-frame prediction method, inter-frame prediction device, computer equipment and storage medium |
CN117615129B (en) * | 2024-01-23 | 2024-04-26 | Tencent Technology (Shenzhen) Co., Ltd. | Inter-frame prediction method, inter-frame prediction device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106507106B (en) | 2018-03-06 |
Similar Documents

Publication | Publication Date | Title
---|---|---|
CN106507106B (en) | Video interprediction encoding method based on reference plate | |
CN103765892B (en) | Method and apparatus for encoding video and method and apparatus for decoding video using intra prediction | |
CN104869407B (en) | Method for decoding video | |
CN103220528B (en) | Method and apparatus for encoding and decoding images using a large transform unit | |
TWI466549B (en) | Motion prediction method | |
CN102474609B (en) | Method and apparatus for encoding images and method and apparatus for decoding images | |
CN104506863B (en) | Apparatus for decoding motion vectors | |
CN106131546B (en) | A method for early determination of merge and skip coding modes in HEVC | |
CN110521205A (en) | Sub-prediction-unit temporal motion vector prediction for video coding and decoding | |
CN104980739A (en) | Method and apparatus for video encoding using deblocking filtering, and method and apparatus for video decoding using the same | |
CN102065298B (en) | High-performance macroblock coding implementation method | |
CN104754354A (en) | Method and apparatus for decoding video | |
CN102804777A (en) | Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order | |
CN109104609A (en) | A shot boundary detection method fusing the HEVC compressed domain and the pixel domain | |
CN104980761A (en) | Method and device for coding and decoding motion vectors | |
CN103563382A (en) | Method and apparatus for encoding images and method and apparatus for decoding images | |
CN107079165A (en) | Video coding method and apparatus using prediction residuals | |
CN104604226A (en) | Method and apparatus for coding video having temporal scalability, and method and apparatus for decoding video having temporal scalability | |
CN108989799 (en) | Method and apparatus for selecting reference frames for coding units, and electronic device | |
CN1194544C (en) | Video encoding method based on prediction of temporally and spatially coherent motion vectors | |
CN105791863B (en) | Layer-based 3D-HEVC depth map intra prediction encoding method | |
LU506098B1 (en) | Inter-frame coding tree unit division method and device | |
Li et al. | Fast decision-tree-based series partitioning and mode prediction termination algorithm for H.266/VVC | |
CN106101699B (en) | Depth modeling mode decision method for 3D-HEVC depth map encoding | |
CN110062243 (en) | Light field video motion estimation method based on neighbor optimization
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |