CN110365988A - H.265 coding method and device - Google Patents
H.265 coding method and device
- Publication number
- CN110365988A (application CN201810320172.5A)
- Authority
- CN
- China
- Prior art keywords
- module
- block
- cost
- partition mode
- search
- Prior art date
- Legal status
- Granted
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/513—Processing of motion vectors
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
- H04N19/86—Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention provides an H.265 coding method and device. The device comprises the following modules: a preprocessing module, a coarse selection module and a precise comparison module. The preprocessing module is used to divide a current frame of an original video into multiple CTU blocks. The coarse selection module is used to divide each CTU block according to multiple partition modes and to divide each CU block therein into one or more corresponding PU blocks; it is also used to perform inter prediction and intra prediction on each partition mode of each CTU block and to generate one or more pieces of prediction information corresponding to each partition mode. The precise comparison module is used to perform cost comparison on the prediction information corresponding to each partition mode of each CTU block, and to generate the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame. By searching in a distributed manner, the present invention improves search precision while reducing hardware resource consumption.
Description
Technical field
The present invention relates to the field of H.265 coding, and more particularly to an H.265 coding method and device.
Background art
H.265 is the new video coding standard drawn up by ITU-T VCEG after H.264. The H.265 standard is built around the existing H.264 video coding standard: it retains some of the original techniques while improving a number of related ones, and adds new techniques to improve the trade-off among bit rate, coding quality, delay and algorithm complexity so as to reach an optimal configuration. Specific research topics include improving compression efficiency, improving robustness and error resilience, reducing real-time delay, reducing channel acquisition time and random access delay, and reducing complexity. At present, existing H.265 algorithms generally suffer from high hardware resource consumption.
Summary of the invention
For this reason, it is necessary to provide an H.265 coding solution that reduces the hardware resource consumption of the H.265 algorithm.
To achieve the above object, the inventor provides an H.265 coding device comprising the following modules: a preprocessing module, a coarse selection module and a precise comparison module, wherein the preprocessing module is connected with the coarse selection module and the coarse selection module is connected with the precise comparison module. In this device:

The preprocessing module is used to divide a current frame of an original video into multiple CTU blocks.

The coarse selection module is used to divide each CTU block according to multiple partition modes; each partition mode divides a CTU block into multiple corresponding CU blocks, and each CU block therein is divided into one or more corresponding PU blocks. The coarse selection module is also used to perform inter prediction and intra prediction on each partition mode of each CTU block and to generate prediction information corresponding to each partition mode.

The precise comparison module is used to perform cost comparison on the prediction information corresponding to each partition mode of each CTU block, to select for each CTU block the partition mode with the smallest cost and the coding information corresponding to that partition mode, and, according to the selected partition mode and its corresponding coding information, to generate the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame.
Further, the device also includes an entropy coding module connected with the precise comparison module. The entropy coding module is used to generate the H.265 bitstream corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the entropy coding information of the current frame generated from the corresponding coding information.

Further, the device includes a post-processing module connected with the precise comparison module. The post-processing module is used to generate the reconstructed frame corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the reconstruction information of the current frame generated from the corresponding coding information.

Further, the post-processing module includes a deblocking filter module and a sample adaptive offset module, the deblocking filter module being connected with the sample adaptive offset module. The deblocking filter module is used to filter the reconstructed frame using the minimum-cost partition mode provided by the precise comparison module and its corresponding coding information. The sample adaptive offset module is used to perform the SAO calculation on the filtered reconstructed frame and to transmit the calculated data to the entropy coding module.
Further, the coarse selection module includes an inter-prediction coarse selection module and an intra-prediction coarse selection module, each of which is connected with the preprocessing module and the precise comparison module respectively. In this arrangement:

The inter-prediction coarse selection module is used to perform inter prediction on each PU block in each partition mode, to select, for each PU block, one or more pieces of reference information obtained from the reference frame whose cost is lower than a preset cost value, and to take the motion vectors of the selected reference PU blocks as the prediction information corresponding to the partition mode.

The intra-prediction coarse selection module is used to perform intra prediction on each PU block in each partition mode, to select, for each PU block, one or more intra prediction directions whose cost is lower than a preset cost value, and to take the selected intra prediction directions as the prediction information corresponding to the partition mode.

Further, the intra-prediction coarse selection module also includes a reference pixel generation module. The reference pixel generation module is used to generate reference pixels for each PU block in each partition mode using the original pixels of the current frame, to predict all intra prediction directions from the reference pixels according to the rules of the H.265 protocol to obtain the prediction result for each direction, to compute a distortion cost for each direction's prediction result against the original pixels, and to select, in order of cost from small to large, the one or more intra prediction directions with the smaller costs.
Further, the inter-prediction coarse selection module also includes a coarse search module, a fine search module and a fractional pixel search module; the coarse search module is connected with the preprocessing module, the coarse search module is connected with the fine search module, and the fine search module is connected with the fractional pixel search module.

Further, the coarse search module is used to select a frame from the reference list and take its original frame or reconstructed frame as a reference frame, to perform a down-sampling operation on the reference frame and the current CTU block, to find, in the down-sampled reference frame, the pixel position with the smallest cost compared with the down-sampled CTU block, and to compute the coarse search vector of that pixel position relative to the current CTU block.

Further, the fine search module is used to set, according to the coarse search vector, a fine search region for each PU block in the reconstructed image of the reference frame and to generate within that fine search region one fine search vector with the smallest cost for the corresponding PU block; it is also used to generate, from the motion vector information around the current CTU block, one or more predicted motion vectors that play the same role as the coarse search vector, to generate fine search vectors from the predicted motion vectors, and to send all generated fine search vectors to the fractional pixel search module.

Further, the fractional pixel search module is used to set, for each PU block and according to each received fine search vector, a corresponding fractional pixel search region in the reference frame, and to generate within that fractional pixel search region one fractional pixel search vector with the smallest cost for the corresponding PU block.
Further, the precise comparison module includes a distribution module, multiple layered calculation modules and multiple layered comparison modules; the distribution module is connected with the coarse selection module, and the layered comparison modules are connected with the distribution module. In this arrangement:

The distribution module is used to distribute, according to each partition mode of each CTU block, each CU block in each partition mode together with the prediction information corresponding to that CU block to different layered calculation modules.

Each layered calculation module is used to calculate multiple pieces of cost information from the received prediction information corresponding to a CU block, to compare them within the layer, and to select the prediction mode and partition mode with the smallest cost for the CU block.

The layered comparison modules are used to compare the minimum costs corresponding to the prediction modes and partition modes selected by the layered calculation modules of different layers, and to select the partition mode with the smallest cost for the CTU block and its corresponding coding information.
The inventor also provides an H.265 coding method applied to an H.265 coding device, the device including the following modules: a preprocessing module, a coarse selection module and a precise comparison module, the preprocessing module being connected with the coarse selection module and the coarse selection module being connected with the precise comparison module. The method comprises the following steps:

the preprocessing module divides a current frame of an original video into multiple CTU blocks;

the coarse selection module divides each CTU block according to multiple partition modes, each partition mode dividing a CTU block into multiple corresponding CU blocks and each CU block therein into one or more corresponding PU blocks, performs inter prediction and intra prediction on each partition mode of each CTU block, and generates prediction information corresponding to each partition mode;

the precise comparison module performs cost comparison on the prediction information corresponding to each partition mode of each CTU block, selects for each CTU block the partition mode with the smallest cost and the coding information corresponding to that partition mode, and, according to the selected partition mode and its corresponding coding information, generates the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame.

Further, the device also includes an entropy coding module connected with the precise comparison module, and the method comprises the following step: the entropy coding module generates the H.265 bitstream corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the entropy coding information of the current frame generated from the corresponding coding information.

Further, the device includes a post-processing module connected with the precise comparison module, and the method comprises: the post-processing module generates the reconstructed frame corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the reconstruction information of the current frame generated from the corresponding coding information.

Further, the post-processing module includes a deblocking filter module and a sample adaptive offset module, the deblocking filter module being connected with the sample adaptive offset module, and the method comprises: the deblocking filter module filters the reconstructed frame using the minimum-cost partition mode provided by the precise comparison module and its corresponding coding information; the sample adaptive offset module performs the SAO calculation on the filtered reconstructed frame and transmits the calculated data to the entropy coding module.
Further, the coarse selection module includes an inter-prediction coarse selection module and an intra-prediction coarse selection module, each connected with the preprocessing module and the precise comparison module respectively, and the method comprises: the inter-prediction coarse selection module performs inter prediction on each PU block in each partition mode, selects, for each PU block, one or more pieces of reference information obtained from the reference frame whose cost is lower than a preset cost value, and takes the motion vectors of the selected reference PU blocks as the prediction information corresponding to the partition mode; the intra-prediction coarse selection module performs intra prediction on each PU block in each partition mode, selects, for each PU block, one or more intra prediction directions whose cost is lower than a preset cost value, and takes the selected intra prediction directions as the prediction information corresponding to the partition mode.

Further, the intra-prediction coarse selection module also includes a reference pixel generation module, and the method comprises: the reference pixel generation module generates reference pixels for each PU block in each partition mode using the original pixels of the current frame, predicts all intra prediction directions from the reference pixels according to the rules of the H.265 protocol to obtain the prediction result for each direction, computes a distortion cost for each direction's prediction result against the original pixels, and selects, in order of cost from small to large, the one or more intra prediction directions with the smaller costs.

Further, the inter-prediction coarse selection module also includes a coarse search module, a fine search module and a fractional pixel search module; the coarse search module is connected with the preprocessing module, the coarse search module is connected with the fine search module, and the fine search module is connected with the fractional pixel search module.
Further, the method comprises: the coarse search module selects a frame from the reference list and takes its original frame or reconstructed frame as a reference frame, performs a down-sampling operation on the reference frame and the current CTU block, finds, in the down-sampled reference frame, the pixel position with the smallest cost compared with the down-sampled CTU block, and computes the coarse search vector of that pixel position relative to the current CTU block.

Further, the method comprises: the fine search module sets, according to the coarse search vector, a fine search region for each PU block in the reconstructed image of the reference frame and generates within that fine search region one fine search vector with the smallest cost for the corresponding PU block; it also generates, from the motion vector information around the current CTU block, one or more predicted motion vectors that play the same role as the coarse search vector, generates fine search vectors from the predicted motion vectors, and sends all generated fine search vectors to the fractional pixel search module.

Further, the method comprises: the fractional pixel search module sets, for each PU block and according to each received fine search vector, a corresponding fractional pixel search region in the reference frame, and generates within that fractional pixel search region one fractional pixel search vector with the smallest cost for the corresponding PU block.

Further, the precise comparison module includes a distribution module, multiple layered calculation modules and multiple layered comparison modules; the distribution module is connected with the coarse selection module and the layered comparison modules are connected with the distribution module. The method comprises: the distribution module distributes, according to each partition mode of each CTU block, each CU block in each partition mode together with the prediction information corresponding to that CU block to different layered calculation modules; each layered calculation module calculates multiple pieces of cost information from the received prediction information corresponding to a CU block, compares them within the layer, and selects the prediction mode and partition mode with the smallest cost for the CU block; the layered comparison modules compare the minimum costs corresponding to the prediction modes and partition modes selected by the layered calculation modules of different layers, and select the partition mode with the smallest cost for the CTU block and its corresponding coding information.
In the H.265 coding method and device of the above technical solution, the device includes the following modules: a preprocessing module, a coarse selection module and a precise comparison module, the preprocessing module being connected with the coarse selection module and the coarse selection module being connected with the precise comparison module. The preprocessing module is used to divide a current frame of an original video into multiple CTU blocks. The coarse selection module is used to divide each CTU block according to multiple partition modes, each partition mode dividing a CTU block into multiple corresponding CU blocks and each CU block therein into one or more corresponding PU blocks; the coarse selection module is also used to perform inter prediction and intra prediction on each partition mode of each CTU block and to generate prediction information corresponding to each partition mode. The precise comparison module is used to perform cost comparison on the prediction information corresponding to each partition mode of each CTU block, to select for each CTU block the partition mode with the smallest cost and the coding information corresponding to that partition mode, and, according to the selected partition mode and its corresponding coding information, to generate the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame. By searching in a distributed manner, the present invention improves search precision, better preserves the details of the reconstructed image, and reduces hardware resource consumption.
Brief description of the drawings
Fig. 1 is a schematic diagram of an H.265 coding device according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the coarse selection module of an H.265 coding device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the coarse search process of an H.265 coding device according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the fine search process of an H.265 coding device according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the fractional pixel search of an H.265 coding device according to an embodiment of the present invention;
Fig. 6-A is a schematic diagram of search prediction performed by an H.265 coding device according to an embodiment of the present invention;
Fig. 6-B is a schematic diagram of search prediction performed by an H.265 coding device according to another embodiment of the present invention;
Fig. 7 is a schematic diagram of the precise comparison module of an H.265 coding device according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the layered comparison module of an H.265 coding device according to an embodiment of the present invention;
Fig. 9 is a flowchart of an H.265 coding method according to an embodiment of the present invention;
Fig. 10 is a flowchart of a coarse search method for H.265 coding according to an embodiment of the present invention;
Fig. 11 is a flowchart of a fine search method for H.265 coding according to an embodiment of the present invention;
Fig. 12 is a flowchart of a fractional pixel search method for H.265 coding according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of the motion vector information around a current CTU block according to an embodiment of the present invention.
Reference numerals:
100, original video;
101, original image frame;
102, current frame;
110, image coding device; 120, preprocessing module; 130, coarse selection module; 140, precise comparison module; 150, entropy coding module; 160, deblocking filter module; 170, sample adaptive offset module;
121, current CTU; 141, coding information; 180, coded video; 190, bitstream; 145, reconstructed frame image;
230, inter-prediction coarse selection module; 211, coarse search module; 213, fine search module; 215, fractional pixel search module;
330, intra-prediction coarse selection module; 231, reference pixel generation module;
310, reference frame; 311, down-sampling; 320, down-sampled image; 351, motion vector; 352, minimum cost pixel block; 330, current CTU; 340, down-sampled CTU.
410, reference frame; 420, current PU position; 421, restored motion vector; 423, fine search motion vector; 430, fine search region; 431, initial search position; 433, minimum cost position;
510, reference frame; 520, current PU position; 521, fine search motion vector; 523, fractional pixel search motion vector; 530, fractional pixel search region; 531, initial search position; 533, minimum cost position;
711, distribution module; 721, first-level calculation module Level_calc0; 722, second-level calculation module Level_calc1; 723, third-level calculation module Level_calc2; 724, fourth-level calculation module Level_calc3;
740, layered comparison module;
810, single-level calculation module; 820, inter mode cost calculation module; 830, intra mode cost calculation module; 840, selection module.
Specific embodiments
In order to describe in detail the technical content, structural features, objects and effects of the technical solution, a detailed explanation is given below in conjunction with specific embodiments and the accompanying drawings.
Referring to Fig. 1, which is a schematic diagram of an H.265 coding device according to an embodiment of the present invention. The device is an image coding device 110; it may be a chip with an image coding function, or an electronic device containing such a chip, for example an intelligent mobile device such as a mobile phone, a tablet computer or a personal digital assistant, or an electronic device such as a personal computer or an industrial computer. The device includes the following modules: a preprocessing module 120, a coarse selection module 130 and a precise comparison module 140, the preprocessing module 120 being connected with the coarse selection module 130 and the coarse selection module 130 being connected with the precise comparison module 140. In this device:

The preprocessing module 120 is used to divide a current frame 102 of an original video 100 into multiple CTU (Coding Tree Unit) blocks. A CTU is a sub-block of the current frame image, and its size can be any one of 16x16, 32x32 and 64x64. Specifically, the preprocessing module can obtain the original image frames 101 of the original video 100 and select a current frame 102 from the original image frames 101.
The coarse selection module 130 is used to divide each CTU block according to multiple partition modes; each partition mode divides a CTU block into multiple corresponding CU (Coding Unit) blocks, and each CU block therein is divided into one or more corresponding PU (Prediction Unit) blocks. The coarse selection module 130 is also used to perform inter prediction and intra prediction on each partition mode of each CTU block and to generate prediction information corresponding to each partition mode. The partition modes are chosen according to actual needs; for example, a current CTU 121 of size 64x64 can be divided into four 32x32 sub-blocks, and each 32x32 sub-block can in turn be divided into four 16x16 sub-blocks.
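To make the recursive division above concrete, here is a minimal Python sketch (not taken from the patent: the 8x8 minimum CU size, the exhaustive enumeration and the function names are illustrative assumptions) that lists the candidate quadtree partition modes of a CTU as sets of CU blocks.

```python
# Illustrative sketch only: enumerate quadtree partition candidates for a CTU.
from itertools import product

def quadtree_partitions(x, y, size, min_size=8):
    """Yield candidate CU lists (x, y, size) for the square block at (x, y)."""
    yield [(x, y, size)]                       # keep the block as a single CU
    if size <= min_size:
        return
    half = size // 2
    quadrants = [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]
    # combine one candidate from each quadrant into a full partition mode
    for combo in product(*(list(quadtree_partitions(qx, qy, half, min_size))
                           for qx, qy in quadrants)):
        yield [cu for quadrant in combo for cu in quadrant]

# Example: a 64x64 CTU split down to 8x8 CUs yields 83522 candidate modes.
print(sum(1 for _ in quadtree_partitions(0, 0, 64)))
```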
The precise comparison module 140 is used to perform cost comparison on the prediction information corresponding to each partition mode of each CTU block, to select for each CTU block the partition mode with the smallest cost and the coding information corresponding to that partition mode, and, according to the selected partition mode and its corresponding coding information, to generate the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame. In this way, searching in a distributed manner improves search precision, better preserves the details of the reconstructed image, and reduces hardware resource consumption.

In certain embodiments, the device also includes an entropy coding module 150 connected with the precise comparison module 140. The entropy coding module 150 is used to generate the H.265 bitstream corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the entropy coding information of the current frame generated from the corresponding coding information. Specifically, the precise comparison module 140 generates, from the minimum-cost partition mode and prediction mode of a CTU, the data required for entropy coding of that CTU, i.e. the coding information 141 shown in Fig. 1; the entropy coding module 150 then uses the entropy coding data corresponding to the CTUs to generate the coded bitstream 190 corresponding to the original video. At the same time, the image coding device 110 can also output the coded video 180, a given image frame of which is the reconstructed image frame 145.

In certain embodiments, the device includes a post-processing module connected with the precise comparison module. The post-processing module is used to generate, according to the partition mode with the smallest cost for each CTU block and the reconstruction information of the current frame generated from the corresponding coding information, the reconstructed frame corresponding to the current frame.

Preferably, the post-processing module includes a deblocking filter module 160 and a sample adaptive offset module 170, the deblocking filter module 160 being connected with the sample adaptive offset module 170. The deblocking filter module 160 is used to filter the reconstructed frame using the minimum-cost partition mode provided by the precise comparison module and its corresponding coding information; the sample adaptive offset module 170 is used to perform the SAO calculation on the filtered reconstructed frame and to transmit the calculated data to the entropy coding module 150.
As shown in Fig. 2, the coarse selection module 130 includes an inter-prediction coarse selection module 230 and an intra-prediction coarse selection module 330, each of which is connected with the preprocessing module 120 and the precise comparison module 140 respectively. In this arrangement:

The inter-prediction coarse selection module 230 is used to perform inter prediction on each PU block in each partition mode, to select, for each PU block, one or more pieces of reference information obtained from the reference frame whose cost is lower than a preset cost value, and to take the motion vectors of the selected reference PU blocks as the prediction information corresponding to the partition mode. Each PU block has its own motion vector, which is used to obtain prediction information from the reconstructed reference frame; specifically, the prediction information can be obtained by following the motion vector of the PU block from the position of the current PU block as the starting point.

The intra-prediction coarse selection module 330 is used to perform intra prediction on each PU block in each partition mode, to select, for each PU block, one or more intra prediction directions whose cost is lower than a preset cost value, and to take the selected intra prediction directions as the prediction information corresponding to the partition mode.

In certain embodiments, the inter-prediction coarse selection module 230 also includes a coarse search module 211, a fine search module 213 and a fractional pixel search module 215; the coarse search module 211 is connected with the preprocessing module 120, the coarse search module 211 is connected with the fine search module 213, and the fine search module 213 is connected with the fractional pixel search module 215.

The coarse search module is used to select a frame from the reference list and take its original frame or reconstructed frame as a reference frame, to perform a down-sampling operation on the reference frame and the current CTU block respectively, to find, in the down-sampled reference frame, the pixel position with the smallest cost compared with the down-sampled CTU block, and to compute the coarse search vector of that pixel position relative to the current CTU block.

The reference list is the list used to store reference frames; the current frame can have multiple reference frames, all indexed through the reference list. A reference frame includes a reconstructed frame and an original frame. Since the reference frame and the current CTU block are obtained by the down-sampling operation, the coarse search vector computed by the coarse search module is also a search vector in the down-sampled domain; relative to the current CTU block, the coarse search vector therefore needs to be scaled by the down-sampling ratio (for example 1/4), and the scaled coarse search vector is then transmitted to the next processing module.
As shown in Fig. 3, the coarse search module selects a reference frame from among the original frames or reconstructed frames, performs a down-sampling operation on the reference frame and the current CTU respectively, and then finds, in the down-sampled reference frame, the pixel position with the smallest cost compared with the down-sampled CTU, together with the coarse search vector. Preferably, in this embodiment, the down-sampling ratios of the reference frame and the current CTU are the same. For example, the down-sampled image 320 obtained after the reference frame 310 goes through down-sampling 311 is obtained by scaling the length and width of the reference frame to 1/4 respectively, and the down-sampled CTU obtained after the current CTU 330 goes through down-sampling 331 is obtained by scaling the length and width of the current CTU 330 to 1/4 respectively. The prediction is then carried out in the down-sampled image (sub-block A in Fig. 3) in units of the down-sampled CTU 340 (sub-block B in Fig. 3): the cost between the down-sampled CTU 340 and each corresponding sub-block of the down-sampled image 320 (taking, centred on each pixel of sub-block A, a sub-block of the same size as sub-block B) is computed in turn, the pixel block with the smallest cost compared with the down-sampled CTU is found and denoted the minimum cost pixel block 352 (sub-block C in Fig. 3), and the centre pixel position of the current minimum cost pixel block and the coarse search vector are recorded. The coarse search vector is the vector offset between the centre pixel of the down-sampled CTU 340 (sub-block B in Fig. 3) and the centre pixel position of the minimum cost pixel block 352 (sub-block C in Fig. 3), i.e. the motion vector 351 in Fig. 3.
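The coarse search of Fig. 3 can be illustrated with the sketch below. It assumes SAD as the cost, averaging down-sampling by a factor of 4, an exhaustive scan of the down-sampled reference and a simple scaling convention back to full resolution; none of these choices is prescribed by the patent's hardware implementation.

```python
import numpy as np

def downsample(img, factor=4):
    """Average-pool down-sampling; any down-sampling scheme would do here."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def coarse_search(ref_frame, ctu, ctu_pos, factor=4):
    """Return a coarse search vector for `ctu` at position (x, y) = ctu_pos."""
    ref_ds, ctu_ds = downsample(ref_frame, factor), downsample(ctu, factor)
    bh, bw = ctu_ds.shape
    best_cost, best_xy = None, (0, 0)
    for y in range(ref_ds.shape[0] - bh + 1):          # scan every candidate
        for x in range(ref_ds.shape[1] - bw + 1):      # position (SAD cost)
            cost = np.abs(ref_ds[y:y + bh, x:x + bw] - ctu_ds).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    # vector found in the down-sampled domain, scaled back to full resolution
    cx, cy = ctu_pos
    return ((best_xy[0] - cx // factor) * factor,
            (best_xy[1] - cy // factor) * factor)
```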
In certain embodiments, the intra-prediction coarse selection module 330 also includes a reference pixel generation module 231. The reference pixel generation module 231 is used to generate reference pixels for each PU block in each partition mode using the original pixels of the current frame, to predict all intra prediction directions from the reference pixels according to the rules of the H.265 protocol to obtain the prediction result for each direction, to compute a distortion cost for each direction's prediction result against the original pixels, and to select, in order of cost from small to large, the one or more intra prediction directions with the smaller costs. The coarse selection method of the intra-prediction coarse selection module is similar to that of the inter-prediction coarse selection module and is not repeated here. The difference between the two is that, for intra prediction, the down-sampled image is obtained by down-sampling the original frame and the down-sampled CTU is predicted within the down-sampled image obtained from the original frame, whereas for inter prediction the down-sampled image is obtained by down-sampling the reference frame and the down-sampled CTU is predicted within the down-sampled image obtained from the reference frame.

As shown in Fig. 6-A and Fig. 6-B, according to the H.265 protocol the reference pixels should be reconstructed pixels; however, in a hardware implementation only the original pixels are available at the current point in time and the reconstructed pixels often cannot be obtained, so the present invention substitutes original pixels for reconstructed pixels. Taking a PU sub-block of size 4x4 as an example, the black-filled circles in the figure are the boundary pixels; according to the H.265 protocol, a 4x4 block has a total of 17 boundary pixels (the shaded circles in Fig. 6-B). The black-filled pixels (i.e. the boundary pixels) should be filled with reconstructed pixels, but since reconstructed pixels cannot be obtained at the current point in time, original pixels are used instead. The shaded filled part is the 4x4 PU block. After the boundary pixels have been filled, the prediction is carried out according to the protocol to obtain the 4x4 block filled by the shaded part.
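A minimal sketch of the substitution just described for a 4x4 PU: the 17 boundary reference samples are taken from the original frame instead of the reconstructed one. The clamping at frame borders and the ordering of the samples are assumptions, not the normative H.265 padding process.

```python
import numpy as np

def reference_pixels_4x4(original_frame, x, y):
    """Gather the 17 boundary samples of the 4x4 PU whose top-left pixel is (x, y)."""
    h, w = original_frame.shape

    def px(col, row):                        # clamp out-of-frame positions
        return original_frame[min(max(row, 0), h - 1), min(max(col, 0), w - 1)]

    corner = [px(x - 1, y - 1)]                        # 1 top-left corner sample
    top = [px(x + i, y - 1) for i in range(8)]         # 8 above / above-right
    left = [px(x - 1, y + i) for i in range(8)]        # 8 left / below-left
    return np.array(corner + top + left)               # 17 samples in total
```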
As shown in Fig. 4, the fine search module sets, according to the coarse search vector, a fine search region for each PU in the reference frame, and finds within that fine search region one fine search vector with the smallest cost for the corresponding PU. The fine search step is carried out in the reference frame 410; each current CTU contains multiple PUs, and the fine search takes these PUs as the current PU one by one in a certain order. Specifically, the current PU position 420 is determined first; then, using the previously obtained coarse search vector (also called the restored motion vector 421), a fine search region 430 is set for this PU in the reference frame, and an initial search position 431 corresponding to the current PU position 420 is determined in the fine search region 430 according to the restored motion vector 421. Similarly to the coarse search, within the fine search region 430 and starting from the pixel of the initial search position 431, the cost of each candidate sub-block of the current PU size, centred on each pixel in the fine search region 430, is computed in turn; the minimum cost position 433 is found, and the motion vector between the current PU position 420 and the minimum cost position 433 is computed and denoted the fine search motion vector 423.
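The fine search around Fig. 4 reduces to the following sketch, assuming SAD cost, a full-pel scan and a square window whose radius is a free parameter (the window shape and radius are not specified here).

```python
import numpy as np

def fine_search(ref_recon, cur_pu, pu_pos, coarse_mv, radius=8):
    """Return the fine search vector for one PU, scanning around the coarse hit."""
    px, py = pu_pos
    sx, sy = px + coarse_mv[0], py + coarse_mv[1]       # initial search position
    h, w = cur_pu.shape
    best_cost, best_mv = None, coarse_mv
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = sx + dx, sy + dy
            if 0 <= x <= ref_recon.shape[1] - w and 0 <= y <= ref_recon.shape[0] - h:
                cost = np.abs(ref_recon[y:y + h, x:x + w] - cur_pu).sum()   # SAD
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (x - px, y - py)
    return best_mv
```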
In certain embodiments, the fine search module is used to set, according to the coarse search vector, a fine search region for each PU block in the reconstructed image of the reference frame and to generate within that fine search region one fine search vector with the smallest cost for the corresponding PU block; it is also used to generate, from the motion vector information around the current CTU block, one or more predicted motion vectors that play the same role as the coarse search vector, to generate fine search vectors from the predicted motion vectors, and to send all generated fine search vectors to the fractional pixel search module.

As shown in Fig. 13, a current CTU block of size 64x64 has, for each of the ten 8x8 sub-blocks located above it (the sub-blocks marked 1-10 in Fig. 13), for the adjacent CTU block at its upper left and for the adjacent CTU block at its upper right, a corresponding coarse search result and corresponding motion vector information. In addition, there are 16 auxiliary motion vectors inside the current CTU block, so there are at most 28 MVs available as adjacent MVs (i.e. the motion vector information around the current CTU block). These 28 pieces of motion vector information go through a screening step, and a preset number (for example 3) of screened adjacent MVs are passed to the fine search module, thereby determining the same preset number of fine search motion vectors. In this embodiment, "the same role" means that the screened adjacent MVs of the preset number serve the same purpose as the search result obtained by the coarse search module, i.e. they can be fed to the input interface of the fine search module for further processing.

In this embodiment, the coarse search module inputs one motion vector to the fine search module, and several MVs selected from the adjacent MVs are also input to the fine search module. Assuming a total of N MVs are input to the fine search module, the fine search module generates N fine search MVs (i.e. fine search vectors) and inputs these N fine search vectors to the FME (i.e. the fractional pixel search module); the FME then obtains, by cost comparison among the N fine search MVs, one optimal fme_mv (i.e. fractional pixel search vector), and this fme_mv is finally input to the precise comparison module.
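The candidate flow just described, one coarse vector plus a screened set of adjacent MVs, N fine searches, then a single fme_mv kept by cost, could look like the hedged sketch below. Deduplication as the screening rule and the (cost, mv) return convention are assumptions; the text only states that a preset number of adjacent MVs is kept.

```python
def screen_candidates(coarse_mv, adjacent_mvs, preset=3):
    """Keep the coarse vector plus a preset number of distinct adjacent MVs."""
    seen, picked = {coarse_mv}, []
    for mv in adjacent_mvs:                  # up to 28 adjacent MVs (Fig. 13)
        if mv not in seen and len(picked) < preset:
            seen.add(mv)
            picked.append(mv)
    return [coarse_mv] + picked              # N candidates handed to fine search

def best_fme_mv(candidates, fine_search_fn, fme_fn):
    """Run fine search on every candidate, then keep the cheapest fractional MV."""
    fine_mvs = [fine_search_fn(mv) for mv in candidates]      # N fine vectors
    results = [fme_fn(mv) for mv in fine_mvs]                  # (cost, fme_mv) pairs
    return min(results, key=lambda r: r[0])[1]                 # the single fme_mv
```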
As shown in Fig. 5, in order to further improve search precision, the fractional pixel search module 215 is used to set, for each PU block and according to each received fine search vector, a corresponding fractional pixel search region 530 in the reference frame, and to generate within that fractional pixel search region 530 one fractional pixel search vector 523 with the smallest cost for the corresponding PU block. Specifically, the fractional pixel search region 530 can be determined as follows: according to the current PU position 520 and the previously obtained fine search motion vector, the initial search position 531 corresponding to the current PU position 520 is determined in the reference frame 510; centred on the pixel of the initial search position, K pixels are extended in each of the four directions up, down, left and right (the value of K can be set according to actual needs), and the resulting square region of side length 2K is the fractional pixel search region 530. Similarly to the fine search, centred on the pixel of the initial search position 531, the cost of each candidate sub-block of the current PU size, centred on each pixel in the fractional pixel search region 530, is computed in turn; the minimum cost position 533 is found, and the motion vector between the current PU position and the minimum cost position 533 is computed and denoted the fractional pixel search motion vector 523.
Referring to Fig. 7, which is a schematic diagram of the precise comparison module in an H.265 coding device according to an embodiment of the present invention. In certain embodiments, the precise comparison module 140 includes a distribution module 711, multiple single-level calculation modules (for example 721, 722, 723 and 724) and multiple layered comparison modules 740. The distribution module 711 is connected with the coarse selection module 130 and with the multiple single-level calculation modules; each single-level calculation module is connected with a corresponding layered comparison module 740. In this arrangement:

The distribution module 711 is used to distribute, according to the different partition modes of each CTU block, the different CU blocks in each partition mode together with the prediction information corresponding to each CU block to different single-level calculation modules.

Each single-level calculation module is used to calculate multiple pieces of cost information from the prediction information corresponding to a CU block received from the distribution module 711, to compare them within the layer, and to select the prediction mode and partition mode with the smallest cost for that CU block.

The layered comparison modules 740 are used to compare the cost information calculated by the single-level calculation modules of different layers and to select the partition mode with the smallest cost for the CTU block and its corresponding coding information.

In certain embodiments, the precise comparison module 140 of Fig. 7 contains four single-level calculation modules 721, 722, 723 and 724. Each single-level calculation module 721, 722, 723 and 724 can be built from the single-level calculation module 810 of Fig. 8. As shown in Fig. 8, the single-level calculation module 810 includes an inter mode cost calculation module 820, an intra mode cost calculation module 830 and a selection module 840. For each input CU, the single-level calculation module 810 calculates an inter cost through the inter mode cost calculation module 820, calculates an intra cost through the intra mode cost calculation module 830, and compares the inter cost and the intra cost through the selection module 840 to determine the partition mode and prediction mode with the smallest overall cost, which become the minimum-cost partition mode and prediction mode of the currently input CU.
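The per-CU decision of Fig. 8 can be summarised by the sketch below; the candidate lists and cost functions returning (cost, mode_info) pairs are an illustrative interface, not the module's actual ports.

```python
def single_level_decide(cu, inter_candidates, intra_candidates,
                        inter_cost_fn, intra_cost_fn):
    """Pick the cheaper of the best inter candidate and the best intra candidate."""
    inter_cost, inter_info = min((inter_cost_fn(cu, c) for c in inter_candidates),
                                 key=lambda pair: pair[0])
    intra_cost, intra_info = min((intra_cost_fn(cu, c) for c in intra_candidates),
                                 key=lambda pair: pair[0])
    if inter_cost <= intra_cost:             # selection module: keep the lower cost
        return inter_cost, ("inter", inter_info)
    return intra_cost, ("intra", intra_info)
```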
Returning to the embodiment of Fig. 7, each single-level calculation module 721, 722, 723 and 724 handles CU blocks of a specific level. For example, the single-level calculation module 721 can be set as the first-level calculation module, handling CU blocks of size 64x64; the single-level calculation module 722 can be set as the second-level calculation module, handling CU blocks of size 32x32; the single-level calculation module 723 can be set as the third-level calculation module, handling CU blocks of size 16x16; and the single-level calculation module 724 can be set as the fourth-level calculation module, handling CU blocks of size 8x8. Suppose the precise comparison module 140 receives a CTU from the coarse selection module 130 together with its partition modes, prediction information and multiple inter motion vectors and reference information. The distribution module 711 can then distribute the CUs under the various partition modes to the calculation modules 721-724 of the corresponding levels according to their sizes.

In certain embodiments, the intra mode cost calculation module 830 of each single-level calculation module can receive one or more pieces of intra prediction information relevant to a CU of a given level, and calculates and selects an intra cost. The inter mode cost calculation module 820 of each single-level calculation module can simultaneously, in parallel, receive one or more inter motion vectors and reference information relevant to the CU of that level, and calculates and selects an inter cost. Afterwards, the selection module 840 of each single-level calculation module selects a minimum cost from the calculated intra cost and inter cost. In other words, when the minimum cost is the intra cost, H.265 coding with the relevant intra prediction information is the better choice; when the minimum cost is the inter cost, H.265 coding with the relevant inter motion vector and reference information is the better choice.
For example, the layered comparison module 743 can compare the sum of the minimum costs of the four 8x8 blocks calculated by the fourth-level calculation module 724 with the minimum cost of one 16x16 block calculated by the third-level calculation module 723, and obtain the partition mode whose cost is smaller. Specifically, one of the objects of this layered comparison, the four 8x8 blocks (call them blocks A, B, C and D), may all be minimum-cost blocks obtained from the inter comparison, may all be minimum-cost blocks obtained from the intra comparison, or may include both minimum-cost blocks obtained from the inter comparison and minimum-cost blocks obtained from the intra comparison. For example, block A may be obtained from inter prediction while blocks B, C and D are obtained from intra prediction; or blocks A and C may be obtained from inter prediction while blocks B and D are obtained from intra prediction.

Likewise, the layered comparison module 742 can take the four minimum-cost 16x16 blocks obtained from the layered comparison module 743 and compare their combination with one minimum-cost 32x32 block calculated by the second-level calculation module 722. Specifically, the four 16x16 blocks selected by the layered comparison module 742 (call them blocks E, F, G and H) may include complete 16x16 CU blocks or blocks composed of multiple 8x8 blocks. For example, block E may be a 16x16 CU block obtained from inter prediction; block F may be a 16x16 CU block obtained from intra prediction; and block G may be a combined 16x16 block composed of four 8x8 blocks, including blocks obtained from inter prediction and blocks obtained from intra prediction.

Likewise, the layered comparison module 741 can take the four minimum-cost 32x32 blocks obtained from the layered comparison module 742 and compare their combination with one minimum-cost 64x64 block calculated by the first-level calculation module 721. Specifically, the four 32x32 blocks selected by the layered comparison module 741 (call them blocks I, J, K and L) may include complete 32x32 CU blocks or blocks composed of multiple 16x16 blocks, each of which may in turn be composed of multiple 8x8 blocks. For example, block I may be a 32x32 CU block obtained from inter prediction; block J may be composed of four 16x16 CU blocks, including blocks obtained from inter prediction and blocks obtained from intra prediction; and one or more of the 16x16 blocks in block K may each be composed of multiple 8x8 blocks.

Through the above process, the layered comparison modules 740 can find the combination of CTU, CU and PU blocks with the minimum cost and select the partition mode with the smallest cost for the CTU block together with its corresponding coding information.
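The bottom-up comparison performed by the layered comparison modules reduces to the recursion sketched below, where best_cu_cost(x, y, size) stands for the minimum cost already chosen by the single-level calculation module for that block; the 8x8 leaf size follows the four-level example above.

```python
def best_ctu_split(best_cu_cost, x, y, size, min_size=8):
    """Return (cost, cu_list): keep the block whole or split it, whichever is cheaper."""
    whole_cost = best_cu_cost(x, y, size)            # this block as a single CU
    if size <= min_size:
        return whole_cost, [(x, y, size)]
    half = size // 2
    split_cost, split_cus = 0, []
    for qx, qy in [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]:
        c, cus = best_ctu_split(best_cu_cost, qx, qy, half, min_size)
        split_cost += c
        split_cus += cus
    if split_cost < whole_cost:                      # four children beat the parent
        return split_cost, split_cus
    return whole_cost, [(x, y, size)]

# Example call for one 64x64 CTU (the cost function is a caller-supplied stand-in):
# cost, cus = best_ctu_split(lambda x, y, s: my_cost_table[(x, y, s)], 0, 0, 64)
```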
As shown in Fig. 9, the inventor also provides an H.265 coding method applied to an H.265 coding device, the device including the following modules: a preprocessing module, a coarse selection module and a precise comparison module, the preprocessing module being connected with the coarse selection module and the coarse selection module being connected with the precise comparison module. The method comprises the following steps:

First, in step S101, the preprocessing module divides a current frame of an original video into multiple CTU blocks.

Then, in step S102, the coarse selection module divides each CTU block according to multiple partition modes, each partition mode dividing a CTU block into multiple corresponding CU blocks and each CU block therein into one or more corresponding PU blocks; it performs inter prediction and intra prediction on each partition mode of each CTU block and generates prediction information corresponding to each partition mode.

Then, in step S103, the precise comparison module performs cost comparison on the prediction information corresponding to each partition mode of each CTU block, selects for each CTU block the partition mode with the smallest cost and the coding information corresponding to that partition mode, and, according to the selected partition mode and its corresponding coding information, generates the entropy coding information for producing the H.265 bitstream of the current frame and the reconstruction information for producing the reconstructed frame of the current frame.
In certain embodiments, described device further includes entropy code module, the entropy code module and accurate comparison module
Connection;The described method comprises the following steps: entropy code module according to the corresponding the smallest partition mode of cost of each CTU block and
It is corresponding with present frame to generate according to the entropy coding information corresponding with present frame that corresponding encoded information generates
H.265 code stream.
In certain embodiments, described device includes post-processing module, and the post-processing module and accurate comparison module connect
Connect: the described method includes: post-processing module according to the smallest partition mode of cost corresponding with each CTU block and according to its
The reconfiguration information corresponding with present frame that corresponding encoded information generates, to generate reconstructed frame corresponding with present frame.
Preferably, the post-processing module comprises a deblocking filtering module and a sample adaptive offset (SAO) module, the deblocking filtering module being connected to the SAO module. The method comprises: the deblocking filtering module filters the reconstructed frame using the partition mode with the smallest cost and the corresponding encoded information provided by the accurate comparison module; the SAO module performs the SAO calculation on the filtered reconstructed frame and passes the calculated data to the entropy coding module.
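For reference, the SAO calculation can be illustrated with a band-offset sketch, one of the two SAO types defined by H.265. The band start position and the four offsets below are assumed to have been chosen by a rate-distortion decision made elsewhere; they are illustrative inputs, not values prescribed by this device.

```python
import numpy as np

# Minimal sketch of an SAO band-offset pass: samples are classified into 32
# bands by intensity, and offsets are added to four consecutive bands.

def sao_band_offset(recon, band_start, offsets, bit_depth=8):
    shift = bit_depth - 5                        # 32 bands across the sample range
    bands = recon.astype(np.int32) >> shift
    out = recon.astype(np.int32).copy()
    for i, offset in enumerate(offsets):         # offsets cover 4 consecutive bands
        out[bands == band_start + i] += offset
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(recon.dtype)

recon = (np.random.rand(8, 8) * 255).astype(np.uint8)   # stand-in reconstructed block
print(sao_band_offset(recon, band_start=4, offsets=[1, -1, 2, 0]))
```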
In some embodiments, the coarse selection module comprises an inter-prediction coarse selection module and an intra-prediction coarse selection module, each connected to the pre-processing module and the accurate comparison module. The method comprises: the inter-prediction coarse selection module performs inter prediction on each PU block in each partition mode, selects one or more pieces of reference information obtained from the reference frame whose cost for that PU block is below a preset cost value, and takes the motion vectors of the selected reference PU blocks as the prediction information corresponding to that partition mode; the intra-prediction coarse selection module performs intra prediction on each PU block in each partition mode, selects one or more intra prediction directions whose cost for that PU block is below a preset cost value, and takes the selected intra prediction directions as the prediction information corresponding to that partition mode.
In some embodiments, the intra-prediction coarse selection module further comprises a reference pixel generation module. The method comprises: for each PU block in each partition mode, the reference pixel generation module generates reference pixels from the original pixels of the current frame, predicts all intra prediction directions from the reference pixels according to the rules of the H.265 standard to obtain a prediction result for each direction, computes a distortion cost between the prediction result of each direction and the original pixels, and selects, in order of increasing cost, the one or more intra prediction directions with the smaller costs.
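A minimal sketch of this coarse intra-direction selection is given below. Only DC, horizontal and vertical prediction are sketched, whereas H.265 defines 35 intra modes, and the reference pixels are simply taken from the edges of the original block; both simplifications are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of coarse intra-direction selection: predict the PU from its
# reference pixels in a few directions, rank the directions by SAD against the
# original pixels, and keep the cheapest ones.

def coarse_intra_select(pu, left_ref, top_ref, keep=2):
    preds = {
        'DC':         np.full(pu.shape, (left_ref.mean() + top_ref.mean()) / 2.0),
        'horizontal': np.tile(left_ref[:, None], (1, pu.shape[1])),
        'vertical':   np.tile(top_ref[None, :], (pu.shape[0], 1)),
    }
    costs = {mode: float(np.abs(pu.astype(np.float64) - pred).sum())
             for mode, pred in preds.items()}
    return sorted(costs, key=costs.get)[:keep]        # cheapest directions first

pu = (np.random.rand(8, 8) * 255).astype(np.uint8)
left_ref = pu[:, 0].astype(np.float64)   # stand-ins for reference pixels derived
top_ref = pu[0, :].astype(np.float64)    # from the original frame, as described above
print(coarse_intra_select(pu, left_ref, top_ref))
```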
As shown in Fig. 10, in some embodiments the inter-prediction coarse selection module further comprises a coarse search module, a fine search module and a fractional pixel search module; the coarse search module is connected to the pre-processing module, the coarse search module is connected to the fine search module, and the fine search module is connected to the fractional pixel search module. The method comprises:
First, in step S201, the coarse search module selects a frame from the reference list and takes either its original frame or its reconstructed frame as the reference frame. Then, in step S202, it down-samples the reference frame and the current CTU block. Then, in step S203, it finds the pixel position in the down-sampled reference frame with the smallest cost relative to the down-sampled CTU block, and computes the coarse search vector of that pixel position relative to the current CTU block.
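A minimal sketch of steps S201 to S203 follows. The 2x down-sampling factor and the exhaustive scan over the down-sampled reference are assumptions made for illustration; the actual down-sampling ratio and search strategy are not restricted to these.

```python
import numpy as np

# Minimal sketch of the coarse search: 2x down-sample the reference frame and
# the current CTU, run an exhaustive SAD search at the reduced resolution, and
# scale the winning offset back to full-resolution pixels.

def downsample2(img):
    return img[::2, ::2]

def coarse_search(ref, ctu, ctu_y, ctu_x):
    ref_d, ctu_d = downsample2(ref), downsample2(ctu)
    h, w = ctu_d.shape
    best = (np.inf, 0, 0)
    for y in range(ref_d.shape[0] - h + 1):
        for x in range(ref_d.shape[1] - w + 1):
            cost = np.abs(ref_d[y:y + h, x:x + w].astype(np.int32)
                          - ctu_d.astype(np.int32)).sum()
            if cost < best[0]:
                best = (cost, y, x)
    # coarse search vector relative to the CTU position, in full-resolution pixels
    return (best[1] * 2 - ctu_y, best[2] * 2 - ctu_x)

ref = (np.random.rand(128, 128) * 255).astype(np.uint8)
ctu = ref[32:96, 32:96]                  # a 64x64 block copied from the reference
print(coarse_search(ref, ctu, 32, 32))   # (0, 0) for this toy example
```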
As shown in Fig. 11, in some embodiments the method comprises:
First, in step S301, the fine search module sets, for each PU block, a fine search region in the reconstructed image of the reference frame according to the coarse search vector. Then, in step S302, it generates within that fine search region the fine search vector with the smallest cost for that PU block. It also generates, from the motion vector information around the current CTU block, one or more predicted motion vectors serving the same purpose as the coarse search vector, and generates fine search vectors from those predicted motion vectors. All generated fine search vectors are sent to the fractional pixel search module.
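A minimal sketch of the fine (integer-pel) search is given below; the +/-4 search radius and the single search centre are assumptions for illustration, whereas the method above may search around both the coarse search vector and the predicted motion vectors.

```python
import numpy as np

# Minimal sketch of the fine search: scan a small window of the reconstructed
# reference image around the position indicated by the coarse search vector
# and keep the offset with the smallest SAD.

def fine_search(ref, pu, pu_y, pu_x, coarse_vec, radius=4):
    h, w = pu.shape
    cy, cx = pu_y + coarse_vec[0], pu_x + coarse_vec[1]
    best = (np.inf, coarse_vec)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            cost = np.abs(ref[y:y + h, x:x + w].astype(np.int32)
                          - pu.astype(np.int32)).sum()
            if cost < best[0]:
                best = (cost, (y - pu_y, x - pu_x))
    return best[1]                           # fine search vector for this PU

ref = (np.random.rand(64, 64) * 255).astype(np.uint8)
pu = ref[10:26, 12:28]                       # a 16x16 PU copied from the reference
print(fine_search(ref, pu, 8, 10, coarse_vec=(2, 2)))   # (2, 2) for this toy example
```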
As shown in Fig. 12, in some embodiments the method comprises:
First, in step S401, the fractional pixel search module sets, for each PU block and according to each fine search vector received, a corresponding fractional pixel search region in the reference frame. Then, in step S402, it generates within that fractional pixel search region the fractional pixel search vector with the smallest cost for that PU block.
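The fractional search can be illustrated with a half-pel refinement sketch. Bilinear interpolation is used here purely for brevity (H.265 itself specifies longer separable interpolation filters), and the window layout is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of half-pel refinement: up-sample the reference window 2x with
# bilinear interpolation, test the eight half-pel positions around the
# integer-pel best match, and keep the cheapest fractional offset.

def upsample2_bilinear(img):
    h, w = img.shape
    y = np.linspace(0, h - 1, 2 * h - 1)
    x = np.linspace(0, w - 1, 2 * w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (y - y0)[:, None], (x - x0)[None, :]
    a = img[np.ix_(y0, x0)] * (1 - fy) * (1 - fx) + img[np.ix_(y0, x1)] * (1 - fy) * fx
    b = img[np.ix_(y1, x0)] * fy * (1 - fx) + img[np.ix_(y1, x1)] * fy * fx
    return a + b

def half_pel_refine(ref_window, pu):
    up = upsample2_bilinear(ref_window.astype(np.float64))
    h, w = pu.shape
    best = (np.inf, (0.0, 0.0))
    for dy in (-1, 0, 1):                    # offsets in half-pel units
        for dx in (-1, 0, 1):
            y, x = 2 + dy, 2 + dx            # the integer-pel match sits at (1, 1)
            cost = np.abs(up[y:y + 2 * h - 1:2, x:x + 2 * w - 1:2] - pu).sum()
            if cost < best[0]:
                best = (cost, (dy / 2, dx / 2))
    return best[1]                           # fractional part of the motion vector

pu = (np.random.rand(8, 8) * 255).astype(np.float64)
window = np.pad(pu, 1, mode='edge')          # 10x10 window centred on the match
print(half_pel_refine(window, pu))           # (0.0, 0.0) for this toy example
```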
In some embodiments, the accurate comparison module comprises a distribution module, multiple layered calculation modules and multiple hierarchical comparison modules; the distribution module is connected to the coarse selection module, and the hierarchical comparison modules are connected to the distribution module. The method comprises:
the distribution module distributes, according to each partition mode of each CTU block, each CU block in each partition mode and the prediction information corresponding to that CU block to different layered calculation modules;
each layered calculation module computes multiple pieces of cost information from the received prediction information corresponding to the CU blocks, performs an intra-layer comparison, and selects the prediction mode and partition mode with the smallest cost for each CU block;
the hierarchical comparison modules compare the minimum costs corresponding to the prediction modes and partition modes selected by the layered calculation modules of different layers, and select the partition mode with the smallest cost for the CTU block and the corresponding encoded information.
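A minimal sketch of this distribute, compare-within-layer, compare-across-layer structure is shown below; the layers are keyed by CU size and every cost figure is a made-up placeholder.

```python
from collections import defaultdict

# Minimal sketch: distribute CU candidates to layers by size, take the
# intra-layer minimum, then compare across layers. All costs are placeholders.

cu_candidates = [                   # (cu_size, partition_mode, prediction_mode, cost)
    (32, 'mode_a', 'inter', 540.0),
    (32, 'mode_a', 'intra', 610.0),
    (16, 'mode_b', 'inter', 130.0),
    (16, 'mode_b', 'intra', 120.0),
]

layers = defaultdict(list)
for size, part, pred, cost in cu_candidates:          # distribution step
    layers[size].append((part, pred, cost))

layer_best = {size: min(cands, key=lambda c: c[2])    # intra-layer minimum per size
              for size, cands in layers.items()}

# Cross-layer comparison: one 32x32 block competes with the four 16x16 blocks
# covering the same area (approximated here as 4x the best 16x16 cost).
cost_32 = layer_best[32][2]
cost_4x16 = 4 * layer_best[16][2]
print('split into 16x16' if cost_4x16 < cost_32 else 'keep 32x32')
```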
In the H.265 coding method and device of the technical solution above, the device comprises the following modules: a pre-processing module, a coarse selection module and an accurate comparison module, the pre-processing module being connected to the coarse selection module and the coarse selection module being connected to the accurate comparison module. The pre-processing module divides a current frame of an original video into multiple CTU blocks. The coarse selection module divides each CTU block according to multiple partition modes, each partition mode dividing a CTU block into corresponding CU blocks and dividing each of those CU blocks into one or more corresponding PU blocks; it also performs inter prediction and intra prediction for each partition mode of each CTU block and generates prediction information corresponding to each partition mode. The accurate comparison module performs a cost comparison on the prediction information corresponding to each partition mode of each CTU block, selects the partition mode with the smallest cost for each CTU block and the encoded information corresponding to that partition mode, and, according to the selected partition mode and its corresponding encoded information, generates the entropy coding information used to produce the H.265 bitstream for the current frame and the reconstruction information used to produce the reconstructed frame. By searching in a distributed manner, the invention improves search precision while better preserving the details of the reconstructed image and reducing hardware resource consumption.
It should be noted that, although the embodiments above have been described herein, they are not intended to limit the scope of patent protection of the invention. Therefore, any changes or modifications made to the embodiments described herein based on the innovative concept of the invention, and any equivalent structures or equivalent process transformations made using the contents of the description and drawings, whether the above technical solutions are applied directly or indirectly in other related technical fields, fall within the scope of patent protection of the invention.
Claims (22)
1. An H.265 encoding device, characterized in that it comprises the following modules: a pre-processing module, a coarse selection module and an accurate comparison module, the pre-processing module being connected to the coarse selection module and the coarse selection module being connected to the accurate comparison module, wherein:
the pre-processing module is configured to divide a current frame of an original video into multiple CTU blocks;
the coarse selection module is configured to divide each CTU block according to multiple partition modes, each partition mode dividing a CTU block into corresponding CU blocks and dividing each of those CU blocks into one or more corresponding PU blocks; the coarse selection module is further configured to perform inter prediction and intra prediction for each partition mode of each CTU block and to generate prediction information corresponding to each partition mode;
the accurate comparison module is configured to perform a cost comparison on the prediction information corresponding to each partition mode of each CTU block, to select the partition mode with the smallest cost for each CTU block and the encoded information corresponding to that partition mode, and, according to the selected partition mode and its corresponding encoded information, to generate the entropy coding information used to produce the H.265 bitstream for the current frame and the reconstruction information used to produce the reconstructed frame of the current frame.
2. The H.265 encoding device according to claim 1, characterized in that it further comprises an entropy coding module connected to the accurate comparison module, wherein:
the entropy coding module is configured to generate the H.265 bitstream corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the entropy coding information for the current frame that was generated from the corresponding encoded information.
3. The H.265 encoding device according to claim 2, characterized in that it comprises a post-processing module connected to the accurate comparison module, wherein:
the post-processing module is configured to generate the reconstructed frame corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the reconstruction information for the current frame that was generated from the corresponding encoded information.
4. The H.265 encoding device according to claim 3, characterized in that the post-processing module comprises a deblocking filtering module and a sample adaptive offset (SAO) module, the deblocking filtering module being connected to the SAO module, wherein:
the deblocking filtering module is configured to filter the reconstructed frame using the partition mode with the smallest cost and the corresponding encoded information provided by the accurate comparison module;
the SAO module is configured to perform the SAO calculation on the filtered reconstructed frame and to pass the calculated data to the entropy coding module.
5. The H.265 encoding device according to claim 1, characterized in that the coarse selection module comprises an inter-prediction coarse selection module and an intra-prediction coarse selection module, each connected to the pre-processing module and the accurate comparison module, wherein:
the inter-prediction coarse selection module is configured to perform inter prediction on each PU block in each partition mode, to select one or more pieces of reference information obtained from the reference frame whose cost for that PU block is below a preset cost value, and to take the motion vectors of the selected reference PU blocks as the prediction information corresponding to that partition mode;
the intra-prediction coarse selection module is configured to perform intra prediction on each PU block in each partition mode, to select one or more intra prediction directions whose cost for that PU block is below a preset cost value, and to take the selected intra prediction directions as the prediction information corresponding to that partition mode.
6. The H.265 encoding device according to claim 5, characterized in that the intra-prediction coarse selection module further comprises a reference pixel generation module, wherein:
the reference pixel generation module is configured, for each PU block in each partition mode, to generate reference pixels from the original pixels of the current frame, to predict all intra prediction directions from the reference pixels according to the rules of the H.265 standard to obtain a prediction result for each direction, to compute a distortion cost between the prediction result of each direction and the original pixels, and to select, in order of increasing cost, the one or more intra prediction directions with the smaller costs.
7. The H.265 encoding device according to claim 5, characterized in that the inter-prediction coarse selection module further comprises a coarse search module, a fine search module and a fractional pixel search module; the coarse search module is connected to the pre-processing module, the coarse search module is connected to the fine search module, and the fine search module is connected to the fractional pixel search module.
8. The H.265 encoding device according to claim 7, characterized in that:
the coarse search module is configured to select a frame from the reference list and take either its original frame or its reconstructed frame as the reference frame, to down-sample the reference frame and the current CTU block, to find the pixel position in the down-sampled reference frame with the smallest cost relative to the down-sampled CTU block, and to compute the coarse search vector of that pixel position relative to the current CTU block.
9. The H.265 encoding device according to claim 7, characterized in that:
the fine search module is configured to set, for each PU block, a fine search region in the reconstructed image of the reference frame according to the coarse search vector, and to generate within that fine search region the fine search vector with the smallest cost for that PU block; the fine search module is further configured to generate, from the motion vector information around the current CTU block, one or more predicted motion vectors serving the same purpose as the coarse search vector, to generate fine search vectors from those predicted motion vectors, and to send all generated fine search vectors to the fractional pixel search module.
10. The H.265 encoding device according to claim 9, characterized in that:
the fractional pixel search module is configured to set, for each PU block and according to each fine search vector received, a corresponding fractional pixel search region in the reference frame, and to generate within that fractional pixel search region the fractional pixel search vector with the smallest cost for that PU block.
11. The H.265 encoding device according to claim 1, characterized in that the accurate comparison module comprises a distribution module, multiple layered calculation modules and multiple hierarchical comparison modules; the distribution module is connected to the coarse selection module, and the hierarchical comparison modules are connected to the distribution module, wherein:
the distribution module is configured to distribute, according to each partition mode of each CTU block, each CU block in each partition mode and the prediction information corresponding to that CU block to different layered calculation modules;
each layered calculation module is configured to compute multiple pieces of cost information from the received prediction information corresponding to the CU blocks, to perform an intra-layer comparison, and to select the prediction mode and partition mode with the smallest cost for each CU block;
the hierarchical comparison modules are configured to compare the minimum costs corresponding to the prediction modes and partition modes selected by the layered calculation modules of different layers, and to select the partition mode with the smallest cost for the CTU block and the corresponding encoded information.
12. An H.265 coding method, characterized in that the method is applied to an H.265 encoding device, the device comprising the following modules: a pre-processing module, a coarse selection module and an accurate comparison module, the pre-processing module being connected to the coarse selection module and the coarse selection module being connected to the accurate comparison module; the method comprises the following steps:
the pre-processing module divides a current frame of an original video into multiple CTU blocks;
the coarse selection module divides each CTU block according to multiple partition modes, each partition mode dividing a CTU block into corresponding CU blocks and dividing each of those CU blocks into one or more corresponding PU blocks; the coarse selection module also performs inter prediction and intra prediction for each partition mode of each CTU block and generates prediction information corresponding to each partition mode;
the accurate comparison module performs a cost comparison on the prediction information corresponding to each partition mode of each CTU block, selects the partition mode with the smallest cost for each CTU block and the encoded information corresponding to that partition mode, and, according to the selected partition mode and its corresponding encoded information, generates the entropy coding information used to produce the H.265 bitstream for the current frame and the reconstruction information used to produce the reconstructed frame of the current frame.
13. The H.265 coding method according to claim 12, characterized in that the device further comprises an entropy coding module connected to the accurate comparison module; the method comprises the following step:
the entropy coding module generates the H.265 bitstream corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the entropy coding information for the current frame that was generated from the corresponding encoded information.
14. The H.265 coding method according to claim 13, characterized in that the device comprises a post-processing module connected to the accurate comparison module; the method comprises:
the post-processing module generates the reconstructed frame corresponding to the current frame according to the partition mode with the smallest cost for each CTU block and the reconstruction information for the current frame that was generated from the corresponding encoded information.
15. The H.265 coding method according to claim 14, characterized in that the post-processing module comprises a deblocking filtering module and a sample adaptive offset (SAO) module, the deblocking filtering module being connected to the SAO module; the method comprises:
the deblocking filtering module filters the reconstructed frame using the partition mode with the smallest cost and the corresponding encoded information provided by the accurate comparison module;
the SAO module performs the SAO calculation on the filtered reconstructed frame and passes the calculated data to the entropy coding module.
16. The H.265 coding method according to claim 12, characterized in that the coarse selection module comprises an inter-prediction coarse selection module and an intra-prediction coarse selection module, each connected to the pre-processing module and the accurate comparison module; the method comprises:
the inter-prediction coarse selection module performs inter prediction on each PU block in each partition mode, selects one or more pieces of reference information obtained from the reference frame whose cost for that PU block is below a preset cost value, and takes the motion vectors of the selected reference PU blocks as the prediction information corresponding to that partition mode;
the intra-prediction coarse selection module performs intra prediction on each PU block in each partition mode, selects one or more intra prediction directions whose cost for that PU block is below a preset cost value, and takes the selected intra prediction directions as the prediction information corresponding to that partition mode.
17. The H.265 coding method according to claim 16, characterized in that the intra-prediction coarse selection module further comprises a reference pixel generation module; the method comprises:
for each PU block in each partition mode, the reference pixel generation module generates reference pixels from the original pixels of the current frame, predicts all intra prediction directions from the reference pixels according to the rules of the H.265 standard to obtain a prediction result for each direction, computes a distortion cost between the prediction result of each direction and the original pixels, and selects, in order of increasing cost, the one or more intra prediction directions with the smaller costs.
18. The H.265 coding method according to claim 16, characterized in that the inter-prediction coarse selection module further comprises a coarse search module, a fine search module and a fractional pixel search module; the coarse search module is connected to the pre-processing module, the coarse search module is connected to the fine search module, and the fine search module is connected to the fractional pixel search module.
19. The H.265 coding method according to claim 18, characterized in that the method comprises:
the coarse search module selects a frame from the reference list and takes either its original frame or its reconstructed frame as the reference frame, down-samples the reference frame and the current CTU block, finds the pixel position in the down-sampled reference frame with the smallest cost relative to the down-sampled CTU block, and computes the coarse search vector of that pixel position relative to the current CTU block.
20. The H.265 coding method according to claim 18, characterized in that the method comprises:
the fine search module sets, for each PU block, a fine search region in the reconstructed image of the reference frame according to the coarse search vector, and generates within that fine search region the fine search vector with the smallest cost for that PU block; it also generates, from the motion vector information around the current CTU block, one or more predicted motion vectors serving the same purpose as the coarse search vector, generates fine search vectors from those predicted motion vectors, and sends all generated fine search vectors to the fractional pixel search module.
21. The H.265 coding method according to claim 20, characterized in that the method comprises:
the fractional pixel search module sets, for each PU block and according to each fine search vector received, a corresponding fractional pixel search region in the reference frame, and generates within that fractional pixel search region the fractional pixel search vector with the smallest cost for that PU block.
22. The H.265 coding method according to claim 12, characterized in that the accurate comparison module comprises a distribution module, multiple layered calculation modules and multiple hierarchical comparison modules; the distribution module is connected to the coarse selection module, and the hierarchical comparison modules are connected to the distribution module; the method comprises:
the distribution module distributes, according to each partition mode of each CTU block, each CU block in each partition mode and the prediction information corresponding to that CU block to different layered calculation modules;
each layered calculation module computes multiple pieces of cost information from the received prediction information corresponding to the CU blocks, performs an intra-layer comparison, and selects the prediction mode and partition mode with the smallest cost for each CU block;
the hierarchical comparison modules compare the minimum costs corresponding to the prediction modes and partition modes selected by the layered calculation modules of different layers, and select the partition mode with the smallest cost for the CTU block and the corresponding encoded information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810320172.5A CN110365988B (en) | 2018-04-11 | 2018-04-11 | H.265 coding method and device |
US17/603,002 US11956452B2 (en) | 2018-04-11 | 2020-04-10 | System and method for H.265 encoding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110365988A true CN110365988A (en) | 2019-10-22 |
CN110365988B CN110365988B (en) | 2022-03-25 |
Family
ID=68214123
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107888918A (en) * | 2012-01-17 | 2018-04-06 | 杰尼普Pte有限公司 | A kind of method that post-processing is carried out to reconstruction image |
CN103888762A (en) * | 2014-02-24 | 2014-06-25 | 西南交通大学 | Video coding framework based on HEVC standard |
CN106454349A (en) * | 2016-10-18 | 2017-02-22 | 哈尔滨工业大学 | Motion estimation block matching method based on H.265 video coding |
Non-Patent Citations (1)
Title |
---|
He Yankun: "Research on a consistency-based fast PU mode decision algorithm for HEVC", China Master's Theses Full-text Database (Electronic Journal) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020207451A1 (en) * | 2019-04-11 | 2020-10-15 | 福州瑞芯微电子股份有限公司 | H.265 encoding method and apparatus |
CN112204974A (en) * | 2019-10-31 | 2021-01-08 | 深圳市大疆创新科技有限公司 | Image prediction and video coding method, device, movable platform and storage medium |
WO2021081905A1 (en) * | 2019-10-31 | 2021-05-06 | 深圳市大疆创新科技有限公司 | Image prediction and video coding methods, apparatus, mobile platform, and storage medium |
CN114205614A (en) * | 2021-12-16 | 2022-03-18 | 福州大学 | Intra-frame prediction mode parallel hardware method based on HEVC standard |
CN114205614B (en) * | 2021-12-16 | 2023-08-04 | 福州大学 | HEVC standard-based intra-frame prediction mode parallel hardware method |
CN115529459A (en) * | 2022-10-10 | 2022-12-27 | 格兰菲智能科技有限公司 | Central point searching method and device, computer equipment and storage medium |
CN115529459B (en) * | 2022-10-10 | 2024-02-02 | 格兰菲智能科技有限公司 | Center point searching method, center point searching device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information | Address after: 350003 building 18, No.89, software Avenue, Gulou District, Fuzhou City, Fujian Province; Applicant after: Ruixin Microelectronics Co.,Ltd.; Address before: 350003 building 18, No.89, software Avenue, Gulou District, Fuzhou City, Fujian Province; Applicant before: FUZHOU ROCKCHIP ELECTRONICS Co.,Ltd.
| GR01 | Patent grant |