CN101325707B - System for encoding and decoding texture self-adaption video - Google Patents

System for encoding and decoding texture self-adaption video

Info

Publication number
CN101325707B
CN101325707B (application CN200710069093A)
Authority
CN
China
Prior art keywords: texture, adaption, video, decoding, self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200710069093
Other languages
Chinese (zh)
Other versions
CN101325707A (en)
Inventor
虞露 (Yu Lu)
武晓阳 (Wu Xiaoyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN200710069093A
Publication of CN101325707A
Application granted
Publication of CN101325707B
Status: Expired - Fee Related

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a texture-adaptive video coding system, a texture-adaptive video decoding system, and a texture-adaptive video coding and decoding system. The texture-adaptive video coding system comprises a video encoder and an encoder-side texture analyzer. The texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer. The texture-adaptive video coding and decoding system comprises the coding system and the decoding system. By bringing texture feature information of the video image into the video coding and decoding system, the invention improves the compression efficiency and perceived quality of video coding.

Description

System for encoding and decoding texture self-adaption video
Technical field
The present invention relates to the fields of signal processing and communications, and in particular to a texture-adaptive video coding system, a texture-adaptive video decoding system, and a texture-adaptive video coding and decoding system.
Background art
Current video coding standards, such as H.261, H.263, and H.26L formulated by the ITU, MPEG-1, MPEG-2, and MPEG-4 established by the MPEG group of ISO, H.264/MPEG-AVC (H.264 for short) formulated by the JVT, and Part 2 of AVS, the video coding standard with independent Chinese intellectual property rights, are all based on the conventional hybrid video coding framework.
A primary goal of video coding is to compress the video signal, reducing its data volume and thereby saving storage space and transmission bandwidth. On the one hand, the raw video signal carries a very large amount of data, which is why compression is necessary; on the other hand, the raw video signal contains a large amount of redundant information, which is what makes compression possible. This redundancy can be divided into spatial redundancy, temporal redundancy, data redundancy, and visual redundancy. The first three consider only redundancy in the statistical sense between pixels and are collectively called statistical redundancy; visual redundancy instead emphasizes the characteristics of the human visual system. To reduce the data volume of the video signal, video coding must reduce the various kinds of redundancy present in it. The conventional hybrid video coding framework combines predictive coding, transform coding, and entropy coding, and concentrates on reducing the statistical redundancy of the video signal. It has the following main features:
(1) predictive coding is used to reduce temporal redundancy and spatial redundancy;
(2) transform coding is used to further reduce spatial redundancy;
(3) entropy coding is used to reduce data redundancy.
Predictive coding comprises intra-frame predictive coding and inter-frame predictive coding. A video frame compressed with intra-frame prediction is called an intra-coded frame (I frame). An intra-coded frame is encoded as follows: first, the frame is divided into coding blocks (one form of coding unit); each block is intra-predicted to obtain the intra-prediction residual; the residual is then two-dimensionally transformed; the transform coefficients are quantized in the transform domain; the two-dimensional signal is converted to a one-dimensional signal by scanning; and finally entropy coding is applied. A video frame compressed with inter-frame prediction is called an inter-coded frame (P frame or B frame). An inter-coded frame is encoded as follows: first, the frame is divided into coding blocks; motion estimation obtains a motion vector and a reference block (one form of reference unit) for each coding block; motion compensation then yields the inter-prediction residual; the residual is two-dimensionally transformed; the transform coefficients are quantized in the transform domain; the two-dimensional signal is converted to a one-dimensional signal by scanning; and finally entropy coding is applied. Relative to the raw video signal, the residual data, that is, the residual signal, has reduced spatial and temporal redundancy: if spatial and temporal redundancy are expressed mathematically as correlation, then the spatial and temporal correlation of the residual signal are both smaller than those of the original video. Two-dimensional transform coding of the residual signal further reduces spatial correlation, and quantizing the transform coefficients together with entropy coding reduces data redundancy. Evidently, to keep improving the compression efficiency of video coding one needs more accurate predictive coding, to further reduce the spatial and temporal correlation of the prediction residual; more effective transform coding, to further reduce spatial correlation; and, after prediction and transform, correspondingly designed scanning, quantization, and entropy coding techniques.
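As a concrete illustration, the following is a minimal runnable sketch, not the patent's code, of forming the two kinds of prediction residual; the DC intra predictor and the sample values are illustrative assumptions, and the transform, quantization, and scan stages that consume the residual are sketched under points (2)-(4) below.

```python
import numpy as np

# Minimal sketch of the two prediction paths described above. The DC intra
# predictor is a toy stand-in for the directional intra modes of real codecs.

def intra_residual(block):
    pred = np.full_like(block, int(round(block.mean())))  # toy DC intra prediction
    return block - pred                                   # intra-prediction residual

def inter_residual(block, ref_block):
    # ref_block is the motion-compensated reference unit found by motion
    # estimation (a block-matching sketch appears under F-4 below)
    return block - ref_block                              # inter-prediction residual

blk = np.array([[52, 55, 61, 59],
                [54, 57, 60, 58],
                [53, 56, 62, 60],
                [55, 58, 61, 59]])
print(intra_residual(blk))  # small values: spatial redundancy removed
```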
Although the video coding standards above, based on the conventional hybrid video coding framework, have been very successful, the framework itself presents a bottleneck to further improving compression efficiency. Studies show that a video signal is not a stationary source; that is, the characteristics of individual coding units differ. Yet the function modules of the conventional hybrid framework are designed on the assumption of a stationary video signal: the prediction module, transform module, quantization module, scan module, and others all use a fixed operating mode when encoding a coding unit:
(1) When inter prediction compensates to sub-pixel accuracy, interpolation is needed to construct the sub-pixel points of the reference image. Existing video standards based on the conventional hybrid framework all construct sub-pixels with separable one-dimensional horizontal and vertical interpolation filters whose tap counts and coefficients are fixed; the interpolation filter is therefore unrelated to the content of the interpolated image.
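As an illustration of such a fixed filter, here is a sketch applying the six-tap half-sample kernel (1, -5, 20, 20, -5, 1)/32 that H.264 uses for luma; the point is that the taps never depend on the image being interpolated.

```python
import numpy as np

# Fixed half-pel interpolation with the 6-tap H.264 luma kernel; the image
# content plays no role in choosing the taps, which is the limitation
# described above.
KERNEL = np.array([1, -5, 20, 20, -5, 1])

def half_pel(row):
    # row: 1-D array of integer pixels; returns the half-sample positions
    acc = np.convolve(row, KERNEL, mode='valid')   # kernel is symmetric
    return np.clip((acc + 16) >> 5, 0, 255)        # round, divide by 32, clamp

print(half_pel(np.array([10, 20, 30, 40, 50, 60, 70])))  # -> [35 45]
```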
(2) The transform module widely uses the discrete cosine transform (DCT) and its integer approximation, the integer cosine transform (ICT). Transform coding is intended to reduce spatial correlation by concentrating the energy of the coding unit into a few transform coefficients; in the DCT and ICT, the energy concentrates in the low frequencies. The transform matrix is fixed, so the transform is unrelated to the content of the transformed image.
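A small sketch of that energy compaction, using the H.264-style 4 × 4 integer transform core as an example of a fixed matrix; the smooth input block is an illustrative assumption.

```python
import numpy as np

# Fixed 4x4 integer transform core (H.264-style). Applied to a smooth block
# it concentrates the energy into the low-frequency (top-left) coefficients,
# and the matrix itself never changes with image content.
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

block = np.add.outer(np.arange(4), np.arange(4)) * 10  # smooth ramp block
coeffs = T @ block @ T.T
print(coeffs)  # most energy lies in the first row and column
```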
(3) The quantization module is a lossy, irreversible coding stage applied to the transform coefficients. The quantization techniques adopted in current video standards either apply scalar quantization with the same step size to every transform coefficient, or weight the coefficients through a quantization matrix so that high-frequency coefficients are quantized more coarsely (exploiting the human eye's insensitivity to high-frequency signals). Either way, the quantization process is unrelated to the content of the quantized image.
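The two fixed quantization styles just mentioned, sketched below: uniform scalar quantization, and weighting through a quantization matrix so that high-frequency coefficients are quantized more coarsely. The 4 × 4 weight matrix here is an illustrative assumption, not taken from any standard.

```python
import numpy as np

# Illustrative frequency-weighting matrix: larger entries toward the
# bottom-right quantize high frequencies more coarsely.
W = np.array([[16, 18, 20, 24],
              [18, 20, 24, 28],
              [20, 24, 28, 32],
              [24, 28, 32, 40]])

def quantize_uniform(coeffs, qstep):
    return np.rint(coeffs / qstep).astype(int)          # same step everywhere

def quantize_weighted(coeffs, qstep):
    return np.rint(coeffs / (qstep * W / 16.0)).astype(int)

c = np.array([[400, 60, 8, 2],
              [50, 12, 3, 1],
              [6,  2,  1, 0],
              [2,  1,  0, 0]])
print(quantize_uniform(c, 8))
print(quantize_weighted(c, 8))   # high-frequency levels shrink faster
```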
(4) The scan module converts a two-dimensional signal into a one-dimensional one; concretely, it converts the quantized two-dimensional transform coefficients into (run, level) signals suited to entropy coding. Current video coding standards adopt a zigzag scan for frame coding and a vertically biased alternate scan for field coding, and the order of both scans is fixed, proceeding essentially from the top-left to the bottom-right of the block's coefficients. The scan order is thus unrelated to the content of the scanned image.
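A sketch of the fixed zigzag scan: the quantized coefficients are read out roughly from low to high frequency into a one-dimensional sequence, from which (run, level) pairs are formed for entropy coding.

```python
import numpy as np

# Fixed zigzag scan order for an n x n block: traverse the anti-diagonals,
# alternating direction, so low-frequency coefficients come out first.
def zigzag_order(n):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    return [block[r, c] for r, c in zigzag_order(block.shape[0])]

levels = np.array([[9, 2, 1, 0],
                   [3, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0]])
print(zigzag_scan(levels))  # [9, 2, 3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```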
In recent years, to further improve video coding efficiency, a number of new coding techniques have emerged whose common trait is "adaptivity": they can select a different coding mode for each frame or each coding block (that is, certain function modules select an adapted operating mode). Some of these techniques adapt through rate-distortion optimization (RDO), selecting the method that is optimal in the RD sense from several candidates by that high-complexity approach; others use a statistics-based "two-pass" idea, deriving an adapted operating mode from the statistics of the first pass and then encoding a second time with it.
There is also an adaptive transform technique based on neural networks: an initial transform pattern is set, and as coding proceeds, new transform patterns are progressively trained by the neural network and applied to the transform coding of subsequent coding blocks.
The "adaptivity" of these methods in fact exploits the idea that the local features of coding units differ, but their notion of a local feature is vague and generic.
Building on the foregoing analysis and research, and in order to break through the bottleneck of the conventional hybrid video coding framework, the present invention proposes a texture-adaptive video coding and decoding system, comprising a texture-adaptive video coding system and a texture-adaptive video decoding system. The system brings texture feature information of the video image (one kind of local image feature) into the video coding and decoding system, so as to improve the compression efficiency and subjective quality of video coding.
Summary of the invention
The object of the invention is to address the bottleneck of the conventional hybrid video coding framework by proposing a texture-adaptive video coding system, a texture-adaptive video decoding system, and a texture-adaptive video coding and decoding system. The texture-adaptive video coding and decoding system comprises the texture-adaptive video coding system and the texture-adaptive video decoding system, and brings texture feature information of the video image into the video coding and decoding system to improve the compression efficiency and subjective quality of video coding.
The texture-adaptive video coding system comprises a video encoder and an encoder-side texture analyzer. The video encoder comprises at least one coding function module, to accomplish coding compression; the encoder-side texture analyzer performs texture analysis, to extract coding-unit texture feature information. At least one coding function module in the video encoder has its operating mode controlled by the coding-unit texture feature information extracted by the encoder-side texture analyzer. The input signal of the encoder-side texture analyzer comprises one or more of the following: raw image data, reference image data, and coding-function-module output data. The coding-unit texture feature information extracted by the encoder-side texture analyzer comprises one or more of the following: texture direction information, texture strength information, texture direction-strength information, and so on. That a coding function module's operating mode is controlled by the extracted texture feature information means that the module determines, according to the coding-unit texture feature information, an operating mode adapted to that information; different coding function modules may be controlled by the same or different kinds of coding-unit texture feature information.
The texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer. The video decoder comprises at least one decoding function module, to accomplish decoding and reconstruction; the decoder-side texture analyzer performs texture analysis, to extract decoding-unit texture feature information. At least one decoding function module in the video decoder has its operating mode controlled by the decoding-unit texture feature information extracted by the decoder-side texture analyzer. The input signal of the decoder-side texture analyzer comprises one or more of the following: reference image data and decoding-function-module output data. The decoding-unit texture feature information extracted by the decoder-side texture analyzer comprises one or more of the following: texture direction information, texture strength information, texture direction-strength information, and so on. That a decoding function module's operating mode is controlled by the extracted texture feature information means that the module determines, according to the decoding-unit texture feature information, an operating mode adapted to that information; different decoding function modules may be controlled by the same or different kinds of decoding-unit texture feature information.
The texture-adaptive video coding and decoding system comprises the texture-adaptive video coding system and the texture-adaptive video decoding system.
Description of drawings
Fig. 1 is a schematic diagram of the texture-adaptive video coding system;
Fig. 2 is a schematic diagram of the texture-adaptive video decoding system;
Fig. 3 is a schematic diagram of the texture-adaptive video coding and decoding system;
Fig. 4 is a schematic diagram of the raw data of an n × m coding block;
Fig. 5 is a schematic diagram of the data of an n × m reference image block;
Fig. 6 is a schematic diagram of an intra prediction mode as input to the encoder-side texture analyzer;
Fig. 7 is a schematic diagram of the Sobel operator;
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module;
Fig. 9 is a schematic diagram of integer-pixel and sub-pixel positions;
Fig. 10 is a schematic diagram of the texture-adaptive scan module;
Fig. 11 is a schematic diagram of vertical-priority scan orders;
Fig. 12 is a schematic diagram of horizontal-priority scan orders;
Fig. 13 is a schematic diagram of embodiment 1: a texture-adaptive video coding system;
Fig. 14 is a schematic diagram of embodiment 2: a texture-adaptive video decoding system;
Fig. 15 is a schematic diagram of embodiment 4: a texture-adaptive video coding system;
Fig. 16 is a schematic diagram of embodiment 5: a texture-adaptive video decoding system;
Fig. 17 is a schematic diagram of embodiment 7: a texture-adaptive video coding system;
Fig. 18 is a schematic diagram of embodiment 8: a texture-adaptive video decoding system;
Fig. 19 is a schematic diagram of embodiment 10: a texture-adaptive video coding system;
Fig. 20 is a schematic diagram of embodiment 11: a texture-adaptive video decoding system;
Embodiments
The present invention relates to a texture-adaptive video coding system (shown in Fig. 1), a texture-adaptive video decoding system (shown in Fig. 2), and a texture-adaptive video coding and decoding system (shown in Fig. 3). Note that the "transmission channel" shown in Fig. 3 is not included in the texture-adaptive video coding and decoding system.
The texture-adaptive video coding and decoding system covers a very wide range; the terms involved in the present invention are first explained below.
A. Examples of the coding unit
The coding unit is the unit of texture adaptation: a set of video pixels. Coding units take many forms. In early differential pulse-code modulation systems the coding unit was an individual pixel; in many current video coding standards it is a rectangular block of pixels, including square blocks; the latest literature also mentions coding units of other shapes, such as triangles and trapezoids; a coding unit can also be a slice, a frame, or a field; and a coding unit may even consist of non-adjacent pixels.
The coding block is one instance of the coding unit: a rectangular block of pixels of size n × m, meaning the block is n pixels high and m pixels wide, for example a 16 × 16, 16 × 8, 8 × 16, 8 × 8, 8 × 4, 4 × 8, or 4 × 4 coding block. The implementation examples below use coding blocks, and where not otherwise specified "coding block" stands in for "coding unit"; the methods exemplified in the embodiments apply equally to coding units of other forms.
B. Examples of the decoding unit
The decoding unit and the coding unit are the same thing named from different positions in the system: the coding unit is a notion of the texture-adaptive video coding system, and its counterpart in the texture-adaptive video decoding system is called the decoding unit. The examples and explanations given for coding units in A therefore apply equally to decoding units.
C. Examples of video coding function modules
The coding function modules in a video encoder comprise one or more of the following: prediction module, interpolation module, transform module, inverse transform module, quantization module, inverse quantization module, scan module, inverse scan module, deblocking filter module, entropy coding module, and so on. These coding function modules can each be subdivided into several modules, or several can be merged into one: for example, the interpolation module can be divided into a half-pixel interpolation module and a quarter-pixel interpolation module, and the transform module and quantization module can be merged into a transform-quantization module. A video encoder can also divide its functions in other ways, forming a new set of coding function modules.
The coding function modules in the video encoder are linked in a certain way to accomplish the function of coding compression.
A given coding function module can have many operating modes; for the interpolation module, for example, adopting different filter tap counts and filter coefficients constitutes different operating modes.
D. Examples of video decoding function modules
The decoding function modules in a video decoder comprise one or more of the following: prediction module, interpolation module, inverse transform module, inverse quantization module, inverse scan module, deblocking filter module, entropy decoding module, and so on. These decoding function modules can each be subdivided into several modules, or several can be merged into one: for example, the interpolation module can be divided into a half-pixel interpolation module and a quarter-pixel interpolation module, and the inverse transform module and inverse quantization module can be merged into an inverse transform-quantization module. A video decoder can also divide its functions in other ways, forming a new set of decoding function modules.
The decoding function modules in the video decoder are linked in a certain way to accomplish the function of decoding and reconstruction.
A given decoding function module can have many operating modes; for the interpolation module, for example, adopting different filter tap counts and filter coefficients constitutes different operating modes.
E. Examples of texture feature information
Texture feature information can be expressed as texture direction information, texture strength information, or texture direction-strength information; other expressions, such as texture structure, are also possible.
E-1. Texture direction information
Subjectively, texture direction information is the orientation of the texture in the image, generally expressed as the texture's inclination angle. The inclination angle is a continuous quantity and, in use, can be quantized into a discrete one. Different precisions can be chosen for the quantization, dividing textures into different numbers of direction classes; inclination angles falling within the same quantization region are assigned to the same texture direction class. For example, with a quantization precision of four classes, texture direction information can be divided into horizontal texture, vertical texture, left-diagonal texture, and right-diagonal texture. Some coding units have no obvious texture direction; equivalently, the texture strengths corresponding to all directions are comparable. Such units are called flat regions, and the flat region is a special kind of texture direction information.
The direction of an edge in the image is one instance of texture direction information.
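A sketch of the four-class quantization just described; the bin boundaries of 22.5° around each canonical direction are an assumption made for illustration, since the invention does not fix them.

```python
# Quantize a continuous texture inclination angle (in degrees) into the four
# direction classes of the example above. Bin edges are assumed, not
# prescribed by the invention; angles are taken modulo 180 degrees because a
# texture direction has no sign.
def direction_class(angle_deg):
    a = angle_deg % 180.0
    if a < 22.5 or a >= 157.5:
        return 'horizontal'
    if a < 67.5:
        return 'right-diagonal'
    if a < 112.5:
        return 'vertical'
    return 'left-diagonal'

print(direction_class(5), direction_class(45), direction_class(95), direction_class(135))
```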
E-2. Texture strength information
Subjectively, texture strength information is how pronounced the texture in the image is; it can be expressed as gradient strength or energy strength, or represented by other methods.
E-3. Texture direction-strength information
Texture direction-strength information divides texture directions into classes as in E-1, each class of texture direction carrying a corresponding strength; it is the texture strength information associated with each texture direction.
F. Examples of the input signal of the encoder-side texture analyzer
F-1. Raw image data
Raw image data is data composed of, or constructed from, the original pixel values of the original image. Construction can take many forms, for example interpolation, filtering, or pixel repetition.
F-2. Reference image data
Reference image data is data composed of, or constructed from, the pixel values of the decoded and reconstructed image. Construction can take many forms, for example interpolation, filtering, or pixel repetition.
F-3. Coding-function-module output data
Data output by a coding function module that corresponds to the current coding unit.
For example, if the function module is the intra prediction module, the output data is the intra prediction mode of the current coding unit; Fig. 6 illustrates this example.
Data output by a coding function module that corresponds to one or more already-encoded coding units. For example, if the function module is the intra prediction module, the output data may be the intra prediction modes of the coding units above and to the left of the current one. For the encoder-side texture analyzer to analyze the texture features of the current coding unit, this information should be buffered for a certain time before being input to the analyzer.
Coding-function-module output data is not limited to the intra prediction mode output by the intra prediction module; it can also be the transform coefficients output by the transform module, the scan order output by the scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that coding-function-module output data means some or all of the output data of a coding function module; the intra prediction mode here, for example, is part of the output data of the intra prediction module.
F-4. Input signal comprising several kinds of data
The input signal comprises several of the following kinds of data: raw image data, reference image data, and coding-function-module output data.
For example, the raw image data of the coding unit and the coding unit's inter-frame matching unit are both fed into the encoder-side texture analyzer as the input signal.
Fig. 4 shows an example of the raw image data of a coding unit: the coding unit is a coding block, an n × m block P, where P_ji denotes the original pixel value at position (j, i).
The matching unit is the reference image data most similar to the coding unit. When the pixels forming the matching unit are not in the same frame as the current coding unit, it is called an inter-frame matching unit; when they are in the same frame, an intra-frame matching unit. Fig. 5 shows an example of an inter-frame matching unit: an n × m block R, where R_ji denotes the pixel value at position (j, i).
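As a sketch of how an inter-frame matching unit might be found, the following performs a full-search block match minimizing the sum of absolute differences (SAD) over a reference-frame window; the invention does not prescribe any particular search method.

```python
import numpy as np

# Full-search block matching: find the n x m region R of the reference frame
# most similar to block P (minimum SAD) within +/- search pixels of the
# block's own position (y0, x0).
def find_matching_unit(P, ref, y0, x0, search=8):
    n, m = P.shape
    best_sad, best_pos = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - m:
                sad = int(np.abs(ref[y:y+n, x:x+m].astype(int)
                                 - P.astype(int)).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_pos = sad, (y, x)
    y, x = best_pos
    return ref[y:y+n, x:x+m], (y - y0, x - x0)  # matching unit R, motion vector

ref = np.arange(64).reshape(8, 8)
R, mv = find_matching_unit(ref[2:6, 2:6], ref, 2, 2, search=2)
print(mv)  # (0, 0): the block matches itself exactly
```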
G. Examples of the input signal of the decoder-side texture analyzer
G-1. Reference image data
When the input signal of the decoder-side texture analyzer is reference image data, it is as described in F-2.
G-2. Decoding-function-module output data
Data output by a decoding function module that corresponds to the current decoding unit.
For example: first, after the entropy decoding module parses the bitstream, it obtains the texture feature information of the current decoding unit and outputs it to the decoder-side texture analyzer; second, if the function module is the intra prediction module, the output data is the intra prediction mode of the current decoding unit.
Data output by a decoding function module that corresponds to one or more already-decoded decoding units.
For example, if the function module is the intra prediction module, the output data may be the intra prediction modes of the decoding units above and to the left of the current one. For the decoder-side texture analyzer to analyze the texture features of the current decoding unit, this information should be buffered for a certain time before being input to the analyzer.
Decoding-function-module output data is not limited to these examples; it can also be the transform coefficients output by the transform module, the scan order output by the scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that decoding-function-module output data means some or all of the output data of a decoding function module; the intra prediction mode in the example here is part of the output data of the entropy decoding module.
G-3. Input signal comprising several kinds of data
The input signal comprises several of the following kinds of data: reference image data and decoding-function-module output data.
For example, the inter-frame matching unit of the decoding unit and the inter prediction modes of the decoding units above and to the left of the current one are all fed into the decoder-side texture analyzer as the input signal.
H. Examples of the encoder-side texture analyzer extracting coding-unit texture feature information
H-1. Input signal: raw image data
The input signal of the encoder-side texture analyzer is as described in F-1.
Take the raw image data of the coding unit as the input signal, with the coding unit being a coding block: Fig. 4 shows an n × m coding block P, where P_ji denotes the raw value of the pixel at position (j, i). The texture analyzer extracts the texture feature information of P, namely its texture direction information and texture strength information, with the Sobel operator. Fig. 7 shows how the Sobel operator obtains the x-direction and y-direction gradients; from it, the x- and y-direction gradients of P_ji can be computed.
The x-direction gradient is
hx(P_ji) = P_{j-1,i-1} + 2×P_{j-1,i} + P_{j-1,i+1} - P_{j+1,i-1} - 2×P_{j+1,i} - P_{j+1,i+1}
and the y-direction gradient is
hy(P_ji) = P_{j-1,i+1} + 2×P_{j,i+1} + P_{j+1,i+1} - P_{j-1,i-1} - 2×P_{j,i-1} - P_{j+1,i-1}
The gradient direction of P_ji is then
Dir(P_ji) = arctan(hy(P_ji) / hx(P_ji)), where arctan is the arctangent function,
and its gradient strength is
Mag(P_ji) = sqrt(hx(P_ji)^2 + hy(P_ji)^2), where sqrt is the square-root function and ^2 denotes squaring.
Dir(P_ji) is quantized into classes of the chosen precision, each class corresponding to one kind of texture direction information, for example four quantization classes: horizontal, vertical, left-diagonal, and right-diagonal.
To determine the texture direction information and texture strength information of P, compute the Dir and Mag values of the (n-2) × (m-2) interior pixels P_{1,1} through P_{n-2,m-2} in turn, classify these pixels by the quantization class of their Dir values, and sum the Mag values of the pixels within each class; each class's Mag sum is a texture direction-strength of P. The dominant direction-strength is the texture strength information of P, and its corresponding direction is taken as P's texture direction information; if no class clearly dominates, P can be considered a flat region.
The texture analyzer can also use the raw image data with other operators or methods to extract texture feature information whose expression is the same as, or different from, that in this example.
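A runnable sketch of this H-1 analyzer is given below. The Sobel gradients follow the hx and hy formulas above; the four-class angle quantization and the dominance test that declares a block flat are assumptions made for illustration.

```python
import math
import numpy as np

def quantize_dir(angle_deg):              # assumed bin edges, as in the E-1 sketch
    a = angle_deg % 180.0
    if a < 22.5 or a >= 157.5: return 'horizontal'
    if a < 67.5:  return 'right-diagonal'
    if a < 112.5: return 'vertical'
    return 'left-diagonal'

def analyze_texture(P, dominance=1.5):    # dominance ratio is an assumption
    # Sobel gradients over the interior pixels of P; per-class Mag sums give
    # the texture direction-strengths, and the winning class gives the
    # texture direction and strength of P.
    P = P.astype(float)
    n, m = P.shape
    strength = {'horizontal': 0.0, 'vertical': 0.0,
                'left-diagonal': 0.0, 'right-diagonal': 0.0}
    for j in range(1, n - 1):
        for i in range(1, m - 1):
            hx = (P[j-1, i-1] + 2*P[j-1, i] + P[j-1, i+1]
                  - P[j+1, i-1] - 2*P[j+1, i] - P[j+1, i+1])
            hy = (P[j-1, i+1] + 2*P[j, i+1] + P[j+1, i+1]
                  - P[j-1, i-1] - 2*P[j, i-1] - P[j+1, i-1])
            mag = math.hypot(hx, hy)                      # Mag(P_ji)
            if mag > 0.0:
                ang = math.degrees(math.atan2(hy, hx))    # Dir(P_ji)
                strength[quantize_dir(ang)] += mag
    best = max(strength, key=strength.get)
    runner_up = max(v for k, v in strength.items() if k != best)
    if strength[best] < dominance * max(runner_up, 1e-9):
        return 'flat', strength           # no prevailing direction: flat region
    return best, strength

# A block of horizontal stripes (values change only from row to row): the
# analyzer reports horizontal texture.
P = np.tile(np.arange(0, 80, 10).reshape(8, 1), (1, 8))
print(analyze_texture(P))
```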
H-2. Input signal: reference image data
The input signal of the encoder-side texture analyzer is as described in F-2.
Take the inter-frame matching unit as the reference image data, with the coding unit being a coding block and the inter-frame matching unit an inter-frame matching block, as in Fig. 5. The texture feature information can be extracted by the method exemplified in H-1.
H-3. Input signal: coding-function-module output data
The input signal of the encoder-side texture analyzer is as described in F-3.
Example one: the input signal is the intra prediction mode of the current coding unit, as in F-3. Intra prediction modes are directional, for example the horizontal prediction mode, vertical prediction mode, left-diagonal prediction mode, right-diagonal prediction mode, and DC prediction mode. Here the encoder-side texture analyzer determines the coding block's texture direction information from the prediction mode: if the prediction mode is horizontal, the texture direction is horizontal texture; if vertical, vertical texture; if left-diagonal, left-diagonal texture; if right-diagonal, right-diagonal texture; and if the prediction mode is DC, the texture feature information is a flat region with no obvious texture.
Example two: the input signal is the inter prediction mode information of the coding units above and to the left of the current one, output by the inter prediction module; the inter prediction mode here means the block size used for inter prediction. For example, when the coding blocks above and to the left both use the 16 × 8 inter prediction mode, the coding block is determined to have the horizontal texture direction; when both use 8 × 16, the vertical texture direction; in other cases the coding block is determined to be a flat region.
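A sketch of these two mappings follows; the mode labels and block-size strings are illustrative placeholders rather than any standard's actual mode indices.

```python
# H-3 sketch, example one: intra prediction mode -> texture direction.
INTRA_MODE_TO_TEXTURE = {
    'horizontal':     'horizontal',   # horizontal prediction -> horizontal texture
    'vertical':       'vertical',
    'left-diagonal':  'left-diagonal',
    'right-diagonal': 'right-diagonal',
    'DC':             'flat',         # DC prediction -> flat region
}

# H-3 sketch, example two: inter prediction block sizes of the neighbours
# above and to the left -> texture direction of the current block.
def texture_from_inter_modes(above_size, left_size):
    if above_size == left_size == '16x8':
        return 'horizontal'
    if above_size == left_size == '8x16':
        return 'vertical'
    return 'flat'

print(INTRA_MODE_TO_TEXTURE['DC'], texture_from_inter_modes('16x8', '16x8'))
```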
H-4. Input signal: combined data
The input signal of the encoder-side texture analyzer is as described in F-4.
Here, take the coding unit's raw data as the raw image data and the inter-frame matching unit as the reference image data, with both signals fed into the encoder-side texture analyzer and the coding unit being a coding block. The encoder-side texture analyzer first obtains the differential signal between them, that is, the difference between the coding block and the inter-frame matching block, and then processes the differential signal by the method exemplified in H-1 to obtain the coding-unit texture feature information.
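Continuing the H-1 sketch above, H-4 then amounts to running the same analyzer on the difference block:

```python
# H-4 sketch: analyze the texture of the differential signal, i.e. the
# difference between coding block P and its inter-frame matching block R,
# reusing analyze_texture() from the H-1 sketch above.
def analyze_residual_texture(P, R):
    D = P.astype(int) - R.astype(int)    # differential signal
    return analyze_texture(D)
```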
I. Examples of the decoder-side texture analyzer extracting texture feature information
I-1. Input signal: reference image data
When the input signal of the decoder-side texture analyzer is reference image data, as described in G-1, the decoding-unit texture feature information can be extracted by the method exemplified in H-2.
I-2. Input signal: decoding-function-module output data
Example one: the bitstream contains the decoding unit's texture feature information. The input signal of the decoder-side texture analyzer is the texture feature information of the decoding block, or a coded form of it, obtained by the entropy decoding module through bitstream parsing; this information is input to the decoder-side texture analyzer, which uses it directly, or decodes it, to obtain the texture feature information of the decoding block.
Example two: the input signal of the decoder-side texture analyzer is the intra prediction mode of the current decoding block output by the intra prediction module; the decoder-side texture analyzer determines the texture feature information of the decoding block by the method exemplified in H-3.
Example three: the input signal is the inter prediction mode information of the decoding units above and to the left of the current one, output by the inter prediction module; the decoder-side texture analyzer determines the texture feature information of the decoding block by the method exemplified in H-3.
J. Examples of coding and decoding function module operating modes adapted to texture features
J-1. Texture-adaptive interpolation module
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module. The encoder-side texture analyzer extracts the texture feature information of the coding block; this information controls the texture-adaptive interpolation module, making it select an adapted interpolation method, that is, an operating mode, for constructing the sub-pixel points. In Fig. 8 the module offers N classes of texture-direction interpolation, each class corresponding to one operating mode. The texture feature information extracted by the encoder-side analyzer includes texture direction information; after obtaining the texture direction of the coding block, the module selects the operating mode matching that direction and performs the sub-pixel interpolation. Fig. 9 shows the sub-pixel points to be interpolated: the capital letters A-P mark integer-pixel positions, and the lowercase letters a-o mark the sub-pixel positions associated with integer pixel A. Sub-pixel points divide into half-pixels and quarter-pixels: b, h, and j are half-pixels, and the other lowercase letters are quarter-pixels. The tables below list the interpolation operating mode corresponding to each texture direction of the coding block. Of course, interpolation filter design is not confined to this method; other designs are possible.
Half-pixel positions b, h, j:

Horizontal texture:     b' = -I + 5A + 5B - J, b = Clip((b'+4)>>3);  h' = A + C, h = Clip((h'+1)>>1);  j' = -bb + 5h + 5cc - dd, j = Clip((j'+4)>>3)
Vertical texture:       b' = A + B, b = (b'+1)>>1;  h' = -F + 5A + 5C - N, h = Clip((h'+4)>>3);  j' = -aa + 5b + 5cc - ff, j = Clip((j'+4)>>3)
Left-diagonal texture:  b' = 7A + 7B + C + G, b = Clip((b'+8)>>4);  h' = 7A + 7C + B + K, h = Clip((h'+8)>>4);  j' = A + 7B + 7C + D, j = Clip((j'+8)>>4)
Right-diagonal texture: b' = 7A + 7B + F + D, b = Clip((b'+8)>>4);  h' = 7A + 7C + I + D, h = Clip((h'+8)>>4);  j' = 7A + B + C + 7D, j = Clip((j'+8)>>4)

Quarter-pixel positions a, c, d, f, i, k, l, n, identical for all four texture directions:

a' = A + b, a = Clip((a'+1)>>1);  c' = B + b, c = Clip((c'+1)>>1);  d' = A + h, d = Clip((d'+1)>>1);  f' = b + j, f = Clip((f'+1)>>1)
i' = h + j, i = Clip((i'+1)>>1);  k' = j + cc, k = Clip((k'+1)>>1);  l' = C + h, l = Clip((l'+1)>>1);  n' = j + ee, n = Clip((n'+1)>>1)

Quarter-pixel positions e, g, m, o, which depend on the texture direction (in every case e = Clip((e'+1)>>1), g = Clip((g'+1)>>1), m = Clip((m'+1)>>1), o = Clip((o'+1)>>1)):

Horizontal texture:     e' = d + f;  g' = f + p;  m' = l + n;  o' = n + q
Vertical texture:       e' = a + i;  g' = c + k;  m' = i + r;  o' = k + s
Left-diagonal texture:  e' = b + h;  g' = B + j;  m' = j + C;  o' = cc + ee
Right-diagonal texture: e' = A + j;  g' = b + cc;  m' = h + ee;  o' = j + D

Here Clip() clamps its argument to the valid pixel range.
When the texture-adaptive interpolation module is located in the texture-adaptive video coding system, it is called the texture-adaptive interpolation coding function module; when located in the texture-adaptive video decoding system, it is called the texture-adaptive interpolation decoding function module.
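As a sketch of the tables above, the following implements the half-pixel position b (between integer pixels A and B) for the four texture directions; the letters follow Fig. 9, and Clip is assumed to clamp to the 8-bit range [0, 255].

```python
def clip8(x):
    return max(0, min(255, x))          # assumed 8-bit Clip()

# Texture-adaptive interpolation of half-pel b, following the tables above;
# A, B, C, D, F, G, I, J are the integer pixels of Fig. 9.
def interp_half_pel_b(direction, A, B, C=0, D=0, F=0, G=0, I=0, J=0):
    if direction == 'horizontal':                     # long filter along the row
        return clip8((-I + 5*A + 5*B - J + 4) >> 3)
    if direction == 'vertical':                       # short two-tap average
        return (A + B + 1) >> 1
    if direction == 'left-diagonal':
        return clip8((7*A + 7*B + C + G + 8) >> 4)
    if direction == 'right-diagonal':
        return clip8((7*A + 7*B + F + D + 8) >> 4)
    raise ValueError('unknown texture direction: ' + direction)

print(interp_half_pel_b('horizontal', A=100, B=104, I=98, J=106))  # -> 102
```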
J-2. Texture-adaptive scan module
Fig. 10 is a schematic diagram of the texture-adaptive scan method. The encoder-side texture analyzer extracts the texture feature information of the coding block, and the texture-adaptive scan module selects an adapted scan order from the extracted information, each scan order corresponding to one operating mode. Scan orders include vertical-priority orders, horizontal-priority orders, and orders with no directional priority. Fig. 11 shows two vertical-priority scan orders for an 8 × 8 block, the right one with a higher degree of vertical priority than the left; Fig. 12 shows two horizontal-priority scan orders for an 8 × 8 block, the right one with a higher degree of horizontal priority than the left; the zigzag order is an example of a scan order with no directional priority. Here the texture analyzer outputs texture direction information, divided into horizontal texture, vertical texture, and other textures, and texture strength information, divided into strong and weak. The table below illustrates controlling the operating mode of the texture-adaptive scan module with both texture direction and texture strength information: for example, for strong horizontal texture the adaptive scan module adopts the scan order in the right figure of Fig. 11 as its operating mode, while for weak textures of other directions it adopts the zigzag order, and so on.
[Table (not reproduced): scan operating mode selected from texture direction and texture strength]
The next table illustrates controlling the operating mode of the texture-adaptive scan module with texture direction information alone: for horizontal texture the adaptive scan module adopts the scan order in the right figure of Fig. 11 as its operating mode; for vertical texture, the scan order in the right figure of Fig. 12; for other textures, the zigzag order.
[Table (not reproduced): scan operating mode selected from texture direction alone]
When the texture-adaptive scan module is located in the texture-adaptive video coding system, it is called the texture-adaptive scan coding function module; when located in the texture-adaptive video decoding system, it is called the texture-adaptive scan decoding function module.
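A sketch of this selection follows; plain column-major and row-major orders stand in for the graded vertical- and horizontal-priority orders of Figs. 11 and 12, which are not reproduced here, and zigzag serves as the direction-free order.

```python
# J-2 sketch: choose a scan order from texture direction and strength.
# Column-major stands in for a vertical-priority order (right of Fig. 11)
# and row-major for a horizontal-priority order (right of Fig. 12); the
# orders in the figures are graded rather than strictly one-dimensional.
def scan_order(n, direction, strong=True):
    idx = [(r, c) for r in range(n) for c in range(n)]
    if strong and direction == 'horizontal':
        return sorted(idx, key=lambda rc: (rc[1], rc[0]))   # vertical priority
    if strong and direction == 'vertical':
        return sorted(idx, key=lambda rc: (rc[0], rc[1]))   # horizontal priority
    # weak or other textures: direction-free zigzag
    return sorted(idx, key=lambda rc: (rc[0] + rc[1],
                                       rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

print(scan_order(4, 'horizontal')[:6])  # scans down the first column first
```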
Of course, there are other examples of coding function modules and decoding function modules adapted to texture features, such as the transform module and the quantization module; they are not enumerated one by one.
Embodiment 1
Fig. 13 is the schematic diagram of embodiment 1, a texture-adaptive video coding system. The input signal of the encoder-side texture analyzer is the reference image data of the coding unit, as described in F-2; its output is the coding unit's texture direction information, extracted by the method exemplified in H-2. The output texture direction information controls the operating mode of the texture-adaptive interpolation module in the video encoder, which chooses the adapted interpolation operating mode from the texture direction by the method exemplified in J-1. The other coding function modules in the video encoder do not adapt to texture feature information.
Embodiment 2
Fig. 14 is the schematic diagram of embodiment 2, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the reference image data of the decoding unit, as described in G-1; its output is the decoding unit's texture direction information, extracted by the method exemplified in I-1. The output texture direction information controls the operating mode of the texture-adaptive interpolation module in the video decoder, which chooses the adapted interpolation operating mode from the texture direction by the method exemplified in J-1. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 3
The texture-adaptive video coding and decoding system of embodiment 3 comprises the texture-adaptive video coding system of embodiment 1 and the texture-adaptive video decoding system of embodiment 2.
Embodiment 4
Fig. 15 is the schematic diagram of embodiment 4, a texture-adaptive video coding system. The input signal of the encoder-side texture analyzer is the reference image data of the coding unit, as described in F-2; its outputs are the coding unit's texture direction information and texture strength information, extracted by the method exemplified in H-2. The texture direction information controls the texture-adaptive interpolation module in the video encoder, which chooses the adapted interpolation operating mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module in the video encoder, which chooses the adapted scan operating mode by the method exemplified in J-2. The other coding function modules in the video encoder do not adapt to texture feature information.
Embodiment 5
Fig. 16 is the schematic diagram of embodiment 5, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the reference image data of the decoding unit to be decoded, as described in G-1; its outputs are the decoding unit's texture direction information and texture strength information, extracted by the method exemplified in I-1. The texture direction information controls the texture-adaptive interpolation module in the video decoder, which chooses the adapted interpolation operating mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module in the video decoder, which chooses the adapted inverse-scan operating mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 6
The texture-adaptive video coding and decoding system of embodiment 6 comprises the texture-adaptive video coding system of embodiment 4 and the texture-adaptive video decoding system of embodiment 5.
Embodiment 7
Fig. 17 is the schematic diagram of embodiment 7, a texture-adaptive video coding system. The input signal of the encoder-side texture analyzer is the raw data of the coding unit to be encoded, as described in F-1; its outputs are the coding unit's texture direction information and texture strength information, extracted by the method exemplified in H-1. The texture direction information controls the texture-adaptive interpolation module in the video encoder, which chooses the adapted interpolation operating mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module in the video encoder, which chooses the adapted scan operating mode by the method exemplified in J-2. The other coding function modules in the video encoder do not adapt to texture feature information.
Embodiment 8
Fig. 18 is the schematic diagram of embodiment 8, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the output signal of the entropy decoding module, as described in G-2; its outputs are the texture direction information and texture strength information of the decoding unit to be decoded, extracted by the method exemplified in I-2. The texture direction information controls the texture-adaptive interpolation module in the video decoder, which chooses the adapted interpolation operating mode by the method exemplified in J-1; the texture direction and texture strength information together control the texture-adaptive scan module in the video decoder, which chooses the adapted inverse-scan operating mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 9
The texture-adaptive video coding and decoding system of embodiment 9 comprises the texture-adaptive video coding system of embodiment 7 and the texture-adaptive video decoding system of embodiment 8.
Embodiment 10
Fig. 19 is the schematic diagram of embodiment 10, a texture-adaptive video coding system. The input signal of the encoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in F-3; its output is the coding unit's texture direction information, extracted by the method exemplified in H-3. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video encoder: the interpolation module chooses the adapted interpolation operating mode by the method exemplified in J-1, and the scan module chooses the adapted scan operating mode by the method exemplified in J-2. The other coding function modules in the video encoder do not adapt to texture feature information.
Embodiment 11
Fig. 20 is the schematic diagram of embodiment 11, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in G-2; its output is the texture direction information of the decoding unit, extracted by the method exemplified in I-2. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video decoder: the interpolation module chooses the adapted interpolation operating mode by the method exemplified in J-1, and the scan module chooses the adapted inverse-scan operating mode by the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 12
The texture-adaptive video coding and decoding system of embodiment 12 comprises the texture-adaptive video coding system of embodiment 10 and the texture-adaptive video decoding system of embodiment 11.
The foregoing description is used for the present invention that explains, rather than limits the invention, and in the protection range of spirit of the present invention and claim, any modification and change to the present invention makes all fall into protection scope of the present invention.

Claims (5)

1. A texture-adaptive video coding system, characterized in that it comprises a video encoder and an encoder-side texture analyzer; the video encoder comprises at least one coding function module, to accomplish coding compression; the encoder-side texture analyzer is used to perform texture analysis, to extract coding-unit texture feature information; the video encoder comprises a texture-adaptive interpolation module; said texture-adaptive interpolation module selects an adapted interpolation method according to the coding-unit texture feature information extracted by the encoder-side texture analyzer, and constructs the sub-pixel points; wherein said texture feature information is texture direction information.
2. The texture-adaptive video coding system according to claim 1, characterized in that the input signal of said encoder-side texture analyzer comprises one or more of the following: raw image data, reference image data of the coding unit, and coding-function-module output data.
3. A texture-adaptive video decoding system, characterized in that it comprises a video decoder and a decoder-side texture analyzer; the video decoder comprises at least one decoding function module, to accomplish decoding and reconstruction; the decoder-side texture analyzer is used to perform texture analysis, to extract decoding-unit texture feature information; the video decoder comprises a texture-adaptive interpolation module; said texture-adaptive interpolation module selects an adapted interpolation method according to the decoding-unit texture feature information extracted by the decoder-side texture analyzer, and constructs the sub-pixel points; wherein said texture feature information is texture direction information.
4. The texture-adaptive video decoding system according to claim 3, characterized in that the input signal of said decoder-side texture analyzer comprises one or more of the following: reference image data of the decoding unit, and decoding-function-module output data.
5. A texture-adaptive video coding and decoding system, characterized in that it comprises the texture-adaptive video coding system of claim 1 and the texture-adaptive video decoding system of claim 3.
CN 200710069093 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video Expired - Fee Related CN101325707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710069093 CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video


Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201110388181.6A Division CN102413330B (en) 2007-06-12 2007-06-12 Texture-adaptive video coding/decoding system

Publications (2)

Publication Number Publication Date
CN101325707A CN101325707A (en) 2008-12-17
CN101325707B true CN101325707B (en) 2012-04-18

Family

ID=40188991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710069093 Expired - Fee Related CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video

Country Status (1)

Country Link
CN (1) CN101325707B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101630006B1 * 2009-12-04 2016-06-13 Thomson Licensing Texture-pattern-adaptive partitioned block transform
CN102447896B * 2010-09-30 2013-10-09 Huawei Technologies Co., Ltd. Method, device and system for processing image residual block
CN102595113B * 2011-01-13 2014-06-04 Huawei Technologies Co., Ltd. Method, device and system for scanning transform coefficient block
CN102651816B * 2011-02-23 2014-09-17 Huawei Technologies Co., Ltd. Method and device for scanning transform coefficient block
CN102186070B * 2011-04-20 2013-06-05 Beijing University of Technology Method for fast video coding using hierarchical-structure prediction
CN102857751B * 2011-07-01 2015-01-21 Huawei Technologies Co., Ltd. Video encoding and decoding methods and device
CN102857749B * 2011-07-01 2016-04-13 Huawei Technologies Co., Ltd. Pixel classification method and apparatus for video images
US9432700B2 2011-09-27 2016-08-30 Broadcom Corporation Adaptive loop filtering in accordance with video coding
CN104349171B * 2013-07-31 2018-03-13 Shanghai Tongtu Semiconductor Technology Co., Ltd. Virtually lossless image compression encoding/decoding device and method
CN103517069B * 2013-09-25 2016-10-26 Beihang University Fast HEVC intra prediction mode selection method based on texture analysis
CN104933736B * 2014-03-20 2018-01-19 Huawei Technologies Co., Ltd. Visual entropy acquisition method and device
US10728553B2 * 2017-07-11 2020-07-28 Sony Corporation Visual quality preserving quantization parameter prediction with deep neural network
CN116708789B * 2023-08-04 2023-10-13 Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd. Video analysis coding system based on artificial intelligence

Also Published As

Publication number Publication date
CN101325707A (en) 2008-12-17

Similar Documents

Publication Publication Date Title
CN101325707B (en) System for encoding and decoding texture self-adaption video
US11949881B2 (en) Apparatus for encoding and decoding image using adaptive DCT coefficient scanning based on pixel similarity and method therefor
CN100586187C Method and apparatus for image intra-prediction encoding/decoding
US9621895B2 (en) Encoding/decoding method and device for high-resolution moving images
CN104602011B (en) Picture decoding apparatus
CN100574446C Apparatus and method for video encoding and decoding, and recording medium therefor
KR101752149B1 Moving image encoding and decoding devices and methods
CN107172424A Loop filtering method and apparatus therefor
JPWO2012042720A1 Image encoding device, image decoding device, image encoding method, and image decoding method
KR101433169B1 Mode prediction based on the direction of intra predictive mode, and method and apparatus for applying quantization matrix and scanning using the mode prediction
CN102413330B Texture-adaptive video coding/decoding system
KR20100044333A Video encoding apparatus, and apparatus and method of two-dimensional ordering transform for image signal, and recording medium therefor
KR20200004749A Method and apparatus for rearranging residuals

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120418