CN102413330B - Texture-adaptive video coding/decoding system - Google Patents


Info

Publication number: CN102413330B
Application number: CN201110388181.6A
Authority: CN (China)
Prior art keywords: texture, adaption, information, self, video
Legal status: Active, granted (the status is an assumption by Google, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN102413330A
Inventors: 虞露, 武晓阳
Current and original assignee: Zhejiang University (ZJU)
Application filed by Zhejiang University; priority to CN201110388181.6A; publication of CN102413330A; application granted; publication of CN102413330B.

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a texture-adaptive video coding/decoding system. The texture-adaptive video coding system comprises a video encoder and an encoder-side texture analyzer; the texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer. By bringing the texture feature information of video images into the coding and decoding process, the system improves the compression efficiency and subjective quality of video coding.

Description

Texture-adaptive video coding/decoding system
This application is a divisional application of Chinese patent application No. 200710069093.3, entitled "Texture-adaptive video coding/decoding system", filed on June 12, 2007.
Technical field
The present invention relates to the fields of signal processing and communications, and in particular to a texture-adaptive video coding system, a texture-adaptive video decoding system, and a texture-adaptive video coding/decoding system.
Background technology
Current video coding and decoding standards, such as H.261, H.263 and H.26L formulated by the ITU, MPEG-1, MPEG-2 and MPEG-4 established by the MPEG group of ISO, H.264/MPEG-AVC (H.264 for short) formulated by the JVT, and AVS Part II, the video coding standard with independent Chinese intellectual property rights, are all based on the conventional hybrid video coding framework.
A primary aim of video coding is to compress the video signal, reducing its data volume and thereby saving storage space and transmission bandwidth. On the one hand, the raw video signal carries a huge amount of data, which is why compression is necessary; on the other hand, it contains a large amount of redundant information, which is why compression is possible. This redundancy can be divided into spatial redundancy, temporal redundancy, data redundancy and visual redundancy. The first three consider only redundancy between pixels in the statistical sense and are collectively called statistical redundancy; visual redundancy instead reflects the characteristics of the human visual system. To reduce the data volume of the video signal, video coding must reduce these various kinds of redundancy. The conventional hybrid video coding framework combines predictive coding, transform coding and entropy coding, and concentrates on reducing the statistical redundancy of the video signal. Its main features are the following:
(1) predictive coding is used to reduce temporal redundancy and spatial redundancy;
(2) transform coding is used to further reduce spatial redundancy;
(3) entropy coding is used to reduce data redundancy.
Predictive coding comprises intra-frame predictive coding and inter-frame predictive coding. A video frame compressed by intra-frame prediction is called an intra-coded frame (I frame). It is encoded as follows: first, the frame is divided into coding blocks (one form of coding unit); each block is intra-predicted to obtain an intra-prediction residual; the residual is then two-dimensionally transform-coded; the transform coefficients are quantized in the transform domain; scanning converts the two-dimensional signal into a one-dimensional signal; finally, entropy coding is applied. A video frame compressed by inter-frame prediction is called an inter-coded frame (P frame or B frame). It is encoded as follows: first, the frame is divided into coding blocks; motion estimation yields a motion vector and a reference block (one form of reference unit) for each coding block; motion compensation then yields the inter-prediction residual; the residual is two-dimensionally transform-coded; the coefficients are quantized in the transform domain; scanning converts the two-dimensional signal into a one-dimensional signal; finally, entropy coding is applied. The residual data, that is, the residual signal, has less spatial and temporal redundancy than the raw video signal: if redundancy is expressed mathematically as correlation, the spatial and temporal correlation of the residual signal is smaller than that of the original video. Two-dimensional transform coding of the residual further reduces spatial correlation, and quantization and entropy coding of the transform coefficients reduce data redundancy. To keep improving compression efficiency, therefore, more accurate predictive coding is needed to further reduce the spatial and temporal correlation of the prediction residual; more effective transform coding is needed to further reduce spatial correlation; and matching scanning, quantization and entropy coding techniques must be designed to follow prediction and transform coding.
Although the video coding standards based on the conventional hybrid framework have been very successful, the framework itself has become a bottleneck to further improvement of compression efficiency. Studies show that a video signal is not a stationary source; that is, the features of individual coding units differ from one another. In the conventional hybrid framework, however, the functional modules are designed on the assumption of a stationary video signal: the predictive coding module, transform coding module, quantization module, scan module and so on all adopt a fixed operating mode when encoding a coding unit:
(1) When prediction compensation in inter-frame coding is accurate to sub-pixel positions, interpolation is needed to construct the sub-pixel samples of the reference picture. Existing video standards based on the conventional hybrid framework all construct sub-pixels with separable one-dimensional interpolation filters applied horizontally and vertically. The tap count and coefficients of the interpolation filter are fixed, so the filter is independent of the content of the image being interpolated.
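As a concrete illustration of such a fixed, content-independent filter, the sketch below applies the separable 6-tap half-pixel filter with coefficients (1, -5, 20, 20, -5, 1)/32, the one used by H.264, to a row of integer samples. This is a minimal sketch: rounding is included, but clipping of the result to the valid sample range is omitted.

```python
def half_pel(row, i):
    """Half-pixel sample midway between row[i] and row[i+1], computed with
    the fixed 6-tap filter (1, -5, 20, 20, -5, 1) and /32 rounding.
    The taps never change, so the result ignores image content."""
    taps = (1, -5, 20, 20, -5, 1)
    acc = sum(t * row[i - 2 + k] for k, t in enumerate(taps))
    return (acc + 16) >> 5  # round and divide by 32


# On a linear ramp the half-pel value lands exactly midway:
print(half_pel([0, 10, 20, 30, 40, 50], 2))  # -> 25
```

A texture-adaptive interpolation module, by contrast, would switch taps or filter orientation according to the texture direction of the unit being interpolated.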
(2) Transform coding modules widely adopt the discrete cosine transform (DCT) and its integer approximation, the integer cosine transform (ICT). Transform coding aims to reduce spatial correlation and concentrate the energy of a coding unit into a few transform coefficients; in DCT and ICT, the energy is concentrated into the low-frequency coefficients. The transform matrices are fixed, so the transform is independent of the content of the image being transformed.
(3) The quantization module is a lossy, irreversible coding module for the transform coefficients. The quantization techniques in current video standards apply scalar quantization with an identical step size to every coefficient, or weight the coefficients with a quantization matrix so that high-frequency coefficients are quantized more coarsely (exploiting the human eye's insensitivity to high-frequency signals). Quantization is thus independent of the image content being quantized.
(4) The scan module converts a two-dimensional signal into a one-dimensional one; concretely, it converts the quantized two-dimensional transform coefficients into (run, level) pairs suitable for entropy coding. Current video coding standards adopt the zigzag scan for frame coding and the vertically biased alternate scan for field coding, and the order of both scans is fixed, proceeding roughly from the top-left to the bottom-right of the coding block. The scan order is therefore independent of the image content being scanned.
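The fixed zigzag scan and the run/level conversion described above can be sketched as follows. This is an illustrative reimplementation under the usual zigzag convention, not the normative scan tables of any particular standard.

```python
def zigzag_order(n):
    """(row, col) visiting order of an n x n zigzag scan: anti-diagonals
    from top-left to bottom-right, alternating traversal direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))


def run_level(block):
    """Convert a quantized 2-D coefficient block into (run, level) pairs:
    'run' counts the zeros skipped before each nonzero 'level'."""
    pairs, run = [], 0
    for r, c in zigzag_order(len(block)):
        v = block[r][c]
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs


print(run_level([[5, 1, 0, 0],
                 [2, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]]))  # -> [(0, 5), (0, 1), (0, 2)]
```

Here the scan order is fixed in advance; a texture-adaptive scan module would instead pick, for example, a vertical-priority or horizontal-priority order per block.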
In recent years, some new coding techniques have emerged to further improve coding efficiency. Their common point is "adaptivity": a different coding mode (that is, an adapted operating mode for certain functional modules) can be selected for each frame or each coding block. Some of these techniques adapt through rate-distortion optimization (RDO), selecting the method that is optimal in the RD sense from several candidates at high computational cost; others are statistics-based "two-pass" methods, which use the statistics gathered in a first pass to derive an adapted operating mode and then encode again in a second pass with that mode.
There is also an adaptive transform technique based on neural networks: an initial transform pattern is set, and as coding proceeds, new transform patterns are progressively trained by the network and applied to subsequent coding blocks.
The "adaptivity" of these methods in fact exploits the idea that the local features of coding units differ, but their notion of a local feature remains vague and general.
On the basis of the above analysis, and in order to break through the bottleneck of the conventional hybrid framework, the present invention proposes a texture-adaptive video coding/decoding system, comprising a texture-adaptive video coding system and a texture-adaptive video decoding system. The system brings the texture feature information of video images (one kind of local image feature) into the video coding and decoding process, so as to improve the compression efficiency and subjective quality of video coding.
Summary of the invention
The object of the present invention is to overcome the bottleneck of the conventional hybrid video coding framework by proposing a texture-adaptive video coding/decoding system, comprising a texture-adaptive video coding system and a texture-adaptive video decoding system.
This object is achieved through the following technical solution: a texture-adaptive video coding system comprising a video encoder and an encoder-side texture analyzer. The video encoder comprises at least one encoding functional module to complete compression coding; the encoder-side texture analyzer performs texture analysis to extract the texture feature information of a coding unit. The video encoder includes a texture-adaptive scan module, which selects an adapted scan order according to the texture feature information extracted by the encoder-side texture analyzer.
Further: the texture feature information comprises texture direction information, or comprises texture direction information and texture strength information.
Further: the input signal of the encoder-side texture analyzer comprises one or more of the following: raw image data, reference image data of the coding unit, and output data of encoding functional modules.
Further: the video encoder also comprises a texture-adaptive interpolation module, which selects an adapted interpolation method according to the texture feature information extracted by the encoder-side texture analyzer and constructs the sub-pixel samples; here the texture feature information is texture direction information.
Further: the texture-adaptive interpolation module and the texture-adaptive scan module may be controlled by the same kind, or by different kinds, of texture feature information of the coding unit.
A texture-adaptive video decoding system comprises a video decoder and a decoder-side texture analyzer. The video decoder comprises at least one decoding functional module to complete decoding and reconstruction; the decoder-side texture analyzer performs texture analysis to extract the texture feature information of a decoding unit. The video decoder includes a texture-adaptive scan module, which selects an adapted inverse scan order according to the texture feature information extracted by the decoder-side texture analyzer.
Further: the texture feature information comprises texture direction information, or comprises texture direction information and texture strength information.
Further: the input signal of the decoder-side texture analyzer comprises one or more of the following: reference image data and output data of decoding functional modules.
Further: the video decoder also comprises a texture-adaptive interpolation module, which selects an adapted interpolation method according to the texture feature information extracted by the decoder-side texture analyzer and constructs the sub-pixel samples; here the texture feature information is texture direction information.
Further: the texture-adaptive interpolation module and the texture-adaptive scan module may be controlled by the same kind, or by different kinds, of texture feature information of the decoding unit.
The beneficial effect of the invention is that the texture-adaptive video coding/decoding system brings the texture features of video images into the video coding and decoding process, thereby improving the compression efficiency and subjective quality of video coding.
Brief description of the drawings
Fig. 1 is a schematic diagram of the texture-adaptive video coding system;
Fig. 2 is a schematic diagram of the texture-adaptive video decoding system;
Fig. 3 is a schematic diagram of the texture-adaptive video coding/decoding system;
Fig. 4 is a schematic diagram of the raw data of an n × m coding block;
Fig. 5 is a schematic diagram of the data of an n × m reference image block;
Fig. 6 is a schematic diagram of an intra prediction mode serving as input to the encoder-side texture analyzer;
Fig. 7 is a schematic diagram of the Sobel operator;
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module;
Fig. 9 is a schematic diagram of whole-pixel and sub-pixel samples;
Fig. 10 is a schematic diagram of the texture-adaptive scan module;
Fig. 11 is a schematic diagram of the vertical-priority scan order;
Fig. 12 is a schematic diagram of the horizontal-priority scan order;
Fig. 13 is a schematic diagram of embodiment 1: a texture-adaptive video coding system;
Fig. 14 is a schematic diagram of embodiment 2: a texture-adaptive video decoding system;
Fig. 15 is a schematic diagram of embodiment 4: a texture-adaptive video coding system;
Fig. 16 is a schematic diagram of embodiment 5: a texture-adaptive video decoding system;
Fig. 17 is a schematic diagram of embodiment 7: a texture-adaptive video coding system;
Fig. 18 is a schematic diagram of embodiment 8: a texture-adaptive video decoding system;
Fig. 19 is a schematic diagram of embodiment 10: a texture-adaptive video coding system;
Fig. 20 is a schematic diagram of embodiment 11: a texture-adaptive video decoding system.
Embodiments
The present invention relates to a texture-adaptive video coding system (shown in Fig. 1), a texture-adaptive video decoding system (shown in Fig. 2) and a texture-adaptive video coding/decoding system (shown in Fig. 3); note that the "transmission channel" shown in Fig. 3 is not part of the coding/decoding system.
The texture-adaptive coding/decoding system covers a very wide range, so the terms used in the present invention are first explained below.
A. Examples of coding units
The coding unit is the unit of texture adaptation; it is a set of video image pixels. Coding units take many forms: in early differential pulse-code modulation coding systems, the coding unit was an individual pixel; in many current video coding standards it is a rectangular block of pixels, including square blocks; the latest literature mentions coding units of triangular, trapezoidal and other shapes; a coding unit can also be a slice, a frame or a field; furthermore, a coding unit may even consist of non-adjacent pixels.
A coding block is one example of a coding unit: a rectangular block of pixels of size n × m, meaning the block is n pixels high and m pixels wide, for example a 16 × 16, 16 × 8, 8 × 16, 8 × 8, 8 × 4, 4 × 8 or 4 × 4 coding block. The embodiments below take the coding block as the example and, unless otherwise specified, use "coding block" in place of "coding unit"; the methods described, however, apply equally to coding units of other forms.
B. Examples of decoding units
The decoding unit and the coding unit are the same thing viewed from different positions in the system: the coding unit is a concept of the texture-adaptive video coding system, and its counterpart in the texture-adaptive video decoding system is called the decoding unit. The examples and explanations of the coding unit in A therefore also apply to the decoding unit.
C. Examples of video encoding functional modules
The encoding functional modules of a video encoder include one or more of the following: prediction module, interpolation module, transform module, inverse transform module, quantization module, inverse quantization module, scan module, inverse scan module, deblocking filter module, entropy coding module, etc. Any of these modules may be subdivided into several modules, or several may be merged into one; for example, the interpolation module may be split into a half-pixel interpolation module and a quarter-pixel interpolation module, or the transform module and quantization module may be merged into a transform-quantization module. A video encoder may also divide its functions differently, forming a new set of encoding functional modules.
The encoding functional modules of a video encoder are connected in a certain way to complete the function of compression coding.
An encoding functional module may have many operating modes; for the interpolation module, for instance, different filter tap counts and filter coefficients constitute different operating modes.
D. Examples of video decoding functional modules
The decoding functional modules of a video decoder include one or more of the following: prediction module, interpolation module, inverse transform module, inverse quantization module, inverse scan module, deblocking filter module, entropy decoding module, etc. Any of these modules may be subdivided or merged; for example, the interpolation module may be split into half-pixel and quarter-pixel interpolation modules, or the inverse transform module and inverse quantization module may be merged into an inverse transform-quantization module. A video decoder may also divide its functions differently, forming a new set of decoding functional modules.
The decoding functional modules of a video decoder are connected in a certain way to complete the function of decoding and reconstruction.
A decoding functional module may have many operating modes; for the interpolation module, for instance, different filter tap counts and filter coefficients constitute different operating modes.
E. Examples of texture feature information
Texture feature information may be expressed as texture direction information, texture strength information or texture direction-strength information; other expressions, such as texture structure, are also possible.
E-1. Texture direction information
Texture direction information subjectively appears as the orientation of texture in an image and is generally represented by the texture inclination angle. The inclination angle is a continuous quantity and may be quantized into a discrete one in use. Different quantization precisions divide texture into different numbers of direction classes; inclination angles that fall in the same quantization region belong to the same direction class. For example, at a precision of four classes, texture direction information divides into horizontal texture, vertical texture, left-diagonal texture and right-diagonal texture. Some coding units have no obvious texture direction; equivalently, the texture strengths of all their directions are comparable. Such units are called flat regions, and the flat region is a special kind of texture direction information.
The direction of an edge in an image is one example of texture direction information.
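The four-class quantization of the inclination angle described above can be sketched as follows; the class boundaries at ±22.5° and ±67.5° are illustrative assumptions, since the text does not fix them.

```python
def quantize_direction(angle_deg):
    """Map a texture inclination angle in (-90, 90] degrees to one of the
    four direction classes of E-1 (boundary angles are assumptions)."""
    if -22.5 <= angle_deg < 22.5:
        return "horizontal"
    if angle_deg >= 67.5 or angle_deg < -67.5:
        return "vertical"
    return "right-diagonal" if angle_deg >= 22.5 else "left-diagonal"


print(quantize_direction(10))   # -> horizontal
print(quantize_direction(-45))  # -> left-diagonal
```

A finer quantization precision would simply use more, narrower angular bins.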
E-2. Texture strength information
Texture strength information subjectively appears as how pronounced the texture in an image is; it can be represented by gradient strength, by energy strength or by other measures.
E-3. Texture direction-strength information
Texture direction-strength information divides texture directions into classes as in E-1, with each class of texture direction carrying a corresponding strength value; it is the texture strength information corresponding to each texture direction.
F. Examples of input signals to the encoder-side texture analyzer
F-1. Raw image data
Raw image data are data formed from, or constructed from, the original pixel values of the original image. Construction methods vary: interpolation, filtering, pixel repetition and so on.
F-2. Reference image data
Reference image data are data formed from, or constructed from, the pixel values of the decoded and reconstructed image. Construction methods vary: interpolation, filtering, pixel repetition and so on.
F-3. Output data of encoding functional modules
Data output by an encoding functional module and corresponding to the current coding unit.
For example, the functional module is the intra prediction module, and the output data are the intra prediction mode of the current coding unit; Fig. 6 illustrates this example.
Data output by an encoding functional module and corresponding to one or more already-encoded coding units.
For example, the functional module is the intra prediction module, and the output data are the intra prediction modes of the coding units above and to the left of the current coding unit. So that the encoder-side texture analyzer can analyse the texture features of the current coding unit, this information should be buffered for a certain time before being input to the analyzer.
The output data of encoding functional modules are not limited to the intra prediction mode output by the intra prediction module; they may also be the transform coefficients output by the transform module, the scan order output by the scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that the output data of an encoding functional module refer to some or all of the module's output; here, for example, the intra prediction mode is part of the output of the intra prediction module.
F-4. Input signals comprising several kinds of data
The input signal comprises several of the following kinds of data: raw image data, reference image data and output data of encoding functional modules.
For example, the raw image data of the coding unit and the coding unit's inter-frame matching unit are both fed into the encoder-side texture analyzer as input signals.
Here Fig. 4 is an example of the raw image data of a coding unit. The coding unit is a coding block, an n × m block P, where P_ji denotes the pixel value at position (j, i), i.e. the original value of that pixel.
A matching unit is the reference image data most similar to the coding unit. If the pixels forming the matching unit and the current coding unit are not in the same frame, it is called an inter-frame matching unit; if they are in the same frame, an intra-frame matching unit. Fig. 5 is an example of an inter-frame matching unit: R is a block of size n × m, and R_ji denotes the pixel value at position (j, i).
G. Examples of input signals to the decoder-side texture analyzer
G-1. Reference image data
When the input signal of the decoder-side texture analyzer is reference image data, it is as described in F-2.
G-2. Output data of decoding functional modules
Data output by a decoding functional module and corresponding to the current decoding unit.
For example: first, the entropy decoding module parses the bitstream, obtains the texture feature information of the current decoding unit and outputs it to the decoder-side texture analyzer; second, the functional module is the intra prediction module, and the output data are the intra prediction mode of the current decoding unit.
Data output by a decoding functional module and corresponding to one or more already-decoded decoding units.
For example, the functional module is the intra prediction module, and the output data are the intra prediction modes of the decoding units above and to the left of the current decoding unit. So that the decoder-side texture analyzer can analyse the texture features of the current decoding unit, this information should be buffered for a certain time before being input to the analyzer.
The output data of decoding functional modules are not limited to these examples; they may also be transform coefficients, the scan order used by the inverse scan module, the inter prediction mode output by the inter prediction module, and so on.
Note that the output data of a decoding functional module refer to some or all of the module's output; here, for example, the intra prediction mode is part of the output of the entropy decoding module.
G-3. Input signals comprising several kinds of data
The input signal comprises several of the following kinds of data: reference image data and output data of decoding functional modules.
For example, the inter-frame matching unit of the decoding unit and the inter prediction modes of the decoding units above and to the left of it are all fed into the decoder-side texture analyzer as input signals.
H. Examples of the encoder-side texture analyzer extracting coding unit texture feature information
H-1. Input signal: raw image data
The input signal of the encoder-side texture analyzer is as described in F-1.
Take the raw image data of the coding unit as the input signal, with the coding unit here being a coding block. Fig. 4 shows an n × m coding block P, where P_ji denotes the pixel value at position (j, i), i.e. the raw data of that pixel. The texture analyzer extracts the texture feature information of coding block P, namely its texture direction information and texture strength information, with the Sobel operator. Fig. 7 shows how the Sobel operator obtains the x- and y-direction gradients; from it, the x- and y-direction gradients of P_ji can be obtained.
The gradient in the x direction is:
hx(P_ji) = P_{j-1,i-1} + 2 × P_{j-1,i} + P_{j-1,i+1} - P_{j+1,i-1} - 2 × P_{j+1,i} - P_{j+1,i+1}
The gradient in the y direction is:
hy(P_ji) = P_{j-1,i+1} + 2 × P_{j,i+1} + P_{j+1,i+1} - P_{j-1,i-1} - 2 × P_{j,i-1} - P_{j+1,i-1}
The gradient direction of P_ji is:
Dir(P_ji) = arctan(hy(P_ji) / hx(P_ji)), where arctan is the arctangent function;
The gradient strength of P_ji is:
Mag(P_ji) = sqrt(hx(P_ji)^2 + hy(P_ji)^2), where sqrt is the square-root function and ^2 denotes the square.
Dir(P_ji) is quantized into classes of the chosen precision, with each class corresponding to one kind of texture direction information, for example four quantization classes: horizontal, vertical, left diagonal and right diagonal.
To determine the texture direction and texture strength information of P, the Dir and Mag values of the (n-2) × (m-2) interior pixels of P are obtained in turn; the pixels are then classified by their quantized Dir class, and the Mag values within each class are summed. These summed Mag values constitute the texture direction-strength information of P. The dominant direction-strength value is the texture strength information of P, and its corresponding direction is taken as the texture direction information of P; if no class is dominant, P can be regarded as a flat region.
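The H-1 procedure, per-pixel Sobel gradients, direction quantization, per-class strength accumulation and a dominance test, can be sketched as follows. The four angular class boundaries and the 0.5 dominance threshold are illustrative assumptions not fixed by the text.

```python
import math

def block_texture(P, dominance=0.5):
    """Texture direction of block P per H-1: accumulate gradient strength
    Mag per quantized direction class over the interior pixels, then pick
    the dominant class, or 'flat' if none dominates (threshold assumed)."""
    n, m = len(P), len(P[0])
    strength = {"horizontal": 0.0, "vertical": 0.0,
                "left-diagonal": 0.0, "right-diagonal": 0.0}
    for j in range(1, n - 1):
        for i in range(1, m - 1):
            # Sobel gradients hx, hy as in the formulas above
            hx = (P[j-1][i-1] + 2*P[j-1][i] + P[j-1][i+1]
                  - P[j+1][i-1] - 2*P[j+1][i] - P[j+1][i+1])
            hy = (P[j-1][i+1] + 2*P[j][i+1] + P[j+1][i+1]
                  - P[j-1][i-1] - 2*P[j][i-1] - P[j+1][i-1])
            mag = math.hypot(hx, hy)
            if mag == 0.0:
                continue
            a = math.degrees(math.atan2(hy, hx)) % 180.0
            if a > 90.0:
                a -= 180.0  # fold Dir into (-90, 90]
            if -22.5 <= a < 22.5:
                cls = "horizontal"
            elif a >= 67.5 or a < -67.5:
                cls = "vertical"
            else:
                cls = "right-diagonal" if a >= 22.5 else "left-diagonal"
            strength[cls] += mag
    total = sum(strength.values())
    if total == 0.0:
        return "flat"  # no gradient anywhere: flat region
    best = max(strength, key=strength.get)
    return best if strength[best] > dominance * total else "flat"


print(block_texture([[10 * j] * 6 for j in range(6)]))  # -> horizontal
```

Horizontal stripes yield purely row-to-row gradients, so every interior pixel votes for the horizontal class and it trivially dominates.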
The texture analyzer may also use other operators or methods on the raw image data to extract texture feature information whose representation is the same as, or different from, that in this example.
H-2. Input signal: reference image data
The input signal of the encoder-side texture analyzer is as described in F-2.
Take the inter-frame matching unit as the example of reference image data, with the coding unit being a coding block and the inter-frame matching unit being an inter-frame matching block, as in Fig. 5. The texture feature extraction method exemplified in H-1 can be applied.
H-3: the input signal is encoding function module output data
The input signal of the encoder-side texture analyzer is as described in F-3.
Example one: the input signal is the intra prediction mode of the current coding unit as in F-3. Intra prediction modes are directional, for example the horizontal, vertical, left-diagonal, right-diagonal and DC prediction modes. The encoder-side texture analyzer determines the texture feature information of the encoding block, here its texture direction information, from the prediction mode: a horizontal prediction mode implies horizontal texture; a vertical prediction mode implies vertical texture; a left-diagonal prediction mode implies left-diagonal texture; a right-diagonal prediction mode implies right-diagonal texture; and a DC prediction mode implies a flat region without obvious texture.
Example two: the input signal is the inter prediction mode information, output by the inter prediction module, of the coding units above and to the left of the current coding unit. The inter prediction mode here refers to the block size used for inter prediction. For instance, when the inter prediction mode of both the above and the left encoding blocks is 16 × 8, the current encoding block is determined to have a horizontal texture direction; when both are 8 × 16, it is determined to have a vertical texture direction; in all other cases it is determined to be a flat region.
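The two mode-based mappings of H-3 can be sketched as lookup rules. This is a minimal illustration under stated assumptions: the string mode labels and the (height, width) tuples are hypothetical stand-ins, not identifiers from any particular codec.

```python
# Example one: intra prediction mode -> texture feature information.
# Mode labels are illustrative; real codecs use numeric mode indices.
INTRA_MODE_TO_TEXTURE = {
    "horizontal": "horizontal texture",
    "vertical": "vertical texture",
    "left_diagonal": "left-diagonal texture",
    "right_diagonal": "right-diagonal texture",
    "dc": "flat region",
}

def texture_from_inter_modes(above_mode, left_mode):
    """Example two: infer the current block's texture from the inter
    prediction (partition size) modes of the above and left blocks."""
    if above_mode == left_mode == (16, 8):
        return "horizontal texture"
    if above_mode == left_mode == (8, 16):
        return "vertical texture"
    return "flat region"
```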
H-4: the input signal is combined data
The input signal of the encoder-side texture analyzer is as described in F-4.
Here, take the raw data of the coding unit as the raw image data and the frame matching unit as the reference image data; both signals serve as input to the encoder-side texture analyzer, and the coding unit is an encoding block. The encoder-side texture analyzer first computes the difference signal between them, i.e. the difference between the encoding block and the frame matching block, and then processes this difference signal with the method exemplified in H-1 to obtain the texture feature information of the coding unit.
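A minimal sketch of the H-4 combination step, assuming the two blocks have equal shape: the difference signal is formed first, and an H-1 style extractor would then be run on the returned residual rather than on the raw block.

```python
import numpy as np

def difference_signal(coding_block, matching_block):
    """H-4 sketch: the encoder-side texture analyzer first forms the
    difference between the encoding block and its frame matching block;
    texture features are then extracted from this residual."""
    return (np.asarray(coding_block, dtype=np.float64)
            - np.asarray(matching_block, dtype=np.float64))
```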
I. Examples of texture feature information extraction by the decoder-side texture analyzer
I-1: the input signal is reference image data
When the input signal of the decoder-side texture analyzer is reference image data, as described in G-1, the texture feature information of the decoding unit can be extracted with the method exemplified in H-2.
I-2: the input signal is decoding function module output data
Example one: the bitstream contains the texture feature information of the decoding unit. The input signal of the decoder-side texture analyzer is the texture feature information of the decoding block obtained by the entropy decoding module through bitstream parsing, or an encoded form of that information. This input is passed to the decoder-side texture analyzer, which either uses it directly or decodes it to obtain the texture feature information of the decoding block.
Example two: the input signal of the decoder-side texture analyzer is the intra prediction mode of the current decoding block, output by the intra prediction module. Following the method exemplified in H-3, the decoder-side texture analyzer determines the texture feature information of the decoding block.
Example three: the input signal is the inter prediction mode information, output by the inter prediction module, of the decoding units above and to the left of the current decoding unit. Following the method exemplified in H-3, the decoder-side texture analyzer determines the texture feature information of the decoding block.
J. Examples of adapting the operating modes of encoding and decoding function modules to texture features
J-1: texture-adaptive interpolation module example
Fig. 8 is a schematic diagram of the texture-adaptive interpolation module. The encoder-side texture analyzer extracts the texture feature information of the encoding block; this information controls the texture-adaptive interpolation module, making it select an adapted interpolation method, that is, an operating mode, for constructing sub-pixel samples. In Fig. 8 the texture-adaptive interpolation module offers N classes of directional interpolation, each class corresponding to one operating mode. The texture feature information extracted by the encoder-side texture analyzer includes texture direction information. After obtaining the texture direction information of the encoding block, the texture-adaptive interpolation module selects a different operating mode for each texture direction and performs the sub-pixel interpolation. Fig. 9 shows the sub-pixel positions to be interpolated: capital letters A-P mark integer-pixel positions, and lowercase letters a-o mark the sub-pixel positions associated with integer pixel A. Sub-pixel positions divide into half-pixel and quarter-pixel positions; b, h and j are half-pixel positions, and the remaining lowercase letters are quarter-pixel positions. The table below lists the interpolation operating mode corresponding to each texture direction of the encoding block. The design of the interpolation filters is, of course, not limited to this method; other designs are possible.
[Table rendered as images in the original (interpolation operating modes per texture direction); the entries are not recoverable from the text.]
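A minimal sketch of direction-controlled half-pel interpolation at sub-pixel position b of Fig. 9. The 6-tap filter (1, -5, 20, 20, -5, 1)/32 and the rule for switching the filter axis by texture direction are illustrative assumptions; the patent explicitly leaves the concrete filter designs open.

```python
import numpy as np

# Illustrative 6-tap half-pel filter; this particular set of weights is
# a common choice used here only as an example, not the patent's
# mandated coefficients.
TAPS = np.array([1, -5, 20, 20, -5, 1]) / 32.0

def half_pel(samples):
    """Interpolate one half-pel value from 6 integer-pixel samples."""
    return float(np.dot(TAPS, samples))

def interpolate_b(P, j, i, texture):
    """Half-pel position 'b' next to integer pixel (j, i), with the
    filter axis chosen by texture direction (simplified: a horizontal
    texture switches to the vertical sample run; a full implementation
    would select among N directional filters)."""
    row = P[j, i - 2:i + 4]   # 6 horizontal neighbours
    col = P[j - 2:j + 4, i]   # 6 vertical neighbours
    return half_pel(col if texture == "horizontal" else row)
```

On a constant block the filter reproduces the constant, and on a linear ramp it returns the midpoint between the two surrounding integer pixels, which is the sanity check one expects of any half-pel filter.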
When the texture-adaptive interpolation module is part of a texture-adaptive video encoding system it is called the texture-adaptive interpolation encoding function module; when part of a texture-adaptive video decoding system it is called the texture-adaptive interpolation decoding function module.
J-2: texture-adaptive scan module example
Fig. 10 is a schematic diagram of the texture-adaptive scan method. The encoder-side texture analyzer extracts the texture feature information of the encoding block, and the texture-adaptive scan module selects an adapted scan order according to that information; each scan order corresponds to one operating mode. Scan orders include vertical-priority orders, horizontal-priority orders and orders with no directional priority. Fig. 11 shows two vertical-priority scan orders for an 8 × 8 block; the order on the right has a higher vertical priority than the one on the left. Fig. 12 shows two horizontal-priority scan orders for an 8 × 8 block; the order on the right has a higher horizontal priority than the one on the left. An order with no directional priority is, for example, the zigzag scan order. The texture feature information output by the texture analyzer consists of texture direction information and texture strength information: direction is classified as horizontal, vertical or other, and strength as strong or weak. The table below illustrates controlling the operating mode of the texture-adaptive scan module with both texture direction and texture strength information: for a strong horizontal texture, the adaptive scan module adopts the scan order on the right of Fig. 11 as its operating mode; for a weak texture of another direction, it adopts the zigzag scan order; and so on.
[Table rendered as an image in the original (scan operating modes per texture direction and strength); the entries are not recoverable from the text.]
The following table illustrates controlling the operating mode of the texture-adaptive scan module with texture direction information alone: for horizontal texture, the adaptive scan module adopts the scan order on the right of Fig. 11; for vertical texture, the scan order on the right of Fig. 12; for other textures, the zigzag scan order.
Texture direction information | Adaptive scan module operating mode
Horizontal texture | Scan order on the right of Fig. 11
Vertical texture | Scan order on the right of Fig. 12
Other textures | Zigzag scan order
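The direction-only table above can be sketched in code. The concrete vertical-priority and horizontal-priority orders below are illustrative stand-ins (plain column-first and row-first walks), since the actual orders of Figs. 11 and 12 are not reproduced in the text; only the zigzag order follows the standard definition.

```python
def zigzag_order(n=8):
    """Classic zigzag scan order over an n x n coefficient block:
    anti-diagonals in turn, alternating traversal direction."""
    return sorted(((j, i) for j in range(n) for i in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def column_priority_order(n=8):
    """Illustrative stand-in for Fig. 11's vertical-priority order."""
    return [(j, i) for i in range(n) for j in range(n)]

def row_priority_order(n=8):
    """Illustrative stand-in for Fig. 12's horizontal-priority order."""
    return [(j, i) for j in range(n) for i in range(n)]

def pick_scan(texture, n=8):
    """The table above: choose the scan order from texture direction.
    Per the text, horizontal texture takes the Fig. 11 (vertical-priority)
    order and vertical texture the Fig. 12 (horizontal-priority) order."""
    if texture == "horizontal":
        return column_priority_order(n)
    if texture == "vertical":
        return row_priority_order(n)
    return zigzag_order(n)
```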
When the texture-adaptive scan module is part of a texture-adaptive video encoding system it is called the texture-adaptive scan encoding function module; when part of a texture-adaptive video decoding system it is called the texture-adaptive scan decoding function module.
Of course, there are further examples of encoding and decoding function modules adapting to texture features, such as the transform module and the quantization module; they are not enumerated one by one here.
Embodiment 1
Fig. 13 is the schematic diagram of embodiment 1, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the reference image data of the coding unit, as described in F-2, and its output is the texture direction information of the coding unit, extracted with the method exemplified in H-2. The output texture direction information controls the operating mode of the texture-adaptive interpolation module in the video encoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 2
Fig. 14 is the schematic diagram of embodiment 2, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the reference image data of the decoding unit, as described in G-1, and its output is the texture direction information of the decoding unit, extracted with the method exemplified in I-1. The output texture direction information controls the operating mode of the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 3
The texture-adaptive video coding/decoding system of embodiment 3 comprises the texture-adaptive video encoding system of embodiment 1 and the texture-adaptive video decoding system of embodiment 2.
Embodiment 4
Fig. 15 is the schematic diagram of embodiment 4, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the reference image data of the coding unit, as described in F-2, and its output is the texture direction information and texture strength information of the coding unit, extracted with the method exemplified in H-2. The texture direction information controls the texture-adaptive interpolation module in the video encoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1; the texture direction information and texture strength information together control the texture-adaptive scan module in the video encoder, making it choose an adapted scan operating mode with the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 5
Fig. 16 is the schematic diagram of embodiment 5, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the reference image data of the decoding unit to be decoded, as described in G-1, and its output is the texture direction information and texture strength information of that decoding unit, extracted with the method exemplified in I-1. The texture direction information controls the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1; the texture direction information and texture strength information together control the texture-adaptive scan module in the video decoder, making it choose an adapted inverse-scan operating mode with the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 6
The texture-adaptive video coding/decoding system of embodiment 6 comprises the texture-adaptive video encoding system of embodiment 4 and the texture-adaptive video decoding system of embodiment 5.
Embodiment 7
Fig. 17 is the schematic diagram of embodiment 7, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the raw data of the coding unit to be encoded, as described in F-1, and its output is the texture direction information and texture strength information of that coding unit, extracted with the method exemplified in H-1. The texture direction information controls the texture-adaptive interpolation module in the video encoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1; the texture direction information and texture strength information together control the texture-adaptive scan module in the video encoder, making it choose an adapted scan operating mode with the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 8
Fig. 18 is the schematic diagram of embodiment 8, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the output signal of the entropy decoding module, as described in G-2, and its output is the texture direction information and texture strength information of the decoding unit to be decoded, extracted with the method exemplified in I-2. The texture direction information controls the texture-adaptive interpolation module in the video decoder, making it choose an adapted interpolation operating mode with the method exemplified in J-1; the texture direction information and texture strength information together control the texture-adaptive scan module in the video decoder, making it choose an adapted inverse-scan operating mode with the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 9
The texture-adaptive video coding/decoding system of embodiment 9 comprises the texture-adaptive video encoding system of embodiment 7 and the texture-adaptive video decoding system of embodiment 8.
Embodiment 10
Fig. 19 is the schematic diagram of embodiment 10, a texture-adaptive video encoding system. The input signal of the encoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in F-3, and its output is the texture direction information of the coding unit, extracted with the method exemplified in H-3. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video encoder: the interpolation module chooses an adapted interpolation operating mode with the method exemplified in J-1, and the scan module chooses an adapted scan operating mode with the method exemplified in J-2. The other encoding function modules in the video encoder do not adapt to texture feature information.
Embodiment 11
Fig. 20 is the schematic diagram of embodiment 11, a texture-adaptive video decoding system. The input signal of the decoder-side texture analyzer is the intra prediction mode output by the intra prediction module, as described in G-2, and its output is the texture direction information of the decoding unit to be decoded, extracted with the method exemplified in I-2. The texture direction information controls both the texture-adaptive interpolation module and the texture-adaptive scan module in the video decoder: the interpolation module chooses an adapted interpolation operating mode with the method exemplified in J-1, and the scan module chooses an adapted inverse-scan operating mode with the method exemplified in J-2. The other decoding function modules in the video decoder do not adapt to texture feature information.
Embodiment 12
The texture-adaptive video coding/decoding system of embodiment 12 comprises the texture-adaptive video encoding system of embodiment 10 and the texture-adaptive video decoding system of embodiment 11.
The above embodiments serve to explain the invention, not to limit it; any modification or change made to the invention within the spirit of the invention and the scope of protection of the claims falls within the scope of protection of the invention.

Claims (10)

1. A texture-adaptive video encoding system, characterized in that it comprises a video encoder and an encoder-side texture analyzer; the video encoder comprises at least one encoding function module to perform compression encoding; the encoder-side texture analyzer performs texture analysis to extract the texture feature information of a coding unit; the video encoder comprises a texture-adaptive scan module, and said texture-adaptive scan module selects an adapted scan order according to the texture feature information extracted by the encoder-side texture analyzer.
2. The texture-adaptive video encoding system according to claim 1, characterized in that said texture feature information comprises texture direction information, or comprises texture direction information and texture strength information.
3. The texture-adaptive video encoding system according to claim 1, characterized in that the input signal of said encoder-side texture analyzer comprises one or more of the following: raw image data, reference image data of the coding unit, and encoding function module output data.
4. The texture-adaptive video encoding system according to claim 1, characterized in that the video encoder further comprises a texture-adaptive interpolation module, and said texture-adaptive interpolation module selects an adapted interpolation method according to the texture feature information extracted by the encoder-side texture analyzer to construct sub-pixel samples, wherein said texture feature information is texture direction information.
5. The texture-adaptive video encoding system according to claim 4, characterized in that the texture-adaptive interpolation module and the texture-adaptive scan module are controlled by the same kind or by different kinds of coding unit texture feature information.
6. A texture-adaptive video decoding system, characterized in that it comprises a video decoder and a decoder-side texture analyzer; the video decoder comprises at least one decoding function module to perform decoding and reconstruction; the decoder-side texture analyzer performs texture analysis to extract the texture feature information of a decoding unit; the video decoder comprises a texture-adaptive scan module, and said texture-adaptive scan module selects an adapted inverse scan order according to the texture feature information extracted by the decoder-side texture analyzer.
7. The texture-adaptive video decoding system according to claim 6, characterized in that said texture feature information comprises texture direction information, or comprises texture direction information and texture strength information.
8. The texture-adaptive video decoding system according to claim 6, characterized in that the input signal of said decoder-side texture analyzer comprises one or more of the following: reference image data, and decoding function module output data.
9. The texture-adaptive video decoding system according to claim 6, characterized in that the video decoder further comprises a texture-adaptive interpolation module, and said texture-adaptive interpolation module selects an adapted interpolation method according to the texture feature information extracted by the decoder-side texture analyzer to construct sub-pixel samples, wherein said texture feature information is texture direction information.
10. The texture-adaptive video decoding system according to claim 9, characterized in that the texture-adaptive interpolation module and the texture-adaptive scan module are controlled by the same kind or by different kinds of decoding unit texture feature information.
CN201110388181.6A 2007-06-12 2007-06-12 Texture-adaptive video coding/decoding system Active CN102413330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110388181.6A CN102413330B (en) 2007-06-12 2007-06-12 Texture-adaptive video coding/decoding system


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN 200710069093 Division CN101325707B (en) 2007-06-12 2007-06-12 System for encoding and decoding texture self-adaption video

Publications (2)

Publication Number Publication Date
CN102413330A CN102413330A (en) 2012-04-11
CN102413330B true CN102413330B (en) 2014-05-14

Family

ID=45915139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110388181.6A Active CN102413330B (en) 2007-06-12 2007-06-12 Texture-adaptive video coding/decoding system

Country Status (1)

Country Link
CN (1) CN102413330B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104661023B (en) * 2015-02-04 2018-03-09 天津大学 Image or method for video coding based on predistortion and training wave filter

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6339658B1 (en) * 1999-03-09 2002-01-15 Rockwell Science Center, Llc Error resilient still image packetization method and packet structure
CN1431828A (en) * 2002-01-07 2003-07-23 三星电子株式会社 Optimum scanning method for change coefficient in coding/decoding image and video
CN1662066A (en) * 2004-02-26 2005-08-31 中国科学院计算技术研究所 Method for selecting predicting mode within frame
CN1757240A (en) * 2003-03-03 2006-04-05 皇家飞利浦电子股份有限公司 Video encoding
CN1879418A (en) * 2003-11-13 2006-12-13 高通股份有限公司 Selective and/or scalable complexity control for video codecs


Also Published As

Publication number Publication date
CN102413330A (en) 2012-04-11


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant