CN1281618A - Apparatus and method for compressing video information - Google Patents
- Publication number
- CN1281618A (application CN98811966A)
- Authority
- CN
- China
- Prior art keywords
- subband
- macro block
- data
- transform coefficients
- subbands
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/48—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/619—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding the transform being operated outside the prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
- H04N19/64—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
- H04N19/645—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission by grouping of coefficients into blocks after the transform
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/115—Selection of the code volume for a coding unit prior to coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Complex Calculations (AREA)
Abstract
A method is disclosed for efficiently encoding data representing a video image, thereby reducing the amount of data that must be transferred to a decoder. The method includes transforming data sets utilizing a tensor product wavelet transform (32) which is capable of transmitting remainders from one subband to another. Collections of subbands, in macro-block form (36), are weighted (42), change-detected (46), and ranked (52), enabling prioritization of the transformed data. A motion compensation technique (56, 60) is performed on the subband data, producing motion vectors (58) and prediction errors (68) which are positionally encoded into bit stream packets for transmittal to the decoder. Subband macro-blocks and subband blocks which are equal to zero are identified as such in the bit stream packets to further reduce the amount of data that must be transferred to the decoder.
Description
This application claims the benefit of provisional application Serial No. 60/066,638, filed November 14, 1997, which is incorporated herein by reference.
The present invention relates generally to apparatus and methods for encoding and decoding video information. More particularly, the present invention relates to apparatus and methods for performing motion estimation and motion prediction in the transform domain.
Because the bandwidth of a transmission channel is limited, only a limited number of bits are available for encoding audio and video information. Video coding techniques attempt to encode video information with as few bits as possible while still maintaining the image quality required for a given application. Thus, video compression techniques reduce the bandwidth needed to transmit a video signal by removing redundant information and representing the remaining information with a minimal number of bits. From this minimal number of bits, an image resembling the source image can be reconstructed with minimal loss of important features. In this way, the compressed data can be stored or transmitted more efficiently than the source image data.
Some video coding techniques increase coding efficiency by removing statistical redundancy from the video signal. Many standard image compression techniques are based on block transforms of the input image, for example the discrete cosine transform (DCT). For example, the well-known MPEG video coding techniques developed by the Moving Picture Experts Group greatly reduce the bit rate by exploiting the correlation between points in the spatial domain (through the DCT) and the correlation between picture frames in the time domain (through prediction and motion compensation).
In coding systems based on the well-known orthogonal and biorthogonal (subband) transforms, including lapped orthogonal transforms, an image is transformed without first being divided into blocks. DCT-based transform coders divide the image into blocks for two reasons: 1) experience has shown that, on 8 x 8 image regions or on a series of 8 x 8 difference images, the DCT is a good approximation to the known optimal transform (the Karhunen-Loeve transform); and 2) the processing cost of the DCT grows as O(N log N), so blocking the image bounds the amount of computation.
Unless further modified, DCT-based methods have basis functions supported entirely by an 8 x 8 region of the image (or by zeros outside the image). The orthogonal and biorthogonal transforms considered here have basis elements whose main support lies within a limited interval of the image but which share support with adjacent spatial regions. For example, subband image coding techniques use a bank of filters to divide an input image into several spatial frequency bands and then quantize each band or channel. For a detailed discussion of subband image coding techniques, see "Subband Video Coding with Dynamic Bit Allocation and Geometric Vector Quantization," C. Podilchuk & A. Jacquin, SPIE Vol. 1666, Human Vision, Visual Processing, and Digital Display III, pp. 241-52 (February 1992). At each stage of the subband coding process, the signal is divided into a low-pass approximation of the image and a high-pass term that captures the detail lost in producing that approximation.
In addition, DCT-based transform coders are not shift-invariant with respect to basis elements whose support spans the entire 8 x 8 block. This makes effective motion compensation in the transform domain impossible. Consequently, most motion compensation techniques in use form an error term from temporally adjacent picture frames and then transform-code that error term using 8 x 8 blocks. As a result, these techniques require an inverse transform from the frequency domain to the time domain in order to provide a reference picture. Examples of such systems can be found in U.S. Patent No. 5,481,553 to Suzuki et al. and U.S. Patent No. 5,025,482 to Murakami et al.
Fig. 1 shows a simplified block diagram of a conventional, DCT-based video compression method. In module 10, motion detection techniques, such as those used by MPEG when operating in predictive mode, represent changes in the image efficiently. In particular, a previous frame is used as a reference frame, and a subsequent frame is compared with the previous frame to remove temporal redundancy and to rank the differences between them. This step constitutes motion estimation for the subsequent frame and also reduces the amount of data in the subsequent frame. In block 12, a decision is made as to which portions of the image have moved. Continuing with the MPEG example, the data set provided by module 10 is used to perform inter-frame motion prediction by applying motion compensation to the reference frame and the subsequent frame. The resulting prediction is subtracted from the subsequent frame to produce a prediction error frame. Thereafter, in module 14, these changes are converted into features. In MPEG, the prediction error is compressed using a two-dimensional 8 x 8 DCT.
Most DCT- or subband-based video compression techniques rely on high-precision arithmetic so that video information can be encoded without loss of accuracy in the transform stage. Such high-accuracy coding techniques, however, depend on expensive microprocessors, for example the PENTIUM processor from Intel Corporation, which includes dedicated hardware to assist floating-point arithmetic and thereby reduce the cost of maintaining a high level of precision.
For many applications, however, such relatively costly hardware is impractical or inappropriate. What is needed is a low-cost implementation that still maintains an acceptable level of image quality. The known reduced-accuracy, limited-precision transforms that can be implemented on lower-priced hardware themselves introduce "loss" in the encoding process. As used herein, a "lossy" system is one that loses precision at each encoding stage and therefore lacks the ability to reconstruct the input completely from the transform coefficients at decoding time. The inability to compensate for the reduced accuracy of these low-precision transforms has been an obstacle to their use.
In view of the foregoing, there is a need for a video encoder that performs motion compensation in the transform domain, so that no inverse transform is required in the encoder, and whose control structure is simple for both software and hardware implementations. There is also a need in this field for a video encoder built around a class of transforms suited to low-precision implementation, with a control structure that reduces hardware cost and improves software speed.
It is an object of the present invention to provide a new and unique apparatus and method for compressing data. More particularly, the apparatus and method of the present invention can be adapted and configured to encode data representing a video image more efficiently, thereby reducing the amount of data that must be sent to a decoder.
The present invention relates to a method for compressing data comprising a first data set and a second data set. The method includes transforming the first and second data sets into corresponding first and second sets of transform coefficients. Data representing the difference between the first and second sets of transform coefficients are then produced. The resulting data are then encoded for transmission to a decoder.
A tensor product wavelet transform can be used to transform the first and second data sets. Further, remainders produced during the transform process can be passed from one subband to another.
The data representing the difference between the first and second sets of transform coefficients are produced by estimating the difference between the two sets so as to provide motion vectors. The motion vectors are applied to the first set of transform coefficients to produce a prediction of the second set of transform coefficients. This prediction is subtracted from the second set of transform coefficients to produce a set of prediction errors. Error correction can be performed on the first and second sets of transform coefficients to ensure synchronization between the encoder and the decoder.
In estimating the difference between the first and second sets of transform coefficients, a search region is produced around a subset of transform coefficients taken from one of the two sets. A related subset of transform coefficients from the other set is then applied to this search region. The related subset traverses the search region in coarse increments to find the position representing the best coarse match. The related subset is then moved in small steps around that position to find the position representing the best fine match.
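As an illustration of this coarse-then-fine search, the sketch below performs block matching over a search region of transform coefficients. It is a minimal sketch under stated assumptions: the sum-of-absolute-differences cost, the search radius, and the step sizes are illustrative choices, not values taken from this disclosure.

```python
import numpy as np

def block_match(reference, block, center, search_radius=7, coarse_step=2):
    """Coarse-then-fine search of `block` inside `reference` around `center`.

    reference : 2-D array of transform coefficients containing the search region
    block     : 2-D array, the related subset of coefficients to match
    center    : (row, col) position in `reference` corresponding to zero motion
    Returns the (dy, dx) displacement giving the lowest matching cost.
    """
    h, w = block.shape

    def cost(dy, dx):
        r, c = center[0] + dy, center[1] + dx
        if r < 0 or c < 0 or r + h > reference.shape[0] or c + w > reference.shape[1]:
            return np.inf
        # Sum of absolute differences is assumed as the matching cost.
        return np.abs(reference[r:r + h, c:c + w] - block).sum()

    # Stage 1: traverse the search region in coarse increments.
    coarse = [(dy, dx)
              for dy in range(-search_radius, search_radius + 1, coarse_step)
              for dx in range(-search_radius, search_radius + 1, coarse_step)]
    best = min(coarse, key=lambda v: cost(*v))

    # Stage 2: small moves around the best coarse position.
    fine = [(best[0] + dy, best[1] + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return min(fine, key=lambda v: cost(*v))
```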
Another embodiment of the method for compressing data comprising a first data set and a second data set includes transforming the first and second data sets into corresponding first and second sets of subbands. Data representing the difference between the first and second sets of subbands are then produced, for example by performing a motion compensation technique. The motion compensation technique can provide outputs such as motion vectors and prediction errors. The resulting data are then encoded for transmission to the decoder.
In one embodiment, the second set of subbands can also be packed into macro-blocks to form a group of subband macro-blocks. The data can then be produced by the following motion compensation technique. The difference between the first set of subbands and the group of subband macro-blocks is estimated to provide motion vectors. The motion vectors are applied to the first set of subbands to produce a prediction of the second set of subbands. The prediction is then subtracted from the second set of subbands to produce a set of prediction errors.
The difference between the first set of subbands and the group of subband macro-blocks can be estimated as follows. A search region is produced around a subset of transform coefficients from the first set of subbands. A related subset of transform coefficients taken from the group of subband macro-blocks is applied to the search region. The related subset traverses the search region in coarse increments to find the position representing the best coarse match, and is then moved in small steps to find the position representing the best fine match.
Also disclosed is a subband macro-block packing method for organizing the subband blocks of a set of subbands obtained by transforming an image. The method includes separating, from the set of subbands, the group of related subband blocks corresponding to an image macro-block of the image. This group of related blocks is packed together as a subband macro-block. The steps of separating and packing related subband blocks are repeated for each group of related subband blocks in the set of subbands, forming a group of subband macro-blocks.
The macro-block packing method can be further refined by arranging the related subband blocks within a subband macro-block in the same relative positions that the subband blocks occupy within the set of subbands, as shown in the sketch below. The method can also include placing each subband macro-block within the group of subband macro-blocks at the same spatial position that the corresponding image macro-block occupies within the group of image macro-blocks.
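The packing step can be sketched as follows for the QCIF case described later (a 9 x 11 grid of image macro-blocks, subbands keyed by (i, j)). The dictionary layout and the hard-coded grid dimensions are assumptions made for illustration; the essential point is that each subband contributes its co-located, proportionally smaller block to the subband macro-block.

```python
def pack_subband_macroblock(subbands, mb_row, mb_col):
    """Collect the related subband blocks for one image macro-block.

    subbands : dict mapping (i, j) -> 2-D coefficient array (16 subbands here)
    mb_row, mb_col : image macro-block index (0..8 x 0..10 for QCIF, assumed)
    Returns a dict of related blocks keyed by (i, j); the blocks keep the same
    relative positions they occupy in the subband set.
    """
    smb = {}
    for (i, j), band in subbands.items():
        # Each subband is decimated, so its block shrinks proportionally.
        bh = band.shape[0] // 9     # 9 macro-block rows (assumed QCIF grid)
        bw = band.shape[1] // 11    # 11 macro-block columns (assumed QCIF grid)
        smb[(i, j)] = band[mb_row * bh:(mb_row + 1) * bh,
                           mb_col * bw:(mb_col + 1) * bw]
    return smb

def pack_all(subbands, rows=9, cols=11):
    """Form the full group of subband macro-blocks, indexed like the image MBs."""
    return {(r, c): pack_subband_macroblock(subbands, r, c)
            for r in range(rows) for c in range(cols)}
```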
After macro-block packing, changes between a first (reference) group of subband macro-blocks and a subsequent second group of subband macro-blocks can be detected. Detection is based on a distortion estimate of the general form

e_c = SUM_i W_i * d(G_i, R_i)

where e_c is the measured distortion relative to R, W_i is the weight applied, G is the set of transform coefficients of the second group of subband macro-blocks, R is the reference (for example, the first group of subband macro-blocks), and d is a coefficient-wise difference measure. A more detailed form of the same estimate is applied in the change detection module described below.
Another embodiment of the invention is a limited-precision method for transforming a data set into transform coefficients, in which a tensor product wavelet pair performs the transform and the remainders it produces are passed to the opposite filter path. More particularly, this embodiment can include determining a low-pass component and a high-pass component of an image. The low-pass component is normalized to produce a normalized low-pass output and a first remainder (rl). Similarly, the high-pass component is normalized to produce a normalized high-pass output and a second remainder (rh). A first operation g(rl, rh) is performed on the first and second remainders, and its result is added to the approximation. A second operation f(rl, rh) is performed on the first and second remainders, and its result is added to the detail. It should be noted that this propagation of remainders (error propagation) can be used in any type of transform, not only in a tensor product transform.
The limited-precision method described above produces an overcomplete representation of an image. The method can also include down-sampling the high-pass and low-pass components, for example to 1/2 of the sampling rate, to obtain exactly the transform coefficients that are necessary and sufficient to represent the image in the transform domain.
One embodiment of the limited-precision method includes a low-pass filter with the values -1, 2, 6, 2, -1 and a high-pass filter with the values -1, 2, -1. The first operation g(rl, rh) and the second operation f(rl, rh) are then:

g(rl, rh) = rh; and
f(rl, rh) = floor(rh + 1/2), where nh = 1/2.
One specific example includes a tensor product wavelet pair of the form described above, in which:

- X_2i = the input data;
- X_2i-1 = the data preceding input data X_2i;
- X_2i+1 = the data following input data X_2i;
- D_i = a detail term (the decimated high-pass filter output);
- D_i+1 = the detail term following detail term D_i; and
- A_i = an approximation (the decimated low-pass filter output).
Also disclosed is an encoder apparatus for predicting changes between frames of a series of frames in the transform domain. The apparatus includes a transform means having an input configured to receive first and second frames of the series of frames and further configured to produce from them corresponding first and second sets of subbands, each supporting a set of transform coefficients. A motion compensation means, having an input connected to the transform means, is configured to receive the first and second sets of subbands and further configured to represent efficiently the difference between the first and second sets of subbands. A difference means is also included, having an input connected to the transform means and an input connected to the output of the motion compensation means. In the difference means, the input received from the motion compensation means is subtracted from the second set of subbands, thereby producing a prediction error.
The motion compensation means includes a motion estimation means configured to compare the first and second sets of subbands and to produce from them a set of motion vectors that approximately represents the difference between the first and second sets of subbands. The motion compensation means also includes a motion prediction means having an input connected to the motion estimation means and further configured to produce from it a prediction group representing a prediction of the second set of subbands. In a difference means, the prediction of the second set of subbands is subtracted from the second set of subbands to produce the prediction error.
Also disclosed is a limited-precision transform means for converting a picture frame into the transform domain. The apparatus includes a low-pass component and a high-pass component arranged in parallel and sharing an input configured to receive the picture frame. A low-pass normalization means has an input configured to receive the low-pass component and is further configured to produce a normalized low-pass output and a first remainder (rl). A high-pass normalization means has an input configured to receive the high-pass component and is further configured to produce a normalized high-pass output and a second remainder (rh). A first operation means has an input configured to receive the first remainder (rl) and the second remainder (rh) and is further configured to compute a first operation g(rl, rh), thereby producing a first result. A second operation means has an input configured to receive the first remainder (rl) and the second remainder (rh) and is further configured to compute a second operation f(rl, rh), thereby producing a second result. In addition, a first adder has an input configured to receive the normalized low-pass output and the first result; this first adder produces a subband approximation. Similarly, a second adder has an input configured to receive the normalized high-pass output and the second result; this second adder produces a subband detail.
The limited-precision transform means further includes a first down-sampler on the low-pass output and a second down-sampler on the high-pass output. A down-sampling rate of two (2) provides exactly the transform coefficients that are necessary and sufficient for the decoder to reconstruct the input image.
These and other unique features of the apparatus and methods disclosed herein will be more readily understood from the following detailed description taken in conjunction with the drawings.
Representative embodiments of the present invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a prior-art video compression method using the discrete cosine transform (DCT), in which motion compensation is performed in the image domain;
Fig. 2 is a schematic diagram showing the general structure of an embodiment of the present invention, in which motion compensation is performed in the transform domain;
Fig. 3 is a schematic diagram showing the structure of the embodiment of Fig. 2 in more detail;
Fig. 4(a) shows a QCIF image having image macro-blocks (IMB_x,x) 0,0 through 8,10, and Fig. 4(b) shows the subband representation of the QCIF image after the picture frame has been transformed by a forward wavelet;
Fig. 5(a) shows the subbands representing the QCIF image of Fig. 4(b); Fig. 5(b) shows the set of subband macro-blocks (SMB_x,x) produced from the subband representation of Fig. 5(a); and Fig. 5(c) shows the subband macro-blocks of Fig. 5(b) organized so that each subband macro-block (SMB_x,x) corresponds spatially to its related image macro-block (IMB_x,x) in Fig. 4(a);
Figs. 6(a) and 6(b) show the filter banks used to transform and decimate an input image, together with the corresponding vertical and horizontal subbands produced by each filter bank;
Fig. 7 shows a structure implementing the limited-precision algorithm of the filter banks, in which remainders are transferred from the high-band region to the low-band region and, conversely, from the low-band region to the high-band region;
Fig. 8 shows, for each subband (SB_ij), the search region in the transform domain corresponding to image macro-block 2,4 (IMB_2,4) in the image domain, where the search band is P x P points, together with a further refinement of the SB_00 search region when the input image size is QCIF;
Figs. 9(a) through 9(d) show a method of estimating motion in the transform domain;
Fig. 10 shows a method of predicting motion in the transform domain;
Fig. 11 is a schematic diagram showing another detailed structure of the embodiment shown in Fig. 2;
Fig. 12 is a schematic diagram showing another detailed embodiment of the present invention, in which motion estimation is performed in the image domain and motion prediction is performed in the transform domain;
Fig. 13 shows a P x P point search region searched in the image domain around image macro-block 2,4 (IMB_2,4) when the input size is QCIF; and
Fig. 14 is a schematic diagram showing another detailed embodiment of the present invention, in which both motion estimation and motion prediction are performed in the image domain.
The present invention provides apparatus and methods for compressing a digital video signal using a limited-precision transform technique. The disclosed embodiments improve on traditional lossless or lossy transform techniques by performing motion compensation, i.e. motion estimation and prediction, in the transform domain rather than in the time domain as in the prior art. With this approach, images of improved quality can be achieved using less expensive hardware.
Term " motion compensation " is to understand on broader meaning.In other words, comprised a picture group picture dot element has been carried out estimation and motion prediction, it should be understood that, for example can comprise rotation and flexible although often motion compensation is described and is illustrated as.In addition, term " motion compensation " can comprise, for example produces the data of difference between two data set of expression simply.
Compression efficiency is improved by transforming the image into features and then matching those features. The disclosure discussed herein is presented in terms of a series of images or video frames. It will be readily understood that such an image sequence is a set of data elements in a spatial sense (whether scalars, vectors, or functions) that are structurally adjacent and can be indexed by time or by some other parameter. An image sequence may be in Cartesian coordinates, but other coordinate systems can also be used in this field.
In addition, the apparatus and methods of the present invention can be used in non-video applications, for example in compression applications for speech, audio, and electrocardiograms. That is, even though the invention disclosed herein is presented as a two-dimensional (2D) system, i.e. video compression, its concepts can be applied to systems of any other dimensionality to improve general data compression techniques.
For example, the concepts can be applied to one-and-a-half-dimensional (1-1/2D) systems such as ultrasound imaging. The concepts can also be applied to three-dimensional (3D) systems such as magnetic resonance imaging (MRI).
In the following description, the term "frame" refers to a single image in a series of images fed to an encoder, regardless of the form of that image, i.e. regardless of whether it is in the time domain or the frequency domain, or of any processing that has been applied to it. In addition, the term "point" is used to refer to a picture element of an image in the time domain, and the terms "coefficient" and "transform coefficient" are used to refer to the representation of a point produced after, for example, a forward wavelet transform. These terms are used to aid the description of the embodiments and do not limit the scope of the invention in any way.
Referring now to the drawings, in which like reference numerals denote like parts of the present invention, Fig. 2 shows a schematic diagram of one embodiment for compressing a series of images or frames. This figure is one of several embodiments disclosed herein; the embodiments are discussed in more detail in connection with the later figures.
In Fig. 2, an image is converted into a set of features in the transform domain in module 20. In module 22, the features considered most important to the image are selected, i.e. the features judged to have changed substantially relative to a past frame or reference frame. In module 24, these important features are represented efficiently and are then sent to a decoder to update the corresponding features of a reference frame.
More specifically, in module 20 the source image is transformed and represented by a set of transform coefficients. Then, in module 22, the transform coefficients in the coefficient set are evaluated through various weighting and evaluation methods to judge their importance and are ranked according to importance. Then, in module 24, motion compensation is performed between the current frame and the past or reference frame. Motion compensation may include estimating the changes between the frames to produce a set of motion vectors. Thereafter, during a motion prediction step, the motion vectors are applied to the reference frame. The result of the motion prediction is subtracted from the set of transform coefficients to determine the error of the prediction. This prediction error is then optionally scaled and finally encoded, together with the motion vectors, for transmission to the decoder.
Referring to Fig. 3, which shows a more specific structure of the embodiment described with reference to Fig. 2, an image sequence or series of video frames 26 encoded, for example, in the Common Intermediate Format (CIF) is fed to a converter 28. A CIF frame has 288 x 352 points. In converter 28, each frame is converted to quarter-CIF (QCIF), for example the QCIF image 30 shown in Fig. 4(a). A QCIF image has 144 x 176 points. The conversion from CIF to QCIF is performed by a low-pass filter together with 2:1 decimation in both the horizontal and vertical directions. For convenience of processing, the 144 x 176 points are divided into image macro-blocks (IMB_x,x), each macro-block having 16 x 16 points; a small sketch of this front end follows. QCIF is used here only as an example and does not limit the invention in any sense. Using methods well known to those skilled in the art, the techniques described below can readily be applied to other image (and non-image) formats.
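In the sketch, a 2 x 2 box average stands in for the low-pass filter (an assumption); the 2:1 decimation in each direction and the 16 x 16 macro-block grid follow the description above.

```python
import numpy as np

def cif_to_qcif(cif):
    """Convert a 288 x 352 CIF frame to a 144 x 176 QCIF frame.

    A 2 x 2 box average stands in for the low-pass filter (assumption);
    the 2:1 decimation in each direction follows the description above.
    """
    h, w = cif.shape
    return cif.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def image_macroblocks(qcif, size=16):
    """Partition a QCIF frame into its 9 x 11 grid of 16 x 16 image macro-blocks."""
    return {(r, c): qcif[r * size:(r + 1) * size, c * size:(c + 1) * size]
            for r in range(qcif.shape[0] // size)
            for c in range(qcif.shape[1] // size)}
```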
Referring to Figs. 3 and 4, the QCIF image 30 is fed to modules 32 and 36, which together form module 20 of Fig. 2, where the matching of image features is carried out. In more detail, the QCIF image 30 (Fig. 4(a)) is fed to module 32, where a forward wavelet transforms each frame into the set of subbands 34 (Fig. 4(b)). The structure of this transformed image, i.e. the set of subbands 34, is stored in memory for later use, for example during the subsequent motion estimation, motion prediction, and determination of the prediction error. A forward wavelet suitable for use with the present invention is discussed in more detail below.
Fig. 5 shows the subband macro-block packing process. During subband macro-block packing, all related subband blocks within the subbands 34 (Fig. 5(a)) are regrouped to form the subband macro-blocks 38 shown in Fig. 5(b).
For example, the shaded subband blocks in Fig. 5(a) correspond to image macro-block 2,4 (IMB_2,4) of Fig. 4(a) and are regrouped during the subband macro-block packing performed in module 36 (Fig. 3) to form the subband macro-block SMB_2,4 shown in Fig. 5(b). The subband macro-blocks 38 (SMB_0,0 through SMB_8,10) are then organized into the group of subband macro-blocks 40 shown in Fig. 5(c), so that each subband macro-block is supported by the spatial position of its corresponding image macro-block (IMB_x,x) in the QCIF image 30. In this example, SMB_2,4 is effectively supported by the spatial position of IMB_2,4, as shown in Figs. 4(a) and 5(c).
It should also be noted that although the embodiments described herein relate only to frames represented in QCIF, those skilled in the art will readily appreciate that other formats can be used without departing from the spirit of the invention. It should further be noted that the particular grouping of sub-blocks within each subband macro-block is chosen to suit the particular wavelet shown; other groupings of subband data will be more appropriate for other wavelets.
From the above description of the set of subbands 34 (Fig. 4(b)) and the group of subband macro-blocks 40 (Fig. 5(c)) with respect to the set of image macro-blocks 30 (Fig. 4(a)), it can clearly be seen that, for a particular image macro-block, there is a correspondence between the subband blocks and the subband macro-block. An example of this correspondence is as follows: (a) image macro-block 2,4 (IMB_2,4) is shown shaded and is also denoted image macro-block 106 in Fig. 4(a); (b) all of the shaded subband blocks in Fig. 4(b), for example subband block 116 in subband 00 (SB_00) and subband block 118 in subband 33 (SB_33), correspond to it; and (c) subband macro-block 2,4 (SMB_2,4) is shown shaded and is also denoted subband macro-block 117 in Fig. 5(c). In this description, coefficients having a relationship such as that described above are referred to as "related."
Referring to Fig. 3, the group of subband macro-blocks 40 is fed to modules 42, 46, 48, and 52, which together form module 22 of Fig. 2 and which determine which features, i.e. which subband macro-blocks (SMB_0,0 through SMB_8,10), have changed. In particular, the group of subband macro-blocks 40 is fed to module 42, where each subband macro-block in the group 40 is scaled by a weight whose magnitude corresponds to the perceptual importance of that subband macro-block. The output of the weighting module 42 is the weighted set 44.
The perceptual importance used for weighting can be determined, for example, from a mean opinion score study, or the weights can be taken from other coding systems, for example the weights found in the H.261 and H.263 systems of the International Telegraph and Telephone Consultative Committee (CCITT), whose standards are incorporated herein by reference. Regarding mean opinion scores, reference may be made to K.R. Rao & P. Yip, Discrete Cosine Transform, Academic Press, pp. 165-174 (1990), incorporated herein by reference.
After the weighting in module 42 scales each subband macro-block, the weighted set 44 is fed to the change detection module 46, where it is processed to determine the relative change that has occurred. This change may also be referred to as the "significance" or, for video, the distortion of the weighted set 44. Significance can be determined relative to a given reference, for example zero (0) or a past weighted set. The loop extending from the change detection module 46 returns a past weighted set, through frame delay 48, to the change detection module 46 to serve as a reference. The output of the change detection module 46 is the change detection set 50.
A zero (0) reference is used in the change detection module 46, for example, when the encoder transmits an initial frame. In this case, the entire frame is referenced to zero (0). This is also referred to as intra-frame referencing. As described above, a past weighted set can also be used, in which case the macro-block group is weighted in module 42 as described above and then delayed in the delay module 48 of the change detection module 46 to serve as a reference. This latter approach is also referred to as inter-frame referencing, and it avoids repeatedly sending redundant and/or unimportant information to the decoder.
The purpose of alternately using a zero (0) reference frame is to allow the decoder to regenerate and maintain a more accurate reference image during system operation. One approach, for a standard 30-frame-per-second sequence, is to use a zero (0) reference frame for the whole image periodically, for example every eighth frame. Alternatively, the image can be refreshed stochastically, for example randomly, or methodically, for example by setting the reference of one subband at a time to zero (0). To simplify zeroing the reference for all or part of a frame, the subband blocks whose reference is zero are identified so that the motion compensation operations (described below) are not performed on the affected blocks. In the decoder, the identified subband blocks are then regenerated in full, refreshing all or part of the reference as required.
Referring to Fig. 3, in module 52 the set of subbands 34 previously stored in memory and the subband macro-blocks of the change detection set 50 are re-ordered according to their significance, i.e. according to the magnitude of the change determined for each subband block. The ranking is based on the values assigned by the weighting and change detection performed in modules 42 and 46, respectively. The output of module 52 includes a ranked subband group 53, sent over line 55, and a ranked subband macro-block group 54.
Continuing with Fig. 3, the ranked subband group 53 and the ranked subband macro-block group 54 are selectively fed to modules 56, 60, 62, 68, 72, and 76, which correspond to module 24 of Fig. 2 and in which the macro-blocks that have changed are represented efficiently. In particular, the ranked subband macro-block group 54 (the "current" frame) is fed to module 56 for motion estimation. The ranked subband group 53 is fed to delay module 62, which then provides a delayed ranked subband group 57 (the "reference" frame) over line 64 for motion estimation and motion prediction in modules 56 and 60, respectively. In a manner described below, a set of motion vectors 58 is produced in the motion estimation module 56; it is fed to module 60 for motion prediction and is also sent to module 76 for position encoding.
The motion vectors 58 sent to the motion prediction module 60 are used to transform the delayed ranked subband group 57 to produce a prediction group 66. Difference module 68 receives the ranked subband group 53 and subtracts the prediction group 66 from it to produce the group difference 70, i.e. the prediction error. In module 72, the group difference 70 is further scaled to produce the scaled group difference 74. Those skilled in the art will recognize that the fewer non-zero values the group difference 70 contains, the more accurately the set of motion vectors 58 predicts the change between the current frame and the reference frame; and the fewer the differences, the fewer bits must be sent to the decoder to correct the imperfections of the motion estimation.
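The prediction, difference, and scaling steps (modules 60, 68, and 72) can be sketched as below. The per-macro-block motion vectors, the callable that applies them to reference subbands, and the interpretation of scaling as division by a step size are assumptions made for illustration.

```python
import numpy as np

def predict_and_difference(reference_subbands, current_subbands,
                           motion_vectors, apply_mv, scale_step=1.0):
    """Form the prediction group 66, group difference 70, and scaled difference 74.

    reference_subbands, current_subbands : dict of (i, j) -> coefficient array
    motion_vectors : per-macro-block displacements produced by module 56
    apply_mv : callable shifting reference coefficients by the motion vectors
               (its transform-domain behaviour is the subject of Figs. 9 and 10)
    scale_step : scaling interpreted here as division by a step size (assumption)
    """
    prediction, difference, scaled = {}, {}, {}
    for key, current in current_subbands.items():
        pred = apply_mv(reference_subbands[key], motion_vectors, key)
        prediction[key] = pred
        difference[key] = current - pred                        # prediction error (70)
        scaled[key] = np.round(difference[key] / scale_step)    # scaled error (74)
    return prediction, difference, scaled
```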
Regarding the format of the bit stream, there are several standard methods in this field for formatting a bit stream; the format used in encoder systems based on H.263 is one example. A bit stream is essentially a serial string of bit packets, each packet representing a particular type of data.
For example, bit packets may carry system-level data, video, control, and audio data. When data are received for position encoding in module 76, they are organized into bit packets according to the format in use. In general, the set of bit packets representing a video frame begins with bits identifying it as a new frame. The quantization number and other control codes generally follow. Then comes the list of encoded macro-blocks, representing the scaled and organized group difference 74. For QCIF, the number of macro-blocks equals ninety-nine (99) (see Fig. 5(c)).
To transmit the data efficiently, each macro-block is preceded by a macro-block zero bit (MBZ bit) indicating whether or not non-zero data occur in the macro-block. If the macro-block does occur, the control information for the macro-block, including the associated set of motion vectors 58, is sent, followed by the subband data, i.e. the related scaled group difference 74. Including this information substantially reduces the number of bits sent over the transmission line, because a macro-block that does not occur is represented by a single symbol rather than by all of the bits that would otherwise be needed to represent an entire string of zero-valued macro-block coefficients.
A further efficiency can be obtained when only some of the subband blocks within a subband macro-block are zero. One embodiment includes the step of identifying a subband whose coefficients are all zero with a subband zero flag (SB zero flag). A subband whose coefficients in the scaled group difference 74 are zero indicates that there is no difference between the corresponding subband block of the ranked subband group 53 and the corresponding subband block of the prediction group 66. Compared with representing each zero coefficient individually, substantially fewer bits are needed to represent the SB zero flag. The decoder, of course, is programmed to recognize the MB zero bit and the SB zero flag and to interpret the symbols introduced during position encoding in module 76. An example of a zero run-length code used to symbolize such strings is shown in the following table.
Zero run-length code:

| Zero code | Number of consecutive zeros |
| --- | --- |
| 01 | 1 |
| 001 b0 | 2, 3 |
| 0001 b1 b0 | 4, 5, 6, 7 |
| 00001 b2 b1 b0 | 8, 9, 10, 11, 12, 13, 14, 15 |
| 000001 b3 b2 b1 b0 | 16 ... 31 |
| floor(log2(N)) + 1 zeros, then 1, then bits b(MSB-1) ... b1 b0 | general N |
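Reading the table, a run of N zeros is coded as floor(log2(N)) + 1 zeros, a "1", and then the floor(log2(N)) low-order bits of N. A sketch of an encoder and decoder under that reading follows; the framing of these codes within the bit packets is not shown.

```python
def encode_zero_run(n):
    """Code a run of n consecutive zeros (n >= 1) per the table above."""
    k = n.bit_length() - 1                            # floor(log2(n))
    return "0" * (k + 1) + "1" + format(n, "b")[1:]   # then the k low-order bits of n

def decode_zero_run(bits, pos=0):
    """Decode one run length starting at bits[pos]; return (n, next_pos)."""
    z = 0
    while bits[pos + z] == "0":
        z += 1
    k = z - 1                                         # low-order bits after the 1
    lsb = bits[pos + z + 1:pos + z + 1 + k]
    n = (1 << k) | (int(lsb, 2) if lsb else 0)
    return n, pos + z + 1 + k

# Round-trip check against the runs listed in the table.
for n in (1, 3, 7, 12, 31):
    assert decode_zero_run(encode_zero_run(n))[0] == n
```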
Continuing with Fig. 3, a decoder 82 receives the encoded bit stream group 78 over transmission line 80, and the encoded bit stream group 78 is fed to a position decoding module 86, which reverses the effect of the position encoding module 76. The set of motion vectors 58 is extracted from the bit stream group 78 and fed to a prediction module 98. The decoded scaled group difference 88, in subband form (Fig. 4(b)), is provided to a dequantizer module 90. In the dequantizer module 90, past transform coefficients and past and current dequantization terms are used to recover the quantized transform coefficient values, i.e. they are used to regenerate the group difference 70.
A set of subbands 92, which serves as the reference frame, is fed to a delay module 94. A delayed set of subbands 96 is fed from the delay module 94 to the prediction module 98. In a manner similar to the processing performed in the motion prediction module 60 of the encoder, the set of motion vectors 58 is applied in prediction module 98 to the delayed set of subbands 96. Transforming the delayed set of subbands 96 in this way produces a prediction group 100, i.e. a subband representation of the updated image that does not yet include the group difference 70. In an adder module 102, the group difference 70 is added to the prediction group 100 to produce the set of subbands 92, i.e. the new reference frame. Finally, in module 104, an inverse wavelet transform is applied to the set of subbands 92. This step is essentially the inverse of the forward wavelet transform 32 briefly described above and described in more detail below. The output of module 104 is the reconstructed image 105.
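A sketch of this decoder-side update (prediction module 98, adder 102, and inverse wavelet 104) is shown below. The prediction and inverse-wavelet operations are passed in as callables, since their details are described elsewhere; the function names are illustrative only.

```python
def decode_frame(motion_vectors, group_difference, reference_subbands,
                 predict, inverse_wavelet):
    """Rebuild the new reference subbands and the reconstructed image.

    predict         : applies the motion vectors to the delayed reference
                      subbands (mirrors motion prediction modules 60 and 98)
    inverse_wavelet : undoes the forward wavelet transform 32 (module 104)
    """
    prediction = predict(reference_subbands, motion_vectors)       # group 100
    new_reference = {k: prediction[k] + group_difference[k]        # adder 102
                     for k in prediction}
    image = inverse_wavelet(new_reference)                         # module 104
    return new_reference, image
```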
As previously described and shown in Figs. 3 and 4, the QCIF image 30 (Fig. 4(a)) is fed to the forward wavelet 32, which transforms each video frame to form the set of subbands 34 (Fig. 4(b)). One embodiment of transform module 32 utilizes a tensor product wavelet transform. For a detailed discussion of tensor product wavelet transforms, see Joel Rosiene and Ian Greenshields, "Standard wavelet basis compression of images," Optical Engineering, Vol. 33, No. 8 (August 1994), incorporated herein by reference. A limited-precision transform can be used, for example the well-known Mallat, GenLOT, or Haar transforms. For suitable alternative wavelet transforms, see G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press (1997), incorporated herein by reference.
Fig. 4(b) shows the set of subbands 34 obtained after the QCIF image 30 has passed through the forward wavelet 32. As noted earlier, the forward wavelet process utilizes a tensor product wavelet transform, or another well-known limited-precision transform, modified as described herein to reduce the effects of a limited-precision implementation. In general, a transform process with m x n levels produces (m+1) x (n+1) subbands. In one embodiment, discussed below in connection with Fig. 6, the transform process has 3 x 3 levels, producing a total of sixteen subbands. Other embodiments can be used within the scope of the invention in accordance with the disclosure provided here.
Referring to Fig. 6(a), the forward wavelet process begins with three stages that filter a QCIF picture frame 30 row by row. Each stage includes a low-pass filter 108 and a high-pass filter 110. In one embodiment, each low-pass filter 108 has the values -1, 2, 6, 2, -1, and each high-pass filter has the values -1, 2, -1.
After filtering, the low-pass and high-pass components at each stage are decimated, or down-sampled, by decimators 112 and 114 respectively, eliminating components of the sampled values of the discrete signal. In the embodiment shown, the input image is down-sampled by a factor of 1/2, discarding every other sample. Decimation by a factor of two produces exactly the transform coefficients that are necessary and sufficient to reconstruct the input accurately. The down-sampled values of the low-pass and high-pass components are then normalized at each stage in the manner described in detail below with reference to Fig. 7. The output of the first stage comprises a low-pass component A0R and a high-pass component D0R. The low-pass component A0R is decomposed a second and a third time, producing the additional row details D1R and D2R and the row average A2R.
The row outputs D0R, D1R, D2R, and A2R of the row stages shown in Fig. 6(a) are then applied, column by column, to the stages shown in Fig. 6(b). Each of the three stages in Fig. 6(b) includes a filter pair, decimation, and a normalization process applied in the same manner as discussed above with reference to Fig. 6(a). The transform output is the set of subbands 34 discussed above with reference to Fig. 3 and shown in Fig. 4(b).
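Building on the one-dimensional analysis step sketched earlier (analyze_1d, assumed to be in scope), the separable row-then-column decomposition of Figs. 6(a) and 6(b) can be outlined as follows. Iterating three times on the low-pass output in each direction yields the 4 x 4 = 16 subbands of this embodiment; the SB_ij index convention used here is an assumption.

```python
import numpy as np

def analyze_rows(image, levels=3):
    """Row-wise decomposition of Fig. 6(a): returns [D0R, D1R, D2R, A2R]."""
    details, approx = [], np.asarray(image)
    for _ in range(levels):
        pairs = [analyze_1d(row) for row in approx]     # per-row (approx, detail)
        approx = np.array([a for a, _ in pairs])
        details.append(np.array([d for _, d in pairs]))
    return details + [approx]

def analyze_2d(image, levels=3):
    """Separable transform of Figs. 6(a)-(b): the 16 subbands SB_ij for 3 x 3 levels."""
    row_bands = analyze_rows(image, levels)             # D0R, D1R, D2R, A2R
    subbands = {}
    for j, band in enumerate(row_bands):
        col_bands = analyze_rows(band.T, levels)        # column stages via transpose
        for i, sub in enumerate(col_bands):
            subbands[(i, j)] = sub.T                    # (i, j) convention assumed
    return subbands
```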
Referring now to Fig. 4(b), for identification each subband is labeled with a subband identifier SB_ij, where i = 0, 1, 2, or 3 for the rows and j = 0, 1, 2, or 3 for the columns. The shaded subband blocks, for example subband block 116 in SB_00 and subband block 118 in SB_33, correspond to IMB_2,4 in the QCIF image 30 of Fig. 4(a). Because of the decimation process described above, each corresponding subband block is reduced proportionally, so that, for example, subband block 116 in SB_00 contains 8 x 8 coefficients while subband block 118 in SB_33 contains 2 x 2 coefficients. As discussed above, the related subband blocks, for example those found at subband position 2,4 in each subband (SB_00 through SB_33), are collected during the subband macro-block packing step performed in module 36 (Figs. 3 and 5) to facilitate particular processing steps.
Referring now to Fig. 7, according to a feature of the disclosed embodiments, the remainder produced at each stage of the subband coding process is passed to the opposite filter path to compensate for the error introduced by the limited-precision transform process. The transferred remainder is used to adjust the coefficients on the opposite filter path to account for the loss of precision. This process produces a nonlinear transform. Further, modifying the filters in this way makes them neither biorthogonal nor orthogonal.
Fig. 7 shows an implementation of the opposite-filter-path remainder transfer for the first stage of the row transform shown in Fig. 6(a). A similar implementation is included in each row stage and column stage. The input frame 30 is filtered in the low-pass filter 108 and the high-pass filter 110 using normal-mode coefficients. The results are down-sampled in decimators 112 and 114, respectively. The decimated result of the low-pass filter 108 is normalized in a low-pass normalization process 120 to produce a low-pass normalized output 122 and a low-pass remainder rl. The decimated result of the high-pass filter 110 is normalized in a high-pass normalization process 124 to produce a high-pass normalized output 126 and a high-pass remainder rh. The remainders rl and rh produced by the normalization processes 120 and 124 are passed through the functions g(rl, rh) 128 and f(rl, rh) 130, respectively. The result of the function g(rl, rh) 128 is added to the low-pass normalized output 122 in adder 132 to produce A0R (the first-stage average). The result of the function f(rl, rh) 130 is added to the high-pass normalized output 126 in adder 133 to produce D0R (the detail lost at the first stage).
For the filters L = {-1, 2, 6, 2, -1} and H = {-1, 2, -1}, one embodiment of the remainder functions is f(rl, rh) = floor(rh + 1/2), where nh = 1/2, and g(rl, rh) = rh. The remainder operation described above is repeated for each filter pair, reducing the number of bits allocated in the transform output.
The form of one embodiment of the tensor product wavelet pair is expressed in terms of:

- X_2i = the input data;
- X_2i-1 = the data preceding input data X_2i;
- X_2i+1 = the data following input data X_2i;
- D_i = a detail term (the decimated high-pass filter output);
- D_i+1 = the detail term following detail term D_i; and
- A_i = an approximation (the decimated low-pass filter output).
The description of the tensor product wavelet transform above shows a two-way separation into high-pass (detail) and low-pass (approximation) components. The description also shows the possibility of passing remainders from a first band to a second band, from the second band to the first band, or simultaneously in both directions. The purpose of the embodiments described above is to illustrate the basic concepts of the invention, and they are in no way to be understood as limiting the scope of the invention.
For example, a tensor product wavelet transform may have a first stage with a three-way separation comprising a high-pass filter, a band-pass filter, and a low-pass filter. The output of the low-pass filter is then iterated, i.e. a second stage of separation can be applied to the output of the low-pass filter, producing a total of five subbands. In such an embodiment, remainders can be passed from the low-pass and high-pass filters to the band-pass filter. This embodiment is merely one example of how the tensor product wavelet transform can be modified while remaining within the scope and spirit of the present invention. Those skilled in the art will readily appreciate that there are many ways to separate and iterate the input at each stage, and many other ways to pass remainders between subbands.
In addition, the foregoing description of remainder transmission is not confined to its use the meaning of a tensor product wavelet transformation.It can be used for any other conversion.For example, the transmission of remainder can be used together with a discrete cosine transform (DCT).In addition, the cosine transmission can a mode harmless or that diminish be carried out.
As discussed above, the output of forward wavelet 32 can be that one of QCIF image 30 expression completely or one are after representing completely.A perfect representation of QCIF image 30 comprises the sets of subbands of just enough presentation image contents.One of QCIF image 30 cross perfect representation and comprise perfect representation and redundant, alternatively, perhaps Fu Jia subband is represented, the motion compensation of describing later can be implemented easily.Each expression has the value in disclosed execution mode.For example, cross perfect representation and can comprise that various images change for example translation, rotation change, and telescopic variation.These variations are essential during motion compensation, and have reduced the problem of coming presentation image to change with an index.
It should be noted that with regard to above-mentioned forward direction small echo to change that although the shown frame structure that is transformed is right to brightness, this structure also is right here concerning chromatic component, so, do not described respectively.
With regard to the change detection module 46 described with reference to Fig. 3, it should be noted that a zero (0) reference, or some other reference such as the past weighted set provided through the delay 48, can be used to determine how the weighted set 44 has changed. One embodiment of the change detection module 46 includes a change detection metric. The general form of the change detection metric applied to the weighted set 44 is given by an equation (not reproduced in this text) in which: e_c = the measure of distortion relative to the reference R; W_i = the applied weight; G = the current group of subband transform coefficients; and R = the reference, for example zero (0) or a previous group of subband coefficients obtained through the delay module 48.
A more detailed form of the change detection metric is given by a second equation, likewise not reproduced in this text.
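Because neither equation survives in this text, the sketch below only illustrates one plausible weighted-distortion form that is consistent with the variable definitions above (a weighted sum of absolute differences between the current coefficient group G and the reference R); the metric actually claimed may differ.

```python
import numpy as np

def change_metric(G, R, W):
    """Illustrative distortion e_c between the current subband coefficient
    group G and a reference R (zero, or the previous group from delay 48),
    weighted by W. This exact form is an assumption; the patent's equations
    are not reproduced in the source text."""
    G, R, W = (np.asarray(a, dtype=float) for a in (G, R, W))
    return float(np.sum(W * np.abs(G - R)))

# Example: score a current macroblock's coefficients against a zero reference.
G = np.random.randn(8, 8)
W = np.ones((8, 8))
print(change_metric(G, np.zeros((8, 8)), W))
```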
In addition, the change detection module 46 can use the information provided over feedback 132 (Fig. 3) from the coded bit stream 78 to eliminate a particular weighted macroblock from the weighted set 44, i.e., to keep it out of the output of the change detection module 46, if its cost in terms of bit allocation is judged too expensive. Further, the change detection module 46 can substitute one feature, for example a subband block, with another feature that better represents it.
As described above with respect to Fig. 3, the sorted subband group 53 and the sorted subband macroblock group 54 are fed over line 55 to the delay module 62 and to block 56, respectively, to perform motion estimation. In module 56, a comparison is carried out between the sorted subband macroblock group 54, i.e. the 'current' frame, and the relevant search areas of the delayed sorted subband group 57, i.e. the 'reference' frame. Those skilled in the art will recognize the advantage of using the sorted subband macroblock group 54 of the current frame together with the delayed sorted subband group 57 of the reference frame; it should be noted, however, that other groups and combinations may be used within the spirit of the invention. The comparison performed in module 56 produces a motion vector set 58 that is fed to module 60 for motion prediction and to module 76 for bit encoding into the bit stream, as briefly described above.
Referring to Figs. 8 and 9, the motion estimation performed in module 56 and the generation of the motion vector set 58 are now described in more detail. Fig. 8 shows the delayed sorted subband group 57. The delayed sorted subband group 57 is the same as the subband set 34 shown in Fig. 4(b), except that its subband blocks have been sorted in module 52 (Fig. 3) and delayed by at least one frame in the delay module 62 for further processing. To facilitate the determination of individual motion vectors, search areas are defined about the subband blocks in at least one of the subbands (SB00 to SB33). The subband blocks in each subband that are selected to have search areas defined for them are those identified as valid subband blocks in the change detection module 46. Motion vectors derived from the valid subband blocks in SB00 are often sufficient.
Continuing with Fig. 8, the figure shows, for image macroblock 2,4 (IMB2,4) of the QCIF image 30 (Fig. 4(a)), each corresponding subband block and the search area derived for it. The size of a search area may vary; however, the search areas about the subband blocks are always proportional to the decimation relationship of their subbands to the image. For example, a basic search area of P x P points in the QCIF image 30 (Fig. 13) translates into a search area of P/2 x P/2 about subband block 137 of SB00, as shown at 136, and into a search area of P/4 x P/2 about subband block 140 of SB01, as shown at 139.
For the motion estimation example given below, the P x P point search area of Fig. 13 comprises 32 x 32 points, which is four times the 16 x 16 size of IMB2,4. Accordingly, the search area P/2 x P/2 (Fig. 8) comprises 16 x 16 coefficients, which is four times the size of subband block 137 (8 x 8 coefficients). Likewise, the search area P/4 x P/2 (139) comprises 16 x 8 coefficients, which is four times the size of subband block 140 (8 x 4 coefficients). As will be described, the subband search areas are used to facilitate the determination of motion vectors for each valid subband block (0,0 to 8,10) in some or all of the subbands (SB00 to SB33).
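The proportionality described above can be expressed directly. The sketch below derives per-subband search-area dimensions from the base P x P image-domain area, using assumed horizontal and vertical decimation factors (2 and 2 for SB00; 4 and 2 for SB01, consistent with the sizes quoted above).

```python
# Illustrative only: derive a subband search area from the base P x P area
# using the subband's horizontal and vertical decimation factors.
def subband_search_area(P, dec_h, dec_v):
    # Returns (rows, cols) of the search area in transform coefficients.
    return P // dec_v, P // dec_h

P = 32                                   # base search area: 32 x 32 image points
print(subband_search_area(P, 2, 2))      # SB00: 16 x 16 coefficients
print(subband_search_area(P, 4, 2))      # SB01: 16 x 8 coefficients
```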
The basic size (P x P) of the search area can be decided empirically, for example by statistical analysis of the amount of movement expected between frames. The amount of computation required to perform a search within a given search area should also be considered. Those skilled in the art will readily appreciate that a larger search area requires more computational resources and therefore, for a given processor, implies more inter-frame delay. Conversely, a smaller search area requires fewer computational resources but sacrifices image quality. This is particularly true during periods of high image motion: because part of the motion may fall outside the search area, an accurate motion vector cannot be selected, and image quality degrades.
As noted above, the sorted subband group 53 and the sorted subband macroblock group 54 are fed from module 52 over line 55 to the delay module 62 and to the motion estimation module 56, respectively. For the example shown below, a search area is placed around subband block 2,4 of SB00 of the delayed sorted subband group 57 (Fig. 8). The SB00 subband block within subband macroblock 2,4 of the sorted subband macroblock group 54 (compare the subband block 116 in Fig. 5(c)) is used to traverse the search area in search of the change. As noted above, however, any selection of subbands, or all of the subbands, may be used according to the method described below.
Referring now to Figs. 3, 8 and 9, as described above, the sorted subband group 53 is delayed in the delay 62 to produce the delayed sorted subband group 57 (the 'reference' frame). The delayed sorted subband group 57 is fed to the motion estimation module 56, where the search area 136 is represented as a P/2 x P/2 region around subband block 137 in SB00. For this example the search area equals 16 x 16 coefficients. The sorted subband macroblock group 54 (the 'current' frame) is also fed to the motion estimation module 56, where a subband block 138 (Fig. 9(a)), similar to the shaded subband block 116 in Fig. 5(c), is retrieved for use in the comparison process described below.
Referring now in particular to Figs. 9(a) to 9(d), the process of determining a motion vector (MVx,y) within the motion estimation module 56 of Fig. 3 is shown. In the following example, a motion vector is determined for a single subband block, namely subband block 2,4 of SB00. A motion vector may, however, also be determined for each valid subband block in each of the subbands (SB00 to SB33).
Referring to Fig. 9(a), the subband block 138 of the sorted subband macroblock group 54 is placed within the search area 136 of the delayed sorted subband group 57 (Fig. 8). The subband block 138 is essentially superimposed on subband block 137 of the delayed sorted subband group 57. As discussed above, the sorted subband macroblock group 54 has a structure similar to that of the subband macroblock group 40 shown in Fig. 5(c), and the delayed sorted subband group 57 has a structure similar to that of the subband set 34 shown in Fig. 4(b). Again referring to Fig. 9(a), the coefficients 141 of the search area 136 (shown as four circles, each containing an 'x') and the coefficients 142 of the subband block 138 (shown as four circles) are used here to conveniently describe the method of determining a motion vector. For this example, assume that the values of coefficients 141 and 142 are approximately equal, and that the remaining coefficients (not shown) have values that differ from coefficients 141 and 142 but are also approximately equal to one another. The difference in position between coefficients 141 and 142 represents a change, for example a translation, between two video frames.
Referring to Fig. 9(b), the subband block 138 traverses, i.e. searches, the search area 136 using a predetermined incremental pattern, and at each step the total absolute difference between the subband block 138 and the search area 136 is determined. Those skilled in the art will readily recognize that various traversal patterns may be used, and that criteria other than the total absolute difference may be used as the basis of comparison. The initial comparison uses incremental, i.e. whole-step, movements of the subband block 138 to find the best match. An incremental movement is a complete offset or step, which may be in the x direction or the y direction. For example, in searching the entire search area 136, the subband block 138 is moved within the search area 136 by increments of up to ±4, i.e. up to four transform coefficients, in the x direction, and by increments of up to ±4 in the y direction. Because the subband block 138 has 8 x 8 coefficients and the search area 136 has 16 x 16 coefficients, the subband block 138 can move by up to ±4 increments in both the x and y directions.
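A minimal sketch of this incremental search, using the sum of absolute differences (SAD) as the comparison criterion, is given below. The 8 x 8 block, the 16 x 16 search area and the ±4 whole-coefficient range follow the example in the text; the raster traversal order and the random test data are arbitrary choices.

```python
import numpy as np

def incremental_search(block, area, max_step=4):
    """Slide the block over the search area in whole-coefficient steps of up
    to +/-max_step around the centered position and return the (dx, dy)
    offset with the smallest total absolute difference (SAD)."""
    bh, bw = block.shape
    cy = (area.shape[0] - bh) // 2          # centered position inside the area
    cx = (area.shape[1] - bw) // 2
    best = (None, np.inf)
    for dy in range(-max_step, max_step + 1):
        for dx in range(-max_step, max_step + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= area.shape[0] - bh and 0 <= x <= area.shape[1] - bw:
                sad = np.abs(area[y:y + bh, x:x + bw] - block).sum()
                if sad < best[1]:
                    best = ((dx, dy), sad)
    return best

area = np.random.randn(16, 16)            # search area 136 (reference frame)
block = area[6:14, 7:15].copy()           # subband block 138, displaced by (dx=+3, dy=+2)
print(incremental_search(block, area))    # expect best offset (3, 2) with SAD 0
```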
Continuing with Fig. 9(b), after the incremental search has been carried out, the best match is found at a movement of 3 complete increments in the positive x direction and 2 complete increments in the positive y direction. Then, as shown in Fig. 9(c), fractional differences are determined in order to represent the difference between the subband block 138 and the search area 136 more accurately. To facilitate this, masks representing fractional movements of the subband are applied to the subband block 138.
For example, because the size of SB00 is one quarter of the size of the corresponding macroblock of the source image (see IMB2,4 of Fig. 4(a)), the subband block 138 can be moved by four possible fractional amounts to more accurately reproduce fine movement of IMB2,4. That is, the subband block 138 can be moved by an increment of ±1/2 in the x direction and by an increment of ±1/2 in the y direction. Accordingly, four fractional masks 143 are used to shift the subband block 138 in search of the best match.
Continuing with Fig. 9(c), the four masks 143 are applied to the subband block 138, and between the application of each mask the total absolute difference between the coefficients of the subband block 138 and the search area 136 is determined. If a better match is found than that determined during the incremental search described above, the fractional shift is added to the motion vector. In this example, the best match is found at a fractional shift of +1/2 in the positive x direction. As a result, the x and y components of the resulting motion vector are +3 1/2 and +2, respectively.
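The fractional masks themselves are not specified in this text. The sketch below therefore assumes a simple two-tap averaging mask to test a +1/2-coefficient shift in x, interpolating the reference region for convenience (the text applies the masks to the subband block); it is meant only to illustrate the refinement step, not the patent's actual mask coefficients. The default cx, cy match the centered position used in the incremental-search sketch above.

```python
import numpy as np

def half_shift_x(block):
    """Assumed fractional mask: average horizontally adjacent coefficients to
    approximate a half-coefficient shift in the x direction."""
    return 0.5 * (block[:, :-1] + block[:, 1:])          # shape (rows, cols-1)

def refine_x(block, area, dx, dy, cx=4, cy=4):
    """Given the best whole-coefficient offset (dx, dy), test a +1/2
    fractional shift in x and keep it if it lowers the SAD."""
    bh, bw = block.shape
    ref = area[cy + dy:cy + dy + bh, cx + dx:cx + dx + bw]
    sad_int = np.abs(ref - block).sum()
    sad_half = np.abs(half_shift_x(ref) - block[:, :-1]).sum()
    return (dx + 0.5, dy) if sad_half < sad_int else (dx, dy)

area = np.random.randn(16, 16)
block = 0.5 * (area[6:14, 7:15] + area[6:14, 8:16])      # content displaced by dx=+3.5, dy=+2
print(refine_x(block, area, dx=3, dy=2))                 # expect (3.5, 2)
```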
Those skilled in the art will recognize that obtaining an exact match, as in the example above, is unusual. In this respect, the 'best match' between the coefficients of the subband block and the coefficients of the search area is more accurately described as the 'closest approximation' between the two. Motion prediction, described later, compensates for this inexactness.
Referring to Fig. 9(d), the signs of the x and y components of the motion vector are inverted and the components are scaled. More particularly, in this example each of the x and y components is multiplied by -1 and, because SB00 is used for motion estimation, each of the x and y components is also multiplied by 2. The signs of the x and y components are inverted so that, when the motion vector is applied to the delayed sorted subband group 57 during motion prediction (discussed in more detail below), the appropriate coefficients are moved from their 'previous' frame positions to their 'current' frame positions. The x and y components are scaled so that the movement determined above (x = 3 1/2, y = 2) represents the corresponding movement of the related macroblock (IMB2,4) of the source QCIF image. The scaling allows simpler determination of the x and y components to be used when moving the appropriate coefficients in the subbands SB00 to SB33 during motion prediction.
In this example, the resulting motion vector, representing the movement of the subband blocks within SMB2,4, is x = -7 and y = -4 (MV2,4). MV2,4 is stored in memory with the motion vector set 58. MV2,4 thus represents the movement of particular coefficient sets from the delayed sorted subband group 57 (the 'reference' frame) to their new positions, in order to predict the sorted subband group 53 (the 'current' frame). The above process is repeated for each valid subband block in, for example, SB00. Typically the blocks are processed in sorted order, that is, from the macroblock with the greatest amount of movement to the macroblock with the least. Subband blocks that are not valid are not considered and are therefore not assigned motion vectors. This occurs, for example, when a block has not changed position between frames or is otherwise not valid. It also occurs when, as previously discussed, the reference for a subband block is zero (0).
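The sign inversion and scaling of Fig. 9(d) amount to one line of arithmetic. The sketch below turns the subband-domain match (x = 3 1/2, y = 2) found for an SB00 block into the stored motion vector MV2,4 = (-7, -4), using the scale factor of 2 stated in the text for SB00.

```python
def to_stored_motion_vector(dx, dy, scale=2):
    """Invert the sign (so the vector maps 'previous' positions to 'current'
    positions) and scale the subband displacement up to source-image units."""
    return (-dx * scale, -dy * scale)

print(to_stored_motion_vector(3.5, 2))   # -> (-7.0, -4), i.e. MV2,4 in the example
```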
If a different subband is to be used for computing motion vectors, the incremental and fractional movements are determined using a method similar to that described above, in proportion to the relationship between the particular subband and the QCIF image 30. For example, if a subband block within SB01 is used to derive a motion vector, the following criteria are used: search area size = 16 x 8 coefficients; x fractional masks = increments of ±1/4, ±1/2 and ±3/4; y fractional masks = increments of ±1/2; x scaling = 4; and y scaling = 2.
One advantage of using the method described above is the use of separable filters. In other words, the filters used for the incremental and fractional movements of one subband block can be reused for the incremental and fractional movements of another subband block. For example, the subband blocks in SB00 have four possible fractional movements: x = ±1/2 and y = ±1/2. The subband blocks in SB01 have eight possible fractional movements: x = ±1/4, ±1/2 and ±3/4, and y = ±1/2. Because SB00 and SB01 have the fractional movements x = ±1/2 and y = ±1/2 in common, a single separable filter can be used for the fractional movements x = +1/2, y = +1/2, x = -1/2 and y = -1/2 in both subbands. This approach can be applied to all the fractional movements that the subbands of the delayed sorted subband group 57 have in common. The same advantage of separable filters can also be exploited in the motion prediction module 60.
Referring to Fig. 10, after the motion estimation module 56 has processed all of the valid subband blocks, the motion vector set 58 is output to the motion prediction module 60 and to the bit encoding module 76. In the motion prediction module 60, the motion vectors are used to compute the movement of particular coefficient sets from each subband of the delayed sorted subband group 57 (the 'reference' frame) to their new positions, in order to predict the sorted subband group 53 (the 'current' frame).
To determine which masks are used to produce such movement, the x and y components are multiplied by the reciprocal of the modulus corresponding to each subband block. For example, to determine the x and y components of the movement to be applied to the 8 x 8 coefficient set 138 at position 2,4 in SB00, each of the x and y components of MV2,4 is multiplied by the reciprocal of the corresponding modulus, 2. This computation yields x = -3 1/2 and y = -2. Accordingly, a mask for an incremental movement of x = -3, a mask for a fractional movement of x = -1/2 and a mask for an incremental movement of y = -2 are applied to the 8 x 8 coefficients 148.
As a second example, to determine the x and y components of the movement to be applied to the 8 x 4 coefficient set 139 at position 2,4 in SB01, the x component of MV2,4 is multiplied by the reciprocal of the modulus 4, and the y component of MV2,4 is multiplied by the reciprocal of the modulus 2. This computation yields x = -1 3/4 and y = -2. Accordingly, a mask for an incremental movement of x = -1, a mask for a fractional movement of x = -3/4 and a mask for an incremental movement of y = -2 are applied.
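The two examples above amount to dividing the stored motion vector by each subband's modulus and splitting the quotient into a whole-coefficient part and a fractional part. The sketch below reproduces that bookkeeping; truncation toward zero is an assumed convention chosen because it matches the x = -1, -3/4 example.

```python
import math
from fractions import Fraction

def subband_move(mv_x, mv_y, mod_x, mod_y):
    """Divide the stored motion vector by the subband's moduli and split each
    component into a whole (incremental) part and a fractional part."""
    def split(v, m):
        q = Fraction(v, m)
        whole = math.trunc(q)              # assumed convention: truncate toward zero
        return whole, q - whole
    return split(mv_x, mod_x), split(mv_y, mod_y)

MV = (-7, -4)                               # MV2,4 from the example above
print(subband_move(*MV, mod_x=2, mod_y=2))  # SB00: ((-3, -1/2), (-2, 0))
print(subband_move(*MV, mod_x=4, mod_y=2))  # SB01: ((-1, -3/4), (-2, 0))
```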
Fig. 10 shows the movement of all of the coefficient sets to the subband blocks corresponding to SMB2,4. Applying all of the motion vectors (MVx,y) in the motion vector set 58 to the delayed sorted subband group 57 (the 'reference' frame) produces a prediction of the sorted subband group 53 (the 'current' frame), referred to as the prediction group 66 (Fig. 3).
In an alternative embodiment of the process used to determine fractional movement between frames, 3 x 3 coefficient masks are used. These masks form a weighted average of the coefficients surrounding a selected coefficient. In this alternative method, as shown above and in Figs. 9(a) and 9(b), a motion vector set 58 containing only incremental movements is determined for each valid subband block in each subband (SB00 to SB33), or in only a selected number of subbands, for example only SB00. The motion vector set 58 is then fed to the motion prediction module 60.
In the motion prediction module 60, the motion vector set 58 is applied in a manner similar to that shown in Fig. 10, so that the valid subband blocks of the delayed sorted subband group 57 are moved incrementally. A 3 x 3 mask is then applied to each coefficient of each moved coefficient set. The applied mask determines a weighted average of the coefficients surrounding each moved coefficient. The result of this computation is the prediction of the moved coefficient, i.e. its new coefficient value.
After the motion vectors of the motion vector set 58 have been applied to the delayed sorted subband group 57, and the 3 x 3 masks have been applied to all of the coefficients moved by the motion vectors, the result is output from the motion prediction module 60 as the prediction group 66. This process is, of course, repeated in the prediction module 98 of the decoder 82, so as to reproduce there the masking performed in the motion prediction module 60.
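The 3 x 3 mask weights are not given in this text; the sketch below uses a uniform averaging kernel purely as a stand-in to show how each coefficient of an incrementally moved block would be replaced by a weighted average of its neighbours.

```python
import numpy as np

def weighted_average_3x3(block, mask=None):
    """Apply a 3x3 weighted-average mask to every coefficient of a moved
    coefficient set. A uniform kernel is an assumed stand-in for the patent's
    (unspecified) weights; edges are handled by replication."""
    if mask is None:
        mask = np.full((3, 3), 1.0 / 9.0)
    padded = np.pad(block, 1, mode="edge")
    out = np.zeros_like(block, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * padded[dy:dy + block.shape[0], dx:dx + block.shape[1]]
    return out

moved = np.arange(64, dtype=float).reshape(8, 8)   # an incrementally moved 8 x 8 coefficient set
print(weighted_average_3x3(moved)[:2, :2])
```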
After a prediction has been determined using either of the methods described above, the prediction group 66 is passed to the difference module 68, where the difference between the sorted subband group 53 and the prediction group 66 is determined. As noted above, the difference module 68 produces the group difference 70.
Although the motion compensation methods described herein are shown in connection with a tensor product wavelet transform, it is important to note that these methods can be used with other types of transforms, either in the time domain or in the transform domain. For example, data transformed using a DCT can be motion compensated using methods similar to those described above. That is, the 64 transform coefficients of each 8 x 8 DCT block can be motion compensated in the same manner as the 64 transform coefficients of each 8 x 8 subband block in SB00 of the tensor product wavelet transform.
Referring now to Fig. 11, another embodiment of the video encoder is shown. As in the embodiment of Fig. 3 described above, motion estimation and motion prediction are performed in the transform domain, in modules 150 and 152, respectively. The front end of this embodiment is also similar to that of Fig. 3. More specifically, the CIF image 26 is converted into a QCIF image 30 in converter 28. The QCIF image 30 is transformed and converted into a subband macroblock group 40 by the image-to-feature matching components 20. In addition, the subband set 34 and the subband macroblock group 40 are converted into the sorted subband group 53 and the sorted subband macroblock group 54, respectively, by the components 22 that determine which features have changed.
Also as in the embodiment shown in Fig. 3, the sorted subband macroblock group 54 is fed to a motion estimation module 150, and the sorted subband group 53 is fed to the difference module 68. However, rather than using a delayed sorted subband group 57 as the reference frame, an error-corrected subband group 171, onto which error has been accumulated, is fed to a delay module 156, thereby producing a delayed subband group 172 (the 'reference' frame). Such a change is necessary when the quantization (or scaling) is so coarse that it substantially alters the prediction error 70 produced in the difference module 68.
To derive the error-corrected subband group 171, when the reference of the system is zero (0), for example when the system is initialized or when the reference in the decoder needs to be refreshed, a copy of the sorted subband group 53 is passed unchanged through the difference module 68 and is stored in memory. Then, as the prediction error 70 of each subsequent frame passes through the quantization module 158, the prediction error 70 is accumulated onto, i.e. added to, the reference. This updated reference image is fed to the delay module 156, thereby producing the delayed subband group 172. By using this method, the reference in the encoder remains synchronized with the reference in the decoder. Those skilled in the art will recognize that such a structure is useful for maintaining synchronization between the encoder and the decoder when a significant amount of scaling and/or quantization is performed between motion prediction and bit encoding.
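The reference-maintenance rule described above can be viewed as a small accumulation loop: the encoder folds each quantized prediction error back into its stored reference so that the reference tracks exactly what the decoder can reconstruct. The sketch below is schematic only; the uniform quantizer and the caller-supplied predict function are placeholders, and the update shown (prediction plus quantized error) is the decoder-reconstructable form of "adding the error to the reference".

```python
import numpy as np

def quantize(x, step):
    """Placeholder uniform quantizer/dequantizer collapsed into one step."""
    return np.round(x / step) * step

def encode_frame(current, reference, predict, step):
    """One iteration of the reference update: the quantized prediction error
    (what the decoder actually receives) is accumulated onto the reference,
    keeping the encoder and decoder references synchronized."""
    prediction = predict(reference)          # motion prediction 152 (supplied by caller)
    error = current - prediction             # difference module 68 -> prediction error 70
    q_error = quantize(error, step)          # quantization module 158
    new_reference = prediction + q_error     # error-corrected subband group 171
    return q_error, new_reference

# Toy usage with an identity predictor and a coarse step:
ref = np.zeros((8, 8))
frame = np.random.randn(8, 8)
q_err, ref = encode_frame(frame, ref, predict=lambda r: r, step=0.5)
print(np.abs(frame - ref).max())             # residual distortion due to quantization only
```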
After the motion estimation module 150 and the motion prediction module 152 have received the delayed subband group 172 from the delay module 156, motion estimation and motion prediction are determined by a process similar to that described above with respect to Figs. 8 to 10. In addition, a feedforward 159 is provided between the change detection 46 and the quantization module 158 to adjust the amount of quantization to be performed on a particular block in relation to the amount by which the block has changed. When the change detection module 46 detects a large amount of change, a larger number of bits is allocated to the quantization. Conversely, when the change detection module 46 detects a small amount of change, a proportionally smaller number of bits is allocated.
Referring now to Fig. 12, another embodiment of the video encoder is shown. The front end of this embodiment is similar to those of Figs. 3 and 11 described above. Unlike the embodiments described above, however, motion estimation is performed in the image domain. This embodiment takes advantage of specific hardware features available on some processors.
In Fig. 12, a CIF image 26 is converted into a QCIF image 30 in the conversion module 28. The QCIF image 30 is transformed and converted into a subband macroblock group 40 by the image-to-feature matching components 20. The subband macroblock group 40 is processed by the components 22 that determine which features have changed, in order to determine the subband macroblock ordering. The result is applied to the subband set 34 to produce the sorted subband group 53, which is then fed to the difference module 68.
In addition, the QCIF image 30, referred to as the 'current' frame, is fed to a motion estimation module 160 and to a delay module 166 in order to determine a motion vector set 162. More specifically, in the delay 166 an image frame 30 is delayed to produce a delayed image frame 167, also referred to as the 'reference' frame. Referring to Fig. 13, the delayed image frame 167 is fed to the motion estimation module 160, where a search area of P x P points is derived around each valid image macroblock. For example, a search area 107 of P x P points is established around the image macroblock 2,4 (IMB2,4). Based on empirical analysis, a 32 x 32 search area 107 is used around each 16 x 16 image macroblock of a QCIF image frame.
In the motion estimation module 160, each valid image macroblock (IMBx,y) of the current QCIF image 30 frame is placed in the corresponding search area of the delayed image frame 167 in order to determine a motion vector. For example, IMB2,4 is retrieved from the QCIF image 30 and placed in the search area 107 of the delayed image frame 167. This process is similar to the transform-domain process described above and shown in Figs. 8 and 9(a).
Using a method similar to that described above and shown in Fig. 9(b), IMB2,4 traverses and searches the search area 107 to determine, at each step, the minimum total absolute difference between IMB2,4 and the search area 107. Unlike the subband search described above, however, a fractional search is unnecessary when searching in the image domain. Therefore, after the incremental movement of IMB2,4 has been determined, the x and y components are inverted (multiplied by -1) and stored in memory with the motion vector set 162. The motion vectors are fed to the motion prediction module 154 and to the bit encoding module 76. The motion vectors are then applied to the delayed subband group 172 in a manner similar to that described above with respect to Figs. 3 and 11 and shown in Fig. 10.
Referring now to Fig. 14, another embodiment of the video encoder is shown, the front end of which is similar to the embodiments described above and shown in Figs. 3, 11 and 12. Unlike the embodiments described above, however, both motion estimation and motion prediction are performed in the image domain.
In Fig. 14, a motion vector set 162 is determined in a manner similar to that described above and shown in Figs. 12 and 13. The motion vector set 162 is fed to module 164 to perform motion prediction and to module 76 to perform bit encoding. In a manner similar to that described above and shown in Figs. 11 and 12, an error-corrected subband group 171, onto which error has been accumulated, is fed to the delay module 156 to produce the delayed subband group 172 (the 'reference frame'). Unlike the embodiments described above, however, the delayed subband group 172 is then reconstructed by an inverse wavelet transform module 174 to form a reconstructed image 176. The reconstructed image has a structure similar to that of the QCIF image 30 shown in Fig. 4(a).
Alternatively, efficiency can be improved by reconstructing only part of the delayed subband group 172 rather than the whole group. For example, a 3,5 filter can be used to obtain a reconstruction region of 48 x 48. The region is selected according to the validity of the 16 x 16 image macroblock at its center, i.e. according to the change detected for that macroblock.
In the motion prediction module 164, the motion vector set 162 is applied to the reconstructed image 176 (or to a reconstructed 48 x 48 region, if only a region has been inverse wavelet transformed). In a manner similar to that described above and shown in Fig. 10, the motion vector set 162 is applied to the reconstructed reference image 176 to move the corresponding sets of image values representing the QCIF image to their new positions. A prediction 178 is then fed to a forward wavelet module 180 to produce the prediction group 66. The prediction group 66 is then subtracted from the sorted subband group 53 in the difference module 68 to produce the group difference 70. Quantization is performed in module 158, the error is accumulated to maintain the reference (as described above), and the result is forwarded to the bit encoding module 76. Bit encoding of the quantized error and of the motion vectors 162 is performed as described above, and the result is forwarded to the decoder over the transmission line 80.
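As a reading aid only, the data flow of this embodiment can be summarized as a short pipeline sketch. The callables named below (inverse_wavelet, predict_in_image_domain, forward_wavelet, quantize) are placeholders standing in for modules 174, 164, 180 and 158; they are not implementations of those modules.

```python
def encode_frame_fig14(sorted_subbands_53, delayed_subbands_172, motion_vectors_162,
                       inverse_wavelet, predict_in_image_domain, forward_wavelet, quantize):
    """Schematic of the Fig. 14 flow: reconstruct the reference into the image
    domain, predict there, return to the transform domain, then difference
    and quantize. All callables are caller-supplied placeholders."""
    reconstructed_176 = inverse_wavelet(delayed_subbands_172)           # module 174
    prediction_178 = predict_in_image_domain(reconstructed_176,
                                             motion_vectors_162)        # module 164
    prediction_group_66 = forward_wavelet(prediction_178)               # module 180
    group_difference_70 = sorted_subbands_53 - prediction_group_66      # difference module 68
    return quantize(group_difference_70)                                # quantization module 158
```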
Although a software implementation is shown here, the principles of the disclosed embodiments can also be realized in hardware, for example in an application-specific integrated circuit (ASIC). Preferably, the ASIC implementation includes the necessary memory and operates at a rate that (i) minimizes the power consumed in realizing the embodiment and (ii) permits full-color video compression, for example full CCIR601, at a data rate of not less than 13.5 MHz. It is anticipated that, by using an ASIC, power consumption can be reduced to one tenth of that of a traditional software/processor implementation.
Alternatively, optical methods can be used to save further power. As described above, each level of the wavelet transform forms an approximation of the image and records the detail lost in forming that approximation. In an optoelectronic or optical implementation, the manner in which light is gathered and the associated charge is induced can be adjusted so as to collect a sampling of each approximation image. If these approximation images are registered together in parallel, the detail terms can be computed from these intermediate values by analog or digital devices. Preferably, analog devices are used to compute the detail terms, which are provided as the output of an intermediate stage.
The detail terms are quantized using a bit-serial analog-to-digital converter that implements the quantization strategy, and the resulting bit stream is compressed. In this way, the rate at which the optical/optoelectronic device operates, i.e. the number of digital transitions, corresponds to the compressed data rate rather than to the image data rate (as in the ASIC case) or the processor data rate (as in the traditional processor case). The result is an embodiment that draws very little current and therefore requires less power. It is anticipated that an optical implementation can further reduce power consumption to one tenth of that of the ASIC implementation.
It should be understood that the embodiments and variations shown and described herein are merely illustrative of the principles of the invention, and that those skilled in the art may make many modifications without departing from the scope and spirit of the invention.
Claims (60)
1. A method of compressing data comprising first and second data sets, comprising:
transforming the first and second data sets into corresponding first and second sets of transform coefficients;
producing data representing the difference between the first and second sets of transform coefficients; and
encoding the produced data for transmission.
2. The method of compressing data of claim 1, wherein the first and second data sets are transformed using a tensor product wavelet transform.
3. The method of compressing data of claim 2, wherein remainders are passed from one subband to another subband.
4. The method of compressing data of claim 1, wherein producing data representing the difference between the first and second sets of transform coefficients comprises:
estimating the difference between the first and second sets of transform coefficients to provide motion vectors;
applying the motion vectors to the first set of transform coefficients to produce a prediction of the second set of transform coefficients; and
subtracting the prediction from the second set of transform coefficients to produce a set of prediction errors.
5. The method of compressing data of claim 4, wherein the first and second sets of transform coefficients are error corrected.
6. The method of compressing data of claim 4, wherein applying the motion vectors to the first set of transform coefficients further comprises applying a mask about each affected transform coefficient to obtain a weighted average of adjacent transform coefficients.
7. The method of compressing data of claim 4, wherein estimating the difference between the first and second sets of transform coefficients comprises:
generating a search area around a subset of transform coefficients selected from one of the first and second sets of transform coefficients;
applying a related subset of transform coefficients selected from the other of the first and second sets of transform coefficients to the search area; and
traversing the related subset of transform coefficients incrementally within the search area to a position representing a best incremental match.
8. The method of compressing data of claim 7, further comprising traversing the related subset of transform coefficients fractionally within the search area to a position representing a best fractional match.
9. The method of compressing data of claim 1, wherein transforming the first and second data sets produces the first set of transform coefficients as a first set of subbands and the second set of transform coefficients as a second set of subbands.
10. The method of compressing data of claim 9, further comprising macro block packing the second set of subbands to form a subband macro block group.
11. The method of compressing data of claim 10, further comprising applying weights to the subband macro blocks in the subband macro block group.
12. The method of compressing data of claim 10, further comprising detecting the change between the subband macro block group and a reference.
13. The method of compressing data of claim 12, wherein detecting the change between the subband macro block group and the reference is based on an equation of a general form (equation not reproduced in this text).
14. The method of compressing data of claim 13, wherein detecting the change between the subband macro block group and the reference is based on a distortion measure according to a more detailed form (equation not reproduced in this text).
15. The method of compressing data of claim 10, wherein producing data representing the difference between the first and second sets of transform coefficients comprises:
estimating the difference between the first set of subbands and the subband macro block group to provide motion vectors;
applying the motion vectors to the first set of subbands to produce a prediction of the second set of subbands; and
subtracting the prediction from the second set of subbands to produce a set of prediction errors.
16. The method of compressing data of claim 15, wherein estimating the difference between the first set of subbands and the subband macro block group comprises:
generating a search area around a subset of transform coefficients selected from the first set of subbands;
applying a related subset of transform coefficients selected from the subband macro block group to the search area; and
traversing the related subset of transform coefficients incrementally within the search area to a position representing a best incremental match.
17. The method of compressing data of claim 16, further comprising traversing the related subset of transform coefficients fractionally within the search area to a position representing a best fractional match.
18. The method of compressing data of claim 1, wherein encoding the produced data for transmission further comprises identifying subsets of the produced data that are zero.
19. A method of compressing data comprising first and second data sets, comprising:
transforming the first and second data sets into corresponding first and second sets of transform coefficients;
estimating the difference between the first and second sets of transform coefficients to provide motion vectors;
producing a prediction of the second set of transform coefficients by applying the motion vectors to the first set of transform coefficients;
subtracting the predicted second set of transform coefficients from the second set of transform coefficients to obtain prediction errors; and
encoding the produced prediction errors and the motion vectors for transmission to a decoder.
20. The method of compressing data of claim 19, wherein the first and second data sets are transformed using a tensor product wavelet transform.
21. The method of compressing data of claim 19, wherein estimating the difference between the first and second sets of transform coefficients comprises:
generating a search area around a subset of transform coefficients selected from one of the first and second sets of transform coefficients;
applying a related subset of transform coefficients selected from the other of the first and second sets of transform coefficients to the search area; and
traversing the related subset of transform coefficients incrementally within the search area to a position representing a best incremental match.
22. The method of compressing data of claim 21, further comprising traversing the related subset of transform coefficients fractionally within the search area to a position representing a best fractional match.
23. The method of compressing data of claim 19, wherein transforming the first and second data sets produces the first set of transform coefficients as a first set of subbands and the second set of transform coefficients as a second set of subbands.
24. The method of compressing data of claim 23, further comprising macro block packing the second set of subbands to form a subband macro block group.
25. The method of compressing data of claim 24, further comprising applying weights to the subband macro blocks making up the subband macro block group.
26. The method of compressing data of claim 24, further comprising detecting the change between the subband macro block group and a reference.
27. The method of compressing data of claim 26, wherein detecting the change between the subband macro block group and the reference is based on a distortion measure of a general form (equation not reproduced in this text).
28. The method of compressing data of claim 19, wherein encoding the prediction errors and the motion vectors for transmission to the decoder further comprises identifying subsets of the prediction errors that are zero.
29. A method of compressing data comprising first and second data sets, comprising:
transforming the first and second data sets into corresponding first and second sets of transform coefficients;
estimating the difference between the first and second sets of transform coefficients to provide motion vectors;
producing a prediction of the second set of transform coefficients by applying the motion vectors to the first set of transform coefficients; and
subtracting the predicted second set of transform coefficients from the second set of transform coefficients to obtain prediction errors.
30. The method of compressing data of claim 29, wherein the first set of transform coefficients is error corrected.
31. A method of compressing data in an encoder to reduce the number of bits transmitted to a decoder, comprising:
transforming first and second data sets into corresponding first and second sets of transform coefficients;
estimating the difference between the first and second sets of transform coefficients to provide motion vectors;
producing a prediction of the second set of transform coefficients by applying the motion vectors to the first set of transform coefficients and then transforming the prediction result; and
subtracting the transformed prediction result from the second set of transform coefficients to obtain prediction errors.
32. The method of compressing data of claim 31, further comprising inverse transforming the first set of transform coefficients, the first set of transform coefficients being provided as a reference during prediction.
33. The method of compressing data of claim 32, wherein the first set of transform coefficients is error corrected.
34. A method of packing macro blocks corresponding to subsets of a data set, comprising:
separating a set of related subband blocks from a set of subbands;
packing the related subband blocks together to form a subband macro block; and
repeating the separating and packing steps for each set of related subband blocks in the set of subbands to form a subband macro block group.
35. The method of packing macro blocks of claim 34, wherein the packing step comprises arranging the related subband blocks within the subband macro block in the same relative positions as the positions of the subband blocks within the set of subbands.
36. The method of packing macro blocks of claim 34, wherein the packing step comprises placing the subband macro blocks within the subband macro block group at spatial positions identical to the positions of the corresponding data subsets within the data set.
37. A method of transforming a data set into transform coefficients, comprising transforming the data set using a tensor product wavelet transform having at least two filter paths, and passing remainders produced during the transform between the at least two filter paths.
38. The method of claim 37, wherein remainders from a first of the at least two filter paths are passed to a second of the at least two filter paths, and remainders from the second filter path are passed to the first filter path.
39. The method of claim 37, wherein the tensor product wavelet transform is a tensor product wavelet pair used to determine a high-pass component and a low-pass component.
40. The method of claim 39, wherein transforming the data set and passing remainders between the filter paths comprises:
determining the low-pass component and the high-pass component of the data set;
normalizing the low-pass component to produce a low-pass normalized output and a first remainder (rl);
normalizing the high-pass component to produce a high-pass normalized output and a second remainder (rh);
performing a first operation (g(rl, rh)) on the first and second remainders (rl, rh) and adding the result thereof to the low-pass normalized output to produce an approximation; and
performing a second operation (f(rl, rh)) on the first and second remainders (rl, rh) and adding the result thereof to the high-pass normalized output to produce a detail.
41. The method of claim 40, further comprising down-sampling the low-pass component and the high-pass component.
42. The method of claim 39, wherein the low-pass component is determined using a filter having the values -1, 2, 6, 2, -1; the high-pass component is determined using a filter having the values -1, 2, -1; and further comprising a first operation (g(rl, rh)) and a second operation (f(rl, rh)) having the functions:
g(rl, rh) = rh; and
f(rl, rh) = floor(rh + 1/2), where nh = 1/2.
44. A method of transforming a data set into transform coefficients, comprising encoding the data set using an encoding method and passing remainders obtained during the encoding from a first filter path to a second filter path.
45. The encoding method of claim 44, further comprising passing remainders from the second filter path to the first filter path.
46. The encoding method of claim 44, wherein the encoding method is a tensor product wavelet transform.
47. The encoding method of claim 44, wherein the encoding method is a discrete cosine transform.
48. A method of encoding a data set, comprising:
determining a first filter component of the data set in a first filter path;
determining a second filter component of the data set in a second filter path;
normalizing the first filter component to produce a normalized output and a remainder; and
passing the remainder to the second filter path.
49. A method of estimating the change that has occurred between a first data set and a second data set, comprising:
generating a search area around a subset of data selected from one of the first and second data sets;
applying a related subset of data selected from the other of the first and second data sets to the search area; and
traversing the related data subset incrementally within the search area to a position representing a best incremental match.
50. The method of estimating the change that has occurred between a first data set and a second data set of claim 49, further comprising traversing the related data subset fractionally within the search area to a position representing a best fractional match.
51. An encoder apparatus comprising:
a transform device having an input configured to receive first and second data sets, and further configured to produce corresponding first and second sets of subbands; and
a motion compensation device having an input connected to the transform device, configured to receive the first and second sets of subbands and further configured to efficiently represent the difference between the first and second sets of subbands.
52. The encoder apparatus of claim 51, wherein the motion compensation device performs all operations on the first and second sets of subbands in the transform domain.
53. The encoder apparatus of claim 51, further comprising a difference module configured to receive a prediction from the motion compensation device and to receive the second set of subbands from the transform device, and further configured to determine the difference between the prediction and the second set of subbands to produce a prediction error.
54. The encoder apparatus of claim 51, wherein the motion compensation device comprises:
a motion estimation device connected to the transform device and configured to compare the first and second sets of subbands to produce motion vectors; and
a motion prediction device connected to the motion estimation device and to the transform device, configured to receive the motion vectors and the first set of subbands and further configured to produce a prediction of the second set of subbands.
55. An encoder apparatus comprising:
a transform device having an input configured to receive first and second data sets, and further configured to produce corresponding first and second sets of subbands, respectively; and
a macro block packing device having an input connected to the transform device and configured to receive the first set of subbands and the second set of subbands, and further configured to produce a first subband macro block representation and a second subband macro block representation, respectively.
56. The encoder apparatus of claim 55, further comprising a weighting device having an input configured to communicate with the macro block packing device, and configured to receive and then scale the first subband macro block representation and the second subband macro block representation according to perceptual importance.
57. The encoder apparatus of claim 55, further comprising a change detection device having an input configured to communicate with the macro block packing device, and configured to compare the first subband macro block representation with the second subband macro block representation to determine the change between them, the change detection device being further configured to produce a change detection set reflecting that change.
58. The encoder apparatus of claim 57, further comprising a macro block sorting device having an input connected to the change detection device and configured to sort the change detection set.
59. The encoder apparatus of claim 57, wherein the comparison of the first subband macro block representation with the second subband macro block representation is based on a distortion measure of a general equation (equation not reproduced in this text).
60. The encoder apparatus of claim 59, wherein the comparison of the first subband macro block representation with the second subband macro block representation is based on a distortion measure of an equation of a more detailed form (equation not reproduced in this text).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US6663897P | 1997-11-14 | 1997-11-14 | |
US60/066,638 | 1997-11-14 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1281618A true CN1281618A (en) | 2001-01-24 |
CN1190084C CN1190084C (en) | 2005-02-16 |
Family
ID=22070754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB988119668A Expired - Fee Related CN1190084C (en) | 1997-11-14 | 1998-11-13 | Apparatus and method for compressing video information |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP1031238A4 (en) |
JP (2) | JP4675477B2 (en) |
KR (1) | KR100614522B1 (en) |
CN (1) | CN1190084C (en) |
AU (1) | AU752219B2 (en) |
CA (1) | CA2310602C (en) |
WO (1) | WO1999026418A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1302666C (en) * | 2001-02-12 | 2007-02-28 | 株式会社摩迩迪 | Appts. and method of coding moving picture |
CN100531348C (en) * | 2005-02-04 | 2009-08-19 | 索尼株式会社 | Encoding apparatus and method, decoding apparatus and method, image processing system and method |
CN101300847B (en) * | 2005-11-02 | 2010-11-03 | 安泰科技有限公司 | Method for transferring encoded data and image pickup device |
CN1599462B (en) * | 2003-07-18 | 2011-04-13 | 三星电子株式会社 | Image encoding and decoding apparatus and method |
CN101543078B (en) * | 2007-03-30 | 2011-10-05 | 索尼株式会社 | Information processing device and method |
CN102411786A (en) * | 2010-09-29 | 2012-04-11 | 微软公司 | Low complexity method for motion compensation of dwt based systems |
CN101242534B (en) * | 2007-02-08 | 2013-01-16 | 三星电子株式会社 | Video encoding apparatus and method |
CN103354614B (en) * | 2008-10-31 | 2016-08-24 | Sk电信有限公司 | The device that motion vector is encoded |
CN106688229A (en) * | 2014-05-30 | 2017-05-17 | 陈仕东 | Transform-based methods to transmit the high-definition video |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69901525T2 (en) * | 1998-02-13 | 2003-01-09 | Koninklijke Philips Electronics N.V., Eindhoven | METHOD AND DEVICE FOR VIDEO CODING |
FR2813486B1 (en) * | 2000-08-31 | 2003-01-24 | Canon Kk | DATA TRANSFORMATION METHOD AND DEVICE |
FI111592B (en) * | 2001-09-06 | 2003-08-15 | Oulun Yliopisto | Method and apparatus for encoding successive images |
KR100440567B1 (en) * | 2001-11-06 | 2004-07-21 | 한국전자통신연구원 | A method for forming binary plane for motion search and a motion estimating apparatus using the same |
KR100472476B1 (en) * | 2002-08-31 | 2005-03-10 | 삼성전자주식회사 | Interpolation apparatus and method for moving vector compensation |
KR100788983B1 (en) * | 2005-11-02 | 2007-12-27 | 엠텍비젼 주식회사 | Method for transferring encoded data and image pickup device performing the method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5777678A (en) * | 1995-10-26 | 1998-07-07 | Sony Corporation | Predictive sub-band video coding and decoding using motion compensation |
US5808683A (en) * | 1995-10-26 | 1998-09-15 | Sony Corporation | Subband image coding and decoding |
US5764814A (en) * | 1996-03-22 | 1998-06-09 | Microsoft Corporation | Representation and encoding of general arbitrary shapes |
-
1998
- 1998-11-13 WO PCT/US1998/024189 patent/WO1999026418A1/en active IP Right Grant
- 1998-11-13 CN CNB988119668A patent/CN1190084C/en not_active Expired - Fee Related
- 1998-11-13 KR KR1020007005298A patent/KR100614522B1/en not_active IP Right Cessation
- 1998-11-13 EP EP98958556A patent/EP1031238A4/en not_active Withdrawn
- 1998-11-13 JP JP2000521650A patent/JP4675477B2/en not_active Expired - Fee Related
- 1998-11-13 CA CA002310602A patent/CA2310602C/en not_active Expired - Fee Related
- 1998-11-13 AU AU14577/99A patent/AU752219B2/en not_active Ceased
-
2008
- 2008-04-03 JP JP2008096670A patent/JP2008289132A/en active Pending
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1302666C (en) * | 2001-02-12 | 2007-02-28 | 株式会社摩迩迪 | Appts. and method of coding moving picture |
CN102065306B (en) * | 2003-07-18 | 2012-09-26 | 三星电子株式会社 | Image decoding apparatus |
CN1599462B (en) * | 2003-07-18 | 2011-04-13 | 三星电子株式会社 | Image encoding and decoding apparatus and method |
CN102065304B (en) * | 2003-07-18 | 2012-09-26 | 三星电子株式会社 | Image encoding and decoding apparatus |
CN102065307B (en) * | 2003-07-18 | 2012-09-26 | 三星电子株式会社 | Image encoding and decoding method |
CN102065303B (en) * | 2003-07-18 | 2012-09-26 | 三星电子株式会社 | Image decoding method |
CN102065308B (en) * | 2003-07-18 | 2013-03-27 | 三星电子株式会社 | Image decoding device |
CN100531348C (en) * | 2005-02-04 | 2009-08-19 | 索尼株式会社 | Encoding apparatus and method, decoding apparatus and method, image processing system and method |
CN101300847B (en) * | 2005-11-02 | 2010-11-03 | 安泰科技有限公司 | Method for transferring encoded data and image pickup device |
CN101242534B (en) * | 2007-02-08 | 2013-01-16 | 三星电子株式会社 | Video encoding apparatus and method |
CN101543078B (en) * | 2007-03-30 | 2011-10-05 | 索尼株式会社 | Information processing device and method |
CN103354614B (en) * | 2008-10-31 | 2016-08-24 | Sk电信有限公司 | The device that motion vector is encoded |
CN102411786A (en) * | 2010-09-29 | 2012-04-11 | 微软公司 | Low complexity method for motion compensation of dwt based systems |
CN106688229A (en) * | 2014-05-30 | 2017-05-17 | 陈仕东 | Transform-based methods to transmit the high-definition video |
Also Published As
Publication number | Publication date |
---|---|
JP4675477B2 (en) | 2011-04-20 |
CA2310602A1 (en) | 1999-05-27 |
KR100614522B1 (en) | 2006-08-22 |
CA2310602C (en) | 2009-05-19 |
JP2001523928A (en) | 2001-11-27 |
KR20010032113A (en) | 2001-04-16 |
AU1457799A (en) | 1999-06-07 |
WO1999026418A1 (en) | 1999-05-27 |
EP1031238A4 (en) | 2003-05-07 |
EP1031238A1 (en) | 2000-08-30 |
AU752219B2 (en) | 2002-09-12 |
CN1190084C (en) | 2005-02-16 |
JP2008289132A (en) | 2008-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1190084C (en) | Apparatus and method for compressing video information | |
CN1231863C (en) | Method and apparatus for compressing and decompressing image | |
CN1220391C (en) | Image encoder, image decoder, image encoding method, and image decoding method | |
CN1125409C (en) | Apparatus and method for performing scalable hierarchical motion estimation | |
US8634479B2 (en) | Decoding a video signal using in-loop filter | |
CN1257614C (en) | Signal encoding method and apparatus and decording method and apparatus | |
CN1112045C (en) | Carry out video compression with error information coding method repeatedly | |
US20100284469A1 (en) | Coding Device, Coding Method, Composite Device, and Composite Method | |
CN1280709C (en) | Parameterization for fading compensation | |
CN1947426A (en) | Method and apparatus for implementing motion scalability | |
JP4429968B2 (en) | System and method for increasing SVC compression ratio | |
US20060039472A1 (en) | Methods and apparatus for coding of motion vectors | |
CN1574970A (en) | Method and apparatus for encoding/decoding image using image residue prediction | |
CN1914921A (en) | Apparatus and method for scalable video coding providing scalability in encoder part | |
CN1098584A (en) | The method and apparatus of transmitted image signal | |
CN1758765A (en) | Be used to encode and/or the method and apparatus of decoding moving picture | |
CN1906945A (en) | Method and apparatus for scalable video encoding and decoding | |
CN101049026A (en) | Scalable video coding with grid motion estimation and compensation | |
CN1906624A (en) | Data compression using matching pursuits algorithms | |
CN1697328A (en) | Fast video codec transform implementations | |
US20070242895A1 (en) | Scalable Encoding Method and Apparatus, Scalable Decoding Method and Apparatus, Programs Therefor ,and Storage Media for Storing the Programs | |
CN1122247C (en) | Prediction treatment of motion compensation and coder using the same | |
CN1085471C (en) | Method of reducing mosquito noise generated during decoding process of image data and device for decoding image data using the same | |
CN1864177A (en) | Video encoding and decoding methods and corresponding devices | |
JP2007143176A (en) | Compression method of motion vector |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20050216 Termination date: 20131113 |