CN101841705A - Video lossless compression method based on adaptive template - Google Patents

Video lossless compression method based on adaptive template

Info

Publication number
CN101841705A
CN101841705A (application CN 201010123520)
Authority
CN
China
Prior art keywords
prediction
video
sample
redundancy
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010123520
Other languages
Chinese (zh)
Inventor
郭宝龙 (Guo Baolong)
武晓玥 (Wu Xiaoyue)
葛川 (Ge Chuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN 201010123520 priority Critical patent/CN101841705A/en
Publication of CN101841705A publication Critical patent/CN101841705A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a video lossless compression method based on adaptive model selection. The method comprises the following steps: judging the type of the input video, determining the operation domain, and performing video pre-processing; applying the spatial-domain redundancy-removal method, the temporal-domain redundancy-removal method and the direct-mode redundancy-removal method to the current sample to obtain its predicted values; exploiting the redundant information of the spatial, temporal and frequency domains to perform intra-frame and inter-frame prediction on an original YUV sequence; if the original sequence is an RGB sequence, performing intra-frame, inter-frame and direct prediction on the converted sequence; using an adaptive prediction-model selector to compare the candidate predictions of the current sample and take the minimum as the optimal prediction error; and finally applying context-based arithmetic coding to the mapped value of the final prediction error and outputting the resulting bit stream. By using the adaptive prediction-model selector in place of prediction-model signalling that would require extra bits, the invention markedly increases video compression efficiency; the method can be used for video compression in the aviation and navigation fields.

Description

Video lossless compression method based on adaptive template
Technical field
The invention belongs to the field of digital image processing and relates to a video lossless compression method; the method can be used in aviation and navigation applications with high resolution requirements.
Background technology
In recent years, digital image and video compression have developed rapidly, and advanced compression standards such as MPEG-4, JPEG2000 and H.264/MPEG-4 AVC have appeared in succession. Most of these standards focus on lossy compression, but in many practical applications lossless compression of digital images and video is more important. For example, massive aviation and remote-sensing images must be compressed to save storage space and improve transmission efficiency, yet if lossy compression is used, detail is discarded from the compressed images and important target information may be lost, leading to errors. The study of lossless image and video compression has therefore gradually attracted scholars' attention.
Digital video compression mainly removes the redundant information of the original video sequence from three aspects: chrominance, space and time. Chrominance redundancy is generally removed by converting the RGB space to the YCbCr space. Spatial redundancy arises from the correlation between pixels within a frame, which is particularly evident in natural images with continuous gray scales. Many methods exist for removing spatial redundancy, and some, such as LOCO-I, have been applied to lossless image compression. Temporal redundancy arises from the correlation between temporally adjacent frames. Lossy video compression algorithms such as MPEG-1 and MPEG-2 remove this temporal correlation effectively through inter-frame prediction; the correlation exists not only between adjacent frames but also between frames that are relatively close in time. Given the current state of research on lossless video compression, further in-depth study of the related techniques is urgently needed.
Lossless video compression is similar to the lossy video compression standards in that many of its modules adopt new coding techniques. Some algorithms extend earlier ones, for example by supporting new block-matching algorithms or motion estimation and compensation with more reference frames; some improve earlier algorithms, for example raising motion-estimation precision to 1/4 pixel; and others depart from earlier standards entirely, such as spatial-domain predictive coding, integer wavelet transforms and context-based entropy coding. The improvement in coding efficiency is thus achieved by combining all of these techniques.
At present, the main lossless video compression methods are the following:
1. Memon et al. proposed a compression method combining the temporal and spatial domains in 1996, and also investigated adaptive algorithms for optimal prediction based on block motion compensation and the frequency domain, as described in N. Memon and X. Wu, "Context-based, adaptive, lossless image coding", IEEE Transactions on Communications, 1997, 45(4), pp. 437-444. Wu et al. proposed the inter-frame CALIC algorithm in 1998 (X. Wu, W. Choi, N. Memon, "Lossless Interframe Image Compression via Context Modeling", Proc. of Data Compression Conf., pp. 378-387, 1998), and Carotti et al. proposed an adaptive inter-frame-neighborhood predictor and an intra-frame spatial predictor in 2002 (Carotti E. S. G., De Martin J. C., Meo A. R., "Low-complexity lossless video coding via adaptive spatio-temporal prediction", Proceedings, 2003 International Conference on Image Processing, Barcelona, 2003, pp. 197-200). The compression ratios of these methods are usually between 2 and 3, depending on the video stream. If JPEG-LS or CALIC is applied directly to lossless video compression, the temporal correlation of the video is ignored and the compression ratio is not high. Although Wu's method considers the fusion of temporal and spatial prediction, it adopts no adaptive strategy, so the compression ratio improves little. Carotti's method uses a new temporal predictor that reduces algorithmic complexity but also degrades prediction performance. In 2001, Abhayaratne et al. analyzed the wavelet transform of motion-compensated residual images and found that it cannot effectively reduce the entropy and hence cannot compress effectively (G. C. K. Abhayaratne, D. M. Monro, "Embedded to lossless coding of motion compensated prediction residuals in lossless video coding", Proc. of SPIE, 2001, 4310, pp. 75-85).
2. In 2003, Zhang et al. proposed an adaptive compression algorithm combining block-motion-compensated temporal prediction with the spatial prediction described in CALIC (Z. Ming-Feng, H. Jia, and Z. Li-Ming, "Lossless video compression using combination of temporal and spatial prediction", in Proc. IEEE Int. Conf. Neural Networks Signal Processing, Dec. 2003, pp. 1193-1196). Brunello et al. proposed a new temporal prediction technique based on block motion compensation and an optimal 3-D linear prediction algorithm (D. Brunello, G. Calvagno, G. A. Mian, and R. Rinaldo, "Lossless compression of video using temporal information", IEEE Transactions on Image Processing, 2003, 12(2), pp. 132-139). In 2004, Gong et al. proposed a lossless video coding algorithm based on the wavelet transform that switches between two models according to the motion vectors of adjacent frames (Y. Gong, S. Pullalarevu, and S. Sheikh, "A wavelet-based lossless video coding scheme", in Proc. Int. Conf. Signal Processing, 2004, pp. 1123-1126). Later, Park et al. proposed an adaptive lossless video compression algorithm combined with the wavelet transform that obtains predicted values by searching for the best-matching target window in the reference frame (S.-G. Park, E. J. Delp, and H. Yu, "Adaptive lossless video compression using an integer wavelet transform", in Proc. Int. Conf. Image Processing, 2004, pp. 2251-2254). Although these algorithms of recent years show some performance improvement, their complexity is high and their adaptive model selection is still ineffective, so the resulting video compression efficiency is not very high.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the above prior art and to propose an adaptive video lossless compression method based on predictive coding, which further improves lossless compression efficiency through adaptive selection of the coding method for the video.
The object of the present invention is achieved as follows:
The present invention makes full use of the idea of the block-based hybrid coding framework: it exploits the redundant information of the temporal, spatial and frequency domains, losslessly compresses the video through backward-adaptive model selection, and at the same time reduces the transmission of side information.
The specific implementation steps are as follows:
(1) Determine the operation domain according to the type of the input video: if the input video is in RGB format, convert it to YUV format and pre-process it with the integer wavelet transform method; if the input is already in YUV format, pre-process it directly in the spatial domain;
(2) Predict each pixel or wavelet coefficient of the pre-processed video data with the spatial-domain redundancy-removal method, the temporal-domain redundancy-removal method, or the direct-mode redundancy-removal method;
(3) Construct the adaptive prediction-model selector values e(S), e(T), e(N):
e(S) = |e1(x, y-2)| + |e1(x-1, y-1)| + |e1(x, y-1)| + |e1(x+1, y-1)| + |e1(x-2, y)| + |e1(x-1, y)| + |e1(x-1, y+1)|
       + (1/2)·(|e1(x-1, y-2)| + |e1(x+1, y-2)| + |e1(x-2, y-1)| + |e1(x-1, y+1)|)
e(T) = |e2(x, y-2)| + |e2(x-1, y-1)| + |e2(x, y-1)| + |e2(x+1, y-1)| + |e2(x-2, y)| + |e2(x-1, y)| + |e2(x-1, y+1)|
       + (1/2)·(|e2(x-1, y-2)| + |e2(x+1, y-2)| + |e2(x-2, y-1)| + |e2(x-1, y+1)|)
e(N) = |e3(x, y-2)| + |e3(x-1, y-1)| + |e3(x, y-1)| + |e3(x+1, y-1)| + |e3(x-2, y)| + |e3(x-1, y)| + |e3(x-1, y+1)|
       + (1/2)·(|e3(x-1, y-2)| + |e3(x+1, y-2)| + |e3(x-2, y-1)| + |e3(x-1, y+1)|)
where e1 = p_i(x, y) - p̂_i^S(x, y) is the prediction error of the spatial-domain redundancy-removal method, p̂_i^S(x, y) being the MED predicted value of the sample; e2 = p_i(x, y) - p̂_i^T(x, y) is the prediction error of the temporal-domain method, p̂_i^T(x, y) being the temporal predicted value of the sample; e3 = p_i(x, y) - p̂_i^N(x, y) is the prediction error of the direct-mode method, p̂_i^N(x, y) being the wavelet-coefficient value of the sample; and e(S), e(T), e(N) are respectively the neighborhood sums of the prediction errors e1, e2, e3 of the current pixel or wavelet coefficient p_i(x, y) of step (2);
(4) Use the adaptive prediction-model selector to determine the minimum of the three sums and obtain the prediction-error value:
mode = arg min_{S,T,N} { e(S), e(T), e(N) }
where S, T and N denote the spatial-domain, temporal-domain and direct prediction modes respectively;
(5) Entropy-code the mapped values of the prediction errors with the arithmetic coding method to obtain the final output bit stream.
The present invention has following effect:
1) Because the present invention adopts an improved MED model, the spatial-domain predicted values become more accurate and the compression ratio of intra-frame prediction increases to a certain extent. Compared with the classic MED model, the improved MED predictor compresses video sequences more efficiently and improves the intra-frame pixel-value prediction precision, raising compression efficiency by nearly 10%;
2) Aiming at the characteristics of video in the frequency, spatial and temporal domains, the present invention designs a new adaptive prediction-model selector. The selector compares the prediction-error values obtained under the three different prediction modes and takes the model with the minimum error as the optimal prediction model, so that the transmitted codewords are minimized and compression is realized;
3) Motion-compensated compression usually requires transmitting motion vectors, i.e. side information, which increases the data volume and reduces compression efficiency. The present invention compresses the image using the adaptive prediction-model selector together with the residuals produced by the prediction models, which avoids transmitting motion vectors and further reduces the data volume.
Description of drawings
Fig. 1 is the flow chart of the adaptive video lossless compression of the present invention;
Fig. 2 is a schematic diagram of the current-sample neighborhood constructed by the present invention;
Fig. 3 shows the pixel-sample temporal predictor constructed by the present invention;
Fig. 4 is a schematic diagram of the adaptive prediction-model selector of the present invention;
Fig. 5 shows the intra-frame and inter-frame context models of the present invention;
Fig. 6 is the compression simulation of the present invention on the Akiyo (352*288) video sequence;
Fig. 7 is the compression simulation of the present invention on the News (720*480) video sequence.
Embodiment
The present invention is described below with reference to the accompanying drawings:
With reference to Fig. 1, the video coding steps of the present invention are as follows:
Step 1: pre-process the video according to its input type.
If the video input is in the RGB gamut, carry out the following operations:
1-1) Perform color-space conversion on RGB with the following formulas:
Y =  0.299*R + 0.587*G + 0.114*B
U = -0.147*R - 0.289*G + 0.436*B
V =  0.615*R - 0.515*G - 0.100*B
where R, G, B are the three color components of the RGB gamut and Y, U, V are the color components of the YUV gamut;
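The conversion of step 1-1) can be sketched as follows; the function name is illustrative, and the coefficients are exactly those of the formulas above:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB triple to YUV with the coefficient matrix of step 1-1)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v
```

Note that for a neutral gray input (R = G = B) the chrominance components U and V vanish, since each chrominance row of the matrix sums to zero.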
1-2) Apply the reversible integer (5,3) wavelet transform to the resulting YUV sequence, using the lifting formulas of the following compactly supported biorthogonal wavelet:
Forward transform:
d_{j-1,k}^i -= floor( (s_{j-1,k+1}^i + s_{j-1,k}^i) / 2 )
s_{j-1,k}^i += floor( (d_{j-1,k}^i + d_{j-1,k-1}^i + 2) / 4 )
Inverse transform:
s_{j,2k}^i = s_{j-1,k}^i - floor( (d_{j-1,k-1}^i + d_{j-1,k}^i + 2) / 4 )
s_{j,2k+1}^i = d_{j-1,k}^i + floor( (s_{j,2k}^i + s_{j,2k+2}^i) / 2 )
where d and s are the two disjoint subsets into which the original signal sequence is split by even and odd indices, i is the wavelet decomposition level, and j, k are the coordinate indices of the elements;
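A minimal sketch of one level of the reversible integer (5,3) lifting transform on a 1-D signal, following the predict/update structure above. The function names, the even signal length and the mirror boundary handling are illustrative assumptions; integer lifting guarantees exact reconstruction whichever consistent boundary rule is used:

```python
def fwd_53(x):
    """One level of the reversible integer (5,3) lifting transform on an
    even-length 1-D signal; returns (lowpass s, highpass d)."""
    s, d = list(x[0::2]), list(x[1::2])
    n = len(d)
    # predict step: d[k] -= floor((s[k] + s[k+1]) / 2), mirroring at the boundary
    d = [d[k] - ((s[k] + s[min(k + 1, n - 1)]) // 2) for k in range(n)]
    # update step: s[k] += floor((d[k-1] + d[k] + 2) / 4)
    s = [s[k] + ((d[max(k - 1, 0)] + d[k] + 2) // 4) for k in range(n)]
    return s, d

def inv_53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = len(d)
    s0 = [s[k] - ((d[max(k - 1, 0)] + d[k] + 2) // 4) for k in range(n)]
    d0 = [d[k] + ((s0[k] + s0[min(k + 1, n - 1)]) // 2) for k in range(n)]
    out = [0] * (2 * n)
    out[0::2], out[1::2] = s0, d0
    return out
```

Because every lifting step is an integer addition that the inverse subtracts back, `inv_53(*fwd_53(x))` reproduces `x` exactly, which is what makes the transform usable for lossless compression.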
If the video sequence is already in the YUV gamut, no transform is applied: the integer wavelet transform method is not suitable for the YUV sequence, so the YUV data are processed in the spatial domain.
Step 2: remove redundancy from each sample p_i(x, y) of the current frame, adopting different methods according to the different intra-frame and inter-frame redundancy properties.
2-1) Spatial-domain redundancy removal for all pre-processed samples: to reduce spatial redundancy, the present invention applies an improved spatial predictor to the current sample, expressed as:
x̂ = min(A, B)    if C >= max(A, B)
x̂ = max(A, B)    if C <= min(A, B)
x̂ = A + B - C    otherwise
where p_i(x, y) denotes the sample in the current frame and (x, y) its coordinates; x̂ denotes the spatial-domain predicted value of the sample; A, B, C, D denote the samples adjacent to p_i(x, y), as shown in Fig. 2; T_1, T_2 are the preset thresholds of the predictor, with T_1 > T_2, T_1 = 15 and T_2 = 0.
The spatial prediction error obtained from the spatial predictor is expressed as:
e1 = p_i(x, y) - p̂_i^S(x, y)
2-2) Temporal redundancy removal for all samples: the goal of temporal redundancy removal is to find, in the reference frame [i-1], the best-matching sample for p_i(x, y) of the current frame.
The temporal redundancy-removal method searches a W*H region of frame [i-1] with the target window of p_i(x, y) in frame [i]; the target window is formed by the upper-left adjacent samples of p_i(x, y), as shown in Fig. 3. The search proceeds from left to right and top to bottom to find the point at which the cumulative absolute difference (CAD) reaches its minimum. The cumulative absolute difference is expressed as:
CAD(T_w) = Σ_{(m,n)∈T_w} |p_i(x, y) - p_{i-1}(x+m, y+n)|
where T_w denotes the target window, p_i(x, y) and p_{i-1}(x, y) denote the sample values of the current frame and the reference frame, and (m, n) is the motion vector, confined to the W*H search region so that the CAD value is minimized.
The optimum motion vector given by the target window with minimum CAD is defined as:
(m0, n0) = arg min_{(m,n)} CAD(T_w)
The temporal prediction error obtained from the temporal predictor is expressed as:
e2 = p_i(x, y) - p̂_i^T(x, y)
where p̂_i^T(x, y) = p_{i-1}(x + m0, y + n0) is the temporal predicted value of the sample;
2-3) Direct-mode redundancy removal for the wavelet-pre-processed samples: for video sequences processed in the wavelet domain, the integer wavelet transform concentrates the energy, so the wavelet coefficients of the high-frequency sub-bands LH, HL and HH usually have small values, and coding these wavelet coefficients directly further reduces the prediction redundancy.
The prediction error obtained by direct-mode redundancy removal can be expressed as:
e3 = p_i(x, y) - p̂_i^N(x, y)
where p̂_i^N(x, y) is the direct-mode predicted value of the sample.
Step 3: construct a new adaptive prediction-model selector, shown in Fig. 4, which adaptively selects the best model from the different prediction errors (e1, e2, e3) obtained in step 2; the selection of the adaptive predictor is judged from the sum of the prediction errors of the neighboring pixels.
3-1) Compute the spatial prediction-error sum e(S):
e(S) = |e1(x, y-2)| + |e1(x-1, y-1)| + |e1(x, y-1)| + |e1(x+1, y-1)| + |e1(x-2, y)| + |e1(x-1, y)| + |e1(x-1, y+1)|
       + (1/2)·(|e1(x-1, y-2)| + |e1(x+1, y-2)| + |e1(x-2, y-1)| + |e1(x-1, y+1)|)
3-2) Compute the temporal prediction-error sum e(T):
e(T) = |e2(x, y-2)| + |e2(x-1, y-1)| + |e2(x, y-1)| + |e2(x+1, y-1)| + |e2(x-2, y)| + |e2(x-1, y)| + |e2(x-1, y+1)|
       + (1/2)·(|e2(x-1, y-2)| + |e2(x+1, y-2)| + |e2(x-2, y-1)| + |e2(x-1, y+1)|)
3-3) Compute the direct-mode prediction-error sum e(N):
e(N) = |e3(x, y-2)| + |e3(x-1, y-1)| + |e3(x, y-1)| + |e3(x+1, y-1)| + |e3(x-2, y)| + |e3(x-1, y)| + |e3(x-1, y+1)|
       + (1/2)·(|e3(x-1, y-2)| + |e3(x+1, y-2)| + |e3(x-2, y-1)| + |e3(x-1, y+1)|)
Step 4: determine the final prediction model from step 3 as:
mode = arg min_{S,T,N} { e(S), e(T), e(N) }
and encode the prediction error of the minimum-error mode with this final prediction model.
Step 5: apply context-based arithmetic coding to the prediction error e_i (i = 1~3) of the sample p_i(x, y) that corresponds to the final prediction model obtained in step 4.
First, construct the context of p_i(x, y) with the two context templates shown in Fig. 5: one is used for the intra-frame mode and the other for the inter-frame mode. In the intra-frame mode, the context of the current sample p_i(x, y) is produced by 9 adjacent samples; in the inter-frame mode, the context of the current sample p_i(x, y) is formed jointly by its adjacent samples in frame [i] and frame [i-1].
The context value can be obtained from the following formula:
C = |c_0| + |c_1| + |c_2| + |c_3| + |c_4| + |c_5| + |c_6| + |c_7| + |c_8|
where c_i (i = 0~8) denotes the prediction error of the final prediction model at the corresponding sample p_i(x, y) in Fig. 5.
Finally, entropy-code the prediction errors with arithmetic coding based on this context and output the code stream.
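The context computation of step 5 can be sketched as follows. The first function is the formula for C given above; the second quantizes C into a small context index for the arithmetic coder, which is an illustrative assumption (both the binning and its thresholds are not specified by the patent):

```python
def context_value(neighbor_errors):
    """Context value C: sum of absolute final-model prediction errors of the
    nine template samples c0..c8 of Fig. 5."""
    assert len(neighbor_errors) == 9
    return sum(abs(c) for c in neighbor_errors)

def context_bin(c_value, thresholds=(2, 5, 10, 20)):
    """Map C onto a small context index via assumed thresholds, so that the
    arithmetic coder keeps one probability model per bin."""
    for i, t in enumerate(thresholds):
        if c_value <= t:
            return i
    return len(thresholds)
```

Grouping samples by the local error magnitude lets the coder adapt separate probability models for smooth and busy regions, which is what makes the entropy coding "context-based".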
The compression performance of the present invention on video sequences can be further illustrated by experiment:
The experiment implements the compression of image sequences on a PC programmed in VC++. Standard video sequences are used to test the algorithm: the sequences Coastguard (176*144), Mother (176*144), Foreman (176*144), Akiyo (352*288), Hall (352*288), Carphone (352*288), Football (720*480), News (720*480) and Tennis (720*480), and the RGB sequences Claire, Football and Mobile (360*288).
Fig. 6 and Fig. 7 compare the simulation results of the JPEG-LS algorithm, the CALIC method, the Park method and the present method. As can be seen from the Akiyo and News figures, the lossless compression efficiency of the method of the present invention remains at a relatively high level; this is mainly because, when the scene changes little, the inter prediction effectively eliminates the temporal redundancy.
The experiment also records the number of bits per pixel each video sequence uses when compressed; the experimental results are shown in Table 1.
Table 1 YUV & RGB test sequences, unit: bits/pixel (bpp)
As can be clearly seen from Table 1, the bit rate of every video frame is markedly improved. For the Coastguard sequence the video compression efficiency also improves, but less obviously than for the other sequences; this is mainly because the scene of that video is more complex, so the spatial and temporal redundancy in the sequence cannot be eliminated effectively.

Claims (4)

1. A video lossless compression method based on adaptive model prediction, comprising the steps of:
(1) determining the operation domain according to the type of the input video: if the input video is in RGB format, converting it to YUV format and pre-processing it with the integer wavelet transform method; if the input is already in YUV format, pre-processing it directly in the spatial domain;
(2) predicting each pixel or wavelet coefficient of the pre-processed video data with the spatial-domain redundancy-removal method, the temporal-domain redundancy-removal method, or the direct-mode redundancy-removal method;
(3) constructing the adaptive prediction-model selector values e(S), e(T), e(N):
e(S) = |e1(x, y-2)| + |e1(x-1, y-1)| + |e1(x, y-1)| + |e1(x+1, y-1)| + |e1(x-2, y)| + |e1(x-1, y)| + |e1(x-1, y+1)|
       + (1/2)·(|e1(x-1, y-2)| + |e1(x+1, y-2)| + |e1(x-2, y-1)| + |e1(x-1, y+1)|)
e(T) = |e2(x, y-2)| + |e2(x-1, y-1)| + |e2(x, y-1)| + |e2(x+1, y-1)| + |e2(x-2, y)| + |e2(x-1, y)| + |e2(x-1, y+1)|
       + (1/2)·(|e2(x-1, y-2)| + |e2(x+1, y-2)| + |e2(x-2, y-1)| + |e2(x-1, y+1)|)
e(N) = |e3(x, y-2)| + |e3(x-1, y-1)| + |e3(x, y-1)| + |e3(x+1, y-1)| + |e3(x-2, y)| + |e3(x-1, y)| + |e3(x-1, y+1)|
       + (1/2)·(|e3(x-1, y-2)| + |e3(x+1, y-2)| + |e3(x-2, y-1)| + |e3(x-1, y+1)|)
where e1 = p_i(x, y) - p̂_i^S(x, y) is the prediction error of the spatial-domain redundancy-removal method, p̂_i^S(x, y) being the MED predicted value of the sample; e2 = p_i(x, y) - p̂_i^T(x, y) is the prediction error of the temporal-domain method, p̂_i^T(x, y) being the temporal predicted value of the sample; e3 = p_i(x, y) - p̂_i^N(x, y) is the prediction error of the direct-mode method, p̂_i^N(x, y) being the wavelet-coefficient value of the sample; and e(S), e(T), e(N) are respectively the sums of the prediction errors e1, e2, e3 of the current pixel or wavelet coefficient p_i(x, y) of step (2);
(4) using the adaptive prediction-model selector to determine the minimum of the three sums and obtain the prediction-error value:
mode = arg min_{S,T,N} { e(S), e(T), e(N) }
where S, T and N denote the spatial-domain, temporal-domain and direct prediction modes respectively;
(5) entropy-coding the mapped values of the prediction errors with the arithmetic coding method to obtain the final output bit stream.
2. The video lossless compression method based on adaptive model prediction according to claim 1, wherein the prediction of the pre-processed video data in step (2) removes the intra-frame spatial redundancy in the spatial domain with the improved MED spatial predictor; removes the correlation of inter-frame pixels in the temporal domain by minimizing the cumulative absolute difference; and obtains the wavelet coefficients of the high-frequency sub-bands with the direct method.
3. The video lossless compression method based on adaptive model prediction according to claim 2, wherein the improved MED predictor is expressed as follows:
x̂ = min(A, B)    if C >= max(A, B)
x̂ = max(A, B)    if C <= min(A, B)
x̂ = A + B - C    otherwise
where x̂ denotes the predicted value of the sample, A, B, C, D denote the samples adjacent to x, and T_1, T_2 are the preset thresholds with T_1 > T_2.
4. The video lossless compression method based on adaptive model prediction according to claim 2, wherein the removal of the inter-frame pixel correlation in the temporal domain by minimizing the cumulative absolute difference is carried out as follows:
4a) taking the upper-left adjacent samples of the sample p_i(x, y) in the current frame as the target window;
4b) searching the W*H search region of frame [i-1] with this target window for the minimum cumulative absolute difference, which yields the best match (m0, n0); the cumulative absolute difference is expressed as:
CAD(T_w) = Σ_{(m,n)∈T_w} |p_i(x, y) - p_{i-1}(x+m, y+n)|
where T_w denotes the target window, p_i(x, y) is the current-frame sample value, p_{i-1}(x, y) is the reference-frame sample value, and (m, n) is the inter-frame motion-vector value.
CN 201010123520 2010-03-12 2010-03-12 Video lossless compression method based on adaptive template Pending CN101841705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010123520 CN101841705A (en) 2010-03-12 2010-03-12 Video lossless compression method based on adaptive template


Publications (1)

Publication Number Publication Date
CN101841705A true CN101841705A (en) 2010-09-22

Family

ID=42744778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010123520 Pending CN101841705A (en) 2010-03-12 2010-03-12 Video lossless compression method based on adaptive template

Country Status (1)

Country Link
CN (1) CN101841705A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1585486A (en) * 2004-05-27 2005-02-23 复旦大学 Non-loss visual-frequency compressing method based on space self-adaption prediction
CN1700255A (en) * 2004-03-30 2005-11-23 株式会社东芝 Image transmitter, image receiver, and image transmitting system
US20080111721A1 (en) * 2006-11-14 2008-05-15 Qualcomm, Incorporated Memory efficient coding of variable length codes


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Journal of Electronics & Information Technology (电子与信息学报), Vol. 28, No. 3, March 2006: Xia Jie et al., "A novel lossless video compression method" *
Computer Engineering & Science (计算机工程与科学), Vol. 26, No. 10, December 2004: Zhang Mingfeng et al., "Lossless video compression based on spatio-temporal adaptive prediction" *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069819A (en) * 2015-07-23 2015-11-18 西安交通大学 Predicted value compensation method based on MED predication algorithm
CN105069819B (en) * 2015-07-23 2018-06-26 西安交通大学 A kind of predicted value compensation method based on MED prediction algorithms
CN108347602A (en) * 2017-01-22 2018-07-31 上海澜至半导体有限公司 Method and apparatus for lossless compression video data
CN108156462A (en) * 2017-12-28 2018-06-12 上海通途半导体科技有限公司 A kind of compression of images, decompression method, system and its ME of application frameworks
CN109660809A (en) * 2018-09-19 2019-04-19 福州瑞芯微电子股份有限公司 Based on the decoded colmv data lossless compression method of inter and system
CN109561314A (en) * 2018-10-26 2019-04-02 西安科锐盛创新科技有限公司 The adaptive template prediction technique of bandwidth reduction
CN112669396A (en) * 2020-12-18 2021-04-16 深圳智慧林网络科技有限公司 Image lossless compression method and device
CN112669396B (en) * 2020-12-18 2023-09-12 深圳智慧林网络科技有限公司 Lossless image compression method and device


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20100922