1232675

Description of the Invention

[Technical Field of the Invention]
The present invention relates to a video compression apparatus and method, and in particular to a video coding scheme that achieves a good video compression ratio together with a scalable design, thereby improving the scalability of video compression and the low-bit-rate performance of Interframe Wavelet Video Coding.

[Prior Art]
The bitstream produced by conventional Interframe Wavelet Video Coding contains two classes of information:
one is motion information (mainly motion vectors), the other is wavelet coefficients and their related data. At present the motion information class is not scalable, so performance at low bit rates is poor.

Conventional scalable video designs target mainly the transform and wavelet coefficient data, which remains insufficient for low-bit-rate applications. Since motion information still accounts for a considerable share of the overall bitstream, the present technique makes the motion information scalable as well, improving the performance of Interframe Wavelet Video Coding at low bit rates.

Scalable video design falls into three main categories: spatial scalability, temporal scalability, and SNR scalability. The SNR-scalability method exploits the bit-plane structure of the coefficients to refine picture quality progressively, but its coding efficiency is poor.

[Summary of the Invention]
The main object of the present invention is therefore a video coding scheme with a good video compression ratio and a scalable design, thereby improving the scalability of video compression.

Another object of the present invention is to make the motion information scalable as well, improving the performance of Interframe Wavelet Video Coding at low bit rates.
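The bit-plane refinement behind the SNR scalability described in the prior-art section above can be illustrated with a minimal sketch (a hypothetical illustration, not taken from the patent): coefficient magnitudes are transmitted most-significant plane first, so each additional plane roughly halves the maximum reconstruction error.

```python
def encode_bitplanes(coeffs, num_planes=8):
    """Split unsigned integer coefficients into bit-planes, MSB first."""
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append([(c >> p) & 1 for c in coeffs])
    return planes

def decode_bitplanes(planes, num_planes=8):
    """Reconstruct from however many leading planes were received."""
    values = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        p = num_planes - 1 - i
        for j, bit in enumerate(plane):
            values[j] |= bit << p
    return values

coeffs = [200, 13, 97, 5]
planes = encode_bitplanes(coeffs)
# Receiving every plane is lossless; truncating planes degrades quality gracefully.
assert decode_bitplanes(planes) == coeffs
coarse = decode_bitplanes(planes[:4])  # only the 4 most significant planes arrived
```

Truncating the plane list models a low-bit-rate channel: the decoder still reconstructs a coarse but usable approximation of every coefficient.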
To achieve the above objects, the present invention provides a video compression apparatus and method comprising an encoder, a decoder, and a pull unit, forming a video compression apparatus that satisfies scalability requirements: it hierarchically partitions the motion information to meet a scalability request, accepts such a request, and transmits the layered motion information to the receiving end. The motion information coding unit is partitioned into layers to achieve scalability, the motion information being encoded hierarchically by spatial precision, temporal precision, and numerical precision; upon a scalability request, the three kinds of precision are adjusted appropriately and the corresponding motion information data are transmitted. The video compression apparatus and method thus constitute a video coding scheme with a good video compression ratio and a scalable design, improving the scalability of video compression and the low-bit-rate performance of Interframe Wavelet Video Coding.

[Embodiment]
Please refer to Figures 1 to 5, which are, respectively, a block diagram of the video compression apparatus of the present invention and flowcharts of its motion vector estimation unit, motion information coding unit, pull unit, and motion information decoding unit. As shown in the figures, the present invention is a video compression apparatus and method with a good video compression ratio and a scalable coding design, improving the scalability of video compression and the low-bit-rate performance of Interframe Wavelet Video Coding.
The video compression apparatus is composed of an encoder 1, a decoder 2, and a pull unit 3 connecting the encoder 1 and the decoder 2.

The encoder 1 serves as the video input side and includes:

A motion-compensated temporal filtering analysis unit 11, which analyzes the pictures along the time axis: using the motion vectors obtained by the motion vector estimation unit 15, it decomposes the input pictures into temporal high- and low-frequency pictures; its input is the original images and its output the high- and low-frequency pictures.

A spatial analysis unit 12, connected to the motion-compensated temporal filtering analysis unit 11, which further decomposes the resulting high- and low-frequency pictures by a Discrete Wavelet Transform (DWT) into spatial high- and low-frequency signals; its input is the temporal high- and low-frequency pictures and its output the DWT-decomposed high- and low-frequency pictures.
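The temporal split performed by the analysis unit 11 can be sketched with the Haar filter pair. For brevity this sketch omits the motion compensation (it filters co-located pixels, i.e., it assumes zero motion), so the function names and frames used here are illustrative only, not the patent's implementation.

```python
import numpy as np

def temporal_analysis(frame_a, frame_b):
    """Haar temporal filtering of a frame pair (motion compensation omitted):
    the low band is the average (still a viewable picture), the high band the difference."""
    low = (frame_a + frame_b) / 2.0
    high = (frame_a - frame_b) / 2.0
    return low, high

def temporal_synthesis(low, high):
    """Inverse step, as used on the decoder side by the synthesis unit."""
    return low + high, low - high

a = np.array([[10., 20.], [30., 40.]])
b = np.array([[12., 18.], [30., 44.]])
low, high = temporal_analysis(a, b)
ra, rb = temporal_synthesis(low, high)
assert np.allclose(ra, a) and np.allclose(rb, b)  # perfect reconstruction
```

The decoder-side synthesis unit applies the inverse step, and the analysis/synthesis pair reconstructs the input frames exactly.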
An embedded zerotree coding unit 13, connected to the spatial analysis unit 12, which compresses the DWT-decomposed high- and low-frequency pictures by exploiting the zero-value relations among the high- and low-frequency signals obtained from the spatial analysis unit 12; its output is the compressed video content bitstream.

A packetization unit 14, connected to the embedded zerotree coding unit 13, which packs the compressed video content bitstream and the compressed motion information into a single merged compressed bitstream; its inputs are the two compressed streams and its output the single merged stream.

A motion vector estimation unit 15, connected to the motion-compensated temporal filtering analysis unit 11, which obtains the motion vectors of each layer and checks whether the search for that layer is complete (as shown in Figure 2). It exploits the correlation between two adjacent pictures, recording in motion-vector form the positions of the blocks with the smallest difference and thereby achieving compression; its input is two pictures and its output the motion vectors of each layer.
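The block search performed by the motion vector estimation unit 15 can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD); the SAD criterion and the function names are assumptions made for illustration, since the text specifies only "the block position with the smallest difference".

```python
import numpy as np

def block_match(ref, cur, bx, by, size, search=2):
    """Full-search block matching: return the motion vector (dy, dx) that
    minimizes the sum of absolute differences (SAD) for one block of `cur`."""
    block = cur[by:by + size, bx:bx + size]
    best, best_mv = None, (0, 0)
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

ref = np.zeros((8, 8)); ref[2:4, 2:4] = 1.0   # bright feature at (2, 2)
cur = np.zeros((8, 8)); cur[3:5, 3:5] = 1.0   # same feature moved by (+1, +1)
# the block at (3, 3) in cur matches ref at (2, 2), so the vector is (-1, -1)
assert block_match(ref, cur, 3, 3, 2) == (-1, -1)
```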
A motion information coding unit 16, which partitions the motion information into layers, applies arithmetic coding to the base layer and the enhancement layers (as shown in Figure 3), and encodes the motion information hierarchically by spatial precision, temporal precision, or numerical precision: the spatial precision is the size of the motion blocks being cut, the temporal precision is the number of frames per second, and the numerical precision is the accuracy of the numerical representation of the motion vectors. The motion information handled by unit 16 also includes the side information needed to reconstruct the output of the motion vector estimation unit 15.

The decoder 2 serves as the video output side and includes:

A depacketization unit 21.

An embedded zerotree decoding unit 22, connected to the depacketization unit 21, which decompresses the video content bitstream by exploiting the zero-value relations among the spatially analyzed high- and low-frequency signals; its input is the compressed video content bitstream and its output the spatial high- and low-frequency pictures.

A spatial synthesis unit 23, connected to the embedded zerotree decoding unit 22, which recombines the spatial high- and low-frequency signals by an Inverse Discrete Wavelet Transform (IDWT); its input is the spatial high- and low-frequency pictures and its output the temporal high- and low-frequency pictures obtained after the IDWT.

A motion-compensated temporal filtering synthesis unit 24, connected to the spatial synthesis unit 23, which recombines the high- and low-frequency pictures along the time axis, using the received motion vectors to synthesize them back into the original images; its input is the temporal high- and low-frequency pictures obtained after the IDWT and its output the original images.
A motion information decoding unit 25, connected to the depacketization unit 21 and to the motion-compensated temporal filtering synthesis unit 24, which applies arithmetic decoding to the base layer and the enhancement layers produced by the motion information coding unit 16 and merges the decoded layers back into motion vectors (as shown in Figure 5); its input is the compressed motion information and its output the motion information.

The pull unit 3 is connected to the encoder 1 and to the decoder 2. It reads the bit-rate information, cuts the compressed video content bitstream accordingly, and determines from the bit rate whether the enhancement layers are needed; it sends out the base-layer motion information, cuts the enhancement-layer motion information according to the required bit rate, and merges the cut video content streams with the cut motion information into a new compressed bitstream (as shown in Figure 4). The above apparatus thus provides video compression that satisfies scalability requirements: it hierarchically partitions the motion information to meet a scalability request and transmits the layered motion information to the receiving end.

The video compression method of the present invention hierarchically partitions the motion information to make it scalable: the motion information coding unit 16 encodes the motion information hierarchically by spatial precision, temporal precision, and numerical precision.
Upon a scalability request, the three kinds of precision above are adjusted appropriately and the data of the corresponding motion information are transmitted. The spatial precision is the size of the motion blocks being cut, the temporal precision is the number of frames per second, and the numerical precision is the accuracy of the numerical representation of the motion vectors. The scalability request may be a target transmission bit rate, or individual or combined requirements on the three kinds of precision. The motion information comprises the motion vectors and the side information needed to reconstruct them. The video compression method may be Interframe Wavelet Video Coding or any video coding scheme whose bitstream contains motion information; in this way, a novel video compression apparatus and method is formed.
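How the three kinds of precision might be traded off against a scalability request can be sketched on a toy motion-information record; the record layout, the quarter-pel storage, and every parameter name here are hypothetical illustrations, not the patent's bitstream syntax.

```python
def truncate_motion_info(mvs, min_block=8, keep_every=1, frac_bits=0):
    """Trim a list of motion records along the three precision axes:
    spatial   - drop vectors for blocks smaller than `min_block`,
    temporal  - keep only every `keep_every`-th frame's vectors,
    numerical - round vectors, stored in quarter-pel units (2 fractional
                bits assumed), down to `frac_bits` fractional bits.
    Each record is (frame_index, block_size, mv_in_quarter_pel_units)."""
    shift = 2 - frac_bits
    out = []
    for frame, bsize, mv in mvs:
        if bsize < min_block or frame % keep_every:
            continue
        out.append((frame, bsize, (mv >> shift) << shift))  # coarser accuracy
    return out

mvs = [(0, 16, 13), (0, 4, 7), (1, 16, 9), (2, 8, 6)]
# request: blocks of at least 8, half the frame rate, integer-pel vectors
reduced = truncate_motion_info(mvs, min_block=8, keep_every=2, frac_bits=0)
```

Each axis maps onto one claim term: `min_block` to spatial precision, `keep_every` to temporal precision, and `frac_bits` to numerical precision.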
Please refer to Figures 6 and 7, which are a schematic diagram of the motion vector estimation of the present invention and a schematic diagram of its layered coding of motion vectors. As shown in the figures, the first step of motion vector coding in the present invention is the hierarchical motion estimation performed by the motion vector estimation unit 15 during the original encoding process; its main purpose is to exploit the hierarchy of motion vectors estimated at multiple levels.
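A coarse-to-fine search over reduced-resolution images, in the spirit of the hierarchical estimation just described, can be sketched as follows; the two-level pyramid, the search radii, and the 2x2 averaging filter are illustrative assumptions, since the patent describes its search only at the level of Figure 2.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    return (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2]) / 4.0

def sad_search(ref, block, cy, cx, radius):
    """Return the offset (dy, dx) around (cy, cx) minimizing the SAD."""
    size = block.shape[0]
    h, w = ref.shape
    best, mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and 0 <= x and y + size <= h and x + size <= w:
                sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
                if best is None or sad < best:
                    best, mv = sad, (dy, dx)
    return mv

def hierarchical_mv(ref, cur, by, bx, size):
    """Coarse-to-fine estimation over a two-level pyramid: a wide, cheap
    search at half resolution, then a +/-1 refinement at full resolution."""
    half_block = downsample(cur)[by // 2:(by + size) // 2, bx // 2:(bx + size) // 2]
    dy, dx = sad_search(downsample(ref), half_block, by // 2, bx // 2, radius=4)
    dy, dx = 2 * dy, 2 * dx                       # scale the coarse vector up
    ry, rx = sad_search(ref, cur[by:by + size, bx:bx + size], by + dy, bx + dx, radius=1)
    return dy + ry, dx + rx

ref = np.arange(256, dtype=float).reshape(16, 16)
cur = np.roll(ref, shift=(2, 2), axis=(0, 1))     # scene shifted by (2, 2)
```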
Motion vectors of different levels (of different accuracies and different block sizes) are produced: as shown in Figure 6, the original image together with images reduced to half and quarter resolution is used to find all motion vectors for the five block sizes 64x64, 32x32, 16x16, 8x8, and 4x4, and the next step uses this
hierarchy for the scalable design.

The second step has the motion information coding unit 16 encode the motion vectors in layers. The previous step yields five levels of motion vectors, and layering them gives the scalable design: the pull process of the pull unit 3 decides how much data to transmit from the required amount of data (e.g., the bit rate), so the motion vectors are layered and the total number of transmitted levels is determined by that amount. As shown in Figure 7, the five levels of the first step's example can be grouped into two layers: the three levels of larger motion blocks (64x64, 32x32, and 16x16) form the base layer, the basic motion vectors that must always be transmitted, while the two levels of smaller motion blocks (8x8 and 4x4) form the enhancement layer, transmitted or not according to the amount of data the transmission requires.

The third step writes the layered motion vectors into the compressed bitstream: following the example of the second step, the motion vectors of the base layer and of the enhancement layer are encoded separately and written into the bitstream.

The pull process of the pull unit 3 comprises the following steps:

Step 1: cut the compressed bitstream according to the bit rate provided by the system. If the bit rate is high, both the base layer and the enhancement layer are transmitted; if it is low, only the base layer is transmitted. The scalability requirement of the system is thereby satisfied.
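The pull decision of Step 1 can be sketched as a budget check over labeled packets; the packet layout and the bit counts are hypothetical, not the patent's bitstream format.

```python
def pull(packets, target_bits):
    """Assemble an output stream under a bit budget: base-layer packets
    (picture content and base-layer motion vectors) are always kept; the
    enhancement-layer motion vectors are included only if the budget allows."""
    kept, used = [], 0
    for p in packets:                      # mandatory layers first
        if p["layer"] == "base":
            kept.append(p); used += p["bits"]
    assert used <= target_bits, "budget below base-layer size"
    for p in packets:                      # optional layers, in order, while they fit
        if p["layer"] == "enhancement" and used + p["bits"] <= target_bits:
            kept.append(p); used += p["bits"]
    return kept, used

stream = [
    {"id": "mv-base",    "layer": "base",        "bits": 300},
    {"id": "content",    "layer": "base",        "bits": 500},
    {"id": "mv-enh-8x8", "layer": "enhancement", "bits": 250},
    {"id": "mv-enh-4x4", "layer": "enhancement", "bits": 400},
]
kept, used = pull(stream, target_bits=1100)
```

With a budget of 1100 bits the 8x8 enhancement vectors still fit but the 4x4 ones do not; lowering the budget to the base-layer size reproduces the "low bit rate" branch above, where only the base layer is transmitted.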
Step 2: merge the cut streams into a new compressed bitstream. The cut motion vector stream is merged again with the cut compressed video content into a new bitstream, and this bitstream matches the amount of data required by the system.

After the pull process of the pull unit 3, the motion vectors that passed through it are read and decoded: in the present invention, the decoding end reads the motion vectors processed by the pull process, which may be the base layer alone or the base layer plus the enhancement layer.

The present invention thereby achieves the following:
1. At low bit rates, where the channel bandwidth varies over time, the scalability of Interframe Wavelet Video Coding combined with this scalable motion information design allows the compressed video data to be transmitted smoothly while preserving quality.
2. In video conferencing applications that use a PDA as the terminal device, the PDA hardware is not powerful enough and real-time compression and decompression can only be achieved at lower transmission bit rates; this technique, combined with conventional Interframe Wavelet Video Coding, achieves a better scalable design.

The above, however, describes only preferred embodiments of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the description of the invention shall remain within the scope of this patent.

[Brief Description of the Drawings]
Figure 1 is a block diagram of the video compression apparatus of the present invention.
Figure 2 is a flowchart of the motion vector estimation unit of the present invention.
Figure 3 is a flowchart of the motion information coding unit of the present invention.
Figure 4 is a flowchart of the pull unit of the present invention.
Figure
5 is a flowchart of the motion information decoding unit of the present invention.
Figure 6 is a schematic diagram of the motion vector estimation of the present invention.
Figure 7 is a schematic diagram of the layered coding of motion vectors of the present invention.

[Component Labels]
Encoder 1
Motion-compensated temporal filtering analysis unit 11
Spatial analysis unit 12
Embedded zerotree coding unit 13
Packetization unit 14
Motion vector estimation unit 15
Motion information coding unit 16
Decoder 2
Depacketization unit 21
Embedded zerotree decoding unit 22
Spatial synthesis unit 23
Motion-compensated temporal filtering synthesis unit 24
Motion information decoding unit 25
Pull unit 3