TW201143459A - Apparatus for encoding dynamic image and apparatus for decoding dynamic image - Google Patents

Apparatus for encoding dynamic image and apparatus for decoding dynamic image Download PDF

Info

Publication number
TW201143459A
TW201143459A TW100111976A TW100111976A TW201143459A TW 201143459 A TW201143459 A TW 201143459A TW 100111976 A TW100111976 A TW 100111976A TW 100111976 A TW100111976 A TW 100111976A TW 201143459 A TW201143459 A TW 201143459A
Authority
TW
Taiwan
Prior art keywords
unit
prediction
image
binarization
coding
Prior art date
Application number
TW100111976A
Other languages
Chinese (zh)
Inventor
Yoshimi Moriya
Shunichi Sekiguchi
Kazuo Sugimoto
Kohtaro Asai
Tokumichi Murakami
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of TW201143459A publication Critical patent/TW201143459A/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This invention provides an apparatus for encoding dynamic image, in which a frequency information generating section 93 counts the occurrence frequency of a most appropriate encoding mode 7 and generates a frequency information 94, and a binarization table updating section 95 updates the corresponding relationship between a multi-values signal in the binarization table in a binarization table memory 105 and a binarization signal, base on the frequency information 94. A binarization section 92 converts a most appropriate encoding mode 7a in the multi-values signal into a binarization signal 103 by using the appropriately updated binarization table, and the binarization signal is encoded by a calculation encoding processing section 104.

Description

201143459 ·. 六、發明說明: 【發明所屬之技術領域】 本發明係關於將動晝像分割成預定區域,以區域單位 進行編碼之動畫像編碼裝置、及以預定區域單位將被編碼 的動畫像進行解碼之動晝像解碼裝置。 【先前技術】 駕知 MPEG(Moving Picture Experts Group 動晝像專 家系統)及「ITU-T Η, 26χ」等的國際標準影像編碼方式係 採用以亮度訊號16x16像素與對應的色差訊號8χ8像素量 匯集在一起之區塊資料(稱為「巨集區塊」)為單位,根據 移動補償技術及直交變換'變換係數量化技術,來壓縮影像 訊號的各圖框之方法。 所明移動補償技術為利用視訊圖框間所存在的高度相 關性按每—巨錢_減切間方向之減的冗餘度之技 且為將過去已編碼完成的圖框作為參考晝像事先儲存 錢體内’再由參考晝像巾的預定搜尋範_,搜尋已 成為移動補償預測的對象之現巨集區塊及差分電力最小的 區塊區域’將現巨集區塊的空間位置與參考晝像中之搜尋 結果區塊的空間位置的偏移,作為移動向量編碼之技術。 這種習知的國際標準影像編碼方式中’因巨集區塊大 二、已固定’尤其在畫像的解像度高的情況時,固定的巨集 Α龙大J中£集區塊涵蓋的區域易於局部化。於是,會 發生時而周邊巨集區塊成為相同編碼模式,時而分配相同 移動向量的情形。這種情形,預測效率不會提高’且還會 323005 4 201143459 *- 使編碼的編碼模式資訊和移動向量資訊等的總負荷 (overhead)增大,因此就編碼器整體而言,編碼效率會降 低。 針對這種問題,習知已有以藉由晝像的解像度或内容 切換巨集區塊大小的方式形成的裝置(例如,參考專利文獻 1) °專利文獻1中的動晝像編碼裝置係形成為可因應畫像 内容、解像度、設定檔(profile)等,將巨集區塊大小適應 地轉換成片段單位、圖框單位、順序單位等而進行編碼。 習知的國際標準影像編碼方式及專利文獻i所記載之 裝置’係在巨集區塊内存在移動不同的物件時,藉由選擇 =割巨集區塊的編碼模式’可對應於局部性小的動態物 件。但是,國際標準影像編馬方 -,1乃式,除了巨集區塊分割之 必須編:集區塊内的分割之編碼模式的資訊也 對此,專利文獻j中, 叙能榀从备m 在比已決疋的巨集區塊為小的 動〜、物件在圖框内存在許多 區塊內Μ绝1 丨月/凡甲,為使選擇分割巨集 尾内的編碼模式的比例變 小的方式,由於在選擇大^縣選擇小的巨集區塊大 的指示巨集區塊内之分^維的集區塊大小的情況所產生 ^印使相同料&_;^模4式的相關碼量不致發 〔先前技術文獻〕 〔專利文獻〕 【發獻υ國際公開公報霞圓34918號 323005 5 201143459 〔發=所欲解決之課題〕 要決=佳!明存在有"下的課題:為了 内容必須經過 、、扁馬別正確地評估晝像 得很魔大。又、月⑶前處理的相關處理量變 本發明係用以解決如同上述的 不依賴畫像内容,即使在預先設定的=目的為提供 可抑制編石馬模式等有關總負荷的石馬量,而^大小中,仍 編碼之動晝像編碼裝置及動晝像解可有效率地壓縮 〔用以解決課題之手段〕 裝置。 本發明的動畫像編碼裝置,俜 中選擇指定有根據編碼效率之預定部由編碼模式 值化部,變長度編碼部係具: 號的編碼根編碼控制部所選擇之多:: 將一值化部所變換過的二值訊號予=里運异部,係 位兀列,並將該編碼位元列予 位竭:輪出蝙石馬 化表更新部,係根據編碼模式各者藉由編;及二值 頻度’來更斜前述二值化表之多=碼:制部的選擇 關係。 °與—值訊號的對應 本發日月的動畫像解碼裝置,且 部,係由可變長度解碼部將表干解碼處理運算 位几列施以算術解喝,而生成二值訊就的,模 u ’反二值 323〇〇5 201143459 化部,係使用指定有表示編碼模式之二值訊號與多值訊號 的對應關係之二值化表,將以算術解碼處理運算部生成的 二值訊號來表示之編碼模式變換為多值訊號。 〔發明效果〕 依據本發明,由於使用根據編碼模式之藉由編碼控制 部的選擇頻度而更新之二值化表,將表示編碼模式的多值 訊號變換為二值訊號而進行算術編碼,並多工化至位元 流。所以可提供一種不依賴晝像内容,即使在預先設定的 巨集區塊大小中,仍可抑制編碼模式等有關總負荷的碼 量,而有效率地壓縮編碼之動晝像編碼裝置及動晝像解碼 裝置。 【實施方式】 以下,參考圖面來詳細說明本發明的實施形態。 (第1實施形態) 本第1實施形態中,係針對使用影像的各圖框晝像作 為輸入,在附近圖框間進行移動補償預測,對獲得的預測 差分訊號施予直交變換-量化的壓縮處理後,進行可變長度 解碼而生成位元流之動晝像編碼裝置、及將其位元流解碼 之動晝像解碼裝置進行說明。 第1圖為顯示本發明的第1實施形態之動晝像編碼裝 置的構成之方塊圖。第1圖所示的動晝像編碼裝置包括: 區塊分割部2,係把將輸入影像訊號1的各圖框晝像分割 成巨集區塊大小4的複數個區塊之區塊畫像,因應編碼模 式7而輸出分割成1個以上的子區塊之巨集/子區塊晝像 7 323005 201143459 5 ;晝面内預測部8,係於輸入巨集/子區塊晝像5時,使 用畫面内預測用記憶體28的晝像訊號,對該巨集/子區塊 晝像5進行圖框内預測而生成預測晝像11 ;移動補償預測 部9,係於輸入巨集/子區塊畫像5時,使用移動補償預 測圖框記憶體14的參考晝像15,對該巨集/子區塊畫像5 進行移動補償預測而生成預測晝像17 ;切換部6,係因應 編碼模式7將巨集/子區塊晝像5輸入於晝面内預測部8 或移動補償預測部9的任一方;減法部12,係由區塊分割 4 2輸出之巨集/子區塊晝像5,減去晝面内預測部8或 移動補償預測部9的任一方所輸出之預測畫像u、17而生 成預測差分訊號13 ;變換-量化部19,係對預測差分訊號 13進行文換和量化處理而生成壓縮資料u ;可變長度編碼 部f係將壓縮資料21予以熵編碼而多工化為位元流30; 化反變換部22,係將壓縮資料2ι進行反量化和反變 化^而^成局部解碼差分訊號24 ;加法部25,係在反量 的任-H22加上畫面内預測部8或移動補償預測部9 萝26.查戶P出之預測晝像11、17而生成局部解瑪畫像訊 si迴=預測用記憶㈣,她 Zb,迴路濾波部27 處理而生成局部解碼晝像=像訊號26進行滤波 Η,係儲存局部解碼晝像29 :及移_償預顧框記憶體 編碼控制部3,係輸出 區塊大小4、編碼模式7 =1 的處理所必要的資訊(巨集 最佳預測參數1〇 、,碼模式以、預測參數10、 數他和18a 1縮參數2()、最佳壓縮參數 323005 8 201143459 、 20a)。以下,詳細說明巨集區塊大小4及編碼模 立 他的資訊於後詳述。 /、 編碼控制部3,係對區塊分割部2指定輸入影像訊號1 之各圖框晝像的巨集區塊大小4,並且按編碼對象的每一 巨集區塊’指示可因應圖像類型而選擇的全部編碼模式7。 此外編馬控制部3,係可由編碼模式組(set)中選擇 預疋編碼模式,但此編碼模式組為任意,設成可由例如以 下所示的第2A圖或第2B圖的組中選擇預定編碼模式。 第2A圖為顯示進行在時間方向的預測編竭之p Gfedictive)圖像的一編碼模式例之圖。第2A圖中,此 mode 〇至2為藉由圖框間預測將巨集區塊(MxL像素區塊) 編碼之模式(inter)。mb_mode 0為對巨集區塊全體分配1 個移動向量之模式,mb_mode 1、2分別為水平或垂直地等 刀巨集區塊,在被分割的各子區塊分別分配各不相同的移 動向量之模式。 mb_jnode 3為將巨集區塊予以4分割’在所分割的各 子區塊分配不同的編碼模式(subjnb一mode)之模式。 sub—mbjnode 0至4係為當以巨集區塊的編碼模式選 rab〜m〇de 3時,對4分割該巨集區塊所得之各子區塊(m Μ像素區塊)分別分配之編碼模式,而sub_jnb_jn〇de 0為 藉由圖拖内預測將子區塊編碼之模式(intra)。除此之外, 為藉由圖框間預測予以編碼之模式(inter),sub_mb_mode 1 為野子區塊全體分配1個移動向量之模式,subjnbjnode 2、 3分別為水平或垂直地等分子區塊,且在所分割的各子區 9 323005 201143459 、 塊分別分配各不相同的移動向篁之模式,sub_mb_m〇(ie 4 為4分割子區塊’在所分割的各子區塊分別分配不同的移 動向量之模式° 另外,第2B圖為顯示進行在時間方向的預測編碼之p 圖像的編碼模式的另一例圖。第2B圖中,mbjnode 〇至6 為藉由圖框間預測將巨集區塊(MxL像素區塊)予以編,之 模式(inter)。mbjnode 0為對巨集區塊全體分配1個移動 向量之模式’ mb—mode 1至6分別為朝水平、垂直或對角 方向分割巨集區塊’在所分割的各子區塊分別分配各不相 同的移動向量之模式。 mb_mode 7為將巨集區塊予以4分割,在所分割的子 區塊分配不同的編喝模式(sub_mb一mode )之模式。 sub一mbjnode 0至8為以巨集區塊的編碼模式選擇吣 mode7時,對4分割該巨集區塊所得之各子區塊像素 區塊)分別分配之編蝎模式,subjnl3_m〇de 〇為藉由圖框内 預測將子區塊予以編碼之模式(intra)。除此之外為藉由圖 框間預測進行編碼之模式(inter),sub—mb_m〇de 1為對子 =塊全體分配1個移動向量之模式,___« 2至7 /刀別為朝水平、垂直或對角方向分割子區塊,對所分割的 各子區塊分別分配各不相同的移動向量之模式, mode 8為將子區塊予以4分割,在所分割的各 ^ 不同的移動向量之模式。 见 區塊分割部2,係將輪入於動晝像編碼裴置之輸入影 像訊號1的各圖框畫像,分割成由編碼控制部3指定之巨 323005 10 201143459 集區塊大小4的巨集區塊晝像。再者,區塊分割部2,係 在由編碼控制部3指定的編碼模式7包含有對分割巨集區 塊所得之子區塊分配不相同的編碼模式之模式(第2A圖的 sub_mb_mode 1 至 4 或第 2B 圖的 sub_mb_mode 1 至 8)的情 況時,將巨集區塊晝像分割成編碼模式7所示之子區塊晝 像。因而,由區塊分割部2輸出的區塊晝像,即因應編碼 模式7而成為巨集區塊晝像或子區塊晝像的任一方。以 下,將此區塊晝像稱為巨集/子區塊畫像5。 此外,當輸入影像訊號1之各圖框的水平或垂直大小 非為巨集區塊大小4之各水平大小或垂直大小的整數倍 時,對輸入影像訊號1的各圖框,生成在水平方向或在垂 直方向擴充像素之圖框(擴充圖框)直到圖框大小成為巨集 區塊大小的整數倍為止。就擴充區域的像素的生成方法而 言,例如在垂直方向擴充像素的情況時,有將原來圖框的 下端之像素重複進行填充,或者,以具有固定的像素值 (灰、黑、白等)之像素進行填充等方法。在水平方向擴充 像素的情況也是同樣地,有將原來圖框的右端之像素重複 進行填充,或者,以具有固定的像素值(灰、黑、白等)之 像素進行填充等方法。對輸入影像訊號1的各圖框生成之 圖框大小為巨集區塊大小的整數倍之擴充圖框,可取代輸 入影像訊號1的各圖框晝像,而輸入到區塊分割部2。 此外,巨集區塊大小4及輸入影像訊號1之各圖框的 圖框大小(水平大小及垂直大小),由於係以由1圖框以上 的圖像構成之順序單位或圖像單位多工化為位元流,故可 11 323005 201143459BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a moving picture coding apparatus that divides a moving image into a predetermined area, encodes it in a unit of area, and a moving picture to be encoded in a predetermined area unit. A moving image decoding device that performs decoding. [Prior Art] The international standard video coding method such as MPEG (Moving Picture Experts Group) and "ITU-T Η, 26χ" is used to collect the luminance signal 16x16 pixels and the corresponding color difference signal 8χ8 pixels. The block data (called "macroblock") is a unit, and the method of compressing each frame of the image signal according to the motion compensation technique and the orthogonal transform 'transform coefficient quantization technique. The known motion compensation technique is to use the high degree of correlation between the video frames to reduce the redundancy in the direction of each money, and to use the frame that has been encoded in the past as the reference image. In the storage money body, the predetermined search range of the reference image towel is used to search for the current macro block and the smallest differential power block area that have become the target of the mobile compensation prediction. The offset of the spatial position of the search result block in the reference image is used as a technique for moving vector coding. In this conventional international standard image coding method, the area covered by the fixed block is easy to be covered by the large block of the large set of blocks, especially when the resolution of the image is high. Localized. Thus, the situation occurs when the peripheral macroblocks become the same coding mode and the same motion vector is assigned. In this case, the prediction efficiency will not increase' and 323005 4 201143459 *- will increase the total load of the encoded coding mode information and motion vector information, so the coding efficiency will be reduced as a whole for the encoder. . In order to solve such a problem, it is known that a device is formed by switching the size of a macroblock by the resolution of the artifact or the content (for example, refer to Patent Document 1). The moving image encoding device in Patent Document 1 is formed as The macro block size can be adaptively converted into a slice unit, a frame unit, a sequence unit, and the like in accordance with the image content, the resolution, the profile, and the like. The conventional international standard image coding method and the device described in the patent document i are related to the small locality by selecting the coding mode of the cut macroblock when there are different objects in the macroblock. Dynamic objects. However, the international standard image editing horse--, 1 is the formula, in addition to the macroblock partition must be edited: the information of the coding mode of the segmentation within the block is also in this regard, in the patent document j, the can be prepared by m In the larger block than the already determined macro block, the object is in a number of blocks in the frame, and the ratio of the coding mode in the tail of the selected macro is reduced. The way, due to the selection of the large giant block in the large county, the size of the block size in the macro block is large, and the same material &_; The relevant code quantity is not issued [previous technical literature] [patent literature] [Development of the International Public Gazette Xiayuan 34918 No. 323005 5 201143459 [Fa] The subject to be solved] To be determined = good! Question: In order to pass the content, it is very magical to evaluate the image correctly. In addition, the amount of processing related to the pre-processing of the month (3) is used to solve the problem of the image-free content as described above, and even if it is set in advance, the purpose is to provide a stone horse quantity that can suppress the total load such as the stone-horse mode, and In the size, the still encoded image encoding device and the moving image solution can efficiently compress the means for solving the problem. In the moving picture coding apparatus of the present invention, the predetermined portion selected according to the coding efficiency is selected by the coding mode value unit, and the coded root coding control unit of the variable length coding unit: number is selected: The binary signal converted by the Ministry is in the middle of the division, the system is in the ranks, and the coded bits are listed as exhausted: the bat stone update table is rotated, according to the coding mode. And the binary frequency 'to slant more of the aforementioned binarization table = code: the choice relationship of the department. The moving picture decoding device corresponding to the time-and-value signal of the present day and the month is subjected to arithmetic-depletion by the variable-length decoding unit to generate a binary signal by the arithmetic decoding solution. The modulo u 'reverse binary value 323 〇〇 5 201143459 is a binary signal that is generated by the arithmetic decoding processing unit using a binarization table that specifies a correspondence relationship between the binary signal indicating the coding mode and the multi-value signal. The coding mode indicated to be converted into a multi-value signal. [Effect of the Invention] According to the present invention, since the binarization table updated by the selection frequency of the encoding control unit according to the encoding mode is used, the multi-value signal indicating the encoding mode is converted into a binary signal for arithmetic coding, and more Industrialization to the bit stream. Therefore, it is possible to provide an image-capturing device that can efficiently suppress the coding mode, such as the coding mode, and the like, even in the preset macroblock size. Like a decoding device. [Embodiment] Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. (First Embodiment) In the first embodiment, motion compensation prediction is performed between adjacent frames with respect to each frame image using video, and orthogonal compression-quantization compression is applied to the obtained prediction difference signal. After the processing, a moving image encoding device that performs variable length decoding to generate a bit stream and a moving image decoding device that decodes the bit stream will be described. Fig. 1 is a block diagram showing the configuration of a moving image encoding device according to a first embodiment of the present invention. The moving image encoding device shown in Fig. 1 includes: a block dividing unit 2 that divides each frame image of the input image signal 1 into a block image of a plurality of blocks of the macro block size 4, According to the coding mode 7, the macro/sub-block image divided into one or more sub-blocks is output 7 323005 201143459 5; the in-plane prediction unit 8 is input when the macro/sub-block image 5 is input. The macro/sub-block image 5 is intra-frame predicted using the imaging signal of the intra-frame prediction memory 28 to generate a prediction artifact 11; the motion compensation prediction unit 9 is connected to the macro/sub-region. In the case of the block image 5, the reference image 15 of the motion compensation prediction frame memory 14 is used, and the macro/subblock image 5 is subjected to motion compensation prediction to generate a prediction artifact 17; the switching unit 6 is in response to the coding mode 7 The macro/subblock image 5 is input to either the in-plane prediction unit 8 or the motion compensation prediction unit 9; the subtraction unit 12 is a macro/subblock image 5 outputted by the block division 4 2 The prediction images u and 17 output from either one of the in-plane prediction unit 8 or the motion compensation prediction unit 9 are subtracted to generate a prediction difference. The signal-to-transformation-quantization unit 19 performs a text conversion and quantization process on the prediction difference signal 13 to generate a compressed data u. The variable-length coding unit f entropy-codes the compressed data 21 into a bit stream 30. The inverse transform unit 22 inverse-quantizes and inversely converts the compressed data 2 to the local decoded differential signal 24; the addition unit 25 adds the intra-frame prediction unit 8 or the motion compensation to the any-H22 of the inverse amount. The prediction unit 9 generates a local gamma image for the prediction images 11 and 17 generated by the searcher, and generates a partial solution image (si), and the Zb, the loop filter unit 27 processes the local decoded image = image signal 26 Filtering is performed to store the local decoding image 29 and the shifting frame memory encoding control unit 3, which is necessary for outputting the block size 4 and the encoding mode 7 = 1 (major best prediction) The parameter 1〇, the code mode is, the prediction parameter 10, the number and the 18a 1 contraction parameter 2 (), the optimal compression parameter 32350 8 201143459, 20a). Hereinafter, the details of the macro block size 4 and the coding model will be described in detail later. /, the encoding control unit 3 specifies the macro block size 4 of each frame image of the input image signal 1 for the block dividing unit 2, and indicates the coping image according to each macro block of the encoding object All encoding modes selected by type 7. Further, the hummer control unit 3 may select a pre-coding mode from among the coding mode groups, but the coding mode group is arbitrary, and is set to be selected by, for example, a group of the 2A picture or the 2B picture shown below. Encoding mode. Fig. 2A is a diagram showing an example of a coding mode of a p Gfedictive) image in which prediction in the time direction is performed. In Fig. 2A, this mode 〇 to 2 is a mode (inter) in which a macroblock (MxL pixel block) is encoded by inter-frame prediction. Mb_mode 0 is a mode in which one motion vector is allocated to the entire macroblock, and mb_mode 1 and 2 are horizontally or vertically equal-sized macroblocks, and different mobile vectors are allocated in the divided sub-blocks. Mode. Mb_jnode 3 is a mode in which a macroblock is divided into four groups, and a different coding mode (subjnb-mode) is assigned to each divided sub-block. Sub-mbjnode 0 to 4 is to allocate rab~m〇de 3 in the coding mode of the macroblock, and allocate each sub-block (m Μ pixel block) obtained by dividing the macroblock into 4 The coding mode, and sub_jnb_jn〇de 0 is an intra (intra) coding of the sub-block by intra-picture prediction. In addition, for the mode (inter) coded by inter-frame prediction, sub_mb_mode 1 assigns a pattern of motion vectors to all the wild sub-blocks, and subjnbjnode 2, 3 are horizontal or vertical molecular blocks, respectively. And in each of the divided sub-areas 9 323005 201143459, the blocks are respectively assigned different modes of moving direction, sub_mb_m〇 (ie 4 is 4 divided sub-blocks), and each sub-block is divided into different sub-blocks. The mode of the motion vector. In addition, FIG. 2B is another example of the coding mode for displaying the p-picture for predictive coding in the time direction. In FIG. 2B, mbjnode 〇 to 6 is a macro set by inter-frame prediction. The block (MxL pixel block) is edited and the mode (inter). mbjnode 0 is a mode for assigning one motion vector to the entire macro block' mb-mode 1 to 6 are horizontal, vertical or diagonal directions respectively. The segmentation macroblock' assigns a pattern of different motion vectors to each of the divided sub-blocks. mb_mode 7 divides the macroblock into 4 segments, and assigns different brewing modes to the divided sub-blocks. Model of (sub_mb_mode) Sub-mbjnode 0 to 8 is a compilation mode in which each sub-block pixel block obtained by dividing the macroblock block is selected in the encoding mode of the macroblock block, subjnl3_m〇de 〇 An intra (intra) coded sub-block by intra-frame prediction. In addition to the inter-frame prediction coding mode (inter), sub-mb_m〇de 1 is a mode for assigning one motion vector to the sub-blocks, ___« 2 to 7 / knife is horizontal, The sub-blocks are divided vertically or diagonally, and the different sub-blocks are respectively assigned different modes of motion vectors, and mode 8 is to divide the sub-blocks into 4, and the different moving vectors in the divided Mode. Referring to the block dividing unit 2, each frame image of the input image signal 1 that is rotated in the moving image coding device is divided into macros of the size of the block 323005 10 201143459 by the coding control unit 3. Block image. Further, the block dividing unit 2 includes a mode in which the encoding mode 7 specified by the encoding control unit 3 is different in the encoding mode of the sub-blocks obtained by dividing the macroblocks (sub_mb_mode 1 to 4 in FIG. 2A). Or in the case of sub_mb_mode 1 to 8) of Fig. 2B, the macroblock image is divided into sub-block images shown in the coding mode 7. Therefore, the block image outputted by the block dividing unit 2 becomes either a macro block image or a sub-block image in response to the encoding mode 7. Hereinafter, this block image is referred to as a macro/subblock image 5. In addition, when the horizontal or vertical size of each frame of the input image signal 1 is not an integer multiple of each horizontal size or vertical size of the macro block size 4, each frame of the input image signal 1 is generated in the horizontal direction. Or expand the frame of the pixel in the vertical direction (expanded frame) until the frame size becomes an integer multiple of the size of the macro block. In the method of generating pixels in the extended area, for example, when the pixels are expanded in the vertical direction, the pixels at the lower end of the original frame are repeatedly filled, or have fixed pixel values (gray, black, white, etc.). The pixels are filled and the like. Similarly, in the case where the pixels are expanded in the horizontal direction, there are methods in which pixels on the right end of the original frame are repeatedly filled, or pixels having fixed pixel values (grey, black, white, etc.) are filled. The expansion frame of the frame size of the input image signal 1 is an integer multiple of the macro block size, and can be input to the block division unit 2 instead of the frame image input to the image signal 1. In addition, the frame size (horizontal size and vertical size) of each of the macroblock block size 4 and the input image signal 1 is multiplexed in the order unit or image unit formed by the image above the 1 frame. Turn into a bit stream, so it can be 11 323005 201143459

此外,也可不將巨集區塊大小的值直接多 流,而以設定檔來規定。在此情況,用以依川員 設定檔之識別資訊係予以多工化為位元流。 用以依順序單位識別In addition, the value of the macro block size may not be directly multi-streamed, but may be specified by a profile. In this case, the identification information for the profile of the Chuanchuan staff is multiplexed into a bit stream. Used to identify units in sequence

(以下’稱為圖框間預測模式)的情況時,將巨集/子區塊 晝像5輸入到移動補償予側部9。 晝面内預測部8係針對所輸人之巨集,子區塊晝像 5 ’以巨集區塊大小4所指定之編碼對象的巨集區塊或以編 碼模式7指定之子區塊為單位,進行圖框内預測。此外, 晝面内預測部8係針對由編碼控制部3指示的預測參數1〇 所包含的全部畫Φ内預測模式,使用晝面内預測用記憶體 28内所儲存之圖框内的晝像訊號,分別生成預測晝像u。 此處,詳細說明預測參數1〇。在編碼模式7為圖框内 預測模式的情況時,編碼控制部3指定晝面内預測模式作 為對應於該編碼模式7之預測參數1〇。此晝面内預測模式 有例如:使巨集區塊或子區塊内形成為4χ4像素區塊單 位,使用畫面内預測用記憶體28内的晝像訊號之單位區塊 周圍的像素生成預測畫像之模式;使巨集區塊或子區塊内 形成為8x8像素區塊單位,使用畫面内預測用記憶體28内 12 323005 201143459 的晝像訊號之單位區塊周邊的像素生成預測畫像之模式; 使巨集區塊或子區塊内形成為16x16像素區塊單位,使用 晝面内預測用記憶體28内的晝像訊號之單位區塊周邊的 像素生成!預測晝像之模式;由縮小巨集區塊或子區塊内所 得之晝像生成預測畫像之模式等。 移動補償預測部9,係由移動補償預測圖框記憶體14 所儲存之1個圖框以上的參考晝像資料中指定用於生成預 測晝像之參考晝像15,並使用此參考畫像15及巨集/子 區塊晝像5,因應由編碼控制部3指示的編碼模式7進行 移動補償預測,而生成預測參數18及預測晝像17。 此處,詳細說明預測參數18。在編碼模式7為圖框間 預測模式的情況時,移動補償預測部9求取移動向量、各 移動向量所指之參考晝像的識別號碼(參考晝像指標)等, 作為對應於該編碼模式7的預測參數18。預測參數18的 生成方法容後詳述。 減法部12,係由巨集/子區塊畫像5減去預測晝像11 或預測晝像17的任何一方,而獲得預測差分訊號13。此 外,預測差分訊號13,係因應預測參數10指定的全部晝 面内預測模式,而相對於晝面内預測部8生成的全部預測 晝像11分別地生成。 因應預測參數10指定的全部畫面内預測模式分別地 生成之預測差分訊號13係在編碼控制部3予以評估,以決 定包含有最佳當的畫面内預測模式之最佳預測參數l〇a。 就評估方法而言,可使用例如將預測差分訊號13予以變換 13 323005 201143459 並 選置擇獲得之壓縮資料21來計算後述的編竭成本兀 選擇使、扁碼成本h成為最小之畫面内預測模式。 編馬控制部3,係對在晝面内預測 :對編碼模式7所包含的全部模式,評== 號13 ’根據該評估結果,由編碼模式7二 獲仔最佳當的編碼效率之最V决疋 制部3 ’係由預測參數1〇、18及二:。二卜二碼控 於最佳編碼模式7a之最佳預測參數lQa、18a及最=應 參數2Ga。有關各個決定順序容後敘述。 錢(Hereinafter referred to as the inter-frame prediction mode), the macro/subblock image 5 is input to the motion compensation side portion 9. The in-plane prediction unit 8 is for the macro of the input person, and the sub-block image 5' is a macro block of the coding target specified by the macro block size 4 or a sub-block specified by the coding mode 7. , for intra-frame prediction. Further, the in-plane prediction unit 8 uses the image in the frame stored in the in-plane prediction memory 28 for all the picture Φ intra prediction modes included in the prediction parameter 1 指示 instructed by the coding control unit 3. Signals, respectively, to generate prediction artifacts u. Here, the prediction parameter 1〇 is explained in detail. When the coding mode 7 is the intra-frame prediction mode, the coding control unit 3 specifies the intra-plane prediction mode as the prediction parameter 1 对应 corresponding to the coding mode 7. The intra-plane prediction mode has, for example, forming a macroblock or a sub-block into a 4 pixel block unit, and generating a prediction image using pixels around the unit block of the image signal in the intra-screen prediction memory 28. a mode in which a macroblock or a sub-block is formed into an 8x8 pixel block unit, and a mode of predicting an image is generated using pixels around a unit block of the image signal of 12 323005 201143459 in the intra-screen prediction memory 28; The macroblock or the sub-block is formed into a 16x16 pixel block unit, and the pixel is generated by using the pixels around the unit block of the image signal in the memory 28; the mode of predicting the image is reduced; A pattern obtained by generating a predicted image in a block or a sub-block. The motion compensation prediction unit 9 specifies the reference image 15 for generating the prediction artifact from the reference image data of one frame or more stored in the motion compensation prediction frame memory 14, and uses the reference image 15 and The macro/subblock image 5 is subjected to motion compensation prediction in the coding mode 7 instructed by the coding control unit 3, and the prediction parameter 18 and the prediction artifact 17 are generated. Here, the prediction parameter 18 will be described in detail. When the coding mode 7 is the inter-frame prediction mode, the motion compensation prediction unit 9 obtains the motion vector, the identification number of the reference image (refer to the imaging index) indicated by each motion vector, and the like as the corresponding coding mode. Prediction parameter 18 of 7. The generation method of the prediction parameter 18 will be described in detail later. The subtraction unit 12 subtracts any one of the prediction artifact 11 or the prediction artifact 17 from the macro/subblock image 5 to obtain the prediction difference signal 13. Further, the prediction difference signal 13 is generated for each of the prediction artifacts 11 generated by the in-plane prediction unit 8 in response to all the in-plane prediction modes specified by the prediction parameter 10. The prediction difference signal 13 generated in response to all the intra-screen prediction modes specified by the prediction parameter 10 is evaluated by the encoding control unit 3 to determine the optimum prediction parameter l〇a including the optimum intra-picture prediction mode. For the evaluation method, for example, the predicted difference signal 13 is transformed by 13 323005 201143459 and the compressed data 21 obtained by the selection is used to calculate the in-screen cost, which is described later, and the flat code cost h is minimized. . The horse-horse control unit 3 predicts in-plane prediction: for all modes included in the coding mode 7, the evaluation == 13 ' according to the evaluation result, the coding efficiency is best obtained by the coding mode 7 The V-Decision Department 3' is composed of prediction parameters 1〇, 18 and 2:. The second and second codes are controlled by the optimal prediction parameters lQa, 18a and the most = parameter 2Ga in the optimum coding mode 7a. The order of each decision is described later. money

心:及二同上述,在圖框内預測模式的情況時,預測 m及最佳預測參數服包含有晝面内 、J 方面,在圖框間預測模式的情況中 、、式。另- 測參數18a包含有移動向量、 數18及最佳預 的識別號碼(參考晝像指標)等。°里所&之參考晝像 另外,壓縮參數20及最佳壓縮參數2〇 區塊大小、量化階大小等。 匕3有變換 依此決定順序的結果,編碼控制 的巨集區塊或子區塊之最佳編碼 ^竭對象 10a和l8a、最佳壓縮參數2〇a輸出到可二产預測參數 另外,編碼控制部3係將壓縮參數S3碼部仏 _出到變換-量化部19及反量化、反=『縮參數 變換-量化部19,係從對應編螞楹 模式而生成之複數個預測差分訊號133的全部 控制部3所蚊㈣她_%刪參 323005 14 201143459 18a而生成的預測畫像η和17相對應之預測差分訊號 13(以下’稱為最佳預測差分訊號㈣,並根據在編竭控 ,部3所決定之最佳壓縮參數2〇a的變換區塊大小,對該 最佳_差分訊號13a實施DGT(離散餘弦轉換)等^ 處理而算出變換係數,並且根據由編碼控制部3指示之最 佳壓縮參數20a的量化階大小,將該變換係數予以量 將屬於量化後的變換係數之壓縮資料21輸出到反 變換部22及可變長度編碼部23。 反^化-反變換部22,係使用最佳壓縮參數2 =了化部19輸入之壓縮資料21進行反量化由: 知反DCT等的反變換處理,生成預測差分訊號13a =實 解碼預測差分訊號2 4,且輸出到加法部2 5。 局°p 像二法:25 ’係將局部解碼預測差分訊號24與預測查 像或U或預測晝像17相加而生成局部解 =忠 將此局部解满全德如咕 里像就2 6, 存在全^ 象號輸出到迴路濾、波器27,並μ 成為圖樞内預測用的晝像訊號。…,,、旦像訊號26則 迴路濾波器27,係對由加法 像訊號26,進行預定遽波處理^入之局部解碼晝 晝像儲存在移動補㈣、目丨丨^ ,慮波處理後的局部解碼 29係成為移動補償;測用二 27所實施的瀘波声裡, 考"·像15。错由迴路濾波器 塊之局部解蝴訊㈣輸人 == 行集區 323005 201143459 可變長度編碼部23,係將由變換-量化部19輸出之壓 縮資料21、由編碼控制部3輸出之最佳編碼模式7 a、最佳 預測參數10a和18a、及最佳壓縮參數20a實施熵編碼, 而生成顯示這些編碼結果之位元流30。此外,最佳預測參 數10a和18a及最佳壓縮參數20a係以對應最佳編碼模式 7 a所指的編碼模式之單位來編碼。 如上所述,本第1實施形態的動晝像編碼裝置,係藉 由與編碼控制部3連動而使移動補償預測部9及變換-量化 部19分別動作,而得以決定獲得最佳當的編碼效率之編碼 模式、預測參數、壓縮參數(亦即,最佳編碼模式7a、最 佳預測參數10a和18a、最佳壓縮參數20a)。 此處,有關藉由編碼控制部3獲得最佳當的編碼效率 之編碼模式、預測參數、壓縮參數的決定順序,茲依1.預 測參數、2.壓縮參數、3.編碼模式的順序作說明。 1.預測參數的決定順序 此處是說明當編碼模式7為圖框間預測模式時,決定 包含該圖框間預測的移動向量、各移動向量所指之參考晝 像的識別號碼(參考晝像指標)等之預測參數18的順序。 在移動補償預測部9,係與編碼控制部3連動,而對 由編碼控制部3向移動補償預測部9指示的全部編碼模式 7(例如第2A圖或第2B圖所示之編碼模式組),分別決定預 測參數18。以下,說明其詳細的順序。 第3圖為顯示移動補償預測部9的内部構成之方塊 圖。第3圖所示的移動補償預測部9包含有移動補償區域 16 323005 201143459 ㈣部40、移動檢測部42、及内插畫像生成部43。另外, 就輸入資料而言,有:由編碼控制部3輸人的編碼模式7、 由切換部6輸人的巨集/子區塊晝像5、及由移動補償預 測圖框記憶體14輸入的參考畫像15。 移動補你區域分割部4〇,係因應由編碼控制部3指示 的編碼模式7,將由切換部6輸入的巨集/子區塊晝像5 分割成形成移動補償的單位之區塊,將此移動補償區塊晝 像41輸出到移動檢測部42。 内插畫像生成部43’係由移動補償預測圖框記憶體14 所儲存之1個圖框以上的參考晝像資料巾指㈣於生成預 測旦像之參考畫像15,移動檢測部42則在所指定的參考 晝像15上之預定移動搜尋範圍内檢測出移動向量44。此 外,移動向量之檢測則是與MPEG_4AVC規格等同樣地,藉 由虛擬樣本精度的移動向量來進行。此檢測方法係對參考 晝像所具有之像素資訊(稱為整數晝像),在整數像素之間 =内插運算而製作出虛_縣(像素),並將該虛擬的 ’像素)作為預測畫像來利用。在MPEG-4 AVC規格中, ΓνΓ=18像讀度的虛擬樣本加以·此外,在㈣-4 在水平扣’j/2像素精度的虛_本可藉由在垂直方向或 6個整數像素之6個分接頭“。_渡波器 生成 鄰接之1/2 I/成。1/4像素精度的虛擬樣本係藉由使用 成。素或整數像素的平均運㈣ 本第1實施形態的移動補償預測部”’也是由内插 323005 201143459 晝像生成部43生成與由移動檢料9所指 44的精度相對應的虛擬像素的預測晝像奶。 : 擬像素精度的移動向量檢測順序之—例。 ♦出虛 (移動向量檢測順序I) 由内插晝像生成部43生成相對於處在移 區塊晝像41之料軸搜職圍内之 == 動向量之顏錄45。時數像素精度生成的預=像= (預測晝像⑺係輪㈣減法部12,藉由減法部i2 ==:41(巨集,子區塊晝像5)減去而成為預 控制部3對預測差分訊號13及整數 ! (預測參數18)進行預測效率的評 ::預測效率的評估係例如由下式(m十算預測成本L,在 =移動搜#_決定使刪成本L成為 素精度的移動向量44。 堂歎像 (1)Heart: and the above, in the case of the prediction mode in the frame, the prediction m and the optimal prediction parameter include the in-plane and J-direction, and the mode in the inter-frame prediction mode. In addition, the measurement parameter 18a includes a motion vector, a number 18, and an optimal pre-identification number (refer to the imaging indicator). Reference image of the & in addition, compression parameter 20 and optimal compression parameters 2 〇 block size, quantization step size, and so on.匕3 has the result of transforming the order determined by this, the optimal encoding of the macroblock or sub-block of the encoding control, the optimal encoding parameters 10a and l8a, the optimal compression parameter 2〇a output to the second-prediction prediction parameter, and the encoding The control unit 3 outputs the compression parameter S3 code unit to the conversion-quantization unit 19 and the inverse quantization and inverse=the reduced parameter conversion-quantization unit 19, and the plurality of prediction difference signals 133 generated from the corresponding seeding mode. All the control units 3 mosquitoes (4) her _% delete the 323005 14 201143459 18a and the predicted image η and 17 corresponding to the predicted differential signal 13 (hereinafter referred to as the best predictive differential signal (4), and according to the editor The transform block size of the optimum compression parameter 2〇a determined by the unit 3 is subjected to DGT (Discrete Cosine Transform) processing or the like for the optimum_differential signal 13a to calculate a transform coefficient, and is instructed by the encoding control unit 3 The quantization step size of the optimum compression parameter 20a is used to output the compressed data 21 belonging to the quantized transform coefficient to the inverse transform unit 22 and the variable length coding unit 23. The inverse-inverse transform unit 22 , using the best compression The number 2 = the compressed data 21 input from the unit 19 is inversely quantized by the inverse transform processing such as the inverse DCT, and the predicted difference signal 13a = the real decoded prediction difference signal 2 4 is generated and output to the addition unit 25. p is like the second method: 25' is to add the local decoded prediction difference signal 24 to the prediction image or the U or the prediction image 17 to generate a local solution = loyalty to this partial solution to the full German image such as the image 2 2 exists in the whole ^ The image is output to the loop filter, the waver 27, and μ is the image signal for the intra-frame prediction. ...,,, the image signal 26 is the loop filter 27, and the pair is subjected to the predetermined image wave 26 for predetermined chopping. The local decoding image processed by the input is stored in the mobile complement (4), the target ^, and the local decoding 29 after the wave processing becomes the motion compensation; the measurement is performed in the chopping sound of the 27th, and the image is tested. 15. The partial solution of the loop filter block (4) input == line set area 323005 201143459 The variable length coding unit 23 outputs the compressed data 21 outputted by the transform-quantization unit 19 and output by the code control unit 3. Optimal coding mode 7 a, best prediction parameters 10a and 18a, and optimal pressure The parameter 20a performs entropy coding, and generates a bit stream 30 displaying the result of the encoding. Further, the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a are in units of coding modes corresponding to the optimal coding mode 7a. In the moving image encoding device of the first embodiment, the motion compensation prediction unit 9 and the conversion-quantization unit 19 are operated in conjunction with the encoding control unit 3, and it is determined that the optimum is obtained. The coding mode, prediction parameters, compression parameters (i.e., optimal coding mode 7a, optimal prediction parameters 10a and 18a, optimal compression parameters 20a) of coding efficiency. Here, the order of determining the coding mode, the prediction parameter, and the compression parameter for obtaining the optimum coding efficiency by the coding control unit 3 is explained by the order of 1. prediction parameter, 2. compression parameter, and 3. coding mode. . 1. Determining the order of the prediction parameters Here, when the coding mode 7 is the inter-frame prediction mode, the motion vector including the inter-frame prediction and the identification number of the reference image indicated by each motion vector are determined (refer to the image). The order of the prediction parameters 18 such as the indicator). The motion compensation prediction unit 9 is associated with the coding control unit 3, and all the coding modes 7 instructed by the coding control unit 3 to the motion compensation prediction unit 9 (for example, the coding mode group shown in FIG. 2A or FIG. 2B). Determine the prediction parameter 18 separately. Hereinafter, the detailed order will be described. Fig. 3 is a block diagram showing the internal configuration of the motion compensation prediction unit 9. The motion compensation prediction unit 9 shown in Fig. 3 includes a motion compensation area 16 323005 201143459 (four) unit 40, a motion detecting unit 42, and an inner illustrator generating unit 43. Further, the input data includes an encoding mode 7 input by the encoding control unit 3, a macro/subblock image 5 input by the switching unit 6, and input from the motion compensation prediction frame memory 14. Reference portrait 15. In the coding mode 7 instructed by the coding control unit 3, the macro/sub-block image 5 input by the switching unit 6 is divided into blocks forming a unit of motion compensation, and this is determined by the coding mode 7 indicated by the coding control unit 3. The motion compensation block block 41 is output to the motion detecting portion 42. The internal illustrator image generating unit 43' is a reference image data towel finger (four) of one frame or more stored by the motion compensation prediction frame memory 14 to generate a reference image 15 of the predicted image, and the motion detecting unit 42 is located at A motion vector 44 is detected within a predetermined motion search range on the designated reference image 15. Further, the detection of the motion vector is performed by the motion vector of the virtual sample precision as in the MPEG_4 AVC standard or the like. This detection method is based on the pixel information (called an integer image) of the reference image, and creates a virtual_count (pixel) between integer pixels = interpolation, and uses the virtual 'pixel' as a prediction. Image to use. In the MPEG-4 AVC specification, ΓνΓ=18 is used as a virtual sample of readness. In addition, in (4)-4, the horizontal yd 'j/2 pixel precision imaginary can be used in the vertical direction or 6 integer pixels. Six taps ". _ ferrites generate 1/2 I / contiguous. 1/4 pixel precision virtual samples are obtained by using averaging or integer pixels. (4) Motion compensated prediction of the first embodiment The portion "' is also generated by the interpolation 323005 201143459. The artifact generation unit 43 generates the predicted image milk of the virtual pixel corresponding to the accuracy of the finger 44 indicated by the moving sample 9. : Example of moving vector detection order of pseudo pixel precision. ♦ Virtual (moving vector detection sequence I) The interpolated image generating unit 43 generates a photo book 45 of the == motion vector with respect to the search axis of the shift block image 41. Pre-image = (predicted image (7) wheel (four) subtraction unit 12, which is subtracted by the subtraction unit i2 ==:41 (major, sub-block image 5), becomes the pre-control unit 3 The prediction efficiency of the prediction difference signal 13 and the integer! (prediction parameter 18): The evaluation of the prediction efficiency is determined by, for example, the following formula (m-calculated prediction cost L, and = mobile search #_ Accurate movement vector 44. Donation (1)

Ji = Di + λ Ri 2處’則是設成使用Dl、Rl作為評估值。L為預測差 =訊號之巨集區塊内或子區塊内的絕對值和⑽)4為移 動向量及此㈣向量所指之參考晝像_職碼的推定碼 量’ λ·為正數。 此卜田要求取雜值Ri時,移動向量的碼量係使用 附近的移動向量的值,來預測第2A圖或第2b圖的各模式 之^向量的值’根據概率分布將預測差分值藉由烟編瑪 而求取,或進彳τ相當於此的碼量推定而求取。 第4圖為說明第2B圖所示之各編碼模式7的移動向量 323005 18 201143459 的預測值(以下稱為預測向量)的決定方法之圖。第4圖 中、mb_mode 0、sub_mb_mode 1等的矩形區塊中,使用位 於其左橫(位置A)、上(位置B)、右上(位置C)之各個已編 碼完成的移動向量MVa、MVb、MVc,由下式(2)算出該矩形 區塊的預測向量PMV。medianO為對應於中值濾波處理, 而輸出移動向量Mva、MVb、MVc的中值之函數。 PMV = median (MVa, MVb, MVc) (2) 另一方面’在具有對角形狀之對角區塊!nb_mode 1、 sub_mb_mode 2 、 mb_mode 2 、 sub—mb—mode 3 、 mb—mode 3 、 sub—mb一mode 4、mb_mode 4、sub_mb」node 5 的情況中’ 由於形成為可適用與矩形區塊同樣的處理,故可因應對角 形狀以變更取得中值之位置A、B、C的位置。藉此,不用 變更算出預測向量PMV的方法本身,即可因應各移動向量 分配區域的形狀而算出,並可使評估值匕的成本抑制得很 小〇 (移動向量檢測順序II) 内插畫像生成部43係對位於依上述「移動向量檢測順 序I」决疋之整數像素精度的移動向量的周圍之五個以上 的】「/2像素精度的移動向量44,生成預測書像45。以下, 與移=向量檢測順序!」同樣地,以Μ像素精度生成 畫像17) ’藉由減法部12由移動補償 子區塊畫像5)予以減去,而獲得 13及二像素"者’編碼控制部3對此預測差分訊號 13及_錄度__44(_參數⑻進行賴 323005 19 201143459 效率的評估’由位於整數像素精度的移動向量的周圍之1 精度的移動向量中決定使預測成本l成 為竑小之1/2像素精度的移動向量44。 (移動向量檢測順序111) 移動部3及移動補償預測部9對1/4像素精度的 -里也同樣地’由位於依上述「移動向量檢測順序Η」 度的移動向量的周圍之1個以上的1/4 像素定使預測成 (移動向量檢測順序Iv) —以下同樣地,編碼控制部3及移動補償預測部9合進 仃心測虛擬像素精度的移動向量,直到成為預定精度為止。 ^卜,本實施形態中,係進行檢測虛擬像素精度的移 ㈣直到成為預定精度為止,但在例如先決定相對於 . 的閾值,而使預測成本Jl成為比預定閾值小的情 向L二在成為預定精度之前停止虛擬像素精度的移動 框外移,量也可為參考以參考圖框大小規定之圖 梅冰_、。細情況巾,必須生成®框外的像素。就圖 ^之^素的-種生成方法而言,有以畫面端的像素填充 此外,在輪入影像訊號1之各圖框的圖樞大小並非巨 =區塊大小的整數部時,則取代輸人影像1的各圖框,而 别入擴充圖框的情況時,擴充為巨集區塊大小的整數倍的 323005 20 201143459 大小(擴充圖框的大小)乃成為參考圖框的圖框大小。另一 方面,在不參考擴充區域的局部解碼部分,只參考相對於 原來的圖框之局部解碼部分作為圖框内的像素的情況中, 參考圖框的圖框大小係成為原來的輸入影像訊號之圖框大 小0 如此,移動補償預測部9係對於將巨集/子區塊晝像 5内分割成編碼模式7所示之成為移動補償的單位之區塊 單位之移動補償區域區塊畫像41,輸出各個所決定之預定 精度的虛擬像素精度之移動向量及其移動向量所指之參考 晝像的識別號碼作為預測參數18。另外,移動補償預測部 9將藉由該預測參數18生成之預測晝像45(預測畫像17) 輸出到減法部12,並藉由減法部12由巨集/子區塊晝像5 予以減去而獲得預測差分訊號13。由減法部12輸出之預 測差分訊號13係輸出到變換-量化部19。 2.壓縮參數的決定順序 此處是說明將根據在上述「1.預測參數的決定順序」 按每一編碼模式7決定之預測參數18所生成之預測差分訊 號13用於變換-量化處理時之壓縮參數20(變換區塊大小) 的決定順序。 第5圖為顯示因應第2B圖所示的編碼模式7之變換區 塊大小的-適應化例子之圖。第5圖中,就MxL像素區塊而 言,以32x32像素區塊為例。當編碼模式7所指定的模式 為mb_mode 0至6時,變換區塊大小可適應性地選擇16x 16或8x8像素的任何一方。當編碼模式7為mb_mode 7時, 21 323005 201143459 ㈣㈣大小可按每-4分割巨集區塊所得之腕6像素 子區塊’從8x8或4x4像素中適應性地選擇。 此外,可按母一各個編碼模式而選擇之變換區塊大小 係可由藉編碼模式而均等分割之子區塊大小以下的任 思矩形區塊大小中來定義。 第6圖為顯示因應第找圖所示 塊大小的另—適應化例 模^之夂換& 4 7 Μ社—λα m 圖第6圖的例子中,當編碼模 "日疋的模式為前述的mb—mode 〇、5、6時,就選 擇的變換區塊大小而言,除_6 ;字= 屬於移動補償的單位之子”卜選擇與 小。在一。的情:對二之變換區塊大 中適應性地選擇。在mb X8、32x32像素 啊素中適應性地選擇= 舰6、8x8、32xlfi德Γ! ' ―崎6的情況,可由 了圖亍 、中適應性地選擇。另外,雖省略 2可由二 像料適應性地選擇,在 _素中選擇,對於矩形的=:二_、 選擇適應化。 —、】由8x8、4x4像素中 模式==係將對應第5圖及第6圖所例示的編喝 、之變換區塊大小組作為壓縮參數2〇。 此外’第5圖及第6圖的例子中,田處口隹广仏 碼模式7而預先Mm 因應巨集區塊的編 单位或子區塊單位可適應性地選擇,但同樣 323005 22 201143459 :割巨集區塊所得之子區塊的編碼模 (mode !至8等) :广圖的_ 擇或進一步分割子區塊的區塊草位可適應性地: 同樣地,編碼控制部3在使用第2A _示的 ^ ,預先決定因應該編碼模式7的變換區塊电二 成可適應性地選擇即可。 、开 變換-量化部19係與編碼控制部3連動,以 大小4所指定之子區塊單位、或以因應編碼模式7進一歩 :割該巨集區塊單位之子區塊單位,由變換區塊大小中決 疋最佳當的變換區塊大小。以下,說明其詳細的順序。 第7圖為顯示變換-量化部19的内部構成之方塊圖。 第7圖所示的變換-量化部19包含變換區塊大小分割部 50、變換部52、及量化部54。另外,就輸入資料而言,有 由編碼控制部3輸入之壓縮參數20(變換區塊大小及量化 1¾大小等)、及由編碼控制部3輸入之預測差分訊號Η。 變換區塊大小分割部50係將屬於決定變換區塊大小 的對象之每一巨集區塊或子區塊的預測差分訊號13變換 成因應壓縮參數20的變換區塊大小之區塊,作為變換對象 區塊51而輸出到變換部52。 此外’在以壓縮參數20對1個巨集區塊或子區塊選擇 才曰疋複數個憂換區塊大小的情況時,係將各.變換區塊大小 的變換對象區塊51依序輸出給變換部52。 變換部52係對於所輸入的變換對象區塊51,依照 23 323005 201143459 DCT、以整數近似DCT的變換係數之整數變換、哈達瑪捭變 換(Hadamard transform)等變換方式實施變換^理了 =生 成的變換係數53輸出到量化部54。 量化部54則依照由編竭控制部3指示之壓縮參數2〇 的量化階大小,將所輸入的變換健53量化,將屬化 後的變換係數之壓縮資料21輸出到反量化_反變、^及 編碼控制部3。 此夕卜,變換部52及量化部54,在以壓縮參數㈣^ 固=集區塊或子區塊選擇指定有複數個變換 :中,會對這些全部的變換區塊大小進行上述嶋: 處理,並輸出各個壓縮資料21。 由量化部54輸出之壓縮資料21係輸入至 ’用於對壓縮參數2Q的㈣區塊大小之編碼效率進行評 二係使用相對於就編碼模式7所包含之各 壓各者可選擇的全部編換區塊大小各者所獲得之 縮貝料21’由例如下式⑶計算編縣本了2,並選擇使 、碼成本L成為最小之變換區塊大小。 j2==D2+ AR2 (3) 胳# ^處,疋使用D2、R2作為評估值。就D2而言,係使用 舛!^換區塊大小所獲得的壓縮資料21輸入到反量化'反 栌、卩22,在將壓縮資料21實施反量化_反變換處理所獲 ^局部解碼預測差分滅24加人預測畫像17所獲得之 ;:碼晝像訊號26、與巨集/子區塊畫像5之間的畸變 等。就R2而言’所使用之碼量係對變換區塊大小所 24 323005 201143459 獲得之壓縮資料2卜與壓縮資料21之相關編碼模式7和 預測參數10、18,科變長度編碼部23實際編碼所獲得 之碼量(或推定碼量)。 編碼控制部3在依後述的「3·編碼模式的決定順序」 決定最佳編碼模式7讀,龍於所決定的最佳編碼模 式7a之變換區塊大小,包含在最佳壓縮參數咖中,並輸 出到可變長度編碼部23。可變長度編碼部23在將此最佳 壓縮參數20a進行熵編碼之後多工化為位元流3〇。 此處’變換區塊大小由按照巨集區塊或子區塊的最佳 編碼模式7a預先Μ定義的變換區塊大小組(例示於第5 圖及第6圖)中選擇’所以按每__變換區塊大小組對該組中 所包含之變換區塊大小,分配ID等的識別資訊,並將該識 別資訊進行熵編碼作為變換區塊大小的資訊,再多工化為 位70流3G即可。在此情況中,也在解碼m賴定編換區 塊大小組的識別資訊。^變換區塊大小組所包含之變 :奐區,大小為1個的情況時,由於在解碼裝置侧可由袓 (set)中自動決定變換區塊大小’所以不必在 胳 變換區塊大小的識別資訊多工化為位元流3〇。… 3.編碼模式的決定順序 依上述1.預測參數的決定順序 決定順序」,對編碼控㈣。H縮參數的 決定預測參數1G、18 ^ 3 ^的全部編碼模式7,分別 使用將使用各個編嘴槿時,編碼控制部3即 縮參數2〇所獲得知之與那時的預測參數1〇、18及壓 預/則差分訊號13進一歩變換-量化所 323005 25 201143459 獲得之壓縮資料21,由上式(3)求 碼模式7,並選擇其編碼模式7 、,馬成本㈣小的編 碼模式7a。 巨集區塊的最佳編 此外,也可由在第2Α圖或第 ^ (skip mode)^ ^ ^ 決定最佳― 才曰將在、.扁馬羞置側使用鄰接的巨 =:償_畫像作為局部二= i工化為位元* M式料的_晝像絲縮參數並 誉側,係忙盥1’故可以抑制碼量而進行編碼。在解崎裝 戍子區二移同樣的順序使用鄰接的巨集區塊 像訊號予以輪^向里把以移動福償的預測畫像作為解蜗畫 佳隼ίΐ大入影像訊號1之各圖框的圖框大小非為 巨集h塊大小的整數件聋 q 輸入擴充圖框的情況;輸入影像訊號1的各圖框 0區塊,Θ n ,、可對&含擴錢域的巨集區塊 成子尾I、選擇跳過模⑽ 模式’以抑制擴充區域所耗費的碼量。疋編碼 編碼控制部3將Μ山 序」、「2·壓縮參數的、上的Γ1.預測參數的決定順 Μ定之貝序」、「3.編補式的決定順序」 ϋ變長;效率所得之最佳編碼模式7a輸出 :預測二 F句敢佳預測參數1〇 18 同樣對應於最佳編碼模式7a之壓縮參數20作^= 323005 26 201143459 參數20a,輸出到可變長度編碼部23。可變長度編碼部23 會將最佳編碼模式7a、最佳預測參數10a、18a及最佳壓 縮參數20a進行熵解碼,並多工化為位元流30。 另外,由根據所決定的最佳編碼模式7a及最佳預測參 數10a、18a及最佳壓縮參數20a之預測晝像11、17所獲 得之最佳預測差分訊號13a,如同上述,係由變換-量化部 19予以變換-量化而成為壓縮資料21。此壓縮資料21在可 變長度編碼部23予以熵編碼,並多工化為位元流30。另 外,此壓縮資料21經過反量化-反變換部22、加法部25 而成為局部解碼晝像訊號26,並輸入到迴路濾波部27。 其次,說明本第1實施形態的動晝像解碼裝置。 第8圖為表示本發明第1實施形態之動晝像解碼裝置 的構成之方塊圖。第8圖所示的動畫像解碼裝置包括:可 變長度解碼部61,係由位元流60以巨集區塊單位將最佳 編碼模式62進行熵解碼,並且以按照該經解碼的最佳編碼 模式62所分割的巨集區塊或子區塊單位,將最佳預測參數 63、壓縮資料64、最佳壓縮參數65進行熵解碼;晝面内 預測部69,係於最佳預測參數63輸入時,使用該最佳預 測參數63所包含的晝面内預測模式及晝面内預測用記憶 體77所儲存的解碼畫像74a生成預測晝像71 ;移動補償 預測部70,係於最佳預測參數63輸入時,使用該最佳預 測參數63所包含的移動向量、及以該最佳預測參數63所 包含的參考晝像指標(index)所指定之移動補償預測圖框 記憶體75内的參考晝像76進行移動補償預測而生預測晝 27 323005 201143459 像72,·切換部68,係按照所解碼的最佳編碼桓 可變長度解碼部61解碼的最佳預測參數63輪 '式62,將 測部69或移動補償預測部70的任何一方;反^^畫面内預 部66,係使用最佳壓縮參數65,對壓縮資料〜反變換 化及反變換處理,並生成預測差分訊號值67; 4進行反量 係在預測差分訊號值67加入晝面内預測部69去。卩73, 預剩部7G的任何^方輸出的預測晝像71 ,動補償 7晝4像、74;晝面内預測用記憶體77,係用以儲存 成,逦路濾波部78,係將解碼晝像74進行濾波處理而生 再生晝像79 ;及移動補償預測圖框記憶體,係用以儲 存再生晝像79» 擎可變長度解碼部61,係本第1實施形態的動晝像解碼 接收到位元流60時,即將該位元流60進行熵解碼處 位將由1個圖框以上的圖像所構成的順序單位或圖像單 塊巨集區塊大小及圖框大小進行解碼。此外,在巨集區 况日f丨不直接多工化為位元流而以設定檔等予以規定的情 訊,係根據以順序單位由位元流解碼之設定檔的識別資 小及來决定E集區塊大小。且以各圖框的解碼巨集區塊大 塊數解碼圖框大小為基礎,來決定各圖框所包含的巨集區 最#並將圖_包含之各巨集區塊的最佳編碼模式62、 料)、預剛參數63、壓縮資料64(亦即,量化變換係數資 等進^^壓縮參數65(變換區塊大小資訊、量化階大小) 匕外,在解碼裴置側解碼的最佳編碼模式62、最佳預 323005 28 201143459 ^數63 I缩貝枓64、最佳壓缩參數 碼裝置側編碼的最佳編㈣、最佳 18a、壓縮貢料2卜最佳壓縮參數咖。 和 竭裝==?7::r塊大小資訊係為在編 義-換區塊大小組中指定所選擇==:: r::變=:為由最佳編碼模式-及二s 區塊大小/…、#訊指定巨集區塊或子區塊的變換 反量化-反變換部66,仫你ra丄 入之壓縮資料64及最佳壓::用數^ 資:指定的區塊單位’進行反量化_反變換處理,、= = 測差分訊號解碼值67。 Jt异出預 另外,可變長度解碼部Μ,係 曰 考已解碼完成的周邊區塊的移動向旦°置解碼時,參 處理來決定預測向量,並藉由加^ 曰第4圖所示的 差分值,而獲得移動向量:解碼值=流6°解石馬的預測 1移動向量崎錢包含錢佳獅 切換部68為因應最佳編碼模式62來切換 =63的輸入目的地之開關。此切換部58,在由可 =(=Γ由可變長度解竭部61輸入的最佳預測參 數63(晝面内預測模式)輸出到晝面内預測部69,而在最佳 323005 29 201143459 編碼模式62顯示圖框間預測模式的情況時,將最佳預測參 數63(移動向量、各移動向量所指之參考晝像的識別號碼 (參考晝像指標)等),輸出到移動補償預測部70。 晝面内預測部69,係參考晝面内預測用記憶體77所 儲存之圖框内的解碼晝像(圖框内的已解碼完成晝像訊號) 74a,生成與最佳預測參數63所指示的畫面内預測模式對 應之預測晝像71並予以輸出。 此外,以晝面内預測部69生成預測晝像71的方法與 編碼裝置側之晝面内預測部8的動作相同,不過晝面内預 測部8係生成與編碼模式7所指示的全部晝面内預測模式 對應之預測晝像11,相對於此,此晝面内預測部69則只 生成與最佳編碼模式62所指示的晝面内預測模式對應之 預測畫像71,在這點上,兩者並不相同。 移動補償預測部70,係根據所輸入的最佳預測參數63 所指示的移動向量、參考晝像指標等,由移動補償預測圖 框記憶體75所儲存之1個圖框以上的參考晝像76,生成 預測晝像72,並予以輸出。 此外,以移動補償預測部70生成預測晝像72的方法, 在編碼裝置側之移動補償預測部9的動作中,除了由複數 個參考畫像搜尋移動向量的處理(相當於第3圖所示的移 動檢側部42及内插晝像生成部43的動作)之外,係依照由 可變長度解碼部61提供的最佳預測參數63,只進行生成 預測晝像72的處理。移動補償預測部70,係與編碼裝置 同樣地,在移動向量參考以參考圖框大小規定之圖框外的 30 323005 201143459 像素的情況時,以由晝面端的像素填充圖框外的像素等的 方法來生成預測晝像72。此外,參考圖框大小有以解碼圖 框大小擴充到成為解碼巨集區塊大小的整數倍的大小來規 定的情況、及以解碼圖框大小來規定的情況,且依與編碼 裝置同樣的順序來決定參考圖框大小。 加法部73,係將預測晝像71或預測晝像72的任何一 方、與由反量化-反變換部66輸出的預測差分訊號值67相 加而生成解碼晝像7 4。 此解碼晝像74,由於係作為用來生成以後的巨集區塊 的畫面内預測畫像之參考晝像(解碼畫像74a)使用,故予 以儲存在晝面内預測用記憶體77,並且輸入迴路濾波部 78。 迴路濾波部78,係進行與編碼裝置側的迴路濾波部27 相同的動作,以生成再生畫像79,並由此動畫像解碼裝置 輸出。另外,此再生裝置79由於係作為用生成以後的預測 晝像之參考晝像76使用,故予以儲存在移動補償預測圖框 記憶體75。此外,圖框内的全部巨集區塊解碼後所獲得之 再生晝像的大小為巨集區塊大小的整數倍的大小。在再生 晝像的大小比被輸入編碼裝置的影像訊號之各圖框的圖框 大小所對應之解碼圖框大小為大的情況中,再生晝像在水 平方向上或在垂直方向上包含有擴張區域。在此情況,由 再生晝像除去擴充區域部分的解碼畫像後之解碼畫像係由 解碼裝置輸出。 此外,在參考圖框大小以解碼圖框大小予以規定的情 31 323005 201143459 況時’移動補償預測圖框記憶體75所儲存的再生畫像之擴 域::的解碼畫像’在以後的預測晝像生成中則不予 * ’也可形成為將由再生晝像除去擴充區域部分 的解UI·後之解《I畫像儲存在移動補償預測圖框記憶體 75 〇 由X上可知,依據第丨實施形態的動晝像編碼裝置之 構f係對於因應巨集區塊的編碼模式7而分割之巨集/ :區塊晝$ 5 ’將因應巨錢塊或子區塊的大小而預先設 疋匕3複數個、憂換區塊大小之變換區塊纟且,編碼控制部3 則=變換區塊大小財,將編碼效率成為最佳當的i個變 ,區塊大小包含在最佳壓縮參數2Ga並指示給變換-量化 :而i_換量化部19則將最佳預測差分訊號13a分割 成最佳壓縮參數2〇a所包含之變換區塊大小的區塊,並進 仃變換及量化處理,以生成壓縮資料2卜所以,相較於變 ^區鬼大小組並不拘巨集區塊或子區塊的大小均予固定之 習知方法,可以同等的碼量使編碼影像的品質提高。 另外,因可變長度編碼部23由變換區塊大小組中因鹿 編碼模式7而適應性地選擇之變換區塊大小予以多工化^ t疋流30的方式構成’所以,與此相對應地,第1實施形 先'的動晝像解碼裝置之構成中,可變長度解碼部61係以巨 集區塊或子區塊單位由位元流6〇將最佳壓縮參數阽解 且反畺化-反變換部66根據此最佳壓縮參數65所包含 之變換區塊大小資絲決定變換區塊大小,独該巨集區 塊大小的區塊單位將壓縮資料64進行反變換及反量化處 323005 32 201143459 理因此由於可從動晝像解石馬裝ί與動畫像編碼裝置同 樣地被定義的變換區塊大小組中遂擇編喝裝置侧使用的 菱換區塊大h ’並將壓縮資料解碼,所以可將經由第i實 把升凡的動ί像解碼裝置解竭的位$流正確地解碼。 (第2實施形態) ㈣形態中’係說明上述第1實施形態的動晝 、之可憂長度編碼部23的變形例、及同樣上述第 1實施形態的動晝像解碼裝置之可變長度㈣部61的變形 例0Ji = Di + λ Ri 2 is set to use Dl, Rl as the evaluation value. L is the prediction difference = the absolute value in the macroblock of the signal or in the sub-block and (10)) 4 is the motion vector and the estimated code quantity λ· of the reference image_ job code indicated by the (four) vector is a positive number. When the current value of the mobile vector is taken, the code quantity of the motion vector uses the value of the nearby motion vector to predict the value of the vector of each mode of the 2A picture or the 2b picture, and the predicted difference value is borrowed according to the probability distribution. It is determined by the smoke code, or the input 彳τ is equivalent to the estimation of the code amount. Fig. 4 is a view for explaining a method of determining a predicted value (hereinafter referred to as a predictive vector) of a motion vector 323005 18 201143459 of each coding mode 7 shown in Fig. 2B. In the rectangular block of mb_mode 0, sub_mb_mode 1 and the like in FIG. 4, the respective encoded motion vectors MVa, MVb located at the left horizontal (position A), upper (position B), and upper right (position C) are used. MVc, the prediction vector PMV of the rectangular block is calculated by the following equation (2). The medianO is a function corresponding to the median filtering process and outputs the median of the motion vectors Mva, MVb, and MVc. PMV = median (MVa, MVb, MVc) (2) On the other hand 'on the diagonal block with diagonal shape! Nb_mode 1, sub_mb_mode 2, mb_mode 2, sub-mb_mode 3, mb_mode 3, sub-mb_mode 4, mb_mode 4, sub_mb "in the case of node 5" is formed to be applicable to the same rectangular block Since the processing is performed, the positions of the positions A, B, and C at which the median value is obtained can be changed by coping with the angular shape. Thereby, the method itself for calculating the prediction vector PMV can be changed without changing the shape of each motion vector allocation region, and the cost of the evaluation value 匕 can be suppressed to a small value (moving vector detection sequence II). The part 43 generates a predicted book image 45 for five or more "/2 pixel precision motion vectors 44 located around the motion vector of the integer pixel precision determined by the "moving vector detection order I". In the same manner, the image 17) is generated by the subtraction unit 12 by the subtraction unit 12, and the subtraction unit 12 is subtracted from the motion compensation sub-block image 5 to obtain the 13- and 2-pixel " 3 The prediction difference signal 13 and the _recording degree __44 (the evaluation of the efficiency of the _parameter (8) lai 323005 19 201143459' is determined by the motion vector of 1 precision around the motion vector of the integer pixel precision, so that the prediction cost l becomes 竑a motion vector 44 having a small 1/2 pixel precision. (Moving Vector Detection Sequence 111) The moving unit 3 and the motion compensation prediction unit 9 also have the same 1/4 pixel accuracy as described above. One or more 1/4 pixels around the motion vector of the motion vector detection order are predicted to be (moved vector detection order Iv). In the same manner, the coding control unit 3 and the motion compensation prediction unit 9 are combined. The motion vector of the virtual pixel precision is measured until the predetermined accuracy is obtained. In the present embodiment, the shift of the virtual pixel accuracy is detected (four) until the predetermined accuracy is obtained, but for example, the threshold value relative to . The prediction cost J1 is set to be smaller than the predetermined threshold, and the movement frame L is stopped before the predetermined accuracy is reached. The amount may also be referred to by reference to the frame size. The pixels outside the frame must be generated. In terms of the generation method of the image, there is a pixel filled at the end of the screen. In addition, the size of the frame of each frame in which the image signal 1 is rotated is not huge = block When the integer part of the size is substituted, the frame of the input image 1 is replaced, and when the frame is expanded, the size is 323005 20 201143459 which is an integral multiple of the size of the macro block (expanded frame) The size is the frame size of the reference frame. On the other hand, in the case of not referring to the local decoding portion of the extended region, only the local decoded portion relative to the original frame is referred to as a pixel in the frame, The frame size of the frame is the frame size of the original input image signal. Thus, the motion compensation prediction unit 9 divides the macro/sub-block image into the coding mode 7 and becomes the unit of motion compensation. The motion compensation area block image 41 of the block unit outputs the motion vector of the virtual pixel precision of each predetermined precision determined and the identification number of the reference image indicated by the motion vector as the prediction parameter 18. Further, the motion compensation prediction The portion 9 outputs the prediction artifact 45 (predicted image 17) generated by the prediction parameter 18 to the subtraction unit 12, and subtracts the macro/subblock image 5 from the subtraction unit 12 to obtain a prediction difference signal. 13. The prediction difference signal 13 output from the subtraction unit 12 is output to the transform-quantization unit 19. 2. Determination procedure of compression parameters Here, the prediction difference signal 13 generated based on the prediction parameter 18 determined for each coding mode 7 in the above-mentioned "1. Determination order of prediction parameters" is used for conversion-quantization processing. The order in which the compression parameter 20 (transform block size) is determined. Fig. 5 is a view showing an adaptation example of the conversion block size in accordance with the coding mode 7 shown in Fig. 2B. In Fig. 5, for the MxL pixel block, a 32x32 pixel block is taken as an example. When the mode specified by the coding mode 7 is mb_mode 0 to 6, the transform block size can adaptively select either of the 16x 16 or 8x8 pixels. When the coding mode 7 is mb_mode 7, 21 323005 201143459 (4) (4) The size can be adaptively selected from 8x8 or 4x4 pixels by the wrist 6 pixel sub-block obtained by dividing the macroblock by -4. In addition, the transform block size that can be selected in the parent-specific coding mode can be defined by the size of the rectangular block below the sub-block size that is equally divided by the coding mode. Fig. 6 is a diagram showing the mode of the coding mode according to the size of the block shown in the first figure, and the mode of the coding mode " For the aforementioned mb-mode 〇, 5, 6, in terms of the size of the selected transform block, except _6; word = the sub-unit of the mobile compensation unit" Bu and the small. In the case of: the second The transform block is adaptively selected in large and medium. It is adaptively selected in mb X8 and 32x32 pixels = ship 6, 8x8, 32xlfi German! ' 崎6's situation can be selected by the map and the medium In addition, although omitting 2 can be adaptively selected by two image materials, it is selected in _ prime, and == two _, for the rectangle, the selection is adapted. -,] by 8x8, 4x4 pixels, the mode == system will correspond to the fifth The map and the illustrated block size group illustrated in Fig. 6 are used as the compression parameters 2〇. In addition, in the examples of Fig. 5 and Fig. 6, the field is in the mouth of the wide code mode 7 and the premise Mm is large. The unit or sub-block unit of the cluster block can be adaptively selected, but the same is 323005 22 201143459: the sub-block obtained by cutting the macro block The coding mode (mode! to 8, etc.): the _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ The conversion block of the coding mode 7 should be adaptively selected. The open conversion-quantization unit 19 is associated with the coding control unit 3, and is in the sub-block unit specified by the size 4 or in the coding mode. 7 into one: cut the sub-block unit of the macro block unit, and determine the optimal transform block size from the transform block size. The detailed sequence is explained below. Figure 7 shows the transform-quantization unit. The block diagram of the internal structure of Fig. 7. The transform-quantization unit 19 shown in Fig. 7 includes a transform block size division unit 50, a conversion unit 52, and a quantization unit 54. Further, as for the input data, there is an encoding control unit. The input compression parameter 20 (the conversion block size and the quantization size, etc.) and the prediction difference signal 输入 input by the coding control unit 3. The conversion block size division unit 50 is an object belonging to the object that determines the size of the conversion block. Prediction of a macroblock or subblock The sub-symbol 13 is converted into a block corresponding to the transform block size of the compression parameter 20, and is output as the transform target block 51 to the transform unit 52. Further 'in the compression parameter 20 for one macro block or sub-block selection In the case where a plurality of blocks are changed, the conversion block 51 of each transform block size is sequentially output to the conversion unit 52. The conversion unit 52 is for the input conversion target block 51, The transform coefficient 53 is output to the quantization unit 54 in accordance with 23 323005 201143459 DCT, integer transform such as integer transform DCT transform coefficient, and Hadamard transform. The quantization unit 54 quantizes the input transform key 53 according to the quantization step size of the compression parameter 2〇 indicated by the editing control unit 3, and outputs the compressed data 21 of the transformed transform coefficient to the inverse quantization_reversal. ^ and the encoding control unit 3. Further, the conversion unit 52 and the quantization unit 54 perform the above-described processing for all of the conversion block sizes in the compression parameter (4)^========================================================================= And output each compressed data 21. The compressed data 21 outputted by the quantizing unit 54 is input to 'the encoding efficiency for the (four) block size of the compression parameter 2Q, and is used in relation to all the voltages selected for the encoding mode 7 The shrinkage material 21' obtained by changing the block size is calculated by, for example, the following formula (3), and the size of the transform block whose code cost L is the smallest is selected. J2==D2+ AR2 (3) At #^, use D2 and R2 as evaluation values. In the case of D2, the compressed data 21 obtained by using the block size is input to the inverse quantization 'reverse 栌, 卩22, and the compressed data 21 is subjected to inverse quantization _ inverse transform processing to obtain the local decoded prediction difference. The image obtained by the 24th person prediction image 17 is obtained; the distortion between the code image signal 26 and the macro/subblock image 5 is obtained. For R2, 'the amount of code used is the size of the transform block. 24 323005 201143459 The compressed data obtained 2 and the compressed data 21 are related to the encoding mode 7 and the prediction parameters 10, 18, the scientific variable length encoding unit 23 is actually encoded. The amount of code obtained (or the estimated code size). The coding control unit 3 determines the optimum coding mode 7 in accordance with the "3. coding mode determination order" which will be described later, and the conversion block size of the determined optimal coding mode 7a is included in the optimum compression parameter coffee. And output to the variable length coding unit 23. The variable length coding unit 23 multiplexes the optimum compression parameter 20a into a bit stream 3〇 after entropy coding. Here, the 'transform block size' is selected by the transform block size group (exemplified in FIG. 5 and FIG. 6) defined in advance according to the best coding mode 7a of the macroblock or sub-block, so that each _ The _transform block size group assigns identification information such as ID to the transform block size included in the group, and entropy encodes the identification information as information for transforming the block size, and then multiplexes it into a bit stream 70G 3G Just fine. In this case, the identification information of the block size group is also decoded. ^Transform block size group includes changes: 奂 area, when the size is one, since the decoding device size can be automatically determined by the set on the decoding device side, it is not necessary to identify the block size. Information multiplexing into a bit stream 3 〇. ... 3. Determining the encoding mode According to the above 1. Predicting the order of the parameters, the order is determined, and the encoding is controlled (4). Determination of the H-contraction parameter All the coding modes 7 of the prediction parameters 1G and 18^3^, respectively, when the respective coding ports are used, the coding control unit 3 obtains the prediction parameter 1〇, which is obtained from the reduction parameter 2〇, 18 and the pressure pre-/difference signal 13 into a conversion-quantization 323005 25 201143459 obtained compression data 21, from the above equation (3) code mode 7, and select its coding mode 7 ,, horse cost (four) small coding mode 7a. In addition, it is also possible to determine the best in the 2nd map or the (skip mode) ^ ^ ^ ― 曰 在 . . . . . . . . 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接As a partial two = i industrialization into a bit * * M material _ 丝 丝 丝 丝 并 并 并 并 并 并 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝 丝In the same order, the second order of the 解 戍 戍 使用 使用 使用 使用 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 邻接 解 解 解 解 解 解 解 解 解 解 解 解 解 解 解 解The size of the frame is not the integer of the macro block h size 聋q input expansion frame; input frame 1 block 0 block, Θ n , , can be & The block becomes sub-tail I, and the skip mode (10) mode is selected to suppress the amount of code consumed by the extended area. The 疋 coding and coding control unit 3 increases the efficiency of the Μ 序 序 、 「 2 2 2 2 2 2 上 上 上 上 上 上 上 上 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 ; ; ; ; The optimum encoding mode 7a output: predicting the second F sentence prediction parameter 1 〇 18 The compression parameter 20 corresponding to the optimal encoding mode 7a is also subjected to the variable length encoding unit 23 as ^= 323005 26 201143459 parameter 20a. The variable length coding unit 23 entropy decodes the optimum coding mode 7a, the optimum prediction parameters 10a and 18a, and the optimum compression parameters 20a, and multiplexes them into the bit stream 30. Further, the optimum predicted difference signal 13a obtained from the predicted artifacts 11 and 17 based on the determined optimum coding mode 7a and the optimum prediction parameters 10a, 18a and the optimum compression parameters 20a is as described above. The quantization unit 19 converts and quantizes the compressed data 21. This compressed data 21 is entropy encoded in the variable length coding unit 23, and multiplexed into the bit stream 30. Further, the compressed data 21 is subjected to the inverse quantization-inverse conversion unit 22 and the addition unit 25 to become the local decoded artifact signal 26, and is input to the loop filter unit 27. Next, a moving image decoding device according to the first embodiment will be described. Figure 8 is a block diagram showing the configuration of a moving image decoding device according to the first embodiment of the present invention. The moving picture decoding apparatus shown in Fig. 8 includes a variable length decoding unit 61 that entropy decodes the optimum encoding mode 62 in a macroblock unit by the bit stream 60, and performs the best according to the decoding. The macroblock or sub-block unit divided by the coding mode 62 entropy decodes the optimal prediction parameter 63, the compressed data 64, and the optimal compression parameter 65; the in-plane prediction unit 69 is based on the optimal prediction parameter 63. At the time of input, the prediction image 71 is generated using the in-plane prediction mode included in the optimal prediction parameter 63 and the decoded image 74a stored in the in-plane prediction memory 77. The motion compensation prediction unit 70 is based on the optimal prediction. When the parameter 63 is input, the motion vector included in the optimal prediction parameter 63 and the reference in the motion compensation prediction frame memory 75 specified by the reference imaging index included in the optimal prediction parameter 63 are used. The image 76 performs motion compensation prediction and generates a prediction 昼 27 323005 201143459 Image 72, the switching unit 68 is the best prediction parameter 63 rounded by the decoded optimal code 桓 variable length decoding unit 61, and will Measuring unit 69 or Any one of the motion compensation prediction units 70; the intra-screen pre-stage 66 uses the optimal compression parameter 65 to process the compressed data to the inverse transform and the inverse transform, and generates a predicted differential signal value 67; The prediction difference signal value 67 is added to the in-plane prediction unit 69.卩73, the prediction image 71 of any of the pre-remaining parts 7G is output, the motion compensation 7昼4 image, 74; the in-plane prediction memory 77 is stored, the circuit filtering unit 78 is The decoded image 74 is subjected to filtering processing to generate a reproduced image 79; and the motion compensated prediction frame memory is used to store the reproduced image 79. The variable length decoding unit 61 is the moving image of the first embodiment. When the bit stream 60 is decoded and received, the bit stream 60 is subjected to entropy decoding, and the bit unit or the image block macro block size and the frame size formed by the image of one frame or more are decoded. In addition, in the case of the macro-region, the situation is not directly multiplexed into a bit stream and is defined by a profile, etc., based on the identification of the profile decoded by the bit stream in sequential units. E set block size. Based on the decoding block size of the decoding macroblocks of each frame, the optimal coding mode of each macroblock included in each macroblock is included in each frame. 62, material), pre-parameter parameter 63, compressed data 64 (that is, the quantized transform coefficient is equal to the compression parameter 65 (transform block size information, quantization step size) ,, the most decoded on the decoding side Good coding mode 62, best pre-323005 28 201143459 ^ number 63 I 枓 枓 64, the best compression parameter code device side coding best editing (four), best 18a, compression tribute 2 Bu optimal compression parameters. Exhaustion ==?7::r block size information is specified in the edit-change block size group ==:: r::change=: is the optimal coding mode - and the second s block size /..., #信号Specify the macro-block or sub-block transform inverse quantization-inverse transform unit 66, 压缩 you drag the compressed data 64 and the optimal pressure:: use the number: the specified block unit' Perform inverse quantization _ inverse transform processing, = = measure differential signal decoding value 67. Jt different output pre-addition, variable length decoding unit 曰, the system refers to the decoded peripheral When the movement of the block is decoded, the reference process determines the prediction vector, and by adding the difference value shown in FIG. 4, the motion vector is obtained: the decoded value = the predicted 1 movement of the stream 6° solution stone The vector raffinate includes a switch in which the money lion switching unit 68 switches the input destination of 63 in response to the optimum encoding mode 62. The switching unit 58 is input by the variable length decommissioning unit 61. The optimum prediction parameter 63 (in-plane prediction mode) is output to the in-plane prediction section 69, and when the optimal inter-frame prediction mode is displayed in the optimum 32350 29 201143459 coding mode 62, the optimal prediction parameter 63 (moving vector) is used. The identification number (reference image index) of the reference image indicated by each motion vector is output to the motion compensation prediction unit 70. The in-plane prediction unit 69 is stored by referring to the in-plane prediction memory 77. The decoded image (the decoded image signal in the frame) 74a in the frame generates a prediction image 71 corresponding to the intra prediction mode indicated by the optimal prediction parameter 63 and outputs it. The method of generating the prediction artifact 71 by the intra prediction unit 69 The operation of the in-plane prediction unit 8 on the side of the encoding device is the same, but the in-plane prediction unit 8 generates the prediction image 11 corresponding to all the in-plane prediction modes indicated by the encoding mode 7. The intra prediction unit 69 generates only the prediction image 71 corresponding to the in-plane prediction mode indicated by the optimal encoding mode 62. In this regard, the two are not the same. The motion compensation prediction unit 70 is based on the most input. The motion vector indicated by the good prediction parameter 63, the reference imaging index, and the like are generated by the reference image 76 of one frame or more stored in the motion compensation prediction frame memory 75, and are output. Further, the motion compensation prediction unit 70 generates a prediction image 72, and the motion compensation prediction unit 9 performs processing for searching for a motion vector from a plurality of reference images (corresponding to FIG. 3). In addition to the motion prediction side unit 42 and the operation of the interpolation artifact generation unit 43, only the processing for generating the prediction artifact 72 is performed in accordance with the optimal prediction parameter 63 supplied from the variable length decoding unit 61. Similarly to the encoding device, when the motion vector refers to the case of 30 323005 201143459 pixels outside the frame defined by the frame size, the motion compensation prediction unit 70 fills the pixels outside the frame with the pixels at the side of the frame. The method generates a predicted artifact 72. Further, the reference frame size is defined by the case where the size of the decoding frame is expanded to an integer multiple of the size of the decoding macro block, and the case where the size of the decoding frame is specified, and in the same order as the encoding device. To determine the size of the reference frame. The addition unit 73 adds any one of the prediction artifact 71 or the prediction artifact 72 to the prediction difference signal value 67 outputted by the inverse quantization-inverse transformation unit 66 to generate a decoded artifact 74. This decoded image 74 is used as a reference image (decoded image 74a) for generating an intra-frame prediction image of a subsequent macroblock, and is stored in the in-plane prediction memory 77, and is input to the loop. Filter unit 78. The loop filter unit 78 performs the same operation as the loop filter unit 27 on the encoder side to generate the reproduced image 79, and outputs it to the motion picture decoding device. Further, since the reproducing device 79 is used as the reference image 76 for the predicted image after generation, it is stored in the motion compensation prediction frame memory 75. In addition, the size of the reproduced artifact obtained after decoding all the macroblocks in the frame is an integer multiple of the size of the macroblock. In the case where the size of the reproduced image is larger than the size of the frame corresponding to the frame size of each frame of the image signal input to the encoding device, the reproduced image includes expansion in the horizontal direction or in the vertical direction. region. In this case, the decoded image obtained by removing the decoded image of the extended area portion from the reproduced image is output from the decoding device. In addition, in the case of the size of the reference frame, the size of the decoded frame is specified as 31 323005 201143459, and the 'decoding image of the reproduced image stored in the motion compensation prediction frame memory 75:: the decoded image' is predicted in the future. In the case of generation, *' may be formed as a solution to remove the extended area from the reconstructed image. The "I image is stored in the motion compensation prediction frame memory 75", which is known from X, according to the third embodiment. The structure of the moving image encoding device f is divided into macros corresponding to the encoding mode 7 of the macroblock/: the block 昼$5' will be preset to 3 according to the size of the huge money block or the sub-block. The plurality of transform blocks of the size of the block are changed, and the coding control unit 3 = transforms the block size and wealth, and the coding efficiency becomes the optimal one, and the block size is included in the optimal compression parameter 2Ga. The conversion-quantization is instructed: and the i-scaling unit 19 divides the optimal prediction difference signal 13a into blocks of the transform block size included in the optimal compression parameter 2〇a, and performs transformation and quantization processing to generate Compressed data 2 ^ Variable regions in large groups and informal ghost macroblocks or sub-block sizes herein are the conventional fixed method, that the code amount equivalent to improve the quality of encoded video. Further, the variable length coding unit 23 is configured by multiplying the size of the transform block that is adaptively selected by the deer coding mode 7 in the transform block size group, so that it corresponds to this. In the configuration of the moving image decoding apparatus of the first embodiment, the variable length decoding unit 61 decomposes the optimal compression parameter by the bit stream 6〇 in a macroblock or subblock unit. The deuteration-inverse transform unit 66 determines the size of the transform block according to the transform block size information included in the optimal compression parameter 65, and the block unit of the macro block size inversely transforms and dequantizes the compressed data 64. At 323 005 32 201143459, therefore, the 菱 区 大 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与 与Since the compressed data is decoded, the bit stream stream depleted by the moving picture decoding device can be correctly decoded. (Embodiment 2) (4) In the embodiment, the modification of the first embodiment, the modification of the worryable length coding unit 23, and the variable length of the dynamic image decoding device of the first embodiment are described. Modification of part 61

首先說明上述第2 f_㈣ 變長度編碼部23。 不构碼裒罝之J f 9圖為顯示本發明第 形態 的可變長度編钟23的内 H扁碼裝置 中,與第1 此外’第9圖 且省略其說明。分係標註相叫元件符號, 裝置的構成係與上述第】實施^實施形態之動晝像編碼 編碼部23以外之各構成要件二:相二可變長度 態相同,故將第!圖至第 ”上述第1實施形 說明,本第2 圖予以援用。另外,為了方便 模式組為前提之;=及:為以使用第2A圖所示的編碼 用於以使用第==理方法,不過,當然也可適 及處理方法 圖所讀編频式組為前提之裝置構成 體:==長,部23包含:二值化表記憶 曰疋有表不糾模式7(或最佳預測參數10a 323005 33 201143459 和18a、最佳壓縮參數20a)之多值訊號的指標值與二值訊 號的對應關係之二值化表;二值化部92,係使用此二值化 表’將編碼控制部3選擇之多值訊號的最佳編碼模式(或 最佳預測參數l〇a和18a、最佳壓縮參數2〇a)之多值訊號 的才曰標值變換成二值訊號103 ;算術編碼處理運算部1 , 係參考上下文(CONTEH)生成部99所生成的上下文°識別資 訊102、上下文資訊記憶體96、概率表記憶體97及狀態遷 移表記憶體98,將二值化部92變換過的二值峨1〇^進 行算術編碼而輸出編碼位元列m,且使該編碼位元列m 多工化為位元流30 ;頻度資訊生成部93,係計算最佳編碼 模式7a(或最佳預測參數i〇a和18a、最佳壓縮參數罚幻 的發生頻度*生成頻度資訊94 ;及二值化表更新部95,係 根據頻度資訊94來更新二值化表記憶體⑽之二值化表的 多值訊號與二值訊號的對應關係。 、 抓.么綱、娜螞的爹数,以由編碼控制部3 碼模式〜為例,說明可變長度編碼部 23之可變長度編碼順序。有關同樣為編碼對象的參數之最 佳預測參數10a和l8a、最佳壓縮參數2()a,由於可依 佳編碼模式7a同樣的順序進行可變長度編 _ 明省略。 j队,、詋 卜m、μ 碼控制部^係設為輸 上下文―貝⑽始化旗標9卜_訊號lQ 資 1〇卜二值化表更新旗標113者。各資訊於後詳述。免身 初始部9〇,係因應由編碼控制部3所指示的上下文 323005 34 201143459 汛初,化旗標91,進行上下文資訊記憶 始化處理,IS:成為初始狀態。藉由初始化部90之初 值化:值ΠΜ2 ’係參考二值化表記憶體⑽所儲存的二 種類之多由編敬㈣3輸人之最佳編簡式7a的 種類之ί值峨的指標值變換為二值訊號1〇3,輪出J 術編碼處理運算部1〇4。 珣出給异 第10圖為顯示二值化表記憶體105所保存之First, the second f_(four) variable length coding unit 23 will be described. The J f 9 diagram of the non-constructed code is shown in the internal H flat code device of the variable length chime 23 according to the first aspect of the present invention, and the first and second figures are omitted. The components are labeled with the corresponding component symbols, and the configuration of the device is the same as the component 2 of the second embodiment: the second variable length state is the same as the phase 2 variable length state of the embodiment. Illustrated in the above-mentioned first embodiment, the second embodiment is used. In addition, it is assumed to facilitate the mode group; = and: the code shown in Fig. 2A is used to use the == method. However, of course, it is also possible to adapt to the device composition of the processing method diagram read the frequency group: == long, part 23 contains: binarization table memory 表 table not correct mode 7 (or best prediction Parameter 10a 323005 33 201143459 and 18a, optimal compression parameter 20a) binarization table of the correspondence between the index value of the multi-value signal and the binary signal; the binarization unit 92 uses the binarization table to encode The optimum value of the multi-valued signal of the multi-valued signal selected by the control unit 3 (or the optimal prediction parameters l〇a and 18a, the optimal compression parameter 2〇a) is converted into a binary signal 103; arithmetic The encoding processing unit 1 converts the context identifying unit 102, the context information memory 96, the probability table memory 97, and the state transition table memory 98 generated by the reference context (CONTEH) generating unit 99, and converts the binarizing unit 92. The binary value 峨1〇^ is arithmetically encoded and the output coding bit is output. m, and the coding bit sequence m is multiplexed into the bit stream 30; the frequency information generation unit 93 calculates the optimal coding mode 7a (or the optimal prediction parameters i〇a and 18a, the optimal compression parameter penalty) The occurrence frequency* generation frequency information 94; and the binarization table update unit 95 updates the correspondence between the multi-valued signal and the binary signal of the binarization table of the binarization table memory (10) based on the frequency information 94. The variable length encoding order of the variable length encoding unit 23 will be described by taking the code control unit 3 code mode as an example. The optimum prediction parameter 10a for the parameter which is also the encoding target is explained. And l8a, the optimal compression parameter 2()a, the variable length can be edited in the same order as the best coding mode 7a. The j team, the m, and the m code control unit are set to the context. The (10) initialization flag 9 _ signal lQ 1 〇 二 二 二 二 二 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 323005 34 201143459 At the beginning of the dynasty, the flag was 91, and the context information memory initialization process was performed. S: The initial state is initialized by the initialization unit 90: the value ΠΜ 2 ' is the type of the best type of editing 7a that is stored in the reference binarization memory (10). The index value of the value of ί is converted into a binary signal 1〇3, and the arithmetic coding processing unit 1〇4 is rotated. The image of the output is saved in the memory table 105.

的一個例子之圖1 10圖所示的「編碼模式」為"^表 圖所示的編碼模式(mh H n s m 」為在第2A skip:盆传將在總一 3)加入跳過模式(毗 曰p,、铺在一裝置侧使用鄰接之巨純塊 之的預測晝像’在解碼裝置側用於解碼書 像之模式)之5種類編碼模式7,且儲存有對應 的「指標」值。另夕卜,這些編碼模式的指標值係分別以 至3位το予以二值化’且作為「二值訊號」予以儲存。此 處則是將二值訊號的各位元稱為rbin檔」號碼。 此外,第10圖的例子中,係在發生頻度高的編碼模式 分配小的指標值,且二值訊號也設定為短的丨個位元, 關内容於後詳述。 編碼控制部3輸出的最佳編碼模式7a ,係輸入到二值 化部92,並且也輸入到頻度資訊生成部93。 頻度資訊生成部93,係計算此最佳編碼模式?a所勺 含之編碼模式的指標值的發生頻度(編碼控制部選擇之= 碼模式的選擇頻度)而作成頻度資訊94,並輸出到後述的 323005 35 201143459 二值化表更新部95。 概率表記憶體97為保持有 中儲存有碰_二值^ 記賴,該儲存表 ^ γ〇 ^ η , α虎1〇3所包含之各bin檔的符號An example of the "encoding mode" shown in Figure 1 is the encoding mode (mh H nsm " shown in the table of "2h skip: the 2A skip: the total will be 3) added to the skip mode (曰 曰 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , . In addition, the index values of these coding modes are binarized by up to 3 bits το and stored as "binary signals". In this case, the elements of the binary signal are referred to as rbin files. Further, in the example of Fig. 10, a small index value is assigned to a coding mode having a high frequency of occurrence, and the binary signal is also set to a short one bit, and the details will be described later. The optimum encoding mode 7a output from the encoding control unit 3 is input to the binarizing unit 92, and is also input to the frequency information generating unit 93. The frequency information generating unit 93 calculates this optimal encoding mode. The frequency of occurrence of the index value of the coding mode included in a (the selection frequency of the code mode selected by the coding control unit) is used as the frequency information 94, and is output to the 323005 35 201143459 binarization table update unit 95 which will be described later. The probability table memory 97 is a symbol of each bin file included in the storage table ^ γ 〇 ^ η , α tiger 1 〇 3 in order to keep the 碰 二 二 ^ 储存 储存 储存

Probable S」h it生機率較高的任何一個符號(MPS:M〇St Probable SymbGl)與其發生概率址合。 個例概率表記憶體97所保存之概率表的- 率值(「ll ^ ^中,係對〇. 5至1-0之間的離散性概 ut」)分財個「概率表號碼」。 广矣Φ:六己憶體98為保持有儲存表之記憶體,該儲 :礁、二*數組由概率表記憶體97所儲存的「概率表 二。、從其概率表號碼表示的「〇」4「lj中的MPS之 碼;;概率狀態朝編錢概輪態進龍態_的組合。 第12圖為顯示狀態遷移記憶體⑽所保存之狀態遷移 的-個例子之圖。第12圖的「概率表號碼」、「[Μ編碼 ^的概率遷移」、「MPS編碼後的概率遷移」係分別對應於 第11圖所示概率表號碼。 ’當第框中之「概率表號碼^的概率狀 ,(由第11圖可知’MPS的發生概率為〇.527)時,藉由將 0」或1」中發生概率較低的任何一個符號(LPS : Least Probabie SymbQl)騎編碼,概率㈣將衫由「⑽編 碼後的概率遷移」遷移至概率表號碼Q(由第u圖可知, MPS的發生概率為〇. 5〇〇)。亦即,藉由使Lps發生,Mps 的發生概率會變小。 相反地,將MPS編碼時’概率狀態係表示由「Mps編 323005 36 201143459 碼後的概率遷移」遷移至概率表號碼2(由第11圖可知, MPS的發生概率為0. 550)。亦即,藉由使MPS發生,MPS 的發生概率會變大。 上下文生成部99,係參考表示由編碼控制部3輸入之 編碼對象的參數(最佳編碼模式7a、最佳預測參數10a和 18a、最佳壓縮參數20a)的類別之類別訊號100及周邊區 塊訊號101,按將編碼對象的參數二值化所獲得之二值訊 號103的每一 bin檔生成上下文識別資訊102。本說明中, 類別訊號100為編碼對象巨集區塊的最佳編碼模式7a。另 外,周邊區塊資訊101為與編碼對象巨集區塊鄰接之巨集 區塊的最佳編碼模式7 a。 以下,說明藉由上下文生成部99之上下文識別資訊的 生成順序。 第13(a)圖為以二分樹表現表示第10圖所示的二值化 表之圖。此處,是以第13(b)圖所示之粗框的編碼對象巨 集區塊、及與此編碼對象巨集區塊鄰接的周邊區塊A、B為 例進行說明。Any symbol (MPS: M〇St Probable SymbGl) with a high probability of Probable S"h it is combined with its probability of occurrence. The probability value table of the probability table stored in the probability table memory 97 ("ll ^ ^, the system is 〇. 5 to 1-0 between the discreteness ut") is divided into a "probability table number".广矣Φ: The six-remembered body 98 is a memory that holds a storage table. The storage: reef, two * array is stored in the probability table memory 97. "Probability table II." "4" The code of the MPS in lj; the combination of the probabilistic state and the morphing of the money into the dragon state _. Figure 12 is a diagram showing the state transition of the state transition memory (10). The "probability table number", "[probability transfer of Μ code^", and "probability transfer after MPS coding" of the figure correspond to the probability table numbers shown in Fig. 11, respectively. 'When the probability of the probability table number ^ in the box, (from the 11th figure, the probability of occurrence of MPS is 〇.527), by using any symbol with a low probability of 0" or 1" (LPS: Least Probabie SymbQl) Riding code, probability (4) Migrating the shirt from "(10) coded probability migration" to probability table number Q (as seen from figure u, the probability of occurrence of MPS is 〇. 5〇〇). That is, by causing Lps to occur, the probability of occurrence of Mps becomes small. Conversely, when the MPS is encoded, the probability state indicates that the migration from "probability migration after Mps code 323005 36 201143459 code" is moved to the probability table number 2 (as shown in Fig. 11, the probability of occurrence of MPS is 0.550). That is, by causing MPS to occur, the probability of occurrence of MPS becomes large. The context generating unit 99 refers to the category signal 100 and the peripheral block indicating the types of the encoding target parameters (the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a) input by the encoding control unit 3. The signal 101 generates the context identification information 102 for each bin file of the binary signal 103 obtained by binarizing the parameters of the encoding object. In the present description, the category signal 100 is the best coding mode 7a of the coding target macroblock. In addition, the peripheral block information 101 is an optimum encoding mode 7a of the macroblock adjacent to the encoding target macroblock. Hereinafter, the procedure for generating the context identification information by the context generating unit 99 will be described. Fig. 13(a) is a diagram showing the binarization table shown in Fig. 10 in a binary tree representation. Here, the coding target macroblock shown in the thick frame shown in Fig. 13(b) and the peripheral blocks A and B adjacent to the coding target macroblock will be described as an example.

第13(a)圖中,將黑圓點稱為節點,將連結黑圓點間 的線稱為路徑(path)。在二分樹的終端節點分配有二值化 對象的多值訊號的指標。另外,紙面上由上向下,二分樹 的深度係對應於bin檔號碼,結合由根節點至終端節點的 各路徑所分配的符號(0或1)之位元列,係成為與分配於各 終端節點之多值訊號的指標相對應之二值訊號103。對於 二分樹的各上層節點(非終端的節點),因應周邊區塊A、B 37 323005 201143459 的資訊而準備有1個以上的上下文識別資訊。 C1、C2 99係參考 選擇CO、Cl、C2的3個上下文識別資訊中的任 下文生成部99輸出所選擇的上下文識別資訊作為上;= 例如,第13(a)圖中,在對於根節點備有c〇 的3個上下文識別資訊的情況時,上下文生成部 相鄰接之周邊區塊A、B的周邊區塊資訊, 識別資訊102。 0 (巨集區塊X的編竭模式非〇) 1 (巨集區塊X的編碼模式為0)In Fig. 13(a), a black dot is called a node, and a line connecting black circles is called a path. An indicator of the multi-valued signal of the binarized object is assigned to the terminal node of the binary tree. In addition, the depth of the binary tree corresponds to the bin file number, and the bit column of the symbol (0 or 1) assigned by each path from the root node to the terminal node is assigned to each other. The indicator of the multi-value signal of the terminal node corresponds to the binary signal 103. For each upper node (non-terminal node) of the binary tree, one or more context identification information is prepared in response to the information of the surrounding block A, B 37 323005 201143459. The C1, C2, 99, and the three context identification information of the selection CO, Cl, and C2 output the selected context identification information as the upper; = for example, in the figure 13(a), in the root node When three context identification information of c〇 is provided, the context generation unit adjacent to the peripheral block information of the peripheral blocks A and B and the identification information 102. 0 (The compilation mode of macroblock X is not 〇) 1 (The encoding mode of macroblock X is 0)

V C0 C1 C2 Γ (Α)+ Γ (B) = 0 Γ (Α)+ Γ (Β) = 1 Γ (Α)+ Γ (Β) = 2 (4) 上式(4)係以下述假定為基礎而準備者,亦即,假定. ,將周邊區塊Α、Β當作巨集區塊X的情況中,若周邊區塊 1的蝙碼模式成為“〇”(mb_skip)的話,編碼對象巨集 區塊的編賴式也成為(mb_skip)的概率會很高。因 而’由上式⑷選擇之上下文識別資訊1()2也是根據同樣的 假定。 此外,在根節點之外的上層節點係分別分配有i個上 下文識別資訊(Cl、C2、C3)。 乂上下文識別資訊102所識別的上下文資訊係保持有 的值(0或1)及近似其發生概率的概率表號碼,現在係 處在初始狀態。此上下文資訊為由上下文資訊記憶體96儲 38 323005 201143459V C0 C1 C2 Γ (Α)+ Γ (B) = 0 Γ (Α)+ Γ (Β) = 1 Γ (Α)+ Γ (Β) = 2 (4) The above equation (4) is assumed as follows In the case of the base, the preparer, that is, the assumption that the peripheral block Α, Β is regarded as the macro block X, if the bat code pattern of the peripheral block 1 becomes "〇" (mb_skip), the coding target is huge. The probability that the block block will also become (mb_skip) will be high. Therefore, the context identification information 1() 2 selected by the above equation (4) is also based on the same assumption. Further, upper context nodes other than the root node are assigned i context identification information (Cl, C2, C3). The context information identified by the context identification information 102 is a value (0 or 1) and a probability table number approximating its probability of occurrence, which is now in an initial state. This contextual information is stored by contextual information memory 96 323005 201143459

運算部104參考概率表號碼97, 之上下文資訊106。接著, 06。接著,算術編碼處理 確定與保持在上下文資訊 算術編碼處理運算 1至3bin檔的二值訊: 1〇6,概率表號碼107對應之bin檔〇的MPS發生概率1〇8。 接著,算術編碼處理運算部104根據上下文資訊1〇6 所保持之MPS的值(G或1)、及經確定的㈣發生概率. 將Μη檔0的符號值1〇9(0或1)進行算術編碼。接著,算 術編碼處理運算部綱參考狀態遷移表記憶體98,根據保 持於上下文資訊1G6的概率表號碼1G7、及先前已經算術 編碼過之bin檔0的符號值1〇9,而獲得Mn;ft 〇的符號 編碼後的概率表號碼110。 接著,算術編碼處理運算部104將上下文資訊記憶體 96所儲存之bin檔0的上下文資訊1〇6之概率表號碼(亦 即’概率表號碼107)的值更新為狀態遷移後的概率表號碼 (亦即’先前由狀態遷移表記憶體98取得之bin檔0的符 號編碼後的概率表號碼110)。 算術編碼處理運算部104亦針對bin檔1、2,與bin 檔0同樣地,根據各個上下文識別資訊102所識別的上下 39 323005 201143459 f ^資訊=進行算術料,在各bin權的符號編碼後,進 行上下文資訊106之更新。 μ算術編碼處理運算部1〇4係將全部Μη標的符號進行 ==所獲得之編碼位元列川予以輸出,並由可變長 度編碼部23多工化為位元流30。 部iTfi上:以上下文識別資訊102所識別之上下文資 L立n a'.、母二將*號施行算術編碼即予以更新。亦即’ 二。缺後,广卩點<的概率狀態會按每:欠符號編碼即行遷 ”俜萨由一二貝讯106的初始化(亦即,概率狀態的重 s又)係猎由則述的初始化部9〇進行。 初始化部90,係因康 初始化旗標91之指㈣部3的上下文資訊 (slize)^_ =㈣贿化,獨此初純係在片段 (一生!Λ各上下文資訊106的初始狀態 可預先備妥複數組,編概=號碼的初始值),也 :態包含在上下文資—化旗標 二值化表更新部95 值化表更新旗標113,二艮據由柄控制部3指示的二 用以表示編媽對象炎數由頻度資訊生成部93生成之 的發生頻度之類戶# 处為最佳編碼模式7a)的指標值 以下,說明藉由t 4 ’來更新二值化表記憶體105。 序。胃由一值化表更新部95來更新二值化表的順 在此例中,係因應屬於編碼對象參數的最佳編碼模式 323005 40 201143459 7a所指定之編碼模式的發生頻度’以能用較短的編碼字元 (codeword)將發生頻度最高的編碼模式予以二值化的方 式’來更新二值化表的編碼模式與指標的對應關係,以達 到減少碼量的目的。 第14圖為顯示更新後的二值化表的一個例子之圖,且 為假定更新前之二值化表的狀態為第1〇圖所示的狀態的 情況之更新狀態。二值化表更新部95,係依照頻度資訊 94,在例如mb一mode 3的發生概率最高的情況,以能對其 mb_mode 3分配較短編碼字元的二值訊號的方式,分配最 小的指標值。 另外’二值化表化更新部95 ’在二值化表已更新過的 情況時’必須生成在解碼裝置側可識別更新過的二值化表 所需之二值化表更新識別資訊112,並多工化為位元流 30。例如,每一編碼對象參數有複數個二值化表的情況中, 將可識別各編碼對象參數之ID分別預先賦予至編碼裝置 側及解碼裝置側,二值化表更新部95也可為將更新後的二 值化表之ID作為二值化表更新識別資訊112予以輸出,並 使多工化為位元流。 更新時序之控制,係在編碼控制部3在片段的前頭參 考編碼對象參數的頻度資訊94,判定編碼對象參數的發生 頻度分布大幅改變為預定容許範圍以上的情況時,即輸出 二值化表更新旗標113以進行控制。可變長度編碼部23只 要將二值化表更新旗標113多工化為位元流30的片段標頭 即可。另外,可變長度編碼部23,在二值化表更新旗標^13 323005 41 201143459 顯示「二值化表有更新」的情況時,會將顯示編碼模式、 壓縮參數、預測參數的二值化表中哪個二值化表已更新過 之二值化表更新識別資訊112多工化為位元流3〇。 另外,編碼控制部3亦可在片段的前頭之外的時序护 示更新二值化表,也可在例如任意的巨集區塊的前頭輪= 一值化表更新旗標113而指示更新。在此情況中,必須使 二值化表更新部95輸出確定已進行二值化表更新的巨集 區塊位置之資訊,且由可變長度編碼部23使該資訊多工” 為位元流30。 °夕工化 此外,編碼控制部3,在將二值化表更新旗標丨 出到二值化表更新部95使二值化表更新的情沉^,^ 上下文資訊初始化旗標91輸出給初始化部9〇,並二、: 下文資訊記憶體96的初始化。 ’亍上 置的可變 其次,說明本第2實施形態之動晝像解碼袭 長度解碼部61。 第15圖為顯示本發明第2實施形態 的可變長度解碎部的内部構成之方塊圖。此;像 2實施形態之動畫像解碼裝置的構成係與上述第i ::第 態相同,除了 0Γ拉4 E 貫' 形 動作也二述第t度解碼部Μ以外,其他各構成要件的 編目同,錢料1圖^第8圖。 圖所不的可變長度解竭部61包含 理運算部m,係參考上下文生成部122所生== ==、上下文資訊記憶體128、概率表記憶體= 遷移表記憶體135,將表示已多工化為位元流60的 323005 42 201143459 ' 最佳編碼模式62(或最佳預測來數 100 ,外 ^致b3、最佳壓縮參數65) 之編碼位元列133進行讀解碼而生成二值訊號137 •二 值化表記憶體U3,係將指定有以二值訊號表;之最佳: 碼模式62(或最佳預測參數63、最佳壓縮參數65)盘多值 訊號的對應關係之二值化表139予以儲存;及反二值化部 138,係使用二值化表139,將算術解嗎處理運算部12?^ 生成的二值汛號137變換成多值訊號的解碼值14〇。 以下,就施以熵解碼的的參數,以位元流6〇所包含之 巨集區塊的最佳編碼模式62為例,說明可變長度解碼部 61的可變長度解碼順序。有關同樣屬於解碼對象的參數之 最佳預測參數63、最佳壓縮參數65,由於只要以與最佳編 碼模式62同樣的順序進行可變長度解碼即可,其說明省 略。 此外’本第2實施形態的位元流6 0包含有:經由編碼 裝置側被多工化之上下文初始化資訊121、編碼位元列 133、二值化表更新旗標142、二值化表更新識別資訊144。 各資訊之内容容後詳述。 初始化部丨2〇,係以片段(siice)的前頭等將儲存在上 文資"fU己憶體128之上下文資訊進行初始化。或者,也 I形成為:針對上下文資訊的初始狀態(MPS之值及近似其 +尤率之概率表號碼之初始值),在初始化部120預先備 有複數級’並從組令選擇與上下文初始化資訊121的解碼 值對應的初始狀態。 上下文生成部122,係參考顯示解碼對象的參數(最佳 43 323005 201143459 ' 編碼模式62、最佳予側參數 資訊126。 塊貝幻24,而生成上下文識別The computing unit 104 refers to the context information 106 of the probability table number 97. Then, 06. Next, the arithmetic coding process determines the binary signal that is held in the context information arithmetic coding processing operation 1 to 3bin: 1〇6, and the MPS occurrence probability of the bin file corresponding to the probability table number 107 is 1〇8. Next, the arithmetic coding processing operation unit 104 performs the value of the MPS (G or 1) held by the context information 1〇6 and the determined (four) occurrence probability. The symbol value of the 档n file 0 is 1〇9 (0 or 1). Arithmetic coding. Next, the arithmetic coding processing operation unit refers to the state transition table memory 98, and obtains Mn according to the probability table number 1G7 held in the context information 1G6 and the symbol value 1〇9 of the bin file 0 which has been previously arithmetically coded. The probability table number 110 after the symbolization of the symbol. Next, the arithmetic coding processing unit 104 updates the value of the probability table number (that is, the 'probability table number 107) of the context information 1 〇 6 of the bin file 0 stored in the context information memory 96 to the probability table number after the state transition. (That is, the probability table number 110 after the symbol encoding of the bin file 0 previously obtained by the state transition table memory 98). Similarly to the bin file 0, the arithmetic coding processing unit 104 also performs arithmetic calculation based on the upper and lower 39 323005 201143459 f ^ information recognized by each context identification information 102, after the symbol encoding of each bin weight. , update the context information 106. The μ arithmetic coding processing unit 1〇4 outputs the coded bit sequence obtained by all the symbols of the Μn target ==, and the variable length coding unit 23 multiplexes the bit stream 30. On the iTfi: the context identifier identified by the context identification information 102 is L a n., and the parent 2 is updated by performing arithmetic coding on the *. That is, 'two. After the absence, the probabilistic state of the 卩 卩 & 会 会 : : : : 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 欠 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化 初始化9. The initialization unit 90 is the context information (slize) of the finger (4) part 3 of the Kang initialization flag 91 (4) = (4) bribery, the original pure is in the segment (a lifetime! 初始 initial of each context information 106 The state can be prepared in advance with a complex array, and the initial value of the number is edited. The state is also included in the context-based flagization binarization table update unit 95 value table update flag 113, and the data is controlled by the handle control unit. 3 indicates that the number of inflammations is equal to or lower than the index value of the optimal coding mode 7a in the household frequency # generated by the frequency information generation unit 93, and the binary value is updated by t 4 '. The table memory 105. The stomach is updated by the digitization table update unit 95 in this example, in accordance with the coding mode specified by the optimal coding mode 323005 40 201143459 7a of the coding object parameter. Frequency of occurrence 'to be able to occur with a shorter codeword (codeword) The highest coding mode is binarized' to update the correspondence between the coding mode and the index of the binarization table to achieve the purpose of reducing the code amount. Fig. 14 is an example showing the updated binarization table. The figure is an update state in a case where the state of the binarization table before the update is the state shown in the first diagram. The binarization table update unit 95 is in accordance with the frequency information 94, for example, mb_mode3. In the case where the probability of occurrence is the highest, the smallest index value is assigned in such a manner that the binary signal of the shorter coding character can be allocated to mb_mode 3. In addition, the 'binarization table update unit 95' has been updated in the binarization table. In the case of the case, it is necessary to generate the binarization table update identification information 112 required for recognizing the updated binarization table on the decoding device side, and multiplex it into the bit stream 30. For example, each encoding object parameter has a plural number. In the case of the binarization table, the IDs of the respective encoding target parameters are respectively assigned to the encoding device side and the decoding device side, and the binarization table updating unit 95 may be the ID of the updated binarization table. As a binarization table update The identification information 112 is outputted and converted into a bit stream. The control of the update timing is such that the encoding control unit 3 refers to the frequency information 94 of the encoding target parameter at the head of the segment, and determines that the frequency distribution of the encoding target parameter is greatly changed. When the predetermined tolerance range or more is exceeded, the binarization table update flag 113 is output for control. The variable length coding unit 23 multiplexes the binarization table update flag 113 into the segment header of the bit stream 30. In addition, the variable length coding unit 23 displays the coding mode, the compression parameter, and the prediction parameter when the binarization table update flag ^13 323005 41 201143459 displays "the update table is updated". In the binarization table, which binarization table has been updated, the binarization table update identification information 112 is multiplexed into a bit stream 3〇. Further, the encoding control unit 3 may update the binarization table at a timing other than the head of the slice, or may update the flag in the front wheel = the digitization table update flag 113 of an arbitrary macro block, for example. In this case, it is necessary to cause the binarization table updating section 95 to output the information of the macroblock position at which the binarization table update has been performed, and the variable length encoding section 23 causes the information to be multiplexed as the bit stream. 30. In addition, the encoding control unit 3 extracts the binarization table update flag to the binarization table update unit 95 to update the binarization table, and the context information initialization flag 91 It is output to the initialization unit 9A, and the initialization of the information memory 96 is as follows: 'The variable of the upper side is set to be the next, and the moving image decoding length decoding unit 61 of the second embodiment will be described. Fig. 15 is a view A block diagram of the internal structure of the variable length decomposing unit according to the second embodiment of the present invention. The configuration of the moving picture decoding apparatus according to the second embodiment is the same as that of the i-th: the first aspect, except for 0 Γ 4 E The shape operation is also described in addition to the t-th decoding unit, and the other components are the same as the catalogue, and the data is shown in Fig. 8. The variable length depletion unit 61 includes the rational operation unit m, which is a reference. Context generating unit 122 generates == ==, context information memory 128, probability table Memory = Migration Table Memory 135, which will represent 323005 42 201143459 'Optimal Encoding Mode 62 (or best prediction number 100, external ^b3, optimal compression parameter 65) that has been multiplexed into bit stream 60 The coded bit column 133 performs read decoding to generate a binary signal 137. • The binarization table memory U3 is assigned a binary signal table; the best: code mode 62 (or optimal prediction parameter 63, most The compression parameter 65) the binarization table 139 corresponding to the correspondence of the multi-value signals is stored; and the inverse binarization unit 138 uses the binarization table 139 to process the arithmetic solution 12 to generate the second The value 汛 137 is converted into a decoded value of the multi-valued signal 14 〇. Hereinafter, the entropy-decoded parameter is applied, and the optimal coding mode 62 of the macroblock included in the bit stream 6 为 is taken as an example. The variable length decoding order of the variable length decoding unit 61. The optimum prediction parameter 63 and the optimum compression parameter 65 which are parameters belonging to the decoding target are subjected to variable length decoding in the same order as the optimal encoding mode 62. The description may be omitted. Further, the bit stream of the second embodiment 60 includes: context initialization information 121, coded bit column 133, binarization table update flag 142, and binarization table update identification information 144 that are multiplexed via the encoding device side. The initialization unit 初始化2〇 initializes the context information stored in the above-mentioned “fight” 128, or the I is formed as: the initial state for the context information ( The initial value of the MPS value and the initial value of the probability table number approximating the + special rate is set in advance by the initialization unit 120 and selects an initial state corresponding to the decoded value of the context initialization information 121 from the group command. The context generating unit 122 refers to the parameter for displaying the decoding target (best 43 323005 201143459 'encoding mode 62, optimal pre-parameter information 126. Block Béch 24, and generates context recognition)

類別訊號123為表轉销象的參數 照可變長度解碼部61内所保存的語法 對H 者。因此:蝴裝置側及解碼裝置侧保存 此處,是設為由編竭農置側的編碼控制部3保持Μ法。 在編碼裝置側’係成為依據編碼控制部3 將下一個應編碼參數的類別及其參數的值(指桿值^,亦 即,類別訊號_,依序輸㈣可變長度編碼部23。 另外,周邊區塊|訊124係為將巨集區塊或子區塊予 以解碼所獲得之編碼模式等的資訊,為了要作為用以將以 後的巨集區塊或子區塊進行解碼 … 用,故事先儲存在可變長度解甲 =邊區塊資訊124來使 且依需要而輪出到上下長文=6;。的記憶體(未圖示广 此外,藉由上下文生成部122來生成 120.^ 相同。在解碼裝置侧的上下文生成部122,也在由反二值 化部138所參考之二值化表139 @每-bin檔生成上下文 識別資訊126。 ^ 在各bin檔的上下文資邙φ ^ 貧巩中保持有MPS的值(〇或1) ==的發生概率之概率表號竭,作為用以將該… 檔進仃舁術解碼的概率資訊。 另外,概率表記憶體131及狀態遷移表記憶體咖, 323005 44 ^1143459 •係儲存與編碼裝置側的概率 憶體98相同之概率表(第11 w 97和狀態遷移表記 算術解碼處理運算部127_)和狀夕態遷移表(第12圖)。 編碼位元列133依每一 bin w係將多工化為位元流之 號137,並輸出到反二值于算術解竭而生成二值訊 算術解碼處理運算部i 8 體128,而獲得根據與編石馬位」、 1下文資訊記憶 上下文識別資訊126之 7°列133的各bin檔對應的The category signal 123 is a parameter of the table reversal image, and the grammar pair stored in the variable length decoding unit 61 is H. Therefore, the butterfly device side and the decoding device side are stored here, and the code control unit 3 on the side of the editing device is kept. On the side of the encoding device, the variable length coding unit 23 is sequentially input (four) in accordance with the value of the type of the next coding parameter and its parameter (the finger value ^, that is, the class signal_). , the surrounding block | News 124 is the encoding mode obtained by decoding the macro block or sub-block, etc., in order to be used to decode the future macro block or sub-block... The story is first stored in the variable length solution=edge block information 124, and the memory of the upper and lower long text=6 is rotated as needed (not shown, the context generation unit 122 generates 120. Similarly, the context generating unit 122 on the decoding device side also generates the context identification information 126 in the binarization table 139 @per-bin file which is referred to by the inverse binarization unit 138. ^ Context φ in each bin file ^ The probability that the probability of occurrence of the MPS value (〇 or 1) == is maintained in the poor, and the probability information is used as the probability information for decoding the file. In addition, the probability table memory 131 and the state Migration Table Memory Coffee, 323005 44 ^1143459 • Department Storage and Editing The probability side of the device side is the same probability table (the 11th w 97 and the state transition table arithmetic decoding processing unit 127_) and the state transition table (Fig. 12). The coded bit column 133 is based on each bin w system. Converting the multiplex into the bit stream number 137, and outputting the inverse binary value to the arithmetic depletion to generate the binary arithmetic decoding processing operation unit i 8 body 128, and obtaining the information according to the editing stone position, 1 below Corresponding to each bin file of the 7° column 133 of the memory context identification information 126

處理運算部127會參考概=資訊129 °接著,算術解石馬 文資訊129所保持的概率表表:己憶體131,而指定與上下 發生概率132。 J辱130對應之各bin檔的MPS 二著,算術解碼處理運算部127根據保持在上下文資 °之MPS的值(0或D、及所指定的MPS發生概率132, ,輸入到算術解碼處理運算部127之編碼位元列133予以 算,解碼’獲得各bin稽的符號值134(0或1)。各bin檔 的符號值解碼後,算術解碼處理運算部127即參考狀態遷 移表記憶體135,依與編碼裝置側的算術編碼處理運算部 同樣的順序,根據被解碼之各wn檔的符號值丨34及 保持在上下文資訊129之概率表號碼130 ’而獲得各bin 檔的符號解碼後(狀態遷移後)的概率表號碼136。 接著,算術解碼處理運算部127將上下文資訊記憶體 128所儲存之各bin檔的上下文資訊129之概率表號碼(亦 即,概率表號碼130)的值,更新為狀態遷移後的概率表號 碼(亦即,先前由狀態遷移表記憶體135取得之各bin檔的 45 323005 201143459 ’ 符號解碼後的概率表號碼136)。 算術解碼處理運算部127將上述算術解碼的結果與〜 之各bin檔的符號相結合所得之二值訊號137,輪出到许 二值化部138。 '反 反二值化部138,係由按二值化表記憶體143所儲疒 之解碼對象參數的每一類別而準備的二值化表中,選擇= 編碼時相同的二值化表139而進行參考,由算術解碼處理 運算部127所輸入的二值訊號137,將解碼對象參數 碼值140輸出。 此外,解碼對象參數的類別為巨集區塊的編碼模式(最 佳編碼模式62)時,二值化表139係與第1〇圖所示之編碼 裝置側的二值化表相同。 二值化表更新部141,係根據由位元流60解碼之二值 化表更新旗標142及二值化表更新識別資訊144,進行一 值化表記憶體143所儲存之二值化表之更新。 二值化表更新旗標142係為對應於編碼裝置側的二值 化表更新旗標113之資訊,且包含在位元流6〇的標頭資訊 等,為表示二值化表有無更新之資訊。在二值化表更新旗 標142的解碼值表示「二值化表有更新」的情況時,成為 由位元流60進一步使二值化表更新識別資訊144被解碼。 二值化表更新識別資訊144為對應於編碼裝置側的二 值化表更新識別資訊112之資訊,且為用以在編碼裳置側 識別已更新之參數的二值化表更新旗標113之資訊。例 如,如同上述,在每一編碼對象參數預先有複數個二值化 323005 46 201143459 表的情況時’係將可識別編碼對象參數之ID及二值化表的 id分別預先賦予至編碼裝置側及解碼裝置側,使二值化表 更新部141將與由位元流6〇解碼之二值化表更新識別資訊 144中的ID值對應之二值化表予以更新。此例子中,在二 值化表記憶體143預先備有第1〇圖及第14圖的2種二值 化表及其ID,並假定更新前之二值化表的狀態為第1〇圖 所不的狀態的情況時,二值化表更新部141會依照二值化 表更新旗標142及二值化表更新識別資訊144實施更新處The processing operation unit 127 refers to the probability table 139 °, and then the probability table held by the arithmetic calculus information 129: the memory 131, and the probability of occurrence 132. The MPS of each bin file corresponding to the humiliation 130 is followed, and the arithmetic decoding processing calculation unit 127 inputs the value to the arithmetic decoding processing operation based on the value of the MPS held in the context (0 or D, and the specified MPS occurrence probability 132). The coded bit column 133 of the unit 127 calculates and decodes the symbol value 134 (0 or 1) of each bin. After the symbol value of each bin file is decoded, the arithmetic decoding processing unit 127 refers to the state transition table memory 135. In the same order as the arithmetic coding processing unit on the encoding device side, the symbol of each bin file is decoded based on the symbol value 丨34 of each decoded wn file and the probability table number 130' held in the context information 129 ( The probability table number 136 after the state transition. The arithmetic decoding processing unit 127 then sets the value of the probability table number (that is, the probability table number 130) of the context information 129 of each bin file stored in the context information memory 128. The probability table number after the state transition is updated (that is, the probability table number 136 after the symbol decoding of each bin file previously acquired by the state transition table memory 135). The processing unit 127 rotates the binary signal 137 obtained by combining the result of the above-described arithmetic decoding with the sign of each bin file to the binary binarization unit 138. The inverse anti-binarization unit 138 is pressed by two. In the binarization table prepared for each type of the decoding target parameter stored in the value table memory 143, the same binarization table 139 at the time of encoding is selected and referred to, and is input by the arithmetic decoding processing unit 127. The binary signal 137 outputs the decoding target parameter code value 140. Further, when the type of the decoding target parameter is the encoding mode of the macroblock (optimal encoding mode 62), the binarization table 139 is the first map. The binarization table on the side of the encoding device shown is the same. The binarization table updating unit 141 updates the identification information 144 based on the binarization table update flag 142 and the binarization table update information 144 decoded by the bit stream 60. The update of the binarization table stored in the value table memory 143. The binarization table update flag 142 is information corresponding to the binarization table update flag 113 on the side of the encoding device, and is included in the bit stream 6标's header information, etc., to indicate whether the binarization table is updated or not. When the decoded value of the binarization table update flag 142 indicates that "the binarization table is updated", the binarization table update identification information 144 is further decoded by the bit stream 60. Binaryization table The update identification information 144 is information corresponding to the binarization table update identification information 112 on the side of the encoding device, and is information for updating the flag 113 for identifying the updated parameter on the encoding side. For example, As described above, when each of the encoding target parameters has a plurality of binarized 323005 46 201143459 tables in advance, the ID of the identifiable encoding target parameter and the id of the binarization table are respectively assigned to the encoding device side and the decoding device, respectively. On the side, the binarization table update unit 141 updates the binarization table corresponding to the ID value in the binarization table update identification information 144 decoded by the bit stream 6〇. In this example, the binarization table memory 143 is provided with two types of binarization tables and their IDs in the first and fourth figures, and assumes that the state of the binarization table before the update is the first map. In the case of the non-state, the binarization table update unit 141 performs an update in accordance with the binarization table update flag 142 and the binarization table update identification information 144.

理的話’則選擇與二值化表更新識別資訊144所包含的ID 對應之二值化表,所以更新後之二值化表的狀態成為第14 圖所示的狀態,且成為與編碼裝置側之更新後的二值化表 相同》。 由上述可知’若依據第2實施形態的動畫像編碼裝置 之構成,因編碼控制部3係選擇編碼效率為最佳當的最佳 編碼模式7a、最佳預測參數i〇a和18a、最佳壓縮參數2〇a 之編碼對象參數並予以輸出,可變長度編碼部23的二值化 部92係使用二值化表記憶體105的二值化表,將以多值訊 號表示之編碼對象參數變換為二值訊號1〇3,算術編碼處 理運算部1〇4係將二值訊號1〇3進行算術編碼而輸出編碼 位元列111,頻度資訊生成部93係生成編碼對象參數的頻 度資訊94,二值化表更新部95係根據頻度資訊94來更新 二值化表的多值訊號與二值訊號的對應關係,所以,相較 於二值化表為始終固定之習知方法’可以在同等的編竭影 像品質之基礎上削減碼量。 〜 323005 47 201143459 另外,因以二值化表更新部95使表示二值化表有無更 新之二值化表更新識別資訊112及用以識別更新後的二值 化表之二值化表更新識別資訊112予以多工化為位元流3〇 的方式構成’所以’與此相對應地,第2實施形態的動畫 像解碼裝置係構成為:以可變長度解碼部61的算術解碼處 理運算部127係將多工化為位元流之編碼位元列133予 以算術解碼而生成二值訊號137,反二值化部138係使用 二值化表記憶體143的二值化表139,將二值訊號137變 換為多值訊號而取得解碼值丨4〇,二值化表更新部ι41係 根據由被多工化為位元流6〇之標頭資訊予以解碼之二值 化表更新旗標142及二值化表更新識別資訊144,來更新 二值化表記憶體143中之預定二值化表。因此’由於動書 像解碼裝置可依與動畫像編碼裝置同樣的順序進行二值化 表之更新而將編碼對象參數予以反二值化,所以可將在第 2實施形態的動畫像編碼裝置被編碼之位元流正確地解碼。 (第3實施形態) 本第3實施形態中,係就上述第1、第2實施形態的 動晝像編碼裝置及動晝像解碼裝置中,藉由移動補償預測 邻9的移動補償預測進行預測晝像的生成處理的變形例 以說明。 首先’說明本第3實施形態的動畫像編碼裝置之移動 補仏預測部9。此外,本第3實施形態之動晝像編碣裝置 的構成,由於與上述第1實施形態或第2實施形態相同, 且除了移動補償預測部9以外之各構成要件的動作也相 323005 48 201143459 同,故於下文陳述中援用第1圖至第15圖。 本第3實施形態的移動補償預測部9 ,除了在虛擬樣 本精度的預測畫像生成處理之相關構成及動作上與上述第 1、第2實施形態不同之外,其餘為相同構成及動作。亦即, 上述第卜第2實施形態中,如第3圖所示,移動補償預 測部9的内插畫像生成部43係生成半像素或1/4像素等之 虛擬像素精度的參考晝像資料,並練據此虛擬像素精度In the case of 'the same thing', the binarization table corresponding to the ID included in the binarization table update identification information 144 is selected, so that the state of the updated binarization table is in the state shown in FIG. 14 and becomes the coding device side. The updated binarization table is the same. As described above, according to the configuration of the motion picture coding apparatus according to the second embodiment, the coding control unit 3 selects the optimum coding mode 7a, the optimum prediction parameters i〇a and 18a, which are optimal for the coding efficiency, and the best. The encoding target parameter of the parameter 2〇a is compressed and output, and the binarizing unit 92 of the variable length encoding unit 23 uses the binarization table of the binarization table memory 105 to represent the encoding target parameter represented by the multivalued signal. The frequency conversion processing unit 1〇4 performs arithmetic coding on the binary signal 1〇3 to output the coded bit sequence 111, and the frequency information generation unit 93 generates the frequency information 94 of the encoding target parameter. The binarization table update unit 95 updates the correspondence between the multi-valued signal of the binarization table and the binary signal based on the frequency information 94. Therefore, the conventional method of always fixing is compared to the binarization table. Reduce the amount of code based on the same quality of editing. ~ 323005 47 201143459 In addition, the binarization table update unit 95 causes the binarization table update identification information 112 indicating the presence or absence of the binarization table to be updated, and the binarization table update identification for identifying the updated binarization table. In the case where the information 112 is multiplexed into the bit stream 3', the moving picture decoding device of the second embodiment is configured by the arithmetic decoding processing unit of the variable length decoding unit 61. In the 127 system, the coded bit stream 133 of the bit stream is arithmetically decoded to generate a binary signal 137, and the inverse binarization unit 138 uses the binarization table 139 of the binarization table memory 143 to The value signal 137 is converted into a multi-valued signal to obtain a decoded value 丨4〇, and the binarization table updating unit ι41 updates the flag based on the binarization table decoded by the header information that is multiplexed into the bit stream 6〇. 142 and the binarization table update identification information 144 to update the predetermined binarization table in the binarization table memory 143. Therefore, since the moving picture image decoding device can update the binarization table in the same order as the moving picture coding device and de-binarize the coding target parameter, the motion picture coding device according to the second embodiment can be used. The encoded bit stream is decoded correctly. (Third Embodiment) In the third embodiment, in the moving image encoding device and the moving image decoding device according to the first and second embodiments, the prediction is performed by the motion compensation prediction of the motion compensation prediction neighbor 9. A modification of the generation processing of the artifact will be described. First, the movement supplement prediction unit 9 of the moving picture coding apparatus according to the third embodiment will be described. In addition, the configuration of the moving image editing device according to the third embodiment is the same as that of the first embodiment or the second embodiment, and the operation of each component other than the motion compensation prediction unit 9 is also 323005 48 201143459 In the same manner, Figures 1 to 15 are used in the following statements. The motion compensation prediction unit 9 of the third embodiment has the same configuration and operation except that the configuration and operation of the prediction image generation processing of the virtual sample accuracy are different from those of the first and second embodiments. In other words, in the second embodiment, as shown in Fig. 3, the internal illustrator generating unit 43 of the motion compensation predicting unit 9 generates reference image data of virtual pixel precision such as a half pixel or a 1/4 pixel. And practicing the virtual pixel precision

的參考見像資料生成預測晝像45時,藉由如同mpeg-4 AVC 規格那樣在垂直方向或在水平方向使用6個整數像素之6 個分接頭紐器之内插運算㈣製作^虛擬像素以生成預 測晝像’相對於此’本第3實施形態的移動補償預測部9 則是藉由超解像處理將移動補償預測圖框記憶體14所儲 存之整數像素精度的參考晝像15放大,以生成虛擬像素精 度的參考晝像207’並根據此虛擬像素精度的參考晝像2〇7 來生成預測晝像。 — 其次,援用第3圖來說明本第3實施形態的移動 預測部9。 $與上述本第1、第2實施形態同樣地,本第3實施形 内插畫像生成部43也由移動補償預測圖框記憶體Μ 丄2圖框以上的參考晝像15,由移動檢測部42在所 =之參考晝冑15上之預定移動搜尋範圍内檢測出移動 。移動向量之檢測係以MpEG_4就規格等同樣地, ^查傍擬像素精度的移動向量來進行。此檢測方法為對參 "所持有的像素資訊(稱為整數像素),藉由内插運算 323005 49 201143459 在整數像素之間製作出虛擬的樣本(像素),並將此樣本作 為參考畫像來利用的方法。 為了生成虛擬像素精度的參考畫像,必須將整數像素 精度的參考畫像放大(高精細化),以生成由虛擬像素所組 成的樣本計劃(sampleplan)。因此,本第3實施形態的内 插晝像生成部43中,在必須要虛擬像素精度的移動搜尋用 參考晝像的情況時’可利用「w. τ Freeman,E. c· Paszt〇r and 0. T. Carmichael, "Learning Low-Level Vision,> , 1Ιη1:2ΓηΤΐ1〇ηΗΐ J〇Urnal 〇fComputer Vision, vol. 40, no. 考^ 示之超解像技術,來生成虛擬像素精度的參 所储存的參考畫像資料,《超解像 該參考畫像進晝像2G7,且移動檢測部42使用 移動向量搜畀處理之構成予以陳述。 的移動補償預9本發明第3實施形態之動晝像編竭裝置 圖。第16圖所㈣内 ==生f部=内部構成方塊 =2°5,係將移動補償預測圖=二= 15施以放大處理;書他,隨14中的參考晝像 施以縮小處理;高_=^部⑽,係將參考晝像15 部200抽出高頻1^1 ,係由畫像端小處理 纖,係由參考書像^刀中的特徵量;高頻特徵抦出部 計算部202,俜Γ十算^出向頻範圍成分的特徵量;相關 203,係由相量間的相關值;高頻成分推定部 同頻成分圖案記憶體204的事前學習資 323005 50 201143459 料來推定高頻成分;及加法部施’係使用所推定的高頻 成分修正放大晝像的高縣分,而生成虛擬像素 者當偾9Π7。 第16圖中,用於移動搜尋處理的範圍之參考晝像15, 係從移動補償預測圖樞記憶體14所儲存的參考=像資料 中輸入内插晝像生成部43肖,此參考晝像15會分^The reference image is used to generate the prediction image 45, by using the interpolation function of the six taps of six integer pixels in the vertical direction or in the horizontal direction as in the mpeg-4 AVC specification. The motion compensation prediction unit 9 of the third embodiment is enlarged by the super-resolution processing, and the reference image 15 of the integer pixel precision stored in the motion compensation prediction frame memory 14 is enlarged by the super-resolution processing. A prediction artifact is generated by generating a reference artifact 207' of virtual pixel precision and based on the reference artifact 2〇7 of the virtual pixel precision. - Next, the movement prediction unit 9 of the third embodiment will be described with reference to Fig. 3 . In the same manner as in the above-described first and second embodiments, the illustrator image generating unit 43 of the third embodiment is also moved by the motion compensation prediction frame memory Μ2 frame or more. 42 detects the movement within the predetermined movement search range on the reference 昼胄15 of =. The detection of the motion vector is performed by MpEG_4 in the same manner as the specification, and the motion vector of the pseudo pixel precision is checked. This detection method is a pixel information (called an integer pixel) held by the reference parameter, and a virtual sample (pixel) is created between integer pixels by interpolation operation 323005 49 201143459, and this sample is used as a reference image. The method to use. In order to generate a reference image of virtual pixel precision, a reference image of integer pixel precision must be enlarged (high definition) to generate a sample plan composed of virtual pixels. Therefore, in the case of the interpolation imaging reference image in which the virtual pixel accuracy is required, the interpolation imaging generating unit 43 of the third embodiment can use "w. τ Freeman, E. c. Paszt〇r and 0. T. Carmichael, "Learning Low-Level Vision,> , 1Ιη1:2ΓηΤΐ1〇ηΗΐ J〇Urnal 〇fComputer Vision, vol. 40, no. Test the super resolution technology to generate virtual pixel precision The reference image data stored in the reference, "Super Resolution", the reference image is entered into the image 2G7, and the motion detecting unit 42 is described using the configuration of the motion vector search process. The motion compensation pre-9 is the third embodiment of the present invention. Like the editing device diagram. Figure 16 (4) inside == raw f part = internal composition block = 2 ° 5, the system will move compensation prediction map = two = 15 to enlarge the processing; book him, with reference in 14 Like the reduction processing; the high _=^ part (10), the reference to the image 15 part 200 extracted high frequency 1 ^ 1, is the small end of the image processing fiber, the reference book like the ^ knife in the feature quantity; high frequency The feature output unit computing unit 202 calculates the feature quantity of the frequency range component; The correlation value between the quantities; the high-frequency component estimation unit of the same-frequency component pattern memory 204, 321005 50 201143459 to estimate the high-frequency component; and the addition unit to use the estimated high-frequency component correction to magnify the image The high score is generated, and the virtual pixel is generated as 偾9Π7. In Fig. 16, the reference image 15 for the range of the mobile search processing is input from the reference = image data stored in the motion compensation prediction map pivot memory 14. The interpolation image generation unit 43 is omitted, and the reference image 15 is divided into ^

晝像縮小處理部200、高頻特徵抽出部2〇1 J ftir 久畫像放大處 晝像縮小處理部200,係由參考晝像丨5生 ^ 9 , ^ 〇λα 赤‘ 王&、祕橫 1/Ν(Ν 為2、4等,2的乘方值)大小的縮小晝像,並輪 特徵抽出部201a。此縮小處理係藉由一般的蚩 引间頻 A A像縮小滤波 高頻特徵抽出部201a,係由晝像縮小處理部2卯 成之縮小畫像,抽出關於邊緣(edge)成分等的古步、 U特徵量。作為U特徵量,可利用例如表^ 内之DCT或Wavelet變換係數分布之參數等。 高頻特徵抽出部201b,係進行與高頻特徵 同樣的高頻特徵抽出,由參考畫像丨5抽出高 ° a 第丄特徵量不同之第2特徵量。,2特徵量係^::= 计算部202,並且也輸出到高頻成分推定部2〇3。 相關計算部202 ,於由高頻特徵抽出部2〇h ★ 特徵量,從高頻特徵抽出部2〇ib輸入第2特徵旦日3入第1 算參考畫像15與其縮小晝像之間的局部區塊單^寺即°十 徵量基礎下之高頻成分範圍的相關值。就此相關值=特 323005 51 201143459 ‘有例如第1特徵量與第2特徵量之間的距離。 高頻成分推定部2〇3,係根據由高頻特徵抽出部 輸入的第2特徵量、及由相關計算部2〇2輸入的相關值, 而由高頻成分樣式記憶體204指定高頻成分的事前學 式,推定並生成虛擬像素精度的參考晝像207應具備的: 頻成分。所生成的高頻成分係輸出給加法部裏。 ^ 圖像放大處理部2G5 ’係對所輸人的參考晝像15,血 依據MPEG-4 AVC規格之半像素精度樣本的生成處理隻 地’施予藉由在垂直方向上或在水平方向上使用6個分接 頭的濾、波器之内插運算或雙線性m等的放大遽波# 理’而生成將參考晝像15放大成縱橫N倍大小之放大畫像= 加法部206,係在由晝像放大處理部2〇5輸入的放大 晝像加入由高頻成分推定部2G3輸入的高頻成分,亦即, 修正放大晝像的高頻成分,而生成被放大成縱橫N倍大小 的放大參考晝像。内插晝像生成部43則是將此玫大參考查 像資料作為將1/N設為1之虛擬像素精度的參考晝像如旦 來使用。 此外,内插畫像生成部43也可構成為:設N=2來生 半像素(1/2像素)精度的參考晝像2Q7後,藉由使用 接的1/2像素或整數像素的平均值濾波器之内插運算,來 生成1/4畫像精度的虛擬樣本(像素)。 另外,内插晝像生成部43也可構成為:除了第16 所不的構成外,在是否在晝像放大處理部2〇5所輪出的 大晝像加入高頻成分推定部203所輸出的高頻成分間進行 323005 52 201143459 在此構成的情況中2 =參考晝像207的生成結果。 致高頻成分推定部有"旦像樣式特異等某種原因而導 效率的不良影響之效果的推定精度變差時,抑制對該編碼 在選擇性決定 =定請輸出的高頻成分的情況==高頻成分 =入的情況等兩種情形的 的情況 ㈣測,且將其結果進行 二像45以進行移動補 顯示是否已加上之加法處理的資訊疋效率較佳者。然後, 工化為位元流30。 貝δ,係作為控制資訊而多 或者,内插晝像生成部⑸也 的其他參數無歧義地決定,以控制^/為位元流30 理。就由其他參數決定的例子而〜m6的加法處 圖或第2B圖所示之編竭模式7的。類用例如第以 區塊内的移動補償區域區塊㈣报在^擇了表示巨集 時,屬於劇烈移動模樣的概率會較Γ、=碼模式的情況 成部43會視為超解像的效果很差,内插晝像生 入高頻成分推定部2〇3所輸出的高^^法部2〇6加 制。另-方面,在選擇了表示巨集:、方式進行控 :的大小报大之編碼模式或區塊心較大的1多:補償區域 式的情況時,屬於比較靜止的晝 的思面内預測模 而,内插畫像生成部43會視為像會較心因 由在加法部鳩加入高頻成分推定^^好’而以經 分的方式進行控制。 斤輸出的高頻成 323005 53 201143459 除了利用編碼模式7 u 顧及到移動向量大小、巧邊乍5他的參數以外,也可利用 數。藉由移動動向㈣的偏差之參 制資訊多卫化為位使不直接將加法處理的控 机30 ’俾能提高壓縮效率。 义二外*可以在儲存於移動補償 預測圖框記憶體14之 二理,使移動娜酬圖框記憶越 207之後,再予储存^形成為虛擬像素精度的參考畫像 測圖框記憶體Η所必要的此^的;^下’就移動補償預 向量搜尋及生成預測書像;:::會二加,但正在移動 :,降低移動補償二不:== ===度的參考畫…生成: 儍207*援用第3圖來顯示使用虛擬像素精度的參考金 =量=度;移動向_ *像==?係用以生成與移動補償區域區境 旦像41處在預疋移動搜尋範圍内之 息 像向量44相對應之預測晝像45。以度的移動 之預測晝像45(預測晝像17)係輸 4度所生成 減法部12由移動補償區域區塊書像叫咸法/ 12 ’並藉由 »減去而成為預測差分訊號13。編碼 區塊晝像 分訊號13及整數像素精度的移動向量《 ° 3㈢對預測差 4(預測參數18)進 323005 54 201143459 行預測效率的評估。此預測效率之評估只要藉由上述第】 實施形態已說明過的上式⑴進行即可,其說明省略。 (移動向量檢測順序11,) 内插晝像生成部43 ’對於依上述「移動向量檢 I」決定之位於紐像素精度的移動向量的周圍之Μ 精度的移動向量44,使用第16圖所示的内插晝像生成部 43内。卩所生成之虛擬像素精度的參考晝像挪來生成預測 晝像45。以T ’與上述「移動向量檢測順彳!」同樣地, 其以1/2像素精度生成之預測晝像45(預測畫像⑺,係萨 由減法部12由移動補償區域區塊畫像41(巨集/子區塊^ 像5)減去,而獲得預測差分訊號13。接著,編碼控制部1 分:號13及1/2像素精度的移動向量“(預測 移動収率的評估,並從位於整數像素精度的 2向I周圍之1個以上的1/2像素精度的移動向量中, 2使預測成本L成為最小之1/2像素精度的移動向量 (移動向量檢測顧序111,) 、扁碼控制部3及移動補償預測部9,對1/4像素精产 的移動向量,也间掸从丄 a彳冢京精度 決定之位於i/2俊上 述「移動向量檢測順序Π」 决疋之位於1/2像素精度的移動向 :=的移動向量,,決定使預測成本j,成為= 像素精度的移動向|44。 ^ J ^174 (移動向量檢測順序IV,) 以下同樣地’編碼控制部3及移動補償預剩部9進行 323005 55 201143459 檢測虛擬像素精度的移動向量, 如此,移動補償預測部9到成為預定精度為止。 内分割成為編碼模式7所示之^將巨集/子區塊晝像5 移動補償區域H塊晝像4卜輪^償單位的區塊單位之 虛擬像素精度的義向量㈣^個所決定之預定精度的 識別號竭,作為預測參數18。另^量所指之參考畫像的 藉由該預測參數18所生成之預’移動補償預測部9將 出到減法部12,藉由減法部:卜象45(預測晝像⑺輸 去’獲得預測差分訊號13。m巨集/子區塊晝像5減 訊號13係輸出到變換-量化部19'部12輸出之預測差分 與在上述第丨實施形態已說下的處理動作,由於 其次,說明本第3實施形“理相同,其說明省略。 本第3實施形態之動晝像^晝像解碼裝置。 述第1、第2實施形態的移動補俨箱:置的構成’除了在上 度的預測晝像生成處理之構成'畴70之虛擬像素精 上述第卜第2實施形態的動晝=上^以外’由於與 第1圖至第16圖。 像解碼裝置相同,故援用 根據=卜第2實施形態中,係在移動補償預測部70 據+像素或1/4像料的额像錄度的參考晝 預剛晝像時,如同MPEG_4AVC規格那樣,藉由在垂直方= ^或在水平方向上使用6個整數像素之6個分接頭的二 器之内插運算等製作出虛擬像素,來生成預測晝像,= 於此,本第3實施形態的移動補償預測部7〇,則是藉由士 解像處理來放大移動補償預測圖框記憶體75所儲‘之: 323005 56 201143459 數像素精度的參考晝像76,以斗> a 像。 生成虛擬像素精度的參考晝 本第3實施形態的移動補 笛?管竑拟能^ 飛彳員預測部70,係與上述第b 第2實施形態冋樣地,根據 含的移動向量、及各㈣向佳制參數63所包 (參考晝像指標)等,由移動補償9 '考晝像的識別號竭 7R ^ - 该預測圖框記憶體75所儲存 的參考晝像76’生成預測晝像72並予以輸出。存 加法部73,係將由移動補彳告 像72,加在由反量化-反變^貝/員測部70輸入之預測晝 化反變換部66輪入之預測差分吼 碼值67中,而生成解碼晝像74。 制差刀㈣解 藉㈣_償預測部7Q生成預測畫像巧的方 ^,係在編碼裝置侧之移動補償預測部9的動作中,將由 複數個參考畫像搜尋移動向量 由 的移動檢測部42及内:書=理4相當於第3圖所示 並依據由可變長度解科61 ㈣作)予以除外’ 只進行生成預測畫像72=理所^㈣佳賴參數63, 係^處、’在/虛擬像素精度生成預測晝像Μ的情況時, '、·移動補俏預測圖樞記憶體75上以n 碼(參考晝像檢索)指定的參考金傻=考晝像的識別號 70進行與第16圖所Γ的處在移動補償預測部 又的多考旦像’並使用解碼過的移動 、 像72。這時,在編碼f置側 ’則晝 :广頻成分推定物輸出的高頻成分加入放大1: 的情況時,在解碼h側,會選擇是否從位元流⑽抽^ 57 323〇〇5 201143459 示有無加法處理的栓制 定,以控制移動補償 次由其他的參數無歧義地決 的參數決定的情况時,可内部的加法處理。在由其他 用顧及編碼模式7、移動向的編碼I置側同樣地,利 量場的偏差等,且可籍的大*、周邊區域的移動向 有並判斷參數的種_ 1耗預測部70與蝙碼裝置共 處理的控制資訊多上 、爲碼裝置侧亦可不直接將加法 此『移動補償預測成缩效率。 旦像之處理,也可僅^1_士 擬像素精度的參考 18a(亦即’解碼置側輸出的最佳預測參數 向量指示虛擬像素精度的情^^參數^所包含的移動 中,移動補償預測部9會 也/在此構成的情况 移動補償預測圖框記憶體14的:;= 生成部43生成虛擬像素精度的4:=:内播晝像 參考畫像15或虛擬像素^象207來使用,而由 17。 ’、精又的參考晝像207生成預測晝像 體為:對儲存在移_償_圖框· 體75之刖的參考晝像實施第16 己隐 大處理及高頻成分修正過之虛擬像素精戶^ ^敌 時,應準備作為移動補償預測圖式。=:情况 之次數較多的情況時,不必擬樣本位置的像素 故可削減運算量。另冰 &第16圖所不的處理, 异里另外’移動向量所指的位移範圍已在解 58 323〇〇5 201143459 碼裝置侧為預4nAA , „ ^ 範圍進粁筮!的話’也此以移動補償預測部70僅限於該 第16圖所示的處理的方式構&。& θ 位移範圍,只要霉成移動向置所指的 工化為例如為Γ 移指的位移範圍之值域多 裝置側及解流6〇並予以傳輸,或在運用上,在編石馬 知的話即可。卿互取決設疋,㈣解碼裝置側為已 因其’依據本第3實施形態的動晝像編碼敦置, 4二大處二移動補償預測部… 15並且修正龙_償預測圖框記憶體Μ中的參考晝像 心並切換=成分,而生成虛擬像素精度的參考晝像 :素_參::===成虛擬 之輸入影像訊號丨進行高壓^料的⑤頻成分 局頻成分的參考晝:士Ί,仍可由含有多數 畫像17,而有‘地壓縮二。移動補償預測生成之預測 另外,楚〇 3實施形態的動書像飨Mi ^移動補償預測部7〇具有絲佥^置’也因其構成 序生成虛擬像素精产 ;、動4編碼裝置同樣的順 ,是否因應多工化;位元流;生成部,且切 偵預測圖框記憶體75的參考書像76二里而使用移動補 度的參考晝像來❹,以生成預㈣成虛擬像素精 將經由第3實施形態的動晝像編解的方式’所以可 地解碼。 我置編碼的位元流正確 323005 59 201143459 • 此外,上述第3實施形態的内插晝像生成部43中, 構成為:藉由以上述W. T. Freeman et ai(2〇〇〇)中揭, 的技術為基礎之超解像處理,來生成虛擬像素精度的夂不 晝像207,不過,超解像處理本身並不侷限於該二術> 考 =其他任意的超解像技術,來生成虛擬像;精度的: 考晝像207的方式。 J苓 另外,在以電腦構成上述第43實施形態 編碼裝置的情況中,也可形成為將 ’ 像 編碼控制部3、切換部6、晝面内預 ^ =移動補償預測圖框記憶體1、=^^ 里化-反變換部22、可變長度編碼部 、反 晝面内預測用記憶體28的處理内容叙:路濾波部27、 存在電腦的記憶體,㈣電腦的c &像編碼程式儲 動晝像編碼程式。 執行記憶體所儲存的 同樣地,在以電腦構成上述第 像解碼裝置的情況中,也可形成至3實施形態的動晝 碼部6卜反量化-反變換部66、己述有:可變長度解 69、移動補償預測部70、移動預:68、畫面内預測部 面内預測用記憶體77、迴路,虫Λ _記憶體75、畫 像解媽種式储存在電腦的記部78的處理内容之動畫 憶體所儲存的動晝像編碼程^ ’而由電腦的⑽執行記 [產業上的可利用性] 本發明的動晝像編碼裝置 獲得一種不依賴晝像内容,即^像解竭裝置,由於可 隹預先設定的巨集區塊大 323005 60 201143459 小中,仍可抑制編碼模式等有關總負荷的碼量,而可有效 率地壓縮編碼之動晝像編碼裝置及動畫像解碼裝置,故適 用於將動晝像分割成預定區域而以區域單位進行編碼之動 晝像編碼裝置、及以預定區域單位將已編碼的動晝像進行 解碼之動晝像解碼裝置。 【圖式簡單說明】 第1圖為顯示本發明第1實施形態之動晝像編碼裝置 的構成之方塊圖。 第2A圖為顯示進行在時間方向上的預測編碼之圖像 的另一編碼模式例之圖。 第2B圖為顯示進行在時間方向上的預測編碼之圖像 的另一編碼模式例之圖。 第3圖為顯示第1實施形態之動晝像編碼裝置的移動 補償預測部的内部構成之方塊圖。 第4圖為因應編碼模式之移動向量的預測值的決定方 法之說明圖。 第5圖為顯示因應編碼模式之變換區塊大小的適應化 的一例圖。 第6圖為顯示因應編碼模式之變換區塊大小的適應化 的另例圖。 第7圖為顯示第1實施形態之動晝像編碼裝置的變換-量化部的内部構成之方塊圖。 第8圖為表示本發明第1實施形態之動畫像解碼裝置 的構成之方塊圖。 61 323005 201143459 第9圖為顯示本發明第2實施形態之動晝像編碼裝置 的可變長度編碼部的内部構成之方塊圖。 第10圖為顯示二值化表的一例圖,其為顯示更新前的 狀態。 第11圖為顯示概率表的一例圖。 第12圖為顯示狀態遷移表的一例圖。 第13圖為說明上下文識別資訊的生成順序之圖;第 13(a)圖為以二分樹表現表示二值化表之圖,第13(b)圖為 顯示編碼對象巨集區塊與周邊區塊的位置關係之圖。 第14圖為顯示二值化表的一例圖,其為顯示更新後的 狀態。 第15圖為顯示本發明第2實施形態之動晝像解碼裝置 的可變長度解碼部的内部構成之方塊圖。 第16圖為顯示本發明第3實施形態之動晝像編碼裝置 的移動補償預測部的内部構成之方塊圖。 【主要元件符號說明】 1 輸入影像訊號 2 區塊分割部 3 編碼控制部 4 巨集區塊大小 5 巨集/子區塊晝像 6 切換部 7 編碼模式 7a 最佳編碼模式 8 晝面内預測部 9 移動補償預測部 10 預測參數 10a 最佳預測參數 11 預測晝像 12 減法部 13 預測差分訊號 13a 最佳預測差分訊號 62 323005 201143459 14 15 18 19 20a 22 24 26 28 30 40 41 43 45 50 52 54 61 63 65 67 69 71 移動補償預測圖框記憶體 73 參考晝像 預測參數 變換-量化部 最佳壓縮參數 反量化-反變換部 局部解碼預測差分訊號 局部解碼晝像訊號 晝面内預測用記憶體 位元流 移動補償區域分割部 移動補償區域區塊晝像 内插晝像生成部 預測晝像 變換區塊大小分割部 變換部 量化部 可變長度解碼部 最佳預測參數 最佳壓縮參數 預測差分訊號解碼值 晝面内預測部 預測晝像 加法部 17 預測晝像 18a 最佳預測參數 20 壓縮參數 21 壓縮資料 23 可變長度編碼部 25 加法部 27 迴路濾波部 29 局部解碼晝像 42 移動檢測部 44 移動向量 51 變換對象區塊 53 變換係數 60 位元流 62 最佳編碼模式 64 壓縮資料 66 反量化-反變換部 68 切換部 70 移動補償預測部 72 預測晝像 74、74a解碼晝像 63 323005 201143459 75 移動補償預測圖框記憶體 76 參考晝像 77 晝面内預測用記憶體 78 迴路濾波部 79 再生畫像 90 初始化部 91 上下文資訊初始化旗標 92 二值化部 93 頻度資訊生成部 94 頻度資訊 95 二值化表更新部 96 上下文資訊記憶體 97 概率表記憶體 98 狀態遷移表記憶體 99 上下文生成部 100 類別訊號 101 周邊區塊資訊 102 上下文識別資訊 103 二值訊號 104 算術編碼處理運算部 105 二值化表記憶體 106 上下文資訊 107 概率表5虎碼 108 MPS發生概率 109 符號值 110 概率表號碼 111 編碼位元列 112 二值化表更新識別資訊 113 二值化表更新旗標 120 初始化部 121 上下文初始化資訊 122 上下文生成部 123 類別訊號 124 周邊區塊資訊 126 上下文識別資訊 127 算術解碼處理運算部 128 上下文資訊記憶體 129 上下文資訊 130 概率表號碼 131 概率表記憶體 132 MPS發生概率 133 編碼位元列 134 符號值 135 狀態遷移表記憶體 136 概率表號碼 64 323005 201143459 137 二值訊號 138 反二值化部 139 二值化表 140 解碼值 141 二值化表更新部 142 二值化表更新旗標 143 二值化表記憶體 144 二值化表更新識別資訊 200 晝像縮小處理部 201a、 201b高頻特徵抽出部 202 相關計鼻部 203 高頻成分推定部 204 南頻成分樣式記憶體 205 晝像放大處理部 206 加法部 207 虛擬像素精度的參考晝像 65 323005The image reduction processing unit 200 and the high-frequency feature extraction unit 2〇1 J ftir The long image enlargement image reduction processing unit 200 is generated by the reference image 丨5, ^ 〇λα 赤'王 & 1/Ν (Ν is 2, 4, etc., the power of 2) is reduced in size, and the wheel feature extraction unit 201a. This reduction processing is performed by the general-purpose inter-frequency AA image reduction filter high-frequency feature extracting unit 201a, and the reduced image is formed by the image reduction processing unit 2, and the ancient step, the edge component, and the like are extracted. Feature amount. As the U feature amount, for example, parameters such as DCT or Wavelet transform coefficient distribution in the table can be used. The high-frequency feature extracting unit 201b extracts the high-frequency features similar to the high-frequency characteristics, and extracts the second feature amount having a different feature value from the reference image 丨5. The 2 feature quantity system ^::= The calculation unit 202 is also output to the high-frequency component estimating unit 2〇3. The correlation calculation unit 202 inputs the second feature day 3 from the high-frequency feature extraction unit 2〇ib into the local portion between the first calculation reference image 15 and the reduced image image by the high-frequency feature extraction unit 2〇h ★ feature amount. The block single ^ temple is the correlation value of the high frequency component range under the ten levy. In this case, the correlation value = special 323005 51 201143459 ‘The distance between the first feature quantity and the second feature quantity is, for example. The high-frequency component estimating unit 2〇3 specifies the high-frequency component from the high-frequency component pattern memory 204 based on the second feature amount input by the high-frequency feature extracting unit and the correlation value input by the correlation calculating unit 2〇2. The pre-existing equation, the reference artifact 207 that estimates and generates the virtual pixel precision should have: Frequency component. The generated high frequency component is output to the addition unit. ^ The image enlargement processing unit 2G5' is for the reference image 15 of the input person, and the blood is generated by the half pixel precision sample generation processing according to the MPEG-4 AVC standard only by being applied in the vertical direction or in the horizontal direction. An enlarged image in which the reference image 15 is enlarged to N times the vertical and horizontal directions by using an interpolation filter of six taps, an interpolation operation of a bilinear m, or the like, and an addition unit 206 is generated. The high-frequency component input by the high-frequency component estimating unit 2G3, that is, the high-frequency component of the magnified image is corrected, and the magnified image is magnified to N times the vertical and horizontal directions by the high-frequency component input from the high-frequency component estimating unit 2G3. Zoom in on the reference image. The interpolation key generation unit 43 uses the same reference image as the reference image of the virtual pixel precision with 1/N set to 1. Further, the internal illustrator image generating unit 43 may be configured to set an average value of 1/2 pixels or integer pixels by using N=2 to generate a half-pixel (1/2 pixel) precision reference image 2Q7. The interpolation operation of the filter is to generate a virtual sample (pixel) of 1/4 image accuracy. In addition, the interpolation imaging generating unit 43 may be configured to add the high-frequency component estimating unit 203 to the large image that is rotated by the imaging magnification processing unit 2〇5 in addition to the 16th configuration. The high-frequency component is carried out 323005 52 201143459 In the case of this configuration 2 = the result of the reference image 207 is generated. When the estimation accuracy of the effect of the adverse effect of the efficiency is deteriorated due to some reason such as the pattern specificity, the high-frequency component estimation unit suppresses the high-frequency component of the code that is selectively determined to be output. == High-frequency component = In-case case, etc. (4) Measurement, and the result of the two-image 45 is used to perform the motion-compensated display. Then, it is converted into a bit stream 30. The δ is mostly used as control information. Alternatively, the other parameters of the interpolation artifact generating unit (5) are determined unambiguously, and the control is performed as a bit stream. In the example determined by other parameters, the addition map of ~m6 or the compilation mode 7 shown in Fig. 2B. The class uses, for example, the motion compensation region block in the block (4) to report that when the macro is selected, the probability of belonging to the violent moving pattern is higher than that of the 码, = code mode, and the portion 43 is regarded as the super-resolution. The effect is very poor, and the interpolation image is added to the high-modulus portion 2〇6 outputted by the high-frequency component estimating unit 2〇3. On the other hand, when selecting a coding mode that indicates a macro:, a mode control, or a larger block size or a larger block heart: a compensation area type, it is a relatively static 昼 in-plane prediction. In the meantime, the internal illustrator image generating unit 43 considers that the image is controlled in a divided manner by adding a high frequency component estimation to the addition unit 较. The high frequency of the output of the jin is 323005 53 201143459 In addition to the use of the coding mode 7 u to take into account the size of the mobile vector, the parameters of the edge 乍 5, can also use the number. By adjusting the information of the deviation of the movement direction (4), the controller 30 ’ that does not directly process the addition can improve the compression efficiency. Yiyiwai* can be stored in the mobile compensation prediction frame memory 14, so that the mobile reward frame is more than 207, and then stored to form a virtual image precision reference image frame memory. The necessary ^ ^ ^ ^ on the mobile compensation pre-vector search and generate prediction book image;::: will add two, but moving:, reduce the motion compensation two not: == === degree of reference painting... generation : Silly 207* uses Figure 3 to show the reference gold = quantity = degree using virtual pixel precision; moving to _ * like ==? is used to generate and move the compensation area area image 41 in the preview mobile search range The inner image is compared to the predicted image 45 corresponding to the vector 44. The subtraction unit 12 generated by the prediction of the degree of motion 45 (predicted image 17) is transmitted by 4 degrees. The motion compensation area block image is called the salt method / 12 ' and is subtracted by » to become the prediction difference signal 13 . Code block image Signal 13 and integer pixel accuracy of the motion vector "° 3 (3) for the prediction difference 4 (predicted parameter 18) into 323005 54 201143459 line prediction efficiency evaluation. The evaluation of the prediction efficiency may be performed by the above formula (1) described in the above embodiment, and the description thereof is omitted. (Moving Vector Detection Sequence 11) The interpolation artifact generation unit 43' uses the motion vector 44 of the 精度 precision around the motion vector determined by the above-mentioned "moving vector detection I", as shown in Fig. 16 The interpolation artifact generation unit 43 is inside. The reference image of the virtual pixel precision generated by 卩 is generated to generate a prediction image 45. Similarly to the above-mentioned "moving vector detection 彳!", T' is a prediction image 45 generated by 1/2 pixel precision (predictive image (7), and the subtraction portion 12 is moved by the motion compensation region block image 41 (giant The set/subblock ^ is subtracted from 5) to obtain the predicted differential signal 13. Next, the encoding control section 1 points: No. 13 and 1/2 pixel precision of the motion vector "(predicted mobile yield estimation, and is located from In the motion vector of one or more 1/2 pixel precision around the two-direction I of the integer pixel precision, 2 the motion vector (moving vector detection order 111,) which makes the prediction cost L the smallest 1/2 pixel precision, The code control unit 3 and the motion compensation prediction unit 9 determine the motion vector of the 1/4 pixel precision, which is determined by the accuracy of 丄a彳冢京. The movement vector of the movement direction of 1/2 pixel precision: =, determines the prediction cost j, becomes the movement of the pixel precision to |44. ^ J ^ 174 (moving vector detection sequence IV,) Part 3 and the motion compensation pre-remaining unit 9 perform 323005 55 201143459 Detecting virtual pixels The motion vector of the degree is such that the motion compensation prediction unit 9 reaches a predetermined accuracy. The inner division becomes the coding mode 7 and the macro/sub-block image 5 is moved to the compensation region H block image 4 The meaning vector of the virtual pixel precision of the block unit of the unit (4) is determined by the predetermined precision, as the prediction parameter 18. The pre-'move generated by the prediction parameter 18 of the reference image indicated by the other quantity The compensation prediction unit 9 outputs the subtraction unit 12 to the subtraction unit: the image 45 (predicted image (7) is input to obtain the prediction difference signal 13. m macro/subblock image 5 subtraction 13 is output to the transform - The prediction difference outputted by the quantization unit 19' 12 is the same as the processing operation described in the above-described embodiment, and the third embodiment will be described in the same manner, and the description thereof will be omitted. The third embodiment is not limited. The image processing device according to the first and second embodiments, the configuration of the above-described second embodiment of the virtual pixel in the configuration of the domain of the upper-level prediction image generation process. The shape of the movement = the upper ^ other than due to the first figure It is the same as that of the decoding device. Therefore, in the second embodiment, the motion compensation prediction unit 70 is based on the reference image of the image of the + pixel or the 1/4 image. As in the MPEG_4AVC specification, a virtual pixel is created by interpolating two devices of six taps of six integer pixels in the vertical direction = ^ or in the horizontal direction to generate a prediction image, = The motion compensation prediction unit 7A of the third embodiment enlarges the reference image 76 of the pixel compensation precision by the motion compensation prediction frame memory 75 by the lexicographic process. Bucket > a like. Reference for generating virtual pixel accuracy 移动 The mobile fillet of the third embodiment? The 彳 竑 预测 预测 预测 预测 预测 预测 预测 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 彳 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测 预测The motion compensation 9' test image identification number 7R ^ - the reference image memory 76 stored in the prediction frame memory 75 generates a prediction image 72 and outputs it. The accumulating unit 73 adds the moving supplement image 72 to the predicted difference weight value 67 in which the inverse quantization/inversion unit 66 input by the inverse quantization-reverse conversion unit 70 is inserted. A decoded image 74 is generated. In the operation of the motion compensation prediction unit 9 on the side of the encoding device, the motion detecting unit 42 that searches for the motion vector by the plurality of reference pictures is used. Inside: Book = Theory 4 is equivalent to the one shown in Figure 3 and is excluded according to the variable length solution 61 (4). Only the generated prediction image 72 = rationality (4) Jialai parameter 63, system ^, 'in When the virtual pixel precision is used to generate the predicted image ,, the reference number 70 of the reference gold silo=reference image specified by the n code (refer to the image search) is performed on the mobile patch prediction matrix 75. The image of Figure 16 is in the motion compensation prediction unit and the multi-docal image is used and the decoded motion, image 72 is used. In this case, when the code f is set to the side of the code: when the high-frequency component of the wide-frequency component estimation output is added to the amplification 1:, on the decoding h side, it is selected whether or not to extract from the bit stream (10). 57 323 〇〇 5 201143459 If the plug is set to have no addition processing, and the motion compensation time is determined by the parameters of other parameters unambiguously determined, the internal addition processing can be performed. In the same manner as the coding I of the moving direction, the variation of the profit field, and the like, and the movement of the surrounding area and the movement of the surrounding area are determined by the type _1 consumption prediction unit 70 The control information co-processed with the bat code device is more, and the code device side may not directly add the "movement compensation prediction to the reduction efficiency." Once the image is processed, it can also be only the reference 18a of the pseudo-pixel precision (ie, the best prediction parameter vector of the decoded side output indicates the virtual pixel precision). The prediction unit 9 also moves/compensates the prediction frame memory 14 in the case of the configuration: the generation unit 43 generates the virtual pixel precision 4:=: the intra-framed image reference image 15 or the virtual pixel image 207 to use. And the 17', the refined reference image 207 generates the predicted image body as: the 16th hidden large processing and the high frequency component are performed on the reference image stored in the shift_frame_body 75 Corrected virtual pixel master ^ ^ enemy time, should be prepared as a motion compensation prediction pattern. =: When the number of times is large, the pixel does not need to be sampled, so the amount of calculation can be reduced. Another ice & 16th In the case of the processing of the figure, the displacement range indicated by the other 'moving vector' has been pre-set 4nAA on the side of the solution 58 323 〇〇 5 201143459 code device, „ ^ 范围 粁筮 粁筮 的 也 也 也 移动 移动 移动 移动 移动It is limited to the manner of processing shown in Fig. 16 && θ displacement range, as long as the mold into the moving direction refers to the industrialization of the shift range of the shift finger, for example, the multi-device side and the de-flow 6〇 and transmitted, or in the application, in the Ma Shima If you want to do it, then, the decoding device will be set up. (4) The side of the decoding device is already based on the dynamic image encoding of the third embodiment, and the second two mobile compensation prediction unit... The reference image in the memory cell 并 and switch = component, and generate a reference pixel of virtual pixel precision: prime _ reference::=== virtual input image signal 丨 high frequency material 5 frequency component The reference of the frequency component: gentry, still can contain the majority of the image 17, and there is a 'ground compression two. The prediction of the motion compensation prediction generation. In addition, the motion picture of the implementation form of the Chu 〇 3 ^ mobile compensation prediction unit 7 〇 has丝佥^置' also generates virtual pixel production due to its composition order; the same smoothness of the moving 4 encoding device, whether it is multiplexed; bit stream; generating part, and cutting the reference book of predictive frame memory 75 Like the 76 second, use the reference image of the mobile complement to The pre- (4) virtual pixel is decoded by the method of the moving image of the third embodiment. Therefore, the bit stream of the encoding is correct. 323005 59 201143459 • In addition, the interpolation of the third embodiment described above The image generation unit 43 is configured to generate a virtual pixel precision image 207 by super-resolution processing based on the technique disclosed in WT Freeman et ai (2〇〇〇), however, The super-resolution processing itself is not limited to the second technique> test = other arbitrary super-resolution technology to generate a virtual image; the precision: the method of 207. Further, in the case where the coding apparatus of the 43rd embodiment is configured by a computer, the image coding control unit 3, the switching unit 6, and the motion compensation prediction frame memory 1 may be formed. =^^ The processing contents of the internal-inverse conversion unit 22, the variable-length coding unit, and the inverse-plane prediction memory 28 are as follows: the path filter unit 27, the memory of the computer, and (4) the c & image coding of the computer The program stores the image encoding program. Similarly to the storage of the memory, when the image decoding apparatus is configured by a computer, the dynamic code unit 6 of the third embodiment can be formed, and the inverse quantization-inverse conversion unit 66 can be changed. The length solution 69, the motion compensation prediction unit 70, the movement pre: 68, the intra prediction unit in-plane prediction memory 77, the circuit, the insect _ memory 75, and the image interpretation type are stored in the processing unit 78 of the computer. The moving image encoding program stored in the animation memory of the content is executed by the computer (10). [Industrial Applicability] The moving image encoding device of the present invention obtains an image-independent image, that is, the image is solved. Exhaustion device, because it can pre-set the macro block block large size 323005 60 201143459, it can still suppress the code amount of the total load such as the coding mode, and can efficiently compress the coded moving image coding device and the motion picture decoding. The device is suitable for a moving image encoding device that divides a moving image into a predetermined region and encodes the image in a region, and a moving image decoding device that decodes the encoded moving image in a predetermined area unit. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a block diagram showing the configuration of a moving image encoding apparatus according to a first embodiment of the present invention. Fig. 2A is a diagram showing another example of an encoding mode for performing an image of predictive coding in the time direction. Fig. 2B is a diagram showing another example of an encoding mode for performing an image of predictive coding in the time direction. Fig. 3 is a block diagram showing the internal configuration of a motion compensation prediction unit of the moving image coding apparatus according to the first embodiment. Fig. 4 is an explanatory diagram of a method of determining a predicted value of a motion vector in response to an encoding mode. Fig. 5 is a diagram showing an example of adaptation of the transform block size in response to the coding mode. Figure 6 is a diagram showing another example of adaptation of the transform block size in response to the coding mode. Fig. 7 is a block diagram showing the internal configuration of a conversion-quantization unit of the moving image coding apparatus according to the first embodiment. Figure 8 is a block diagram showing the configuration of a moving picture decoding device according to the first embodiment of the present invention. 61 323005 201143459 FIG. 9 is a block diagram showing the internal configuration of a variable length coding unit of the moving image coding apparatus according to the second embodiment of the present invention. Fig. 10 is a view showing an example of a binarization table showing the state before the update. Figure 11 is a diagram showing an example of a probability table. Fig. 12 is a view showing an example of a state transition table. Figure 13 is a diagram illustrating the generation order of context identification information; Figure 13(a) is a diagram showing a binarization table in a binary tree representation, and Figure 13(b) is a diagram showing a coding object macroblock and a peripheral area. A diagram of the positional relationship of the blocks. Fig. 14 is a view showing an example of a binarization table showing the updated state. Figure 15 is a block diagram showing the internal configuration of a variable length decoding unit of the moving image decoding device according to the second embodiment of the present invention. Figure 16 is a block diagram showing the internal configuration of a motion compensation prediction unit of the moving image coding apparatus according to the third embodiment of the present invention. [Description of main component symbols] 1 Input video signal 2 Block division unit 3 Encoding control unit 4 Macro block size 5 Macro/sub-block image 6 Switching unit 7 Encoding mode 7a Best encoding mode 8 In-plane prediction Part 9 Motion compensation prediction unit 10 Prediction parameter 10a Optimal prediction parameter 11 Prediction image 12 Subtraction unit 13 Prediction difference signal 13a Best prediction difference signal 62 323005 201143459 14 15 18 19 20a 22 24 26 28 30 40 41 43 45 50 52 54 61 63 65 67 69 71 Motion compensation prediction frame memory 73 Reference image prediction parameter conversion-quantization unit optimal compression parameter inverse quantization-inverse transformation unit local decoding prediction difference signal local decoding image signal in-plane prediction memory Volume bit stream motion compensation region division unit motion compensation region block key image interpolation image generation unit prediction artifact transformation block size division unit conversion unit quantization unit variable length decoding unit optimal prediction parameter optimal compression parameter prediction difference signal Decoding value, in-plane prediction unit prediction image addition unit 17 prediction image 18a optimal prediction parameter 20 compression parameter 21 compression capital 23 variable length coding unit 25 addition unit 27 circuit filter unit 29 local decoding image 42 motion detection unit 44 motion vector 51 conversion target block 53 transform coefficient 60 bit stream 62 optimal coding mode 64 compressed data 66 inverse quantization-reverse Conversion unit 68 switching unit 70 Motion compensation prediction unit 72 Prediction image 74, 74a decoding image 63 323005 201143459 75 Motion compensation prediction frame memory 76 Reference image 77 In-plane prediction memory 78 Circuit filter unit 79 Reproduction image 90 initialization unit 91 context information initialization flag 92 binarization unit 93 frequency information generation unit 94 frequency information 95 binarization table update unit 96 context information memory 97 probability table memory 98 state transition table memory 99 context generation unit 100 Category signal 101 Peripheral block information 102 Context identification information 103 Binary signal 104 Arithmetic coding processing operation unit 105 Binarization table memory 106 Context information 107 Probability table 5 Tiger code 108 MPS occurrence probability 109 Symbol value 110 Probability table number 111 Encoding Bit column 112 Binarization table update identification information 113 binary Table update flag 120 initialization unit 121 context initialization information 122 context generation unit 123 category signal 124 peripheral block information 126 context identification information 127 arithmetic decoding processing operation unit 128 context information memory 129 context information 130 probability table number 131 probability table memory 132 MPS occurrence probability 133 coded bit column 134 symbol value 135 state transition table memory 136 probability table number 64 323005 201143459 137 binary signal 138 inverse binarization unit 139 binarization table 140 decoded value 141 binarization table update unit 142 Binarization table update flag 143 Binarization table memory 144 Binary table update identification information 200 Image reduction processing unit 201a, 201b High-frequency feature extraction unit 202 Related measurement unit 203 High-frequency component estimation unit 204 South Frequency component pattern memory 205 Image enlargement processing unit 206 Adding unit 207 Reference pixel of virtual pixel precision 65 323005

Claims (1)

201143459 七、申請專利範圍: 1. 一種動畫像編碼裝置,係具備: 編碼控制部,係由要成為對輸入畫像之移動補償預 測或圖框内預測之處理單位之複數個區塊分割類型 中,輸出指定有1個以上的區塊分割類型的編碼模式; 區塊分割部,係將由前述輸入晝像分割為預定大小 的複數個區塊之巨集區塊晝像,因應前述編碼模式而分 割成1個以上的區塊之區塊晝像並予以輸出; 晝面内預測部,係於輸入前述區塊晝像時,對該區 塊晝像,使用圖框内的畫像訊號進行圖框内預測,以生 成預測晝像; 移動補償預測部,係於輸入前述區塊晝像時,對該 區塊晝像,使用1個圖框以上的參考晝像進行移動補償 預測來生成預測晝像; 切換部,係因應前述區塊分割部輸出之區塊晝像的 編碼模式,將該區塊晝像輸出至前述晝面内預測部或前 述移動補償預測部的任一方; 減法部,係由前述區塊分割部所輸出之區塊晝像, 減去前述畫面内預測部或前述移動補償預測部的任一 方所輸出之預測晝像,而生成預測差分訊號; 變換-量化部,係對前述預測差分訊號進行變換和 量化處理而生成壓縮資料;及 可變長度編碼部,係將前述編碼模式和前述壓縮資 料進行熵編碼並多工化至位元流; 1 323005 201143459 而前述編碼控制部 據_而指定有預定:::=模式中’選擇根 為多值訊號予以輸出,束刀害域型之編竭模式,作 月j述可變長度編碼部係具有: 表:^模式之多值 :部所選和前—=,^編:: 算術編碼處理運算部、 ::值:號施以算術編碼而輪出 碼位7〇列予以多 ^位兀列,並將該編 二值化表Ϊ新:逑位元流;及 述編碼控制部所選擇碼模式各者藉由前 2號與二值c前述二值化表 2. 如申絢專利範圍 前述可變長度編d:畫像編瑪裝置,其+, r化表的更新時序:二值:二表新更:部係輪出表 資訊多工化至位元μ一值化表更新旗標之標頭 3. 如申請專利範圍第 前述可變長度㈣部係所述之動晝像編^置,其中, 前述二值化部使用指定有表示壓縮 b、-值§fl錢對應關係之二值化、、將之多值訊 化部用於變換和蕃各疮 將則述變換·量 化處理的以多值訊號表示之前述】 323005 2 201143459 縮參數變換為二值訊號, 前述二值化表更新部則根據前述壓縮參數各者的 藉由前述變換-量化部所使用的使用頻度,更新前述二 值化表之多值訊號與二值訊號的對應關係,並且輸出表 示更新時序之二值化表更新旗標, 前述算術編碼處理運算部將由前述二值化部變換 過的二值訊號施以鼻術編碼以輸出編碼位元列,並使該 編碼位元列與前述二值化表更新旗標一起多工化至位 元流。 4.如申請專利範圍第1項所述之動晝像編碼裝置,其中, 在可變長度編碼部中, 前述二值化部係使用指定有表示預測參數之多值 訊號與二值訊號的對應關係之二值化表,並將前述畫面 内預測部或前述移動補償預測部用於圖框内預測或移 動補償預測之以多值訊號表示之前述預測參數變換為 二值訊號, 前述二值化表更新部根據前述預測參數各者藉由 前述晝面内預測部或前述移動預測部所使用的使用頻 度,來更新前述二值化表之多值訊號與二值訊號的對應 關係,並且輸出表示更新時序之二值化表更新旗標, 前述算術編碼處理運算部將前述二值化部之變換 過的二值訊號施以算術編碼以輸出編碼位元列,使該編 碼位元列與前述二值化表更新旗標一起多工化至位元 流0 3 323005 201143459 5. 如申請專利範圍第3項所述之動晝像編碼裝置,其中, 前述可變長度編碼部之前述二值化表更新部在前述二 值化表具有複數個種類的情況時,輸出用來識別已更新 過的二值化表之二值化表更新識別資訊,且前述可變長 度編碼部係使前述二值化表更新識別資訊多工化至位 元流。 6. 如申請專利範圍第4項所述之動晝像編碼裝置,其中, 前述可變長度編碼部之前述二值化表更新部在前述二 值化表具有複數個種類的情況時,輸出用來識別已更新 過的二值化表之二值化表更新識別資訊,且前述可變長 度編碼部係使前述二值化表更新識別資訊多工化至位 元流。 7. —種動畫像解碼裝置,係具備: 可變長度解碼部,係將以分割畫像成預定大小的複 數個區塊之巨集區塊單位被壓縮編碼的位元流作為輸 入,由該位元流,以前述巨集區塊單位將編碼模式進行 熵解碼,並且以因應該經解碼的編碼模式而分割的區塊 單位,將預測參數、壓縮參數及壓縮資料進行熵解碼; 晝面内預測部,係於輸入前述預測參數時,則使用 該預測參數所包含的晝面内預測模式及圖框内的解碼 完成晝像訊號來生成預測晝像; 移動補償預測部,係於輸入前述預測參數時,則使 用該預測參數所包含的移動向量及以該預測參數所包 含的參考晝像指標所指定的參考晝像,來進行移動補償 4 323005 201143459 預測,而生成預測晝像; 切換部,係因應前述被解碼的編碼模式,將前述可 變長度解碼部解碼過的預測參數予以輸入至前述晝面 内預測部或前述移動補償預測部的任一方; 反量化-反變換部,係使用前述壓縮參數,對前述 壓縮資料進行反量化及反變換處理,生成解碼預測差分 訊號;及 加法部,係在前述解碼預測差分訊號,加上前述畫 面内預測部或前述移動補償預測部的任一方輸出的預 測晝像’而輸出解碼畫像訊號; 前述可變長度解碼部係具有: 算術解碼處理運算部,係將表示多工化為前述位元 流的前述編碼模式之編碼位元列予以算術解碼而生成 二值訊號;及 反二值化部,係使用指定有表示前述編碼模式之二 值訊號與多值訊號的對應關係之二值化表,將以前述算 術解碼處理運算部所生成的前述二值訊號來表示之前 述編碼模式變換為多值訊號。 8.如申請專利範圍第7項所述之動晝像解碼裝置,其中, 前述可變長度解碼部係: 前述算術解碼處理運算部將多工化為位元流之壓 縮參數的編碼位元列予以算術解碼而生成二值訊號, 前述反二值化部使用指定有表示前述壓縮參數之 二值訊號與多值訊號的對應關係之二值化表,將以前述 5 323005 201143459 鼻術解碼處理運算部所生成 前述壓縮參數予以變換為多值”之-值訊號來表示之 9.如申請專鄉_7項所述之動佥“ 前述可變長度解石馬部係: 里像解竭震置,其中, 則述算術解碼處理運算部將 測參數的編碼位元列予以算術解石馬而生成、位疋流之預 前述反二值化部使用指定有 成二值訊號, -值訊號與多值訊號的對應關係之二預謂參數之 算術解碼處理運算部所生成的前述二以前述 前述預測參數予以變換為多值訊號。 來表示之 10· ^申請專利範圍第7項所述之動晝像 :Γ變長度解物'具有二值化表更新部上據 標標頭資訊解碼之二值化表更新旗 11.如申明專利範圍第8項所述之動畫像解碼裝置,其中, 刚述可變長度解碼部之前述二值化表更新部在二值化 表具有複數個種類的情況時,根據由多工化為位元流的 標頭資訊所解碼之二值化表更新識別資訊,來更新複數 個前述二值化表中預定二值化表。 12·如申請專利範圍第9項所述之動晝像解碼裝置,其中, 前述可變長度解碼部之前述二值化表更新部在二值化 表具有複數個種類的情況時,根據由多工化為位元流的 標頭資訊所解碼之二值化表更新識別資訊’來更新複數 個前述二值化表中的預定二值化表。 323005 201143459 " 13.如申請專利範圍第10項所述之動晝像解碼裝置,其 中,前述可變長度解碼部之前述二值化表更新部在二值 化表具有複數個種類的情況時,根據由多工化為位元流 的標頭資訊所解碼之二值化表更新識別資訊,來更新複 數個前述二值化表中的預定二值化表。 7 323005201143459 VII. Patent application scope: 1. A motion picture coding device, comprising: an coding control unit, which is a plurality of block division types to be a processing unit for motion compensation prediction or intra-frame prediction of an input image, The output specifies an encoding mode having one or more block division types. The block division unit divides the input image into a macroblock image of a plurality of blocks of a predetermined size, and is divided into the above-described coding mode. The block image of one or more blocks is output and outputted; the in-plane prediction unit is used to input the image of the block, and the image of the block is used for intra-frame prediction using the image signal in the frame. The motion compensation prediction unit generates a prediction artifact by using the reference artifact of one frame or more to generate a prediction artifact when the block imaging is input; And outputting the block artifact to one of the in-plane prediction unit or the motion compensation prediction unit according to an encoding mode of the block image output by the block dividing unit. The subtraction unit generates a prediction difference signal by subtracting the prediction artifact outputted by one of the intra-frame prediction unit or the motion compensation prediction unit by the block image outputted by the block division unit, and generates a prediction difference signal; And converting and quantizing the predicted difference signal to generate compressed data; and the variable length coding unit entropy encoding and multiplexing the coding mode and the compressed data to the bit stream; 1 323005 201143459 The foregoing coding control unit specifies that there is a predetermined:::= mode in which the selection root is a multi-value signal for output, and the bundle cutter domain-type compilation mode, the variable length coding unit has the following: ^Multiple values of mode: Part selected and pre-=, ^Edited:: Arithmetic coding processing operation part, :: Value: No. Apply arithmetic coding and turn out code bit 7 column to give multiple positions, and The coded binarization table is new: the bit stream; and the code mode selected by the coding control unit is each of the foregoing binarization table by the first 2 and the second value c. Edit d: portrait editing device, its + , update table of r-table: two values: two tables new more: department round-out table information multiplexed to bit element μ value table update flag header 3. If the patent application scope of the aforementioned variable length (4) The animation device described in the department, wherein the binarization unit uses a binarization specifying a correspondence between the compression b and the value §fl, and uses the multi-valued portion for conversion. And the various lesions are described above by the multi-valued signal of the transformation and quantization process. 323005 2 201143459 The reduced parameter is converted into a binary signal, and the binarization table update unit is transformed by the aforementioned compression parameter. - the frequency of use used by the quantization unit, updating the correspondence between the multi-value signal of the binarization table and the binary signal, and outputting a binarization table update flag indicating the update timing, wherein the arithmetic coding processing operation unit is The converted binary signal is subjected to nasal coding to output a coded bit column, and the encoded bit sequence is multiplexed with the aforementioned binarization table update flag to the bit stream. 4. The moving image encoding device according to claim 1, wherein in the variable length encoding unit, the binarization unit uses a correspondence between a multi-value signal indicating a prediction parameter and a binary signal. a binarization table of the relationship, wherein the intra-frame prediction unit or the motion compensation prediction unit is used for intra-frame prediction or motion compensation prediction, and the prediction parameter represented by the multi-value signal is converted into a binary signal, and the binarization is performed. The table update unit updates the correspondence between the multi-valued signal and the binary signal of the binarization table by using the frequency of use used by the in-plane prediction unit or the motion prediction unit, and outputting the representation. And updating the timing update table flag, wherein the arithmetic coding processing unit performs arithmetic coding on the transformed binary signal of the binarization unit to output a coded bit column, and the coded bit column and the second The value-added table update flag is multiplexed to the bit stream 0 3 323005 201143459. The movable image encoding device according to claim 3, wherein the variable length is The binarization table update unit of the degree coding unit outputs a binarization table update identification information for identifying the updated binarization table when the binarization table has a plurality of types, and the variable The length coding unit multiplexes the binarization table update identification information into a bit stream. 6. The apparatus according to claim 4, wherein the binarization table update unit of the variable length coding unit outputs a plurality of types when the binarization table has a plurality of types. The binarization table update identification information of the updated binarization table is identified, and the variable length coding unit multiplexes the binarization table update identification information into a bit stream. 7. A motion picture decoding apparatus, comprising: a variable length decoding unit that inputs, as a bit, a bit stream compressed and encoded in a macroblock unit of a plurality of blocks of a predetermined size; The elementary stream performs entropy decoding on the coding mode in the aforementioned macroblock unit, and entropy decodes the prediction parameter, the compression parameter, and the compressed data in a block unit that is divided according to the decoded coding mode; When the prediction parameter is input, the prediction target is generated by using the intra-plane prediction mode included in the prediction parameter and the decoded image signal in the frame; the motion compensation prediction unit inputs the prediction parameter. And using the motion vector included in the prediction parameter and the reference image specified by the reference imaging index included in the prediction parameter to perform motion compensation 4 323005 201143459 prediction, and generating a prediction artifact; The prediction parameters decoded by the variable length decoding unit are input to the aforementioned in-plane prediction unit in response to the encoding mode decoded as described above. Or one of the motion compensation prediction units; the inverse quantization-inverse transformation unit performs inverse quantization and inverse transform processing on the compressed data to generate a decoded prediction difference signal using the compression parameter; and an addition unit is used in the decoding prediction The differential signal is outputted with the prediction image outputted by one of the intra prediction unit or the motion compensation prediction unit, and the decoded image signal is output. The variable length decoding unit has an arithmetic decoding processing unit. The coded bit sequence of the encoding mode of the bit stream is arithmetically decoded to generate a binary signal; and the inverse binarization unit uses a correspondence between the binary signal and the multi-valued signal indicating the encoding mode The binarization table of the relationship converts the coding mode indicated by the binary signal generated by the arithmetic decoding processing calculation unit into a multi-value signal. 8. The dynamic image decoding apparatus according to claim 7, wherein the variable length decoding unit: the arithmetic decoding processing unit multiplexes the encoded bit sequence of the compression parameter of the bit stream Performing arithmetic decoding to generate a binary signal, and the inverse binarization unit uses a binarization table that specifies a correspondence relationship between the binary signal indicating the compression parameter and the multi-value signal, and performs the above-mentioned 5 323005 201143459 nasal decoding processing operation. The compression parameter generated by the part is converted into a multi-valued-value signal to represent it. 9. As described in the application for the hometown _7 item, the aforementioned variable length calculus horse system: In the arithmetic decoding processing unit, the coded bit sequence of the measured parameter is generated by arithmetic calculus, and the pre-binarization unit of the bit turbulence is used to specify the binary signal, the value signal and the multi-value signal. The second of the correspondence between the value signals and the arithmetic decoding unit calculated by the arithmetic decoding processing unit is converted into a multi-value signal by the aforementioned prediction parameters. To represent the dynamic image described in item 7 of the patent application scope: the variable length solution "has a binary table update flag on the information update of the information table on the update table of the binarization table. 11. The moving picture decoding device according to the eighth aspect of the invention, wherein the binarization table updating unit of the variable length decoding unit has a plurality of types in the binarization table, and is based on multiplex processing. The binarization table decoded by the header information of the meta stream updates the identification information to update the predetermined binarization table in the plurality of the binarization tables. The moving image decoding device according to the ninth aspect of the invention, wherein the binarization table updating unit of the variable length decoding unit has a plurality of types in the binarization table, The binarization table update identification information decoded by the header information of the bit stream is updated to update the predetermined binarization table in the plurality of the aforementioned binarization tables. The image decoding device according to claim 10, wherein the binarization table update unit of the variable length decoding unit has a plurality of types in the binarization table. And updating the identification information according to the binarization table decoded by the header information of the bit stream, and updating the predetermined binarization table in the plurality of the binarization tables. 7 323005
TW100111976A 2010-04-09 2011-04-07 Apparatus for encoding dynamic image and apparatus for decoding dynamic image TW201143459A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010090536A JP2013131786A (en) 2010-04-09 2010-04-09 Video encoder and video decoder
PCT/JP2011/001955 WO2011125314A1 (en) 2010-04-09 2011-03-31 Video coding device and video decoding device

Publications (1)

Publication Number Publication Date
TW201143459A true TW201143459A (en) 2011-12-01

Family

ID=44762285

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100111976A TW201143459A (en) 2010-04-09 2011-04-07 Apparatus for encoding dynamic image and apparatus for decoding dynamic image

Country Status (3)

Country Link
JP (1) JP2013131786A (en)
TW (1) TW201143459A (en)
WO (1) WO2011125314A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI562598B (en) * 2011-12-21 2016-12-11 Sun Patent Trust
CN109257048A (en) * 2013-04-08 2019-01-22 索尼公司 Method, data deciphering device and the video receiver of decoding data value sequence

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6717562B2 (en) * 2015-02-06 2020-07-01 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Image coding method, image decoding method, image coding device, and image decoding device
US10142635B2 (en) * 2015-12-18 2018-11-27 Blackberry Limited Adaptive binarizer selection for image and video coding
US20170180757A1 (en) * 2015-12-18 2017-06-22 Blackberry Limited Binarizer selection for image and video coding
JP7352364B2 (en) * 2019-03-22 2023-09-28 日本放送協会 Video encoding device, video decoding device and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010135864A (en) * 2007-03-29 2010-06-17 Toshiba Corp Image encoding method, device, image decoding method, and device
JP4687998B2 (en) * 2007-09-18 2011-05-25 ソニー株式会社 Encoding apparatus and encoding method
JP2009081728A (en) * 2007-09-26 2009-04-16 Canon Inc Moving image coding apparatus, method of controlling moving image coding apparatus, and computer program
JP2008104205A (en) * 2007-10-29 2008-05-01 Sony Corp Encoding device and method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI562598B (en) * 2011-12-21 2016-12-11 Sun Patent Trust
US9794583B2 (en) 2011-12-21 2017-10-17 Sun Patent Trust Image coding method including selecting a context for performing arithmetic coding on a parameter indicating a coding-target coefficient included in a sub-block
US9826246B2 (en) 2011-12-21 2017-11-21 Sun Patent Trust Image coding method including selecting a context for performing arithmetic coding on a parameter indicating a coding-target coefficient included in a sub-block
US10362324B2 (en) 2011-12-21 2019-07-23 Sun Patent Trust Image coding method including selecting a context for performing arithmetic coding on a parameter indicating a coding-target coefficient included in a sub-block
US10595030B2 (en) 2011-12-21 2020-03-17 Sun Patent Trust Image coding method including selecting a context for performing arithmetic coding on a parameter indicating a coding-target coefficient included in a sub-block
CN109257048A (en) * 2013-04-08 2019-01-22 索尼公司 Method, data deciphering device and the video receiver of decoding data value sequence

Also Published As

Publication number Publication date
WO2011125314A1 (en) 2011-10-13
JP2013131786A (en) 2013-07-04

Similar Documents

Publication Publication Date Title
JP7129958B2 (en) video coded data
JP6072678B2 (en) Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program
CN111656783B (en) Method and apparatus for video signal processing using sub-block based motion compensation
CN107295347B (en) Apparatus for decoding motion information in merge mode
WO2011125256A1 (en) Image encoding method and image decoding method
JP6706357B2 (en) Method, coding device and corresponding computer program for coding a current block of a first image component with respect to a reference block of at least one second image component
TWI774141B (en) Method and apparatus for video conding
KR20110020214A (en) Video coding method and apparatus by using adaptive motion vector resolution
CN106068527A (en) Depth perception for stereo data strengthens
KR20110020212A (en) Reference picture interpolation method and apparatus and video coding method and apparatus using same
TW201143459A (en) Apparatus for encoding dynamic image and apparatus for decoding dynamic image
CN108141595A (en) Image coding/decoding method and its equipment
WO2012035640A1 (en) Moving picture encoding method and moving picture decoding method
CN106031173A (en) Flicker detection and mitigation in video coding
CN113273204A (en) Inter-frame prediction method and picture decoding device using the same
JP6503014B2 (en) Moving picture coding method and moving picture decoding method
KR20110020213A (en) Motion vector coding method and apparatus in consideration of differential motion vector precision, and video processing apparatus and method therefor
US20230188709A1 (en) Method and apparatus for patch book-based encoding and decoding of video data
KR20220077096A (en) Method and Apparatus for Video Coding Using Block Merging
JP6510084B2 (en) Moving picture decoding method and electronic apparatus
JP2008141407A (en) Device and method for converting coding system
JP5367161B2 (en) Image encoding method, apparatus, and program
JP2016106494A (en) Moving image encoding method and moving image decoding method
JP5649701B2 (en) Image decoding method, apparatus, and program
KR100495001B1 (en) Image compression encoding method and system