TW200948092A - Dynamic image encoding/decoding method and device - Google Patents

Dynamic image encoding/decoding method and device Download PDF

Info

Publication number
TW200948092A
TW200948092A TW098106980A TW98106980A TW200948092A TW 200948092 A TW200948092 A TW 200948092A TW 098106980 A TW098106980 A TW 098106980A TW 98106980 A TW98106980 A TW 98106980A TW 200948092 A TW200948092 A TW 200948092A
Authority
TW
Taiwan
Prior art keywords
image
filter
animation
information
recorded
Prior art date
Application number
TW098106980A
Other languages
Chinese (zh)
Inventor
Naofumi Wada
Takeshi Chujoh
Akiyuki Tanizawa
Goki Yasuda
Takashi Watanabe
Original Assignee
Toshiba Kk
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/JP2008/073636 external-priority patent/WO2009110160A1/en
Application filed by Toshiba Kk filed Critical Toshiba Kk
Publication of TW200948092A publication Critical patent/TW200948092A/en

Links

Abstract

Provided is a dynamic image encoding method which uses an encoded image as a reference image for predicting the next image to be encoded. The method includes: a step for applying a filter to a locally decoded image of an encoded image so as to generate a restored image; a step for setting filter coefficient information on the filter; a step for encoding the filter coefficient information; a step for encoding the locally decoded image to be used as a reference image or specific information indicating the restored image; and a step for storing the locally decoded image or the restored image as a reference image in a memory according to the specific information.

Description

200948092 六、發明說明: 【發明所屬之技術領域】 本發明係有關於動畫編碼/解碼方法及裝置,尤其是 * 有關於’在編碼側設定迴圈濾鏡的濾鏡係數資訊並傳送之 ’在解碼側使用之’藉此而可獲得畫質提升效果的動畫編 碼/解碼方法及裝置。 魯 【先前技術】 將影像以像素區塊單位進行正交轉換,並進行轉換係 數之量化處理的動畫編碼/解碼方法當中,在解碼影像中 會產生稱作區塊失真的畫質劣化。針對這點, G.Bjontegaard, “Deblocking filter for 4x4 based coding” ,ITU-T Q.15/SG16 VCEG document,Q 1 5 - J-27,May 2000 ( 以下簡稱爲 “Deblocking filter for 4x4 based coding” ) 中所記載之去區塊濾鏡,係藉由在區塊交界適用低通濾鏡 〇 來使上記區塊失真在視覺上較不顯眼,可獲得主觀上良好 的影像。“Deblocking filter for 4x4 based coding” 中的 具備有去區塊濾鏡之編碼/解碼裝置的區塊圖,示於圖34 〇 上記去區塊濾鏡,係如圖34的去區塊濾鏡處理部901 一樣,是被用於編碼及解碼裝置的迴圈內,因此也被稱爲 迴圈濾鏡。藉由設計成迴圈濾鏡,就可減低預測時所使用 之參照影像的區塊失真,尤其是在容易發生區塊失真的高 壓縮之位元速率帶上,可提升編碼效率,具有如此效果。 -5- 200948092 可是,上記去區塊濾鏡,係藉由將區塊交界模糊化以 降低視覺上顯眼之劣化的處理,不一定是與輸入影像的誤 差較小者,有時候可能反而造成喪失細緻紋理等,而導致 畫質降低。再者,由於是將濾鏡處理後的影像當成參照影 像而使用於下個進行編碼的影像之預測上,因此會有濾鏡 所致之畫質降低之影響,會傳播至預測影像之問題。 另一方面,異於迴圈濾鏡,僅作用於由解碼器所輸出 之影像的濾鏡,稱作後段濾鏡。後段濾鏡,係由於不將濾 鏡處理後之影像當作參照影像來使用,因此具有濾鏡之影 響不會傳播至預測影像的特徵。日本特開200 1 -275 1 1 0號 公報係提供一種動畫解碼方法,係於解碼側中,動態地切 換要將去區塊濾鏡當成迴圈濾鏡使用,還是當成後段濾鏡 使用。日本特開200 1 -275 1 1 0公報中的具備有迴圈/後段濾 波之切換方法的編碼/解碼裝置之區塊圖,示於圖35。 在日本特開2001-275110公報的動畫解碼方法中,藉 由圖35的動畫解碼裝置內部所具備之去區塊濾鏡處理部 902,來生成適用了去區塊濾鏡的解碼影像,並當成輸出 影像而加以輸出。另一方面,在編碼參數抽出部904中, 係從編碼資料中抽出量化參數;在切換部903中,係基於 前記量化參數的値,來控制是否要將濾鏡處理後之影像當 作參照影像而使用。藉由切換部903中的動作,在去區塊 濾鏡效果高的高壓縮之位元速率帶上係將去區塊濾鏡當成 迴圈濾鏡而使用,在低壓縮之位元速率帶上則將去區塊濾 鏡當成後段濾鏡使用,可進行如此控制。可是,在日本特 -6- 200948092 開2001-275 1 10公報中,由於在編碼側沒有進行同樣之處 理,所以編碼側與解碼側會發生不匹配之問題。又,由於 * 並不是在編碼側提升參照影像之畫質的技術,所以無法獲 , 得編碼效率提升效果。 另一方面,S.Wittmann and T.Wdi, “Post-filter SEI message for 4:4:4 coding" ,JVT of ISO/IEC MPEG & ITU-T VCEG,JVT-S030,April 2006 (以下簡稱爲 “ Post-filter 0 SEI message for 4:4:4 coding” )中則記載 了一種動畫編 碼/解碼方法,其係在編碼側設定後段濾鏡的濾鏡係數資 訊而進行編碼,在解碼側則使用已解碼之濾鏡係數資訊來 進行後段濾鏡處理。“ Post-filter SEI message for 4:4:4 coding”中的動畫編碼/解碼裝置的區塊圖,示於圖36。 •圖36的動畫編碼裝置所具備的後段濾鏡設定部905, 係設定所定之濾鏡係數資訊,並將瀘鏡係數資訊90予以 輸出。濾鏡係數資訊90係被編碼,在解碼側進行解碼, Q 在動畫解碼裝置所具備的後段濾鏡處理部906中被使用來 進行後段濾鏡處理。 在 “Post-filter SEI message for 4:4:4 coding” 的動 畫編碼/解碼方法中,在編碼側爲了縮小解碼影像與輸入 影像之誤差而設定了濾鏡係數資訊,藉此就可使解碼側上 適用了後段濾鏡的輸出影像之畫質被提升。可是,“ Postfilter SEI message for 4:4:4 coding” 的方法 ,係並 非將畫 質已經提升的影像當作參照影像來使用,在編碼側無法獲 得編碼效率提升效果。 200948092 【發明內容】 如上述, Deblocking filter for 4x4 based coding” 所揭露之方法,係並不一定會使畫質提升,存在著濾鏡所 產生之畫質劣化會傳播至預測影像之問題。 又,日本特開2001-275 1 10公報所揭露之方法,係僅 在解碼側進行迴圈/後段濾波之切換,因此在編碼側與解 碼側會產生不匹配之問題。 又,“Post-filter SEI message for 4:4:4 coding” 中 所揭露的方法,係在解碼側提升所輸出之影像畫質的處理 ,無法促使預測時所用之參照影像的畫質提升而無法獲得 編碼效率提升效果。 本發明的目的在於提供一種動畫編碼/解碼方法及裝 置,係在將編碼側上所設定之濾鏡係數資訊加以編碼,並 將濾鏡係數資訊在解碼側加以解碼而使用之動畫編碼/解 碼中,藉由編碼側與解碼側是以同樣處理來切換迴圈濾鏡 處理,而可一面抑制畫質劣化之傳播,一面促使預測時所 用之參照影像的畫質提升’藉此而可促使編碼效率提升。 本發明的一樣態’係提供一種動畫編碼方法,係屬於 將已編碼之影像當作參照影像而使用於下個進行編碼之影 像之預測上的動畫編碼方法’其特徵爲’具備:對已編碼 之影像的局部解碼影像’適用濾鏡’以生成復原影像之步 驟;和將前記濾鏡的濾鏡係數資訊加以設定之步驟;和將 前記濾鏡係數資訊加以編碼之步驟;和將用來表示要當作 -8- 200948092 參照影像而使用之前記局部解碼影像或前記復原 定資訊,加以編碼之步驟;和基於前記特定資訊 * 記局部解碼影像或前記復原影像之任一者當作參 - 保存在記憶體中之步驟。 本發明之其他樣態,係提供一種動畫解碼方 於將已解碼之影像當作參照影像而使用於下個進 影像之預測上的動畫解碼方法’其特徵爲,具備 ❹ 影像適用濾鏡以生成復原影像之步驟;和將濾鏡 數資訊加以解碼之步驟;和將用來表示要當作參 使用之解碼影像或復原影像的特定資訊,加以解 ;和基於特定資訊,而將解碼影像或復原影像當 '像而保存在記憶體中之步驟。 【實施方式】 以下,參照圖面來說明本發明的實施形態。 ❿ (第1實施形態) 參照圖1,說明第1實施形態所述之動畫編 構成。以下,針對圖1的構成要素,分別予以說ί 圖1所示的動畫編碼裝置1000,係具有:預 成部101、減算器102、轉換.量化部103、熵編 、逆轉換.逆量化部105、加算器106、迴圏濾 107、參照影像用緩衝區108,而且是被編碼控制 控制。 影像的特 ,而將前 照影像而 法,係屬 行解碼之 :對解碼 的濾鏡係 照影像而 碼之步驟 作參照影 碼裝置的 泪。 測訊號生 碼部1 04 鏡處理部 部109所 -9- 200948092 預測訊號生成部101,係將參照影像用緩衝區108中 所儲存之已編碼之參照影像訊號19加以取得並進行所定 之預測處理,將預測影像訊號1 1予以輸出。預測處理係 採用例如運動預測·運動補償的時間方向之預測,或畫面 內的從已編碼之影像起算之空間方向的預測等即可。 減算器102,係計算已取得之輸入影像訊號10與預測 影像訊號11的差分,將預測誤差影像訊號12予以輸出。 轉換·量化部103,係首先取得預測誤差影像訊號12並進 行轉換處理。此處,轉換•量化部103,係使用例如DCT (離散餘弦轉換)等來進行預測誤差影像訊號12的正交 轉換,並生成轉換係數。作爲其他實施形態,亦可採用小 波轉換或獨立成分解析等手法來生成轉換係數。接著,轉 換·量化部103,係基於被編碼控制部109所設定的量化 參數,來進行已生成之轉換係數的量化處理,將量化後之 轉換係數1 3予以輸出。量化後之轉換係數1 3,係被輸入 至後述的熵編碼部104,同時也被輸入至逆轉換·逆量化 部 105。 逆轉換·逆量化部105,係將量化後之轉換係數13, 依照被編碼控制部1 09所設定的量化參數來進行逆量化, 並對所得到之轉換係數進行逆轉換(例如逆離散餘弦轉換 等),將預測誤差影像訊號1 5予以輸出。加算器1 06,係 將從逆轉換•逆量化部1 05所取得到的預測誤差影像訊號 1 5、和在預測訊號生成部1 〇 1中所被生成的預測影像訊號 11,進行加算,將局部解碼影像訊號16予以輸出。200948092 VI. Description of the Invention: [Technical Field] The present invention relates to an animation encoding/decoding method and apparatus, and more particularly to the 'filter coefficient information of a loop filter set on the encoding side and transmitted An animation encoding/decoding method and apparatus for obtaining an image quality improvement effect by using the decoding side. Lu [Prior Art] In the animation encoding/decoding method of orthogonally converting an image in units of pixel blocks and performing quantization processing of a conversion coefficient, image quality deterioration called block distortion occurs in the decoded image. In response to this, G. Bjontegaard, "Deblocking filter for 4x4 based coding", ITU-T Q.15/SG16 VCEG document, Q 1 5 - J-27, May 2000 (hereinafter referred to as "Deblocking filter for 4x4 based coding" The deblocking filter described in the method is to make the upper block distortion visually less conspicuous by applying a low-pass filter at the junction of the block, and obtain a subjectively good image. The block diagram of the decoding/decoding device with deblocking filter in "Deblocking filter for 4x4 based coding" is shown in Figure 34. The block filter is recorded, which is the deblocking filter processing shown in Figure 34. Like section 901, it is used in the loop of the encoding and decoding device, and is therefore also referred to as a loop filter. By designing a loop filter, the block distortion of the reference image used in the prediction can be reduced, especially in a high-compression bit rate band which is prone to block distortion, which can improve the coding efficiency. . -5- 200948092 However, the above-mentioned deblocking filter is a process that reduces the visually conspicuous deterioration by blurring the block boundary. It is not necessarily the case that the error with the input image is small, and sometimes it may cause loss. Fine texture, etc., resulting in reduced image quality. Further, since the image processed by the filter is used as a reference image and used for prediction of the next image to be encoded, there is a problem that the image quality due to the filter is reduced and the image is propagated to the predicted image. On the other hand, a filter that is different from the loop filter and acts only on the image output by the decoder is called a back-end filter. The latter filter is used because the image after the filter is not used as a reference image, so that the effect of the filter does not propagate to the predicted image. Japanese Laid-Open Patent Publication No. 2001-2751 1 0 provides an animation decoding method for dynamically switching whether the deblocking filter is used as a loop filter or as a back-end filter. A block diagram of an encoding/decoding apparatus having a switching method of loop/rear filter in the Japanese Patent Publication No. 2001-2751-01 is shown in Fig. 35. In the animation decoding method of the Japanese Laid-Open Patent Publication No. 2001-275110, the deblocking filter processing unit 902 included in the animation decoding device of FIG. 35 generates a decoded image to which the deblocking filter is applied, and The image is output and output. On the other hand, in the encoding parameter extracting unit 904, the quantization parameter is extracted from the encoded data; in the switching unit 903, based on the 前 of the pre-quantized parameter, whether or not the image after the filter processing is to be regarded as the reference image is controlled. And use. By means of the action in the switching unit 903, the deblocking filter is used as a loop filter on the high compression bit rate band with a high deblocking filter effect, on the low compression bit rate band. This is controlled by using the deblocking filter as a back-end filter. However, in the Japanese Patent Publication No. 6-1-200948092, the same problem is not found on the encoding side, so that there is a problem of mismatch between the encoding side and the decoding side. Moreover, since * is not a technique for improving the image quality of the reference image on the encoding side, it is impossible to obtain the encoding efficiency improvement effect. On the other hand, S. Wittmann and T. Wdi, “Post-filter SEI message for 4:4:4 coding", JVT of ISO/IEC MPEG & ITU-T VCEG, JVT-S030, April 2006 (hereinafter referred to as "Post-filter 0 SEI message for 4:4:4 coding" describes an animation encoding/decoding method which encodes the filter coefficient information of the back-end filter on the encoding side and uses it on the decoding side. The decoded filter coefficient information is used for the back-end filter processing. The block diagram of the animation encoding/decoding apparatus in "Post-filter SEI message for 4:4:4 coding" is shown in Fig. 36. • The animation of Fig. 36 The rear filter setting unit 905 included in the encoding device sets the predetermined filter coefficient information and outputs the mirror coefficient information 90. The filter coefficient information 90 is encoded, decoded on the decoding side, and Q is decoded in the animation. The rear-end filter processing unit 906 included in the device is used to perform the rear-end filter processing. In the animation encoding/decoding method of "Post-filter SEI message for 4:4:4 coding", the decoded side is reduced on the encoding side. Error with the input image The filter coefficient information is set, so that the image quality of the output image to which the back-end filter is applied on the decoding side is improved. However, the method of "Postfilter SEI message for 4:4:4 coding" is not to draw The image that has been upgraded is used as a reference image, and the encoding efficiency improvement effect cannot be obtained on the encoding side. 200948092 [Summary of the Invention] As described above, the method disclosed by Deblocking filter for 4x4 based coding does not necessarily result in image quality. Ascension, there is a problem that the deterioration of the image quality caused by the filter propagates to the predicted image. Further, in the method disclosed in Japanese Laid-Open Patent Publication No. 2001-275 1 10, the switching of the loop/post-segment filtering is performed only on the decoding side, so that there is a problem of mismatch between the encoding side and the decoding side. Moreover, the method disclosed in "Post-filter SEI message for 4:4:4 coding" is to improve the image quality of the output image on the decoding side, and it is impossible to promote the image quality of the reference image used for prediction. Get the coding efficiency improvement effect. It is an object of the present invention to provide an animation encoding/decoding method and apparatus for encoding the filter coefficient information set on the encoding side and decoding the filter coefficient information on the decoding side for use in animation encoding/decoding. By switching the loop filter processing in the same way by the encoding side and the decoding side, it is possible to suppress the propagation of image quality deterioration while promoting the image quality of the reference image used for prediction. Upgrade. The same state of the present invention provides an animation coding method, which belongs to an animation coding method for using a coded image as a reference image for prediction of the next coded image, which is characterized by: having an encoding a partial decoded image of the image 'applicable filter' to generate a reconstructed image; and a step of setting the filter coefficient information of the pre-filter; and a step of encoding the pre-filter filter information; and The step of encoding the local decoded image or the pre-reset information as the reference image is to be used as the reference image, and the recording of the local decoded image or the pre-recovered image based on the pre-recorded specific information* The steps in the memory. According to another aspect of the present invention, there is provided an animation decoding method for an animation decoding method to use a decoded image as a reference image for prediction of a next incoming image, which is characterized in that: ❹ image application filter is generated to generate a step of restoring the image; and a step of decoding the filter number information; and decoding the specific information to be used as the decoded image or the restored image to be used as a reference; and decoding the image or restoring based on the specific information The image is stored as a step in the memory. [Embodiment] Hereinafter, embodiments of the present invention will be described with reference to the drawings.第 (First Embodiment) An animation composition according to the first embodiment will be described with reference to Fig. 1 . In the following, the animation coding apparatus 1000 shown in FIG. 1 has a pre-forming unit 101, a subtractor 102, a conversion quantization unit 103, an entropy coding, an inverse conversion, and an inverse quantization unit. 105, the adder 106, the filter 107, the reference image buffer 108, and is controlled by the encoding control. The image is characterized by the pre-image, which is decoded by the decoding method: the step of decoding the image of the filter and the code is the tear of the reference image device. Measured signal generation unit 10 Mirror processing unit 109 -9- 200948092 The prediction signal generation unit 101 acquires the encoded reference video signal 19 stored in the reference video buffer 108 and performs predetermined prediction processing. , the predicted image signal 1 1 is output. The prediction processing may be, for example, prediction of the temporal direction of motion prediction and motion compensation, or prediction of the spatial direction from the encoded image in the picture. The subtractor 102 calculates the difference between the obtained input image signal 10 and the predicted image signal 11, and outputs the predicted error image signal 12. The conversion/quantization unit 103 first obtains the prediction error video signal 12 and performs conversion processing. Here, the conversion/quantization unit 103 performs orthogonal conversion of the prediction error video signal 12 using, for example, DCT (Discrete Cosine Transform), and generates a conversion coefficient. As another embodiment, a conversion method may be generated by a method such as wavelet conversion or independent component analysis. Next, the conversion/quantization unit 103 performs quantization processing of the generated conversion coefficient based on the quantization parameter set by the encoding control unit 109, and outputs the quantized conversion coefficient 1 3 . The quantized conversion coefficient 13 is input to the entropy coding unit 104, which will be described later, and is also input to the inverse conversion/inverse quantization unit 105. The inverse transform/inverse quantization unit 105 inversely quantizes the quantized transform coefficients 13 in accordance with the quantization parameters set by the encoding control unit 109, and inversely converts the obtained transform coefficients (for example, inverse discrete cosine transform). Etc.), the prediction error image signal 15 is output. The adder 016 adds the prediction error video signal 15 obtained from the inverse conversion/inverse quantization unit 156 and the predicted video signal 11 generated in the prediction signal generation unit 1 〇1. The local decoded video signal 16 is output.

-10- 200948092 迴圈濾鏡處理部107,係取得局部解碼影像訊號I6及 輸入影像訊號10,將參照影像訊號19、濾鏡係數資訊17 • 及用來表示要把局部解碼影像或復原影像當作參照影像使 * 用之意旨的特定資訊、具體而言係用來切換成局部解碼影 像或復原影像的切換資訊18,予以輸出。關於迴圈濾鏡處 理部107的詳細說明,將於後述。參照影像用緩衝區1〇8 ,係將已從迴圈濾鏡處理部107所取得到的參照影像訊號 Q 19,予以暫時保存。已保存在參照影像用緩衝區108中的 參照影像訊號19,係被預測訊號生成部101在生成預測影 像訊號11之際,會被參照。 另一方面,熵編碼部104,係除了量化後之轉換係數 1 3以外,還取得濾鏡係數資訊丨7、切換資訊1 8,其他還 '取得預測模式資訊、區塊尺寸切換資訊、運動向量、量化 參數等之編碼參數,並進行熵編碼(例如霍夫曼編碼或算 術編碼等)然後輸出成編碼資料14。編碼控制部109,係 6 進行發生編碼量回饋控制及量化控制、模式控制等,進行 編碼處理整體的控制。 接著’針對第1實施形態所述之動畫編碼裝置中的迴 圈濾鏡處理部107,使用圖2及圖3來詳細說明。以下, 針對圖2及圖3的構成要素,分別予以說明。 圖2所示的迴圈濾鏡處理部107,係具有:濾鏡設定 部110、切換濾鏡處理部U1、切換資訊生成部112,再者 ’切換爐鏡處理部1 1 1係如圖3所示,具有濾鏡處理部 113及迴圏德鏡切換部114。圖3的開關SW,係切換著端 -11 - 200948092 子A與端子B之連接。濾鏡設定部110,係取得局部解碼 影像訊號16及輸入影像訊號1〇,並設定所定之濾鏡係數 資訊17。關於濾鏡係數資訊17的設定方法之細節,將於 後述。已設定之濾鏡係數資訊17,係被輸入至後述的切換 濾鏡處理部111及熵編碼部104。 切換濾鏡處理部111,係在內部具有濾鏡處理部113 及迴圈濾鏡切換部114,會取得局部解碼影像16、濾鏡係 數資訊17及切換資訊18,然後將參照影像訊號19予以輸 出。切換資訊生成部112,係從切換濾鏡處理部ill取得 參照影像訊號19,並且取得輸入影像訊號1〇,依照所定 之切換判定方法來生成切換資訊18。已生成之切換資訊 18,係被輸入至切換濾鏡處理部111,同時也被輸入至熵 編碼部104。關於切換判定方法的細節,將於後述。 濾鏡處理部113,係取得局部解碼影像訊號16及濾鏡 係數資訊17,依照濾鏡係數資訊17而對局部解碼影像訊 號16進行濾鏡處理,生成出復原影像訊號20。已生成之 復原影像訊號20,係被輸入至後述的迴圈濾鏡切換部114 。迴圈濾鏡切換部114,係取得切換資訊18,依照切換資 訊18而用內部所具有之開關SW來切換端子A和端子B 之連接,將局部解碼影像訊號16或復原影像訊號20當作 參照影像訊號19而輸出。 以上是第1實施形態所述之動畫編碼裝置的構成。 接著,使用圖1、圖2、圖3及圖4,來詳細說明第1 實施形態所述之動畫編碼裝置中的迴圈濾鏡之相關動作。 -12- 200948092 此外,圖4係第1實施形態所述之動畫編碼裝置1 00 0中 的關於迴圈濾鏡之動作的流程圖。 . 首先,一旦對圖1的動畫編碼處理1000輸入了輸入 * 影像訊號10,則減算器102係進行輸入影像訊號10與從 預測訊號生成部101取得之預測影像訊號11的減算處理 ,生成預測誤差影像訊號12。已生成之預測誤差影像訊號 12,係於轉換·量化部103中被轉換、量化,輸出成已被 φ 量化之轉換係數12,在熵編碼部104中被編碼。另一方面 ,已被量化之轉換係數13,係在動畫編碼裝置1 000之內 部所具備的逆轉換•逆量化部105中被逆轉換及逆量化, 成爲預測誤差影像訊號15而輸出。預測誤差影像訊號15 ,係與預測訊號生成部101所輸出之預測影像訊號11,在 •加算器106中進行加算,生成局部解碼影像訊號16。 上記一連串處理,係爲進行預測處理與轉換處理的所 謂混合式編碼的動畫編碼中一般常見的編碼處理。 〇 此處,針對第1實施形態所述的動畫編碼裝置1000 中的特徵性處理亦即關於迴圈濾鏡之動作,使用圖2、圖 3及圖4來詳細說明。 首先,圖2的迴圏濾鏡處理部107的內部所具有的濾 鏡設定部110,係接受局部解碼影像訊號16及輸入影像訊 號10,進行濾鏡係數資訊17之設定(步驟S1100)。此 處,濾鏡設定部110係使用影像復原中一般會採用的2維 Wiener filter,將濾鏡係數設計成會使得對局部解碼影像 訊號16施加濾鏡處理後之影像、與輸入影像訊號10的平 -13- 200948092 均平方誤差成爲最小,將已設計之濾鏡係數及表示濾鏡尺 寸的値,當成濾鏡係數資訊17而進行設定。濾鏡設定部 110係將已設定之濾鏡係數資訊17,輸出至圖3的濾鏡處 理部113,同時也輸出至熵編碼部104。 接著,圖2的切換濾鏡處理部111,係接受局部解碼 影像訊號16、濾鏡係數資訊17及切換資訊18,基於切換 資訊18,將局部解碼影像訊號16或已在濾鏡處理部113 中所生成之復原影像訊號20,當作參照影像訊號19而予 以輸出(步驟 S1101〜S1109)。此處,首先,針對將局 部解碼影像訊號1 6當作參照影像訊號1 9之情況,和將復 原影像訊號20當作參照影像訊號19之情況,會由切換資 訊生成部112進行切換判定處理,生成用來決定要將局部 解碼影像訊號16還是復原影像訊號20之哪一者當作參照 影像訊號19所需之切換資訊18。基於已生成之切換資訊 18,迴圈濾鏡切換部114內部的開關SW會進行切換,輸 出參照影像訊號19。迴圈濾鏡處理部1〇7中的步驟S1101 至步驟S1109爲止之動作的詳細說明係如以下。 首先,將圖3的迴圈濾鏡切換部114中的開關SW連 接至端子A,將局部解碼影像訊號16當作參照影像訊號 19而輸入至切換資訊生成部112(步驟S1101)。接著’ 將開關SW連接至端子B,將復原影像訊號20當作參照影 像訊號19而輸入至切換資訊生成部112(步驟S1102) ° 此處,一旦開關SW是被連接至端子B,則濾鏡處理部 113係基於濾鏡係數資訊17而對局部解碼影像訊號16施 -14- 200948092 行濾鏡處理,生成復原影像訊號20。作爲一例,若假設局 部解碼影像上的位置(X, y)之像素爲F(x, y) ,2維濾 • 鏡的寬度爲W、高度爲Η,濾鏡係數爲h(i, j) ( i - ^W' w = W/2' h = H/2),則復原影像 G(x,Υ )係可用以下式子來表示。 〔數1〕 G(x»y)= i=-wj=-h-10- 200948092 The loop filter processing unit 107 obtains the local decoded video signal I6 and the input video signal 10, and uses the reference video signal 19 and the filter coefficient information 17 to indicate that the local decoded image or the restored image is to be used. The specific information used as the reference image is specifically used, and specifically, the switching information 18 for switching to the local decoded image or the restored image is output. The detailed description of the loop filter processing unit 107 will be described later. The reference video buffer 1 〇 8 is used to temporarily store the reference video signal Q 19 that has been acquired from the loop filter processing unit 107. The reference video signal 19 stored in the reference video buffer 108 is referred to when the predicted video signal generation unit 101 generates the predicted video signal 11. On the other hand, the entropy coding unit 104 obtains the filter coefficient information 丨7 and the switching information 1 in addition to the quantized conversion coefficient 1 3, and also obtains the prediction mode information, the block size switching information, and the motion vector. Encoding parameters such as quantization parameters are subjected to entropy coding (for example, Huffman coding or arithmetic coding, etc.) and then output to the coded material 14. The coding control unit 109 performs control of generating the coding amount feedback control, quantization control, mode control, and the like, and performs overall coding processing. Next, the loop filter processing unit 107 in the animation encoding apparatus according to the first embodiment will be described in detail with reference to Figs. 2 and 3 . Hereinafter, the components of FIGS. 2 and 3 will be described separately. The loop filter processing unit 107 shown in FIG. 2 includes a filter setting unit 110, a switching filter processing unit U1, and a switching information generating unit 112. Further, the "switching mirror processing unit 1 1 1 is as shown in FIG. As shown, the filter processing unit 113 and the return mirror switching unit 114 are provided. The switch SW of Fig. 3 switches the connection between terminal -11 - 200948092 sub A and terminal B. The filter setting unit 110 obtains the local decoded video signal 16 and the input video signal 1〇, and sets the determined filter coefficient information 17. Details of the setting method of the filter coefficient information 17 will be described later. The filter coefficient information 17 that has been set is input to the switching filter processing unit 111 and the entropy encoding unit 104 which will be described later. The switching filter processing unit 111 has a filter processing unit 113 and a loop filter switching unit 114 therein, and acquires the local decoded image 16, the filter coefficient information 17 and the switching information 18, and then outputs the reference image signal 19. . The switching information generating unit 112 acquires the reference video signal 19 from the switching filter processing unit ill, acquires the input video signal 1〇, and generates the switching information 18 in accordance with the predetermined switching determination method. The generated switching information 18 is input to the switching filter processing unit 111 and also input to the entropy encoding unit 104. Details of the handover determination method will be described later. The filter processing unit 113 obtains the local decoded video signal 16 and the filter coefficient information 17, and filters the local decoded video signal 16 according to the filter coefficient information 17 to generate the restored video signal 20. The restored image signal 20 that has been generated is input to the loop filter switching unit 114 which will be described later. The loop filter switching unit 114 obtains the switching information 18, and switches the connection of the terminal A and the terminal B with the switch SW provided therein according to the switching information 18, and uses the local decoded video signal 16 or the restored video signal 20 as a reference. The image signal 19 is output. The above is the configuration of the animation encoding device according to the first embodiment. Next, the operation of the loop filter in the animation encoding device according to the first embodiment will be described in detail with reference to Figs. 1, 2, 3 and 4. -12- 200948092 FIG. 4 is a flowchart showing the operation of the loop filter in the animation encoding device 10000 according to the first embodiment. First, once the input* video signal 10 is input to the animation encoding process 1000 of FIG. 1, the subtractor 102 performs subtraction processing of the input video signal 10 and the predicted video signal 11 obtained from the prediction signal generating unit 101 to generate a prediction error. Image signal 12. The generated prediction error video signal 12 is converted and quantized by the conversion/quantization unit 103, and is output as a conversion coefficient 12 quantized by φ, and is encoded by the entropy coding unit 104. On the other hand, the quantized conversion coefficient 13 is inversely converted and inversely quantized by the inverse conversion/inverse quantization unit 105 included in the animation coding apparatus 1 000, and is output as the prediction error video signal 15. The prediction error video signal 15 is added to the predicted video signal 11 outputted by the prediction signal generating unit 101, and is added to the adder 106 to generate a local decoded video signal 16. The series of processing described above is a common encoding process in the so-called hybrid coding animation coding for predictive processing and conversion processing. The characteristic processing in the animation encoding device 1000 according to the first embodiment, that is, the operation of the loop filter, will be described in detail with reference to FIGS. 2, 3, and 4. First, the filter setting unit 110 included in the feedback filter processing unit 107 of Fig. 2 receives the local decoded video signal 16 and the input video signal 10, and sets the filter coefficient information 17 (step S1100). Here, the filter setting unit 110 uses a two-dimensional Wiener filter generally used in image restoration to design a filter coefficient such that a filter-processed image is applied to the locally decoded image signal 16 and the input image signal 10 is used. Level-13- 200948092 The squared error is minimized, and the designed filter coefficient and the 表示 indicating the filter size are set as the filter coefficient information 17. The filter setting unit 110 outputs the set filter coefficient information 17 to the filter processing unit 113 of Fig. 3 and also outputs it to the entropy encoding unit 104. Next, the switching filter processing unit 111 of FIG. 2 receives the local decoded video signal 16, the filter coefficient information 17 and the switching information 18, and based on the switching information 18, the local decoded video signal 16 or has been in the filter processing unit 113. The generated restored video signal 20 is output as the reference video signal 19 (steps S1101 to S1109). Here, first, in the case where the local decoded video signal 16 is regarded as the reference video signal 19 and the restored video signal 20 is referred to as the reference video signal 19, the switching information generating unit 112 performs the switching determination process. The switching information 18 required to determine which of the locally decoded video signal 16 or the restored video signal 20 is to be used as the reference video signal 19 is generated. Based on the generated switching information 18, the switch SW inside the loop filter switching unit 114 is switched to output the reference image signal 19. The detailed description of the operations from step S1101 to step S1109 in the loop filter processing unit 1 to 7 is as follows. First, the switch SW in the loop filter switching unit 114 of Fig. 3 is connected to the terminal A, and the local decoded video signal 16 is input as the reference video signal 19 to the switching information generating unit 112 (step S1101). Then, the switch SW is connected to the terminal B, and the restored video signal 20 is input to the switching information generating unit 112 as the reference video signal 19 (step S1102). Here, once the switch SW is connected to the terminal B, the filter The processing unit 113 performs a filter processing on the local decoded video signal 16 based on the filter coefficient information 17 to generate a restored video signal 20. As an example, if the pixel of the position (X, y) on the locally decoded image is F(x, y), the width of the 2-dimensional filter is W, the height is Η, and the filter coefficient is h(i, j). (i - ^W' w = W/2' h = H/2), the restored image G(x, Υ ) can be expressed by the following equation. [Number 1] G(x»y)= i=-wj=-h

G 接著,圖2的切換資訊生成部112,係算出局部解碼 影像訊號16與輸入影像訊號10的殘差平方和SSDA、及 復原影像訊號20與輸入影像訊號10的殘差平方和SSDB (步驟S1103)。在這裡,假設切換判定處理是對影像的 每一局部領域進行,令局部領域內的像素位置爲i、全像 素數N、局部解碼影像訊號16爲Fi、復原影像訊號20爲 Gi、輸入影像訊號10爲Ii’則SSDA及SSDB係可以下式 ❽ 來表示。 〔數2〕 ssda /=1G, the switching information generating unit 112 of FIG. 2 calculates the residual squared sum SSDA of the locally decoded video signal 16 and the input video signal 10, and the residual squared sum SSDB of the restored video signal 20 and the input video signal 10 (step S1103). ). Here, it is assumed that the switching determination process is performed on each partial area of the image, so that the pixel position in the local area is i, the total number of pixels N, the local decoded video signal 16 is Fi, the restored video signal 20 is Gi, and the input video signal is input. 10 is Ii', then SSDA and SSDB can be expressed by the following formula. [Number 2] ssda /=1

SSDB i=l 接著,基於ssda及ssdb來進行以下的切換判定處 理(步驟S1104)。若SSDAo爲SSDB以下,則對切換資 訊18亦即l〇〇p_filter_flag設定0 (步驟s 1 105 )。反之 ’若SSDA〇爲大於SSDb的値,則對loop —filter flag設定 -15- 200948092 1(步驟Sll 06)。此處,將影像以16x16像素單位進行 分割所得之稱作巨集區塊的單位,以該單位進行上記切換 判定處理時的參照像素之例子,示於圖33。當令局部領域 爲巨集區塊時,上記[數2]的N係爲256,切換資訊18係 以巨集區塊單位而被輸出。作爲其他實施形態,切換判定 處理係亦可依照畫格單位或切片單位或是異於巨集區塊之 尺寸的區塊單位來判定影像訊號,該情況時,切換資訊18 也會以對應於判定結果之單位來輸出。 接著,圖3的迴圈濾鏡切換部114,係接受已生成之 切換資訊 18 亦即 loop_filter_flag,基於 l〇〇p_filter_flag 之値而將內部所具備之開關SW進行切換(步驟S1107) 。當loop_filter_flag爲0時,則迴圈濾鏡切換部114係 將開關SW連接至端子A,將局部解碼影像訊號16當作參 照影像訊號19而暫時保存於參照影像用緩衝區1〇8中( 步驟81108)。另一方面,當1〇〇卩_^1161>_^&吕爲1時,則 迴圈濾鏡切換部114係將開關SW連接至端子B’將復原 影像訊號20當作參照影像訊號19而暫時保存於參照影像 用緩衝區108中(步驟S1109)。 以上是迴圈濾鏡處理部107中的步驟S1101至步驟 s 1 1 0 9之動作。 最後,在濾鏡設定部11 〇中所生成之濾鏡係數資訊1 7 、及在切換資訊生成部112中所生成之切換資訊18’係在 熵編碼部104被編碼,並與量化後之轉換係數13、預測模 式資訊、區塊尺寸切換資訊、運動向量、量化參數等一起 -16- 200948092 被多工化成位元串流然後被發送至後述的動畫解碼裝置 2000 (步驟 SI 1 10 )。 ' 此處,關於濾鏡係數資訊17及切換資訊18的編碼方 ' 法,將本實施形態中所採用的語法結構之槪略,參照圖31 來詳細說明。在以下的例子中係設計成,將瀘鏡係數資訊 17以切片單位進行設定,將切換資訊18以巨集區塊單位 進行設定。 φ 語法係主要由3個部分所成,高等級語法(1 900 )中 係塡入有切片以上之上位層的語法資訊。在切片等級語法 (1 903 )中,係明記了每一切片所必須之資訊;在巨集區 塊等級語法(1 907 )中,係明記了每一巨集區塊所必須之 轉換係數資料或預測模式資訊、運動向量等。各語法係又 ‘ 再由更詳細的語法所構成,高等級語法(1 900 ),係由序 列參數集語法(1901)和圖像參數集語法( 19 02 )等這類 序列、圖像等級的語法所構成。切片等級語法(1 903 ), 〇 是由切片標頭語法(1 904 )、切片資料語法(1 905 )、迴 圈濾鏡資料語法(1906)等所構成。然後,巨集區塊等級 語法( 1907),係由巨集區塊層語法( 1908)、巨集區塊 預測語法(1 909 )等所構成。 在迴圈濾鏡資料語法(1 906 )中,係如圖32 ( a )所 示,記述有本實施形態之迴圈濾鏡所相關之參數亦即濾鏡 係數資訊1 7及切換資訊1 8。此處,濾鏡係數資訊1 7亦即 圖32 ( a)的filter_coeff[cy][cx]係爲2維濾鏡的係數, filter_size_y及filter_size_x係爲決定濾鏡尺寸的値。此 -17- 200948092 處雖然將表示濾鏡尺寸的値’記述於語法中,但作爲其他 實施形態,係亦可不記述在語法中而是使用預先制定的固 定値。只不過,若將濾鏡尺寸設成固定値,則在動畫編碼 裝置100 0及後述的動畫解碼裝置2 0 00中必須要使用同樣 的値才行,這點必須留意。又,圖32 ( 〇的l〇op_filter_flag 係爲切換資訊18,會將切片內的巨集區塊數之合計亦即 NumOfMacroblock 個的 loop filter flag > 進行傳送。 以上是關於動畫編碼裝置1000中的迴圈濾鏡所涉及 之動作的說明。 接著,說明相對於動畫編碼裝置1 00 0的動畫解碼裝 置。 參照圖5,說明第1實施形態所述之動畫解碼裝置的 構成。以下,針對圖5的構成要素,分別予以說明。 圖5所示的動畫解碼裝置2 0 00,係具有:熵解碼部 2〇1、逆轉換·逆量化部202、預測訊號生成部203、加算 器2 04、切換濾鏡處理部205、參照影像用緩衝區206,是 被解碼控制部207所控制。 熵解碼部2 0 1 ’係按照圖3 1所示的語法結構,對於高 等級語法、切片等級語法、巨集區塊等級語法之每一者, 依序解碼出編碼資料14的各語法的碼列,復原出已被量 化的轉換係數1 3、濾鏡係數資訊1 7、切換資訊1 8等。 逆轉換•逆量化部202,係取得已被量化之轉換係數 1 3然後進行逆量化、逆正交轉換(例如逆離散餘弦轉換等 ),輸出預測誤差影像訊號15。此處,雖然針對逆正交轉 -18- 200948092 換進行說明,但若在動畫編碼裝置1000中有進行小波轉 換等的情況下,則逆轉換·逆量化部202係會執行其所對 應之逆量化及逆小波轉換等。 • 預測訊號生成部203,係將參照影像用緩衝區206中 所儲存之已解碼之參照影像訊號19加以取得並進行所定 之預測處理,將預測影像訊號1 1予以輸出。預測處理係 採用例如運動補償的時間方向之預測,或畫面內的從已解 Q 碼之影像起算之空間方向的預測等即可,但必須要執行與 動畫編碼裝置1 〇〇〇同樣之預測處理,這點必須留意。 加算器204,係將已取得之預測誤差影像訊號1 5及預 測影像訊號11進行加算,生成解碼影像訊號21。 切換濾鏡處理部205,係取得解碼影像訊號21、濾鏡 '係數資訊17及切換資訊18,輸出參照影像訊號19。關於 切換濾鏡處理部2 05的詳細說明,將於後述。 參照影像用緩衝區206,係將已從切換濾鏡處理部 G 205所取得到的參照影像訊號19,予以暫時保存。已保存 在參照影像用緩衝區20 6中的參照影像訊號19,係被預測 訊號生成部203在生成預測影像訊號11之際,會被參照 〇 解碼控制部207,係進行解碼時序控制等,進行解碼 整體之控制。 接著,針對第1實施形態所述之動畫解碼裝置中的切 換濾鏡處理部20 5,使用圖6來詳細說明。以下,針對圖 6的構成要素,分別予以說明。 -19- 200948092 圖6所示的切換濾鏡處理部205A,係具有濾鏡處理 部208及迴圏濾鏡切換部209。開關SW,係切換著端子A 與端子B之連接。 濾鏡處理部208,係接受解碼影像訊號21及已在熵解 碼部201中所復原出來的濾鏡係數資訊17,依照濾鏡係數 資訊17而對解碼影像訊號21進行濾鏡處理,生成出復原 影像訊號20。已生成之復原影像20,係被輸入至後述的 迴圈濾鏡切換部209,同時也被當成輸出影像訊號22而已 解碼控制部207所管理之時序,進行輸出。 迴圈濾鏡切換部209,係接受已在熵解碼部201中復 原出來的切換資訊18,依照切換資訊18而用內部所具有 之開關SW來切換端子A和端子B之連接,將解碼影像訊 號21或復原影像訊號20當作參照影像訊號19而輸出。 以上是第1實施形態所述之動畫解碼裝置的構成。 接著,使用圖5、圖6及圖7’來詳細說明第〗實施 形態所述之動畫解碼裝置中的迴圈濾鏡之相關動作。此外 ,圖7係第1實施形態所述之動畫解碼裝置2000中的關 於迴圈濾鏡之動作的流程圖。 首先’一旦對圖5的動畫解碼裝置2 000輸入了編碼 資料14 ’則藉由熵解碼部201 ’除了轉換係數1 3、濾鏡係 數資訊17、切換資訊18’還有、預測模式資訊、區塊尺 寸切換資訊、運動向量、量化參數等,就會依照圖31的 語法結構而被解碼。接著’已被熵解碼部201所解碼之轉 換係數13,係被輸入至逆轉換•逆量化部202,依照被解 200948092 碼控制部207所設定之量化參數,而進行逆量化。對已被 逆量化的轉換係數,進行逆正交轉換(例如離散餘弦轉換 * 等),復原出預測誤差影像訊號15。預測誤差影像訊號 - 1 5,係與由預測訊號生成部203所輸出之預測影像訊號1 1 在加算器2 04中進行加算,生成解碼影像訊號21。 上記一連串處理,係爲進行預測處理與轉換處理的所 謂混合式編碼的動畫編碼中一般常見的解碼處理。 ϋ 此處,針對第1實施形態所述的動畫解碼裝置2000 中的特徵性處理亦即關於迴圈濾鏡之動作,使用圖6及圖 7來詳細說明。 首先’熵解碼部201,係按照圖31的語法結構,將濾 鏡係數資訊17及切換資訊18進行熵解碼(步驟S2100) 。在圖31的語法結構中的切片等級語法(19〇3)中所屬 之迴圈瀘鏡資料語法(1906 )中,係如圖32 ( a )所示, 記述有本實施形態之迴圈濾鏡所相關之參數亦即濾鏡係數 〇 資訊17及切換資訊18。此處’濾鏡係數資訊17亦即圖 32 ( a )的filter_coeff[cy][Cx]係爲2維濾鏡的係數, filter_size_y及fiHer_size_x係爲決定濾鏡尺寸的値。此 處雖然將表示濾鏡尺寸的値,記述於語法中,但作爲其他 實施形態,係亦可不記述在語法中而是將預先制定的固定 値當作濾鏡尺寸來使用。只不過,若將濾鏡尺寸設成固定 値,則在前述的動畫編碼裝置1000與動畫解碼裝置2000 中必須要使用同樣的値才行’這點必須留意。又,圖3 2 ( a)的loop_filter_flag係爲切換資訊18,切片內的巨集區 200948092 塊數之合計亦即 NumOfMacroblock 個的 l〇〇p_filter_flag ,會被解碼。 接著,圖6的濾鏡處理部208,係接受已解碼之濾鏡 係數資訊17(步驟S2101)。迴圈濾鏡切換部209,係移 電接受了已解碼之切換資訊18亦即l〇〇p_filter_flag (步 驟S2102),便基於該l〇〇p_filter_flag之値,來切換其內 部所具備之開關SW (步驟S2103)。當loop_filter_flag 爲〇時,則迴圈濾鏡切換部209係將開關SW連接至端子 A,將解碼影像訊號2 1當作參照影像訊號1 9而暫時保存 於參照影像用緩衝區206中(步驟S2104)。另一方面, 當l〇〇p_filter_flag爲1時,則迴圈濾鏡切換部209係將 開關SW連接至端子B,將復原影像訊號20當作參照影像 訊號19而暫時保存於參照影像用緩衝區206中(步驟 S2105)。此處,一旦開關SW是被連接至端子B,則濾鏡 處理部1 1 3係基於濾鏡係數資訊1 7而對解碼影像訊號2 1 施行濾鏡處理,生成復原影像訊號20。作爲一例,若假設 解碼影像上的位置(x,y)之像素爲F(x, y) ,2維濾鏡 的寬度爲W、高度爲H,濾鏡係數爲h(i, j) ( -w^ w 、-hS j S h、w = W/2、h = H/2 ),則復原影像 G ( x,y )係 可用[數1]來表示。 此處,迴圈濾鏡切換部209,係按照圖32(a)的語 法,以巨集區塊單位來接受切換資訊18,而將開關SWT 以切換。作爲其他實施形態,當動畫編碼裝置1 〇〇〇是以 畫格單位或切片單位或是異於巨集區塊之尺寸的區塊單位 -22- 200948092 置 及 及 則 測 的 部 20 降 藉 域 促 碼 影 中 予 鏡 號 來將切換資訊18進行編碼的情況下’在動畫解碼裝 2000,係以同樣的單位來執行切換資訊18的解碼’以 • 切換部209的開關SW之切換。 • 以上是關於動畫解碼裝置2000中的迴圈濾鏡所涉 之動作的說明。 如此,若依據第1實施形態所述之動畫編碼裝置’ 藉由設定迴圈濾鏡的濾鏡係數資訊以使得輸入影像與預 φ 訊號之誤差呈最小而進行濾鏡處理’就可促使復原影像 畫質提升。又,藉由以迴圈濾鏡切換部114來對每一局 領域切換是要將局部解碼影像訊號16與復原影像訊號 之哪一者當作參照影像使用,對於會因濾鏡而導致畫質 低之領域則不將復原影像訊號20當作參照影像使用, ' 此而防止畫質降低的傳播,並且,對於畫質會提升之領 則是將復原影像訊號20當作參照影像使用,藉此就可 使預測精度提升。 Q 又,若依據第1實施形態所述之動畫解碼裝置2000 則藉由使用與動畫編碼裝置1000同樣的濾鏡係數資訊 切換資訊來進行濾鏡處理及切換處理,就可保證動畫編 裝置1000中的參照影像與動畫解碼裝置2000中的參照 像是同步的。 此外,在第1實施形態所述之動畫解碼裝置2000 的切換濾鏡處理部205A上,雖然是將參照影像訊號19 以輸出來當作輸出影像訊號22,但亦可如圖8的切換濾 處理部205B般地,將解碼影像訊號21當作輸出影像訊 -23- 200948092 22而輸出,或是亦可如圖9的切換濾鏡處理部205C般地 ,將復原影像訊號20當作輸出影像訊號22而輸出。此情 況下,用來表示將解碼影像或復原影像當作輸出影像而使 用之意旨的切換資訊,會被生成。 又,亦可如圖10的切換濾鏡處理部205D般地,新具 備有後段濾鏡切換部2 1 0,藉由切換資訊1 8來切換開關 SW2,以切換輸出影像訊號22。此情況下,作爲切換輸出 影像訊號22用的切換資訊18,例如圖32(d)所示般地 ’將post_filter —flag以切片單位而記述於語法中,藉此來 將開關 SW2 予以切換。p〇st_filter_flag,係與 l〇〇p_filter_flag 同樣地,亦可用畫格單位或巨集區塊單位或是異於巨集區 塊之尺寸的區塊單位來記述。 此外,在第1實施形態所述之動畫編碼裝置1 000及 動畫解碼裝置2 000中,雖然是對局部解碼影像訊號16進 行濾鏡處理,但局部解碼影像訊號16係可採用先前之施 行過去區塊濾鏡處理後的影像。 此外,該動畫編碼裝置10 00及動畫解碼裝置20 00, 係例如亦可藉由採用例如通用之電腦裝置爲基本硬體來實 現。亦即,預測訊號生成部101、減算器102'轉換•量 化部103、熵編碼部104、逆轉換.逆量化部105、加算器 1〇6、迴圈濾鏡處理部107、參照影像用緩衝區108、編碼 控制部109、濾鏡設定部110、切換濾鏡處理部111、切換 資訊生成部112、濾鏡處理部113、迴圈濾鏡切換部114、 熵解碼部201、逆轉換.逆量化部202、預測訊號生成部 200948092 203、加算器204、迴圈濾鏡處理部205、參照影像用緩衝 區206、解碼控制部207、濾鏡處理部208、迴圈濾鏡切換 ' 部209及後段濾鏡切換部210,係可藉由令上記電腦裝置 * 中所搭載的處理器去執行程式而加以實現。此時’動畫編 碼裝置1000及動畫解碼裝置2000係爲,將上記程式事先 安裝至電腦裝置而加以實現,或亦可記憶在CD-ROM等記 憶媒體中,或透過網路來散佈上記程式,以將該程式適宜 φ 地安裝至電腦裝置而加以實現。又,參照影像用緩衝區 108及參照影像用緩衝區206,係可適宜利用內藏於電腦 裝置中的擴充記億體、硬碟或CD-R、CD_RW、DVD-RAM 、DVD-R等記憶媒體等來加以實現。 _ (第2實施形態) 參照圖11,說明第2實施形態所述之動畫編碼裝置的 構成。以下,針對圖1 1的構成要素,分別予以說明。 ❿ 圖11所示的動畫編碼裝置3000,係具有:切換資訊 生成預測部301A、迴圈濾鏡處理部302、局部解碼影像用 緩衝區303、復原影像用緩衝區3 04、減算器1〇2、轉換. 量化部103、熵編碼部104、逆轉換•逆量化部1〇5、加算 器1 〇6,是被編碼控制部1 09所控制。 此處’減算器102、轉換.量化部1〇3、熵編碼部ι〇4 、逆轉換•逆量化部105、加算器106及編碼控制部1〇9 ’係由於是和第1實施形態所述之圖1的動畫編碼裝置 1000中的同編號之構成要素進行同樣的動作,因此這裡省 :3 -25- 200948092 略說明。 切換資訊生成預測部301A,係如圖12所示,在其內 部具有參照切換預測部3 05A及切換資訊生成部112。此 處,切換資訊生成部1 1 2,係由於是和第1實施形態所述 之動畫編碼裝置中的同編號之構成要素進行同樣的動作’ 因此這裡省略說明。切換資訊生成預測部301A,係取得 局部解碼影像訊號16、復原影像訊號20及輸入影像訊號 10,輸出預測影像訊號11及切換資訊18。 迴圈瀘鏡處理部3 02,係如圖14所示,具有濾鏡設定 部110及濾鏡處理部113。濾鏡設定部110及濾鏡處理部 113,係由於是和第1實施形態所述之動畫編碼裝置中的 同編號之構成要素進行同樣的動作,因此這裡省略說明。 迴圈濾鏡處理部302,係取得局部解碼影像訊號16及輸入 影像訊號1〇,並將復原影像訊號20及濾鏡係數資訊17予 以輸出。 局部解碼影像用緩衝區303,係將加算器1〇6所生成 的局部解碼影像訊號16加以取得,並予以暫時保存。局 部解碼影像用緩衝區303中所保存的局部解碼影像訊號16 ,係被輸入至切換資訊生成預測部301A。 復原影像用緩衝區3 04,係將迴圈濾鏡處理部302所 生成的復原影像訊號20加以取得,並予以暫時保存。復 原影像用緩衝區304中所保存的復原影像訊號20’係被輸 入至切換資訊生成預測部301A。 參照切換預測部305A,係如圖1 3所示’具有預測訊 200948092 號生成部101及迴圈濾鏡切換部114。此處,預測訊號生 成部101及迴圈濾鏡切換部114,係由於分別是和第1實 ' 施形態所述之動畫編碼裝置1000中的同編號之構成要素 * 進行同樣的動作,因此這裡省略說明》參照切換預測部 305A,係取得局部解碼影像訊號16、復原影像訊號20及 切換資訊18,基於已取得之切換資訊18,將局部解碼影 像訊號16或復原影像訊號20之任一者當作參照影像而進 e 行所定之預測處理,輸出預測影像訊號1 1。預測處理係採 用例如運動預測·運動補償的時間方向之預測,或畫面內 的從已解碼之影像起算之空間方向的預測等即可。 以上是第2實施形態所述之動畫編碼裝置的構成。 接著,使用圖11、圖12、圖13、圖14及圖15,來 胃詳細說明第2實施形態所述之動畫編碼裝置之動作。此外 ,圖15係第2實施形態所述之動畫編碼裝置3 000中的關 於迴圈濾鏡之動作的流程圖。 〇 首先,與先前的混合式編碼或第1實施形態所述之動 畫編碼裝置1000同樣地,進行預測、轉換、量化、熵編 碼化,並且在編碼裝置內部進行局部解碼,生成局部解碼 影像訊號16»接著,將已生成之局部解碼影像訊號16, 暫時保存於局部解碼影像用緩衝區3 03中(步驟S3100) 。於迴圈濾鏡處理部3 02內部的濾鏡設定部11〇中,會取 得局部解碼影像訊號16及輸入影像訊號10 ’進行減鏡係 數資訊17之設定(步驟S3101)。此處,係使用影像復 原中一般會採用的2維Wiener filter,將濾鏡係數設計成 -27- 200948092 會使得對局部解碼影像訊號16施加濾鏡處理後之影像、 與輸入影像訊號ίο的平均平方誤差成爲最小’將已設計 之濾鏡係數及表示濾鏡尺寸的値’當成濾鏡係數資訊17 而進行設定。已設定之濾鏡係數資訊17’係同樣地被輸出 至迴圈濾鏡處理部302內部的濾鏡處理部113’同時也被 輸出至熵編碼部104。 迴圈濾鏡處理部302內部的濾鏡處理部113中,係對 局部解碼影像訊號16,使用已被濾鏡設定部11〇所取得之 濾鏡係數資訊17來進行濾鏡處理,生成復原影像訊號20 (步驟S3102)。已生成之復原影像訊號20係被暫時保 存於復原影像用緩衝區3 04中(步驟S3103)。 將濾鏡設定部110所生成之濾鏡係數資訊17,以熵編 碼部104進行編碼,連同量化後之轉換係數13、預測模式 資訊、區塊尺寸切換資訊、運動向量、量化參數等一起多 工化成位元串流,發送至後述的動畫解碼裝置4000 (步驟 S3104 )。此時,濾鏡係數資訊17,係於圖31之語法結 構中的切片等級語法( 1903)中所屬之迴圈濾鏡資料語法 (1 9 0 6 )中,被記述成如圖3 2 ( b )所示。此處,濾鏡係 數資訊17亦即圖32(b)的filter_coeff[cy][cx]係爲2維 濾鏡的係數,filter_size_y及filter_size_x係爲決定爐鏡 尺寸的値。此處雖然將表示濾鏡尺寸的値,記述於語法中 ,但作爲其他實施形態,係亦可不記述在語法中而是將預 先制定的固定値當作濾鏡尺寸來使用。只不過,若將減鏡 尺寸設成固定値,則在該當動畫編碼裝置3000與後述的 -28- 200948092 動畫解碼裝置4000中必須要使用同樣的値才行,這點必 須留意。 ' 接著,切換資訊生成預測部301A,係將局部解碼影 - 像訊號16或復原影像訊號20當作參照影像而進行所定之 預測處理,輸出預測影像訊號1 1 (步驟S3 1 05〜3 1 1 3 )。 此處,首先,參照切換預測部3 05A係分別取得,將局部 解碼影像當成參照影像時的預測影像、與將復原影像當成 φ 參照影像時的預測影像,切換資訊生成部112係基於它們 而進行切換判定處理,生成用來決定要將局部解碼影像或 復原影像之哪一者當作參照影像所需的切換資訊1 8。參照 切換預測部3 05A係基於已生成之切換資訊18,來切換圖 13的迴圈濾鏡切換部114內部的開關SW,然後輸出預測 影像訊號11。步驟3105至步驟3113之動作的詳細說明係 如以下。 首先,參照切換預測部3 0 5 A係將圖13的迴圈濾鏡切 〇 換部114中的開關SW連接至端子A,於預測訊號生成部 101中就會將局部解碼影像訊號16當作參照影像而取得預 測影像訊號11,輸入至圖12的切換資訊生成部112(步 驟S3 105 )。接著,參照切換預測部3 05A係將開關SW連 接至端子B,於預測訊號生成部1〇1中就會將復原影像訊 號20當作參照影像而取得預測影像訊號11,輸入至圖12 的切換資訊生成部112 (步驟S3 106)。 接著,圖2的切換資訊生成部112,係算出從局部解 碼影像訊號1 6所取得到之預測影像與輸入影像訊號1 〇的 -29- 200948092 殘差平方和SSDA,以及,從復原影像訊號20所取得到之 預測影像與輸入影像訊號10的殘差平方和SSDB (步驟 S3107) aSSDA及SSDB,係若假設從局部解碼影像訊號 1 6所取得到之預測影像爲Fi、從復原影像訊號20所取得 到之預測影像爲Gi,則可用與[數2]同樣之式子來表示。 切換資訊生成部112,係基於SSDAR SSDB來進行以 下的切換判定處理(步驟S3108)。若SSDA〇爲SSDB以 下,則對切換資訊18亦即l〇oP_filter_flag設定〇 (步驟 S3109)。反之,若 SSDA〇爲大於 SSDB的値,則對 loop_filter_flag設定1 (步驟S3110)。此處,係對每一 巨集區塊進行上記切換判定處理,將切換資訊18以巨集 區塊單位進行輸出。作爲其他實施形態,切換判定處理係 亦可依照畫格單位或切片單位或是異於巨集區塊之尺寸的 區塊單位來判定,該情況時,切換資訊18也會以對應於 判定結果之單位來輸出。 又,當預測訊號生成部1 0 1中的預測處理是運動預測 的情況下,則上記切換判定處理,係作爲運動預測處理係 亦可使用一般的方法,例如,對於殘差平方和D,亦可添 加切換資訊、參照影像指數、運動向量等之參數資訊的編 碼量R而如下式般地針對局部解碼影像及復原影像分別計 算出成本J,使用成本J來進行判定。 〔數3〕SSDB i = l Next, the following handover determination processing is performed based on ssda and ssdb (step S1104). If SSDAo is below SSDB, 0 is set to the handover information 18, i.e., l〇〇p_filter_flag (step s 1 105). Otherwise ‘If SSDA〇 is greater than SSDb, set -15-200948092 1 to loop_filter flag (step S110). Here, an example of a reference pixel in which the video is divided into 16x16 pixel units, which is called a macroblock, and the above-described switching determination processing is performed in this unit is shown in Fig. 33. When the local area is a macro block, the N system of the above [number 2] is 256, and the switching information 18 is outputted in a macro block unit. In another embodiment, the switching determination process may determine the image signal according to the frame unit or the slice unit or the block unit different from the size of the macro block. In this case, the switching information 18 also corresponds to the determination. The unit of results is output. Next, the loop filter switching unit 114 of Fig. 3 receives the loop information_flag_flag, which is the generated switching information 18, and switches the switch SW provided inside based on the l〇〇p_filter_flag (step S1107). When the loop_filter_flag is 0, the loop filter switching unit 114 connects the switch SW to the terminal A, and temporarily stores the locally decoded video signal 16 as the reference video signal 19 in the reference video buffer 1〇8 (step) 81108). On the other hand, when 1〇〇卩_^1161>_^& Lu is 1, the loop filter switching unit 114 connects the switch SW to the terminal B', and restores the restored image signal 20 as the reference image signal 19 It is temporarily stored in the reference image buffer 108 (step S1109). The above is the operation from step S1101 to step s 1 1 0 9 in the loop filter processing unit 107. Finally, the filter coefficient information 1 7 generated in the filter setting unit 11 及 and the switching information 18 ′ generated in the switching information generating unit 112 are encoded by the entropy encoding unit 104 and converted to be quantized. The coefficient 13, the prediction mode information, the block size switching information, the motion vector, the quantization parameter, and the like are multiplexed into a bit stream and then transmitted to the animation decoding device 2000 to be described later (step SI 1 10). Here, the syntax of the filter coefficient information 17 and the switching information 18 method will be described in detail with reference to FIG. 31. In the following example, the mirror coefficient information 17 is set in slice units, and the switching information 18 is set in macroblock units. The φ grammar system is mainly composed of three parts. In the high-level grammar (1 900 ), the grammar information of the upper layer above the slice is inserted. In the slice level grammar (1 903), the information necessary for each slice is clearly recorded; in the macro block level grammar (1 907), the conversion coefficient data necessary for each macro block is clearly recorded or Forecast mode information, motion vectors, etc. Each grammar is further composed of a more detailed grammar, a high-level grammar (1 900 ), which is composed of sequence parameter set grammar (1901) and image parameter set grammar (1902). The grammar is composed. The slice level syntax (1 903), 〇 is composed of a slice header syntax (1 904), a slice data syntax (1 905), and a loop filter data syntax (1906). Then, the macro block level grammar (1907) is composed of a macro block grammar (1908) and a macro block prediction grammar (1 909). In the loop filter data syntax (1 906), as shown in FIG. 32(a), the parameters related to the loop filter of the present embodiment, that is, the filter coefficient information 17 and the switching information 1 8 are described. . Here, the filter coefficient information 1 7 is the filter_coeff[cy][cx] of Fig. 32 (a) is the coefficient of the 2-dimensional filter, and the filter_size_y and filter_size_x are the parameters for determining the filter size. In the -17-200948092, the 値' indicating the filter size is described in the grammar. However, as another embodiment, a predetermined fixed 値 may be used instead of the grammar. However, if the filter size is set to be fixed, it is necessary to use the same flaw in the animation encoding apparatus 100 0 and the animation decoding apparatus 200 described later. Further, in Fig. 32 (〇's l〇op_filter_flag is the switching information 18, the total number of macroblocks in the slice, that is, the NumOfMacroblock loop filter flag > is transmitted. The above is about the animation coding apparatus 1000. Description of the operation of the loop filter. Next, the animation decoding device for the animation encoding device 100 will be described. The configuration of the animation decoding device according to the first embodiment will be described with reference to Fig. 5. The components of the animation decoding apparatus shown in FIG. 5 include an entropy decoding unit 2〇1, an inverse conversion/inverse quantization unit 202, a prediction signal generation unit 203, an adder 204, and a switching. The filter processing unit 205 and the reference video buffer 206 are controlled by the decoding control unit 207. The entropy decoding unit 2 0 1 ' is in accordance with the syntax structure shown in Fig. 31 for high-level syntax, slice level syntax, and giant Each of the block level grammars sequentially decodes the code columns of the grammars of the coded data 14, and restores the quantized conversion coefficients 1 3 , the filter coefficient information 17 , the switching information 1 8 , and the like. The inverse quantization unit 202 obtains the quantized conversion coefficient 13 and performs inverse quantization and inverse orthogonal conversion (for example, inverse discrete cosine conversion) to output a prediction error video signal 15. Here, although it is for inverse orthogonality In the case of performing the wavelet conversion or the like in the animation encoding device 1000, the inverse conversion/inverse quantization unit 202 performs the inverse quantization and inverse wavelet conversion corresponding thereto. The prediction signal generation unit 203 acquires the decoded reference video signal 19 stored in the reference video buffer 206 and performs predetermined prediction processing to output the predicted video signal 11. The prediction processing uses, for example, motion compensation. The prediction of the time direction or the prediction of the spatial direction from the image of the Q-coded image in the picture may be performed, but it is necessary to perform the same prediction process as the animation coding apparatus 1 ,, which must be noted. The unit 204 adds the obtained prediction error image signal 15 and the predicted image signal 11 to generate a decoded image signal 21. Switching filter processing The unit 205 obtains the decoded video signal 21, the filter 'coefficient information 17 and the switching information 18, and outputs the reference video signal 19. The detailed description of the switching filter processing unit 205 will be described later. The reference video signal 19 that has been acquired from the switching filter processing unit G 205 is temporarily stored. The reference video signal 19 stored in the reference video buffer 20 6 is generated by the predicted signal generating unit 203. When the video signal 11 is predicted, the decoding control unit 207 is referred to, and decoding timing control or the like is performed to control the entire decoding. Next, the switching filter processing unit 205 in the animation decoding device according to the first embodiment will be described in detail with reference to Fig. 6 . Hereinafter, the components of Fig. 6 will be described separately. -19- 200948092 The switching filter processing unit 205A shown in Fig. 6 includes a filter processing unit 208 and a return filter switching unit 209. The switch SW switches the connection between the terminal A and the terminal B. The filter processing unit 208 receives the decoded video signal 21 and the filter coefficient information 17 that has been restored by the entropy decoding unit 201, and filters the decoded video signal 21 according to the filter coefficient information 17 to generate a restored image. Image signal 20. The restored image 20 is input to the loop filter switching unit 209, which will be described later, and is also outputted as the timing at which the output of the video signal 22 is managed by the decoding control unit 207. The loop filter switching unit 209 receives the switching information 18 that has been restored by the entropy decoding unit 201, and switches the connection of the terminal A and the terminal B with the switch SW provided therein according to the switching information 18, and decodes the video signal. 21 or the restored video signal 20 is output as the reference video signal 19. The above is the configuration of the animation decoding device according to the first embodiment. Next, the operation of the loop filter in the animation decoding device according to the first embodiment will be described in detail with reference to Figs. 5, 6 and 7'. Further, Fig. 7 is a flowchart showing the operation of the loop filter in the animation decoding device 2000 according to the first embodiment. First, 'once the encoded data 14' is input to the animation decoding device 2 000 of FIG. 5, the entropy decoding unit 201' is divided by the conversion coefficient 13, the filter coefficient information 17, the switching information 18', the prediction mode information, and the region. Block size switching information, motion vectors, quantization parameters, etc., are decoded in accordance with the syntax structure of FIG. Then, the conversion coefficient 13 decoded by the entropy decoding unit 201 is input to the inverse conversion/inverse quantization unit 202, and inverse quantization is performed in accordance with the quantization parameter set by the 200948092 code control unit 207. The inverse orthogonal transform (e.g., discrete cosine transform *, etc.) is performed on the transform coefficients that have been inversely quantized, and the prediction error video signal 15 is restored. The prediction error video signal - 15 is added to the predicted video signal 1 1 outputted by the prediction signal generating unit 203 in the adder 204 to generate a decoded video signal 21. The above-mentioned series of processing is a decoding process which is generally common in animation coding of a so-called hybrid coding which performs prediction processing and conversion processing. Here, the characteristic processing in the animation decoding device 2000 according to the first embodiment, that is, the operation of the loop filter, will be described in detail with reference to FIGS. 6 and 7 . First, the entropy decoding unit 201 entropy decodes the filter coefficient information 17 and the switching information 18 in accordance with the syntax structure of Fig. 31 (step S2100). In the loop 泸 mirror data grammar (1906) to which the slice level grammar (19〇3) in the grammatical structure of FIG. 31 belongs, as shown in FIG. 32(a), the loop filter of the present embodiment is described. The relevant parameters are the filter coefficient 〇 information 17 and the switching information 18. Here, the filter coefficient information 17, that is, filter_coeff[cy][Cx] of Fig. 32 (a) is a coefficient of a two-dimensional filter, and filter_size_y and fiHer_size_x are 値 which determine the size of the filter. In this case, the 尺寸 of the filter size is described in the grammar. However, as another embodiment, the predetermined fixed 値 may be used as the filter size without being described in the grammar. However, if the filter size is set to a fixed size, the same animation must be used in the aforementioned animation encoding apparatus 1000 and animation decoding apparatus 2000. This must be noted. Moreover, the loop_filter_flag of FIG. 3 2 (a) is the switching information 18, and the total number of blocks in the slice 200948092, that is, the total number of blocks of NumOfMacroblock, l〇〇p_filter_flag, is decoded. Next, the filter processing unit 208 of Fig. 6 receives the decoded filter coefficient information 17 (step S2101). The loop filter switching unit 209 receives the decoded switching information 18, that is, the l〇〇p_filter_flag (step S2102), and switches the switch SW (which is provided therein) based on the switch of the l〇〇p_filter_flag ( Step S2103). When the loop_filter_flag is 〇, the loop filter switching unit 209 connects the switch SW to the terminal A, and temporarily stores the decoded video signal 2 1 as the reference video signal 19 in the reference image buffer 206 (step S2104). ). On the other hand, when l〇〇p_filter_flag is 1, the loop filter switching unit 209 connects the switch SW to the terminal B, and temporarily restores the restored video signal 20 as the reference video signal 19 to the reference image buffer. 206 (step S2105). Here, once the switch SW is connected to the terminal B, the filter processing unit 1 1 3 performs filter processing on the decoded video signal 2 1 based on the filter coefficient information 17 to generate the restored video signal 20. As an example, if the pixel at the position (x, y) on the decoded image is F(x, y), the width of the 2-dimensional filter is W, the height is H, and the filter coefficient is h(i, j) ( w^ w , -hS j S h, w = W/2, h = H/2 ), the restored image G ( x, y ) can be represented by [number 1]. Here, the loop filter switching unit 209 accepts the switching information 18 in the macro block unit in accordance with the syntax of Fig. 32 (a), and switches the switch SWT. As another embodiment, when the animation coding apparatus 1 is in a frame unit or a slice unit or a block unit different from the size of the macro block, the unit 20-2240092 and the measured portion 20 When the code information is encoded in the code to encode the switching information 18, 'the animation decoding device 2000 performs the decoding of the switching information 18 in the same unit' to switch the switch SW of the switching unit 209. • The above is an explanation of the operation of the loop filter in the animation decoding device 2000. As described above, the animation encoding apparatus according to the first embodiment can perform the filter processing by setting the filter coefficient information of the loop filter so that the error between the input image and the pre-φ signal is minimized. The picture quality is improved. Moreover, by switching the filter area for each field by the loop filter switching unit 114, which of the locally decoded video signal 16 and the restored video signal is to be used as a reference image, the image quality is caused by the filter. In the low field, the restored video signal 20 is not used as a reference image, which prevents the degradation of the image quality, and the improvement of the image quality is to use the restored image signal 20 as a reference image. It can improve the prediction accuracy. Further, according to the animation decoding apparatus 2000 according to the first embodiment, by performing filter processing and switching processing using the filter coefficient information switching information similar to the animation encoding apparatus 1000, the animation editing apparatus 1000 can be secured. The reference image is synchronized with the reference image in the animation decoding device 2000. Further, in the switching filter processing unit 205A of the animation decoding device 2000 according to the first embodiment, the reference video signal 19 is output as the output video signal 22, but the switching filter processing as shown in FIG. 8 is also possible. Similarly, the decoded video signal 21 is output as the output video signal -23-200948092 22, or the restored video signal 20 can be regarded as the output video signal as in the switching filter processing unit 205C of FIG. 22 and output. In this case, switching information indicating that the decoded image or the restored image is used as the output image is generated. Further, as in the switching filter processing unit 205D of Fig. 10, a rear filter switching unit 2 1 0 is newly provided, and the switch SW2 is switched by switching the information 1 to switch the output video signal 22. In this case, as the switching information 18 for switching the output video signal 22, for example, as shown in Fig. 32(d), the post_filter_flag is described in the syntax in the slice unit, thereby switching the switch SW2. Similarly to l〇〇p_filter_flag, p〇st_filter_flag can also be described by a tile unit or a macro block unit or a block unit different from the size of the macro block. Further, in the animation encoding device 1000 and the animation decoding device 2000 according to the first embodiment, although the local decoded video signal 16 is subjected to filter processing, the local decoded video signal 16 may be subjected to the previous execution of the past region. Image after block filter processing. Further, the animation encoding device 10 00 and the animation decoding device 20 00 can be realized, for example, by using, for example, a general-purpose computer device as a basic hardware. That is, the prediction signal generation unit 101, the subtractor 102' conversion/quantization unit 103, the entropy coding unit 104, the inverse conversion, the inverse quantization unit 105, the adder 1〇6, the loop filter processing unit 107, and the reference image buffer The area 108, the encoding control unit 109, the filter setting unit 110, the switching filter processing unit 111, the switching information generating unit 112, the filter processing unit 113, the loop filter switching unit 114, the entropy decoding unit 201, and the inverse conversion. The quantization unit 202, the prediction signal generation unit 200948092 203, the adder 204, the loop filter processing unit 205, the reference video buffer 206, the decoding control unit 207, the filter processing unit 208, the loop filter switching unit 209, and The rear-end filter switching unit 210 can be realized by executing a program by a processor mounted in the computer device*. In this case, the animation encoding device 1000 and the animation decoding device 2000 are implemented by attaching the above-mentioned program to the computer device in advance, or by storing it in a memory medium such as a CD-ROM or distributing the writing program through the network. This program is implemented by suitably mounting the φ to a computer device. Further, the reference video buffer 108 and the reference video buffer 206 can be suitably used in an expansion memory such as a built-in computer device, a hard disk, or a CD-R, a CD_RW, a DVD-RAM, or a DVD-R. The media is waiting to be realized. (Second Embodiment) A configuration of an animation encoding apparatus according to a second embodiment will be described with reference to Fig. 11 . Hereinafter, the components of Fig. 11 will be described separately. The animation encoding device 3000 shown in FIG. 11 includes a switching information generation prediction unit 301A, a loop filter processing unit 302, a local decoded image buffer 303, a restored image buffer 304, and a subtractor 1〇2. The conversion unit 103, the entropy coding unit 104, the inverse conversion/inverse quantization unit 1〇5, and the adder 1 〇6 are controlled by the coding control unit 109. Here, the 'reduction unit 102, the conversion/quantization unit 1〇3, the entropy coding unit ι4, the inverse conversion/inverse quantization unit 105, the adder 106, and the coding control unit 1〇9' are the same as the first embodiment. The constituent elements of the same number in the animation encoding apparatus 1000 of Fig. 1 perform the same operation, and therefore, the explanation is omitted here: 3 - 25 - 200948092. As shown in Fig. 12, the switching information generation predicting unit 301A has a reference switching predicting unit 305A and a switching information generating unit 112 in its internal portion. In this case, the switching information generating unit 1 1 2 performs the same operation as the constituent elements of the same number in the animation encoding device according to the first embodiment. Therefore, the description thereof is omitted here. The switching information generation prediction unit 301A obtains the local decoded video signal 16, the restored video signal 20, and the input video signal 10, and outputs the predicted video signal 11 and the switching information 18. The loop mirror processing unit 312 has a filter setting unit 110 and a filter processing unit 113 as shown in Fig. 14 . Since the filter setting unit 110 and the filter processing unit 113 perform the same operations as the components of the same number in the animation encoding device according to the first embodiment, the description thereof will be omitted. The loop filter processing unit 302 obtains the local decoded video signal 16 and the input video signal 1〇, and outputs the restored video signal 20 and the filter coefficient information 17 for output. The local decoded video buffer 303 is obtained by acquiring the local decoded video signal 16 generated by the adder 1 〇 6 and temporarily storing it. The local decoded video signal 16 stored in the local decoded video buffer 303 is input to the switching information generation predicting unit 301A. The restored video buffer 3 04 acquires the restored video signal 20 generated by the loop filter processing unit 302 and temporarily stores it. The restored video signal 20' stored in the restored image buffer 304 is input to the switching information generation predicting unit 301A. The reference switching prediction unit 305A has a prediction unit 200948092 generation unit 101 and a loop filter switching unit 114 as shown in Fig. 13 . Here, the prediction signal generation unit 101 and the loop filter switching unit 114 perform the same operations as the constituent elements* of the same number in the animation encoding apparatus 1000 described in the first embodiment. Referring to the switching prediction unit 305A, the local decoded video signal 16, the restored video signal 20, and the switching information 18 are obtained, and based on the acquired switching information 18, the local decoded video signal 16 or the restored video signal 20 is used as the As a reference image, the prediction process is performed in the e-line, and the predicted image signal 1 1 is output. The prediction processing system may use, for example, prediction of the temporal direction of motion prediction and motion compensation, or prediction of the spatial direction from the decoded image in the picture. The above is the configuration of the animation encoding device according to the second embodiment. Next, the operation of the animation encoding apparatus according to the second embodiment will be described in detail with reference to Figs. 11, 12, 13, 14, and 15. Further, Fig. 15 is a flowchart showing the operation of the loop filter in the animation encoding device 3 000 according to the second embodiment. First, similarly to the conventional hybrid coding or the animation coding apparatus 1000 according to the first embodiment, prediction, conversion, quantization, and entropy coding are performed, and local decoding is performed inside the coding apparatus to generate a locally decoded video signal 16 Next, the generated partial decoded video signal 16 is temporarily stored in the local decoded video buffer 310 (step S3100). The filter setting unit 11 in the loop filter processing unit 312 obtains the local decoded video signal 16 and the input video signal 10' to set the subtraction coefficient information 17 (step S3101). Here, the 2-dimensional Wiener filter generally used in image restoration is used, and the filter coefficient is designed to be -27-200948092, which causes the image of the filter image processed by the local decoded image signal 16 to be averaged with the input image signal ίο. The squared error becomes the minimum 'The designed filter coefficient and the 表示' indicating the filter size are set as the filter coefficient information 17. The filter coefficient information 17' that has been set is similarly outputted to the filter processing unit 113' inside the loop filter processing unit 302, and is also output to the entropy coding unit 104. The filter processing unit 113 in the loop filter processing unit 302 performs filter processing on the locally decoded video signal 16 using the filter coefficient information 17 obtained by the filter setting unit 11 to generate a restored image. Signal 20 (step S3102). The generated restored video signal 20 is temporarily stored in the restored video buffer 310 (step S3103). The filter coefficient information 17 generated by the filter setting unit 110 is encoded by the entropy encoding unit 104, and is multiplexed together with the quantized conversion coefficient 13, prediction mode information, block size switching information, motion vector, quantization parameter, and the like. The bit stream is converted into a stream stream and transmitted to the animation decoding device 4000 (to be described later) (step S3104). At this time, the filter coefficient information 17 is described in the loop filter data syntax (1 906) in the slice level syntax (1903) in the syntax structure of FIG. 31, and is described as FIG. 3 2 (b) ) shown. Here, the filter coefficient information 17, that is, filter_coeff[cy][cx] of Fig. 32(b) is a coefficient of a two-dimensional filter, and filter_size_y and filter_size_x are 値 which determine the size of the galvanometer. Here, the 表示 indicating the size of the filter is described in the grammar. However, as another embodiment, the pre-defined fixed 値 may be used as the filter size without being described in the grammar. However, if the size of the subtraction mirror is set to be fixed, it is necessary to use the same flaw in the animation encoding apparatus 3000 and the animation decoding apparatus 4000 described later in -28-200948092, which must be noted. Then, the switching information generation predicting unit 301A performs the predetermined prediction processing by using the local decoded video-image signal 16 or the restored video signal 20 as the reference video, and outputs the predicted video signal 1 1 (step S3 1 05 to 3 1 1). 3). Here, first, the switching prediction unit 305A acquires the predicted video when the partial decoded video is the reference video and the predicted video when the restored video is the φ reference video, and the switching information generating unit 112 performs the based on these. The switching determination process generates switching information 1 8 for determining which of the locally decoded video or the restored video is to be used as the reference video. The switching prediction unit 3 05A switches the switch SW inside the loop filter switching unit 114 of Fig. 13 based on the generated switching information 18, and then outputs the predicted video signal 11. The detailed description of the actions of steps 3105 to 3113 is as follows. First, referring to the switching prediction unit 3 0 5 A, the switch SW in the loop filter switching unit 114 of FIG. 13 is connected to the terminal A, and the local decoded video signal 16 is regarded as the localized decoded video signal 16 in the prediction signal generating unit 101. The predicted video signal 11 is obtained by referring to the video, and is input to the switching information generating unit 112 of Fig. 12 (step S3 105). Next, referring to the switching prediction unit 305A, the switch SW is connected to the terminal B, and the predicted video signal 20 is used as the reference image in the prediction signal generating unit 1〇1 to obtain the predicted video signal 11, and the switch is input to FIG. The information generating unit 112 (step S3 106). Next, the switching information generating unit 112 of FIG. 2 calculates the residual squared sum SSDA of the predicted image and the input video signal 1 取得 obtained from the locally decoded video signal 16 and the reconstructed video signal 20 The obtained residual image summation SSDB of the predicted image and the input image signal 10 (step S3107) aSSDA and SSDB, if the predicted image obtained from the locally decoded video signal 16 is Fi, and the reconstructed video signal 20 is If the obtained predicted image is Gi, it can be expressed by the same expression as [2]. The switching information generating unit 112 performs the following switching determination processing based on the SSDAR SSDB (step S3108). If SSDA〇 is below SSDB, 切换 is set to the switching information 18, i.e., l〇oP_filter_flag (step S3109). On the other hand, if SSDA 〇 is 値 larger than SSDB, 1 is set for loop_filter_flag (step S3110). Here, the above-mentioned macroblock is subjected to the above-described switching determination processing, and the switching information 18 is outputted in the macroblock unit. In another embodiment, the switching determination process may be determined according to a frame unit or a slice unit or a block unit different from the size of the macro block. In this case, the switching information 18 also corresponds to the determination result. Unit to output. Further, when the prediction processing in the prediction signal generating unit 1 0 1 is motion prediction, the switching determination processing is described above, and a general method may be used as the motion prediction processing system, for example, for the residual square sum D, The code amount R of the parameter information such as the switching information, the reference image index, and the motion vector can be added, and the cost J can be calculated for each of the local decoded video and the restored video as follows, and the cost J can be used for the determination. [Number 3]

J = D+ λ xRJ = D+ λ xR

[數3]的λ係以定數而給定,是基於量化寬度或量化參數 -30- 200948092 的値而被決定。將能夠給予如此所得到之成本j爲最小値 的切換資訊、參照影像指數、運動向量,加以編碼。又, ‘ 在本實施形態中,雖然使用了殘差平方和,但作爲其他實 • 施形態,亦可使用殘差絕對値和,或可將這些値作阿達瑪 轉換,或利用近似値等等。又,亦可使用輸入影像的活性 來作成成本’或可利用量化寬度或量化參數來作成成本函 數。又’預測處理並非侷限於運動欲錯,只要是作爲預測 q 處理而需要任何參數的預測,都亦可將該參數的編碼量當 作R而使用上記式子來算出成本J。 接著’圖13的迴圈濾鏡切換部114,係接受已生成之 切換資訊 1 8 亦即 loop —filter_flag,基於 l〇〇p_filter_flag 之値而將內部所具備之開關SW進行切換(步驟S3111) •。當loop_filter_flag爲0時,則迴圈濾鏡切換部114係 將開關SW連接至端子A,將局部解碼影像訊號16當作參 照影像訊號19而輸出,於預測訊號生成部1〇1中,生成 〇 預測影像訊號 11(步驟 S3112)。另一方面,當 loop_filter_flag爲1時’則迴圈濾鏡切換部1 14係將開關 SW連接至端子B,將復原影像訊號20當作參照影像訊號 19而輸出’於預測訊號生成部1〇1中,生成預測影像訊號 1 1 (步驟 S3 1 1 3 )。 以上是切換資訊生成預測部301A中的步驟3105至步 驟3 1 1 3之動作。 最後,熵編碼部104係將切換資訊1 8予以編碼,連 同量化後之轉換係數13、預測模式資訊、區塊尺寸切換資 -31 - 200948092 訊、運動向量、量化參數等一起多工化成位元串流,發送 至後述的動畫解碼裝置4 000 (步驟S3 114)。此時,切換 資訊18亦即l〇〇p_filter_flag’係於圖31之語法結構中的 巨集區塊等級語法( 1907)中所屬的巨集區塊層語法( 1908)中,被記述成如圖32(c)所示。 以上是關於動畫編碼裝置3000中的迴圈濾鏡所涉及 之動作的說明。 接著,說明相對於動畫編碼裝置3000的動畫解碼裝 置。參照圖16,說明第2實施形態所述之動畫解碼裝置的 構成。以下,針對圖1 6的構成要素,分別予以說明。 圖16所示的動畫解碼裝置4 0 00,係具有:參照切換 預測部401A、解碼影像用緩衝區4 02、復原影像用緩衝區 403、熵解碼部201、逆轉換•逆量化部202、加算器204 、濾鏡處理部208,是被解碼控制部207所控制。 此處,熵解碼部201、逆轉換.逆量化部202、加算 器204、濾鏡處理部208、及解碼控制部207,係由於是和 第1實施形態所述之動畫解碼裝置2000中的同編號之構 成要素進行同樣的動作,因此這裡省略說明。 參照切換預測部4 0 1 A,係如圖1 7所示,具有預測訊 號生成部203及迴圈濾鏡切換部209。預測訊號生成部 2〇3及迴圈濾鏡切換部209,係由於分別是和第1實施形 態所述之動畫解碼裝置2000中的同編號之構成要素進行 同樣的動作,因此這裡省略說明。參照切換預測部40 1 A ’係接受解碼影像訊號21、復原影像訊號20及已在熵解 -32- 200948092 碼部20 1中解碼之切換資訊18,基於所收到的切換資訊 18,將解碼影像訊號21或復原影像訊號20之任一者當作 ' 參照影像而進行所定之預測處理’輸出預測影像訊號1 1。 • 預測處理係採用例如運動補償的時間方向之預測,或畫面 內的從已解碼之影像起算之空間方向的預測等即可,但必 須要執行與動畫編碼裝置3000同樣之預測處理,這點必 須留意。 © 解碼影像用緩衝區402,係將加算器204所生成的解 碼影像訊號21加以取得,並予以暫時保存。解碼影像用 緩衝區4 02中所保存的解碼影像訊號21,係被輸入至參照 切換預測部40 1A。復原影像用緩衝區403,係將濾鏡處理 部208所生成的復原影像訊號20加以取得,並予以暫時 保存。復原影像用緩衝區403中所保存的復原影像訊號20 ,係被輸入至參照切換預測部40 1A。 以上是第2實施形態所述之動畫解碼裝置的構成。 〇 接著’使用圖16、圖17及圖18’來詳細說明第2實 施形態所述之動畫解碼裝置的動作。此外,圖18係第2 實施形態所述之動畫解碼裝置4000中的關於迴圏濾鏡之 動作的流程圖。 首先,一旦編碼資料14被輸入至動畫解碼裝置4000 ,則藉由熵解碼部2 0 1,除了轉換係數1 3、濾鏡係數資訊 17、切換資訊18,還有、預測模式資訊、區塊尺寸切換資 訊、運動向量、量化參數等,就會依照圖31的語法結構 而被解碼(步驟S4100)。 -33- 200948092 圖31之語法結構中的切片等級語法( 1903)中所屬 之迴圈濾鏡資料語法(1 906 )內’係如圖32 ( b)所示’ 記述著濾鏡係數資訊17。此處’濾鏡係數資訊17亦即圖 32(b)的filter_coeff[cy][cx]係爲2維滤鏡的係數’ filter_size_y及filter_size_x係爲決定減鏡尺寸的値。此 處雖然將表示濾鏡尺寸的値,記述於語法中’但作爲其他 實施形態,係亦可不記述在語法中而是將預先制定的固定 値當作濾鏡尺寸來使用。只不過’若將濾鏡尺寸設成固定 値,則在動畫編碼裝置3000與動畫解碼裝置4000中必須 要使用同樣的値才行,這點必須留意。 又,圖31之語法結構中的巨集區塊等級語法(1907 )中所屬之巨集區塊層語法(1 908 )內,係如圖32 ( c ) 所示,記述著l〇〇p_filter_flag作爲切換資訊18。 已被熵解碼部201所解碼之轉換係數13,係和先前之 混合式編碼或第1實施形態所述之動畫解碼裝置2000同 樣地,被進行熵解碼、逆量化、逆轉換,與已被參照切換 預測部401A所輸出之預測影像訊號11進行加算,成爲解 碼影像訊號21而被輸出。解碼影像訊號21,係被輸出至 濾鏡處理部208,同時被暫時保存於解碼影像用緩衝區 402 中(步驟 S4101 )。 濾鏡處理部208’係將已在熵解碼部201中所復原之 濾鏡係數資訊17,加以取得(步驟S4102 )。又,濾鏡處 理部2 08’係接受解碼影像訊號21,對解碼影像使用濾鏡 係數資訊17而進行濾鏡處理,生成復原影像訊號20(步 -34- 200948092 驟S4103)。濾鏡處理部208,係將已生成之復原影像訊 號20當作輸出影像訊號22而輸出,並且暫時保存於復原 ' 影像用緩衝區403中(步驟S41〇4)。 ' 接著,參照切換預測部401A,係將解碼影像訊號21 或復原影像訊號20當作參照影像而生成預測影像訊號11 (步驟4105〜4108)。以下,針對步驟4105至步驟4108 之動作加以說明。 Q 首先,圖17的參照切換預測部401A,係將已解碼之 切換資訊 18亦即l〇op_filter_flag,加以取得(步驟 S4105)。接著,參照切換預測部401A內部的迴圈濾鏡切 換部209,係基於已取得之l〇〇p_filter_flag,來切換開關 SW (步驟 S4106)。當 loop_filter_flag 爲 0 時,則迴圈 - 濾鏡切換部209係將開關SW連接至端子A,將解碼影像 訊號21當作參照影像訊號1 9而加以取得,然後送往預測 訊號生成部203。預測訊號生成部203係基於解碼影像訊 ❿ 號2 1所對應之參照影像訊號1 9,來生成預測影像訊號1 1 (步驟 S4107)。另一方面,當 lo〇p_filter_flag 爲 1 時, 則迴圈濾鏡切換部209係將開關SW連接至端子B ’將復 原影像訊號20當作參照影像訊號19而加以取得’然後送 往預測訊號生成部203。預測訊號生成部203係基於復原 影像訊號20所對應之參照影像訊號1 9 ’來生成預測影像 訊號1 1 (步驟S4108 )。 以上是參照切換預測部401A中的步驟S4105至步驟 S4 108之動作。 -35- 200948092 此處,參照切換預測部401A,係按照圖32(c)的語 法,以巨集區塊單位來取得切換資訊18,並且將開關SW 予以切換。作爲其他實施形態,當動畫編碼裝置3000是 以畫格單位或切片單位或是異於巨集區塊之尺寸的區塊單 位來將切換資訊18進行編碼的情況下,動畫解碼裝置 4000,係以同樣的單位來執行切換資訊18的解碼,以及 迴圈濾鏡切換部209的開關SW之切換。 以上是關於動畫解碼裝置4000中的迴圈濾鏡所涉及 之動作的說明。 如此,若依據第2實施形態所述之動畫編碼裝置,則 藉由設定會使輸入影像與預測訊號之誤差呈最小的迴圈濾 鏡的濾鏡係數資訊來進行濾鏡處理,就可促使參照影像的 畫質提升。又,藉由以切換資訊生成預測部301A之內部 所具備之迴圈濾鏡切換部114來切換是要將局部解碼影像 訊號16與復原影像訊號20之哪一者當作參照影像來進行 預測處理,對於會因濾鏡而導致畫質降低之領域則不將復 原影像訊號20當作參照影像使用,藉此而防止畫質降低 的傳播,並且,對於畫質會提升之領域則是將復原影像訊 號20當作參照影像使用,藉此就可促使預測精度提升。 又,若依據第2實施形態所述之動畫解碼裝置40 00, 則藉由使用與動畫編碼裝置3000同樣的濾鏡係數資訊、 切換資訊來進行濾鏡處理及參照影像之切換,就可保證動 畫編碼裝置3000中的參照影像與動畫解碼裝置4000中的 參照影像是同步的。 200948092 此外,在第2實施形態所述之動畫解碼裝置4000中 ,雖然是將濾鏡處理部208上所生成的復原影像訊號20 當作輸出影像訊號22而加以輸出’但作爲其他實施形態 • ,亦可將解碼影像訊號21當作輸出影像訊號22而加以輸 出。又,在第2實施形態所述之動畫編碼裝置3〇〇〇及動 畫解碼裝置4000中,雖然是對局部解碼影像訊號16進行 濾鏡處理,但局部解碼影像訊號16係可採用先前之施行 0 過去區塊濾鏡處理後的影像。 又,於第2實施形態所述之動畫編碼/解碼裝置中, 雖然在上記實施施形態裡,復原影像用緩衝區中所被暫時 保存的復原影像訊號20是被參照切換預測部401A取得並 使用’但作爲其他實施形態’亦可不具備復原影像用緩衝 區而改成在參照切換預測部內部具備有濾鏡處理部,而在 參照切換預測部內部生成復原影像。關於此種實施形態的 動畫編碼裝置及動畫解碼裝置之圖示,示於圖19〜圖23 ❹ 圖19的動畫編碼裝置3001係具有切換資訊生成預測 部301B,圖20的切換資訊生成預測部301B係具有參照 切換預測部305B。圖21的參照切換預測部3 05B,係在內 部具備濾鏡處理部113,藉此,就可藉由取得局部解碼影 像訊號1 6及濾鏡係數資訊1 7而生成復原影像訊號20。又 ,圖22的動畫解碼裝置4001,係具有參照切換預測部 401B。圖23的參照切換預測部401B,係在內部具備濾鏡 處理部208,藉此,就可藉由取得解碼影像訊號21及濾鏡 -37- 200948092 係數資訊17而生成復原影像訊號20。 如此’藉由採取如動畫編碼裝置3 00 1及動畫解碼裝 置4 0 0 1般地構成,就可一面削減復原影像用緩衝區所需 之記憶體量,一面實現動畫編碼裝置3000及動畫解碼裝 置4000的同樣動作。 此外’該動畫編碼裝置3000、3001、及動畫解碼裝置 4000、4 00 1 ’係例如亦可藉由採用例如通用之電腦裝置爲 基本硬體來實現。亦即,切換資訊生成預測部301、迴圈 濾鏡處理部3 02、局部解碼影像用緩衝區3 03、復原影像 用緩衝區3 04、參照切換預測部3 05、預測訊號生成部101 、減算器102、轉換·量化部103、熵編碼部104、逆轉換 •逆量化部105、加算器106、編碼控制部109、濾鏡設定 部110、切換資訊生成部112、濾鏡處理部113、迴圈濾鏡 切換部114、參照切換預測部40 1、解碼影像用緩衝區40 2 、復原影像用緩衝區403、熵解碼部201、逆轉換•逆量 化部202、預測訊號生成部203、加算器204、解碼控制部 207、濾鏡處理部208、及迴圈濾鏡切換部209,係可藉由 令上記電腦裝置中所搭載的處理器去執行程式而加以實現 。此時,動畫編碼裝置3000、3001及動畫解碼裝置4000 、4001係爲,將上記程式事先安裝至電腦裝置而加以實現 ,或亦可記憶在CD-ROM等記憶媒體中’或透過網路來散 佈上記程式,以將該程式適宜地安裝至電腦裝置而加以實 現。又,局部解碼影像用緩衝區3 03、復原影像用緩衝區 3 04、解碼影像用緩衝區402、及復原影像用緩衝區403’ 200948092 係可適宜利用內藏於上記電腦裝置中的擴充記憶體、硬碟 或CD-R、CD-RW、DVD-RAM、DVD-R等記億媒體等來加 以實現。 (第3實施形態) 參照圖24’說明第3實施形態所述之動畫編碼裝置的 構成。以下,針對圖24的構成要素,分別予以說明。 φ 圖24所示的動畫編碼裝置5000,係具有:迴圈濾鏡 處理部501、預測訊號生成部1〇1、減算器i 〇2、轉換.量 化部103、熵編碼部104、逆轉換•逆量化部105、加算器 106、參照影像用緩衝區108,是被編碼控制部109所控制 〇 此處,預測訊號生成部101、減算器102、轉換.量 化部103、熵編碼部104、逆轉換.逆量化部105、加算器 106、參照影像用緩衝區108、及編碼控制部109,係由於 6 是和第1實施形態所述之圖1的動畫編碼裝置1 000中的 同編號之構成要素進行同樣的動作,因此這裡省略說明。 迴圈濾鏡處理部501,係接受局部解碼影像訊號16及 輸入影像訊號1 〇,並將參照影像訊號1 9及濾鏡係數資訊 17予以輸出。關於迴圈濾鏡處理部501的詳細說明,將於 後述。 接著,針對第3實施形態所述之動畫編碼裝置中的迴 圈濾鏡處理部501,使用圖25及圖26來詳細說明。以下 ,針對圖25及圖26的構成要素,分別予以說明。 -39- 200948092 圖25所示的迴圈濾鏡處理部501,係具有濾鏡設定部 110及切換資訊生成濾鏡處理部502,再者,切換資訊生 成濾鏡處理部502係如圖26所示,具有切換濾鏡處理部 -111及切換資訊生成部503。此處,濾鏡設定部110、切換 濾鏡處理部111,係由於是和第1實施形態所述之圖2的 迴圈濾鏡處理部107中的同編號之構成要素進行同樣的動 作,因此這裡省略說明。 切換資訊生成部503,係被編碼控制部109所控制, ❻ 接受局部解碼影像訊號16,依照所定之切換判定方法來生 成切換資訊18。已生成之切換資訊18,係被輸入至切換 濾鏡處理部111。關於切換判定方法的細節,將於後述。 以上是第3實施形態所述之動畫編碼裝置的構成。 接著,使用圖25、圖26及圖27,來詳細說明第3實 - 施形態所述之動畫編碼裝置中的迴圈濾鏡的動作。此外, 圖27係第3實施形態所述之動畫編碼裝置5000中的關於 迴圈濾鏡之動作的流程圖。 ❹ 首先,圖25所示的濾鏡設定部110,係取得局部解碼 影像訊號16及輸入影像訊號10,進行濾鏡係數資訊17之 設定(步驟S5100)。此處,濾鏡設定部U0係使用影像 復原中一般會採用的2維Wiener filter,將濾鏡係數設計 成會使得對局部解碼影像訊號16施加濾鏡處理後之影像 、與輸入影像訊號1〇的平均平方誤差成爲最小,將已設 計之濾鏡係數及表示濾鏡尺寸的値,當成濾鏡係數資訊17 而進行設定。已設定之濾鏡係數資訊17,係輸出至圖25 -40- 200948092 所示的切換資訊生成濾鏡處理部5 02,同時也被輸出至熵 編碼部1 0 4。 * 接著,圖26所示之切換資訊生成濾鏡處理部5 02內 • 部所具備之切換資訊生成部5 03,係取得局部解碼影像訊 號16(步驟S5101)。切換資訊生成部503,係將局部解 碼影像訊號16中的注目像素,與其周邊像素的差分絕對 値和SAD,按照像素單位而加以算出(步驟S5102)。 〇 SAD,係若令局部解碼影像內的像素之座標爲x、y,局部 解碼影像爲F(x,y),則可以下式表示。 〔數4〕 SAD = Σ Σ \F{x, y) - F(x + i,y + ;)| -接著,使用SAD和預先設定至編碼控制部109的閾 値T,進行以下的切換判定處理(步驟S5 103 )。若SAD 爲T以下,則對切換資訊18亦即l00p_filter_flag設定0 Q (步驟 S5104)。反之,若SAD爲大於T的値,則對 l〇op_filter_flag設定1(步驟S5105)。在這裡,雖然是 依照像素單位來求出切換資訊18,但作爲其他實施形態, 亦可將切換判定處理依照畫格單位或切片單位或是巨集區 塊單位或異於巨集區塊之尺寸的區塊單位來進行判定,此 情況下,切換資訊1 8也是以對應於其之單位進行輸出。 又,此處雖然使用了對象像素與周邊像素的差分絕對 値和,但作爲其他實施形態,亦可使用差分平方和’只要 是能夠從局部解碼影像算出的指標即可,亦可使用活性或 空間頻率、邊緣強度、邊緣方向等指標。又,此處’雖然 -41- . 200948092 是從局部解碼影像算出上記指標’但亦可將對局部解碼影 像進行濾鏡處理後的復原影像加以取得’從復原影像來算 出上記指標。 又,作爲其他實施形態,亦可基於屬於編碼資訊之一 部分的量化參數、區塊尺寸、預測模式、運動向量、轉換 係數等來進行切換判定處理。 接著,圖26所示的切換濾鏡處理部111,係除了取得 已生成之切換資訊18,還取得局部解碼影像訊號16及濾 鏡係數資訊1 7,藉由進行與第1實施形態所述之動畫編碼 裝置1000中的圖4之步驟S1107至步驟S1109同樣的動 作,輸出參照影像訊號19。亦即,圖3的迴圈濾鏡切換部 1 14,係取得已生成之切換資訊18亦即l〇op_filter_flag ’ 基於l〇〇P_filter_fUg之値而將內部所具備之開關SW進行 切換(步驟S5106)。當l〇〇p_filter_flag爲0時,則迴圈 濾鏡切換部114係將開關SW連接至端子A,將局部解碼 影像訊號16當作參照影像訊號19而暫時保存於參照影像 用緩衝區 108中(步驟 S5107 )。另一方面,當 1〇(^_^^1"_0&§爲1時,則迴圈濾鏡切換部114係將開關 SW連接至端子B,將復原影像訊號20當作參照影像訊號 19而暫時保存於參照影像用緩衝區108中(步驟S5108) 〇 最後,濾鏡設定部1 1 〇所生成之濾鏡係數資訊1 7,係 被熵編碼部104進行編碼,連同量化後之轉換係數13、預 測模式資訊、區塊尺寸切換資訊、運動向量、量化參數等 -42- 200948092 一起被多工化成位元串流,然後被發送至後述的動畫解碼 裝置6000 (步驟S5109)。此時,濾鏡係數資訊17,係 • 於圖31之語法結構中的切片等級語法( 1903)中所屬之 • 迴圈濾鏡資料語法( 1906)中,被記述成如圖32(b)所 示。濾鏡係數資訊17亦即圖32 ( b )的 filter_coeff [cy][cx]係爲 2 維濾鏡的係數,filter_size_y 及 filter_siZe_x係爲決定濾鏡尺寸的値。此處雖然將表示濾 φ 鏡尺寸的値’記述於語法中,但作爲其他實施形態,係亦 可不記述在語法中而是將預先制定的固定値當作濾鏡尺寸 來使用。只不過’若將濾鏡尺寸設成固定値,則在動畫編 碼裝置5000與後述的動畫解碼裝置6000中必須要使用同 樣的値才行,這點必須留意。 以上是關於動畫編碼裝置5000中的迴圈瀘鏡所涉及 之動作的說明。 接著’說明相對於動畫編碼裝置6000的動畫解碼裝 Q 置。參照圖28 ’說明第3實施形態所述之動畫解碼裝置的 構成。以下,針對圖2 8的構成要素,分別予以說明。 圖28所示的動畫解碼裝置6000,係具有:切換資訊 生成濾鏡處理部601、熵解碼部201、逆轉換•逆量化部 202、預測訊號生成部203、加算器204、參照影像用緩衝 區2 0 6,是被解碼控制部2 0 7所控制。 此處’熵解碼部201、逆轉換•逆量化部202、預測 訊號生成部203、加算器204、參照影像用緩衝區206、及 解碼控制部2 0 7 ’係由於是和第1實施形態所述之圖5的 -43- 200948092 動畫解碼裝置2000中的同編號之構成要素進行同樣的動 作,因此這裡省略說明。 切換資訊生成濾鏡處理部601,係取得解碼影像訊號 21及濾鏡係數資訊17,輸出參照影像訊號19及輸出影像 訊號22。關於切換資訊生成濾鏡處理部601的詳細說明, 將於後述。 接著,針對第3實施形態所述之動畫解碼裝置中的切 換資訊生成濾鏡處理部601,使用圖29來詳細說明。以下 ,針對圖29的構成要素,分別予以說明。 圖29所示的切換資訊生成濾鏡處理部601,係具有切 換資訊生成部602及切換濾鏡處理部205。此處,切換濾 鏡處理部205,係由於是和第1實施形態所述之圖5的動 畫解碼裝置2000中的同編號之構成要素進行同樣的動作 ,因此這裡省略說明。 切換資訊生成部602,係被解碼控制部207所控制, 取得解碼影像訊號21,依照所定之切換判定方法來生成切 換資訊18。已生成之切換資訊18,係被輸入至切換濾鏡 處理部20 5。關於切換判定方法的細節,將於後述。 以上是第3實施形態所述之動畫解碼裝置的構成。 接著,使用圖28、圖29及圖30,來詳細說明第3實 施形態所述之動畫解碼裝置的動作。此外,圖30係第3 實施形態所述之動畫解碼裝置6000中的關於迴圈濾鏡之 動作的流程圖。 首先,熵解碼部201,係按照圖3 1的語法結構,將濾 -44 - 200948092 鏡係數資訊17予以解碼(步驟S6100)。圖31之語法結 構中的切片等級語法( 1 903 )中所屬之迴圈濾鏡資料語法 ' (1 906 )內,係如圖32 ( b )所示’記述著濾鏡係數資訊 • 17。此處,濾鏡係數資訊 17亦即圖 32 ( b )的 filter_coeff[cy][cx]係爲 2 維濾鏡的係數,filter_size_y& filter_size_x係爲決定濾鏡尺寸的値。此處雖然將表示濾 鏡尺寸的値,記述於語法中,但作爲其他實施形態,係亦 Q 可不記述在語法中而是使用預先制定的固定値。只不過, 若將濾鏡尺寸設成固定値,則在動畫編碼裝置5000與動 畫解碼裝置6000中必須要使用同樣的値才行,這點必須 留意。 切換資訊生成濾鏡處理部601,係取得已解碼之濾鏡 '係數資訊17(步驟S6101)。又,切換資訊生成濾鏡處理 部601 ’係取得從加算器2 04取得之解碼影像訊號21 (步 驟S6102)。圖29中所示的切換資訊生成濾鏡處理部601 © 的內部所具備之切換資訊生成部602,係將解碼影像訊號 21中的注目像素’與其周邊像素的差分絕對値和SAD, 按照像素單位而加以算出(步驟S6103 ) 。SAD,係若令 解碼影像內的像素之座標爲x、y,解碼影像爲 ’則可以[數4]來表示。 使用SAD和預先設定至解碼控制部207的閾値T,進 行以下的切換判定處理(步驟S 6 1 〇 4 )。此處,閾値T係 使用與曾在動畫編碼裝置5〇〇〇中所設定之閾値τ相同値 ,迫點必須留意。若s AD爲Τ以下,則對切換資訊18亦 -45- 200948092 即 loop_filter_flag 設定 0 (步驟 S 6105 )。反之’若 SAD 爲大於T的値,則對loop_filter_flag設定1 (步驟S6106 )。在這裡,雖然是依照像素單位來求出切換資訊18’但 作爲其他實施形態,亦可將切換判定處理依照畫格單位或 切片單位或是巨集區塊單位或異於巨集區塊之尺寸的區塊 單位來進行判定,此情況下,切換資訊18也是以對應於 其之單位進行解碼。 又,此處雖然使用了對象像素與周邊像素的差分絕對 値和,但作爲其他實施形態,亦可使用差分平方和,只要 是能夠從解碼影像算出的指標即可,亦可使用活性或空間 頻率、邊緣強度、邊緣方向等指標。又,此處,雖然是從 解碼影像算出上記指標,但亦可將對解碼影像進行濾鏡處 理後的復原影像加以取得,從復原影像來算出上記指標。 又,作爲其他實施形態,亦可基於屬於編碼資訊之一 部分的量化參數、區塊尺寸、預測模式、運動向量、轉換 係數等來進行切換判定處理。無論如何,動畫解碼裝置 6 000中的切換資訊生成部602,係必須要和動畫編碼處理 5 000中的切換資訊生成部5 03進行同樣的切換判定處理, 這點必須留意。 圖29所示的切換濾鏡處理部205,係除了取得已生成 之切換資訊18,還取得解碼影像訊號21及濾鏡係數資訊 17’藉由進行與第1實施形態所述之動畫解碼裝置2〇〇〇 中的圖7之步驟S2103至步驟S2105同樣的動作,輸出參 照影像訊號19。亦即,圖6的迴圈濾鏡切換部2 05A,係 200948092 取得已生成之切換資訊 18亦即l〇〇P_filter_flag,基於 l〇〇p_filter_flag之値而將內部所具備之開關SW進行切換 . (步驟S6107 )。當l〇〇p_filter_flag爲0時,則迴圈濾鏡 • 切換部205A係將開關SW連接至端子A,將解碼影像訊 號21當作參照影像訊號19而暫時保存於參照影像用緩衝 區 206 中(步驟 S6108)。另一方面,當 loop_filter_flag 爲1時,則迴圈濾鏡切換部2 05A係將開關SW連接至端 φ 子B,將復原影像訊號20當作參照影像訊號19而暫時保 存於參照影像用緩衝區206中(步驟S6109)。 如以上,切換資訊生成濾鏡處理部601,係使用已取 得之解碼影像訊號21,藉由與第3實施形態所述動畫編碼 裝置中的圖26之切換資訊生成濾鏡處理部502同樣的動 '作,來生成切換資訊18,並且輸出參照影像訊號19。 以上是關於動畫解碼裝置6000中的迴圈瀘鏡所涉及 之動作的說明。 〇 如此,若依據第3實施形態所述之動畫編碼裝置,則 藉由設定會使輸入影像與預測影像之誤差呈最小的迴圈濾 鏡的濾鏡係數資訊來進行濾鏡處理,就可促使參照影像的 畫質提升。又,以迴圈濾鏡切換部114來對每一局部領域 切換是要將局部解碼影像訊號16與復原影像訊號18之哪 一者當作參照影像來保持,是使用從局部解碼影像16所 算出之指標來對每一局部領域作切換,藉此,可防止濾鏡 所導致的畫質降低之傳播’可促使編碼效率提升。 又,若依據第3實施形態所述之動畫解碼裝置,則藉 -47- 200948092 由使用與動畫編碼裝置同樣的濾鏡係數資訊來進行濾鏡處 理,並且進行同樣的切換判定處理,就可保證動畫編碼裝 置中的參照影像與動畫解碼裝置中的參照影像是同步的。 ’ 又,若依據第3實施形態所述之動畫編碼裝置及動畫 · 解碼裝置,則在編碼側基於可從局部解碼影像所算出之指 標來生成切換資訊,藉此,在解碼側也能夠從解碼影像算 出同樣的切換資訊。因此,就可削減將切換資訊編碼時的 編碼量。 © 此外,在第3實施形態所述之動畫解碼裝置600 0中 ,係亦可和在第1實施形態所述之動畫解碼裝置2000同 樣地,將切換濾鏡處理部205設計成如圖8或圖9所示之 構成,以將解碼影像21或參照影像19當作輸出影像訊號 2 2而加以輸出。 又,亦可將切換濾鏡處理部205設計成如圖10之構 成,在切換資訊生成部新生成後段濾鏡用的切換資訊’藉 此以將圖10的後段濾鏡切換部210中所具備之開關SW2 Q 作切換,以切換輸出影像訊號22。 此外,在第3實施形態所述之動畫編碼裝置5000及 動畫解碼裝置6000中,雖然是對局部解碼影像訊號16進 行濾鏡處理,但局部解碼影像訊號16係可採用先前之施 行過去區塊濾鏡處理後的影像。 此外,動畫編碼裝置5000及動畫解碼裝置6000,係 例如亦可藉由採用例如通用之電腦裝置爲基本硬體來實現 。亦即,迴圈濾鏡處理部501、切換資訊生成濾鏡處理部 -48- 200948092 5 02、切換資訊生成部503、預測訊號生成部1〇1、 102、轉換·量化部103、熵編碼部104、逆轉換. ‘ 部105、加算器106、參照影像用緩衝區1〇8、編碼 • 1〇9、濾鏡設定部110、切換濾鏡處理部111、切換 成濾鏡處理部601、切換資訊生成部602、熵解碼蔚 逆轉換•逆量化部202、預測訊號生成部203、加舞 、切換濾鏡處理部205、參照影像用緩衝區206及 ❹ 制部207,係可藉由令上記電腦裝置中所搭載的處 執行程式而加以實現。此時,動畫編碼裝置5 000 解碼裝置6000係爲,將上記程式事先安裝至電腦 加以實現,或亦可記憶在CD-ROM等記憶媒體中, 網路來散佈上記程式,以將該程式適宜地安裝至電 而加以實現。又,參照影像用緩衝區108及參照影 衝區206,係可適宜利用內藏於電腦裝置中的擴充 、硬碟或 CD-R、CD-RW、DVD-RAM、DVD-R 等記 © 等來加以實現。 (第4實施形態) 在上記第1實施形態中係例示了,將切換資訊 16像素的巨集區塊單位來設定之例子。另一方面, 形態中的切換資訊的設定單位,係並非限定於上記 塊,而是可採用序列、畫格、或是將畫面內分割成 或像素區塊等。 在本實施形態中係說明,於上述的第1實施形 減算器 逆量化 控制部 資訊生 :201、 器204 解碼控 理器去 及動畫 裝置而 或透過 腦裝置 像用緩 記憶體 憶媒體 以1 6 X 本實施 巨集區 的切片 態所述 -49- 200948092 之編碼及解碼方法中,是令切換資訊的設定單位爲每一像 素區塊,並將像素區塊的尺寸作適應性切換的方法。 針對第4實施形態所述之編碼處理及解碼裝置中的切 換濾鏡處理部111,使用圖37、圖38來說明。圖37及圖 38的切換濾鏡處理部111,係爲圖3及圖6的切換濾鏡處 理部111之變形例,是只有藉由開關SW將復原影像訊號 20當作參照影像之情形時,進行濾鏡處理之構成。亦即, 於圖37中,在濾鏡處理部113的前段設置開關SW,開關 SW的端子A係直接被導出至參照影像用緩衝區108,將 局部解碼影像訊號16直接當作參照影像訊號而儲存在緩 衝區108中。開關SW的端子B係透過濾鏡處理部113而 被導出至參照影像用緩衝區1〇8,局部解碼影像訊號16係 在濾鏡處理部113中隨應於濾鏡係數資訊17而被進行濾 鏡處理,然後當作參照影像訊號而被儲存在緩衝區1〇8中 。又,於圖38中,在濾鏡處理部208的前段設置開關SW ,開關SW的端子A係直接被導出至輸出線,解碼影像訊 號21係直接被輸出至輸出線。開關SW的端子B係透過 濾鏡處理部2 0 8而被導出至輸出線’解碼影像訊號係藉由 濾鏡處理部208隨應於濾鏡係數資訊17而被進行濾鏡處 理,然後當作參照影像訊號19及輸出影像訊號22而被輸 出至輸出線。藉由如此構成,僅當切換資訊18亦即 loop_filter_flag爲1時,則濾鏡處理部113或208只要進 行濾鏡處理即可,因此相較於圖3及圖6之構成’可更加 削減處理成本。圖37及38的變形例係當然亦可適用於第The λ system of [Number 3] is given by a fixed number and is determined based on the quantization width or the quantization parameter -30-200948092. It is possible to encode the switching information, the reference image index, and the motion vector, which are obtained at such a cost j as the minimum 値. Further, in the present embodiment, although the sum of squares of residuals is used, as other embodiments, absolute residual sums may be used, or these may be converted into Hadamards, or approximates, etc. . Alternatively, the activity of the input image can be used to create a cost' or a quantization function can be made using a quantization width or quantization parameter. Further, the prediction process is not limited to the motion error. As long as it is a prediction that requires any parameter as the prediction q process, the code J of the parameter can be used as the R and the cost J can be calculated using the above formula. Then, the loop filter switching unit 114 of FIG. 13 receives the generated switching information 18, that is, loop_filter_flag, and switches the switch SW provided inside based on l〇〇p_filter_flag (step S3111). . When the loop_filter_flag is 0, the loop filter switching unit 114 connects the switch SW to the terminal A, and outputs the local decoded video signal 16 as the reference video signal 19, and generates the 解码 in the prediction signal generating unit 1〇1. The video signal 11 is predicted (step S3112). On the other hand, when the loop_filter_flag is 1, the loop filter switching unit 14 connects the switch SW to the terminal B, and the restored video signal 20 is output as the reference video signal 19 to the prediction signal generating unit 1〇1. The predicted video signal 1 1 is generated (step S3 1 1 3 ). The above is the operation from step 3105 to step 3 1 1 3 in the switching information generation prediction unit 301A. Finally, the entropy coding unit 104 encodes the switching information 18, and together with the quantized conversion coefficient 13, the prediction mode information, the block size switching resource, the motion vector, the quantization parameter, etc. The stream is sent to an animation decoding device 4 000, which will be described later (step S3 114). At this time, the switching information 18, that is, l〇〇p_filter_flag' is in the macroblock layer syntax (1908) belonging to the macroblock level syntax (1907) in the syntax structure of Fig. 31, and is described as 32(c). The above is an explanation of the operation of the loop filter in the animation encoding device 3000. Next, an animation decoding device with respect to the animation encoding device 3000 will be described. The configuration of the animation decoding apparatus according to the second embodiment will be described with reference to Fig. 16 . Hereinafter, the components of Fig. 16 will be described separately. The animation decoding apparatus 400 of the present invention has a reference switching prediction unit 401A, a decoded video buffer 502, a restored video buffer 403, an entropy decoding unit 201, an inverse conversion/inverse quantization unit 202, and an addition. The filter 204 and the filter processing unit 208 are controlled by the decoding control unit 207. Here, the entropy decoding unit 201, inverse conversion. The inverse quantization unit 202, the adder 204, the filter processing unit 208, and the decoding control unit 207 perform the same operations as the components of the same number in the animation decoding device 2000 according to the first embodiment. The description is omitted. The reference prediction unit 4 0 1 A has a prediction signal generation unit 203 and a loop filter switching unit 209 as shown in Fig. 17 . Since the prediction signal generation unit 2〇3 and the loop filter switching unit 209 perform the same operations as the components of the same number in the animation decoding device 2000 according to the first embodiment, the description thereof is omitted here. The reference switching prediction unit 40 1 A ' receives the decoded video signal 21, the restored video signal 20, and the switching information 18 that has been decoded in the entropy solution -32-200948092 code portion 20 1 , and is decoded based on the received switching information 18 Any one of the video signal 21 or the restored video signal 20 outputs a predicted video signal 1 1 as a predetermined prediction process for the reference image. • The prediction processing may use, for example, prediction of the temporal direction of motion compensation, or prediction of the spatial direction from the decoded image in the picture, but it is necessary to perform the same prediction processing as the animation coding apparatus 3000, which must be performed. Keep an eye out. The decoded video buffer 402 is obtained by acquiring the decoded video signal 21 generated by the adder 204 and temporarily storing it. The decoded video signal 21 stored in the decoded video buffer 602 is input to the reference switching prediction unit 40 1A. The restored image buffer 403 acquires the restored video signal 20 generated by the filter processing unit 208 and temporarily stores it. The restored video signal 20 stored in the restored video buffer 403 is input to the reference switching prediction unit 40 1A. The above is the configuration of the animation decoding device according to the second embodiment. 〇 Next, the operation of the animation decoding apparatus according to the second embodiment will be described in detail using Figs. 16, 17, and 18'. Further, Fig. 18 is a flowchart showing the operation of the look-ahead filter in the animation decoding device 4000 according to the second embodiment. First, once the encoded material 14 is input to the animation decoding device 4000, the entropy decoding unit 2 0 1, in addition to the conversion coefficient 1 3, the filter coefficient information 17, the switching information 18, and the prediction mode information, the block size. The switching information, the motion vector, the quantization parameter, and the like are decoded in accordance with the syntax structure of FIG. 31 (step S4100). -33- 200948092 The loop filter data syntax (1 906) belonging to the slice level syntax (1903) in the syntax structure of Fig. 31 is shown in Fig. 32 (b), and the filter coefficient information 17 is described. Here, the filter coefficient information 17, that is, filter_coeff[cy][cx] of Fig. 32(b) is a coefficient of the two-dimensional filter, 'filter_size_y and filter_size_x are 値 which determine the size of the subtraction mirror. In this case, the 滤 of the filter size is described in the grammar. However, as another embodiment, the predetermined fixed 値 may be used as the filter size without being described in the grammar. However, if the filter size is set to fixed 値, the same enthalpy must be used in the animation encoding device 3000 and the animation decoding device 4000, which must be noted. Further, in the macroblock block syntax (1 908) to which the macroblock level syntax (1907) in the syntax structure of Fig. 31 belongs, as shown in Fig. 32 (c), l〇〇p_filter_flag is described as Switch information 18. The conversion coefficient 13 decoded by the entropy decoding unit 201 is subjected to entropy decoding, inverse quantization, inverse conversion, and reference, similarly to the previous hybrid coding or the animation decoding apparatus 2000 according to the first embodiment. The predicted video signal 11 outputted by the switching prediction unit 401A is added to be decoded video signal 21 and output. The decoded video signal 21 is output to the filter processing unit 208 and temporarily stored in the decoded video buffer 402 (step S4101). The filter processing unit 208' acquires the filter coefficient information 17 restored by the entropy decoding unit 201 (step S4102). Further, the filter processing unit 2 08' receives the decoded video signal 21, performs filter processing on the decoded video using the filter coefficient information 17, and generates a restored video signal 20 (step -34 - 200948092, step S4103). The filter processing unit 208 outputs the generated restored video signal 20 as the output video signal 22, and temporarily stores it in the restored video buffer 403 (step S41〇4). Next, referring to the switching prediction unit 401A, the decoded video signal 21 or the restored video signal 20 is used as a reference video to generate a predicted video signal 11 (steps 4105 to 4108). Hereinafter, the operation of steps 4105 to 4108 will be described. Q First, the reference switching prediction unit 401A of Fig. 17 acquires the decoded switching information 18, i.e., l〇op_filter_flag (step S4105). Next, referring to the loop filter switching unit 209 inside the switching prediction unit 401A, the switch SW is switched based on the acquired l〇〇p_filter_flag (step S4106). When loop_filter_flag is 0, the loop-filter switching unit 209 connects the switch SW to the terminal A, acquires the decoded video signal 21 as the reference video signal 19, and sends it to the prediction signal generating unit 203. The prediction signal generation unit 203 generates the predicted video signal 1 1 based on the reference video signal 119 corresponding to the decoded video signal 21 (step S4107). On the other hand, when lo〇p_filter_flag is 1, the loop filter switching unit 209 connects the switch SW to the terminal B', and takes the restored video signal 20 as the reference video signal 19, and then sends it to the prediction signal generation. Part 203. The prediction signal generation unit 203 generates the predicted video signal 1 1 based on the reference video signal 1 9 ' corresponding to the restored video signal 20 (step S4108). The above is the operation of steps S4105 to S4 108 in the switching prediction unit 401A. -35- 200948092 Here, referring to the switching prediction unit 401A, the switching information 18 is obtained in units of macro blocks in accordance with the syntax of Fig. 32(c), and the switch SW is switched. In another embodiment, when the animation encoding device 3000 encodes the switching information 18 in a tile unit or a slice unit or a block unit different from the size of the macro block, the animation decoding device 4000 The decoding of the switching information 18 and the switching of the switch SW of the loop filter switching unit 209 are performed in the same unit. The above is an explanation of the operation of the loop filter in the animation decoding device 4000. As described above, according to the animation encoding apparatus according to the second embodiment, the filter processing is performed by setting the filter coefficient information of the loop filter that minimizes the error between the input image and the prediction signal, thereby promoting the reference. The image quality of the image is improved. Further, by the loop filter switching unit 114 included in the switching information generation and prediction unit 301A, it is switched whether or not the local decoded video signal 16 and the restored video signal 20 are to be used as reference images for prediction processing. In the field where the image quality is lowered due to the filter, the restored image signal 20 is not used as the reference image, thereby preventing the deterioration of the image quality, and the image is improved in the field where the image quality is improved. The signal 20 is used as a reference image, thereby promoting prediction accuracy. Further, according to the animation decoding device 40 00 according to the second embodiment, the filter processing and the reference image switching can be performed by using the same filter coefficient information and switching information as the animation encoding device 3000, thereby ensuring the animation. The reference image in the encoding device 3000 is synchronized with the reference image in the animation decoding device 4000. In the animation decoding apparatus 4000 according to the second embodiment, the restored video signal 20 generated by the filter processing unit 208 is output as the output video signal 22, but as another embodiment, The decoded video signal 21 can also be output as the output video signal 22. Further, in the animation encoding apparatus 3 and the animation decoding apparatus 4000 according to the second embodiment, although the local decoded video signal 16 is subjected to filter processing, the local decoded video signal 16 can be previously executed. Image processed by the past block filter. Further, in the animation encoding/decoding apparatus according to the second embodiment, in the above-described embodiment, the restored video signal 20 temporarily stored in the restored video buffer is acquired and used by the reference switching predicting unit 401A. In addition, in the other embodiment, the restored image buffer may be provided, and the filter processing unit may be provided in the reference switching prediction unit, and the restored image may be generated inside the reference switching prediction unit. The animation coding apparatus and the animation decoding apparatus of the embodiment are shown in FIG. 19 to FIG. 23, and the animation coding apparatus 3001 of FIG. 19 includes a switching information generation prediction unit 301B, and the switching information generation prediction unit 301B of FIG. There is a reference switching prediction unit 305B. The reference switching prediction unit 305B of Fig. 21 is provided with a filter processing unit 113 therein, whereby the restored video signal 20 can be generated by acquiring the locally decoded image signal 16 and the filter coefficient information 17. Further, the animation decoding device 4001 of Fig. 22 has a reference switching prediction unit 401B. The reference switching prediction unit 401B of Fig. 23 includes a filter processing unit 208 therein, whereby the restored video signal 20 can be generated by acquiring the decoded video signal 21 and the filter -37-200948092 coefficient information 17. By using the animation encoding device 301 and the animation decoding device 4,000 as described above, the animation encoding device 3000 and the animation decoding device can be realized while reducing the amount of memory required to restore the image buffer. The same action of 4000. Further, the animation encoding devices 3000 and 3001 and the animation decoding devices 4000 and 400 1 ' can be realized by, for example, using a general-purpose computer device as a basic hardware. In other words, the switching information generation prediction unit 301, the loop filter processing unit 312, the local decoded video buffer 303, the restored video buffer 304, the reference switching prediction unit 305, the prediction signal generation unit 101, and the subtraction The unit 102, the conversion/quantization unit 103, the entropy coding unit 104, the inverse conversion/inverse quantization unit 105, the adder 106, the encoding control unit 109, the filter setting unit 110, the switching information generating unit 112, the filter processing unit 113, and the back The circle filter switching unit 114, the reference switching prediction unit 40 1 , the decoded video buffer 40 2 , the restored video buffer 403 , the entropy decoding unit 201 , the inverse conversion • inverse quantization unit 202 , the prediction signal generation unit 203 , and the adder 204. The decoding control unit 207, the filter processing unit 208, and the loop filter switching unit 209 can be realized by executing a program by a processor mounted on the computer device. In this case, the animation encoding devices 3000 and 3001 and the animation decoding devices 4000 and 4001 are implemented by attaching the above-mentioned program to the computer device in advance, or may be stored in a memory medium such as a CD-ROM or distributed through a network. The above program is implemented by appropriately installing the program to a computer device. Further, the local decoded video buffer 303, the restored video buffer 304, the decoded video buffer 402, and the restored video buffer 403' 200948092 can be suitably used for the extended memory built in the upper computer device. , hard disk or CD-R, CD-RW, DVD-RAM, DVD-R, etc. to achieve the media. (Third Embodiment) A configuration of an animation encoding apparatus according to a third embodiment will be described with reference to Fig. 24'. Hereinafter, the components of Fig. 24 will be described separately. φ The animation coding apparatus 5000 shown in Fig. 24 includes a loop filter processing unit 501, a prediction signal generation unit 1〇1, a subtractor i〇2, and a conversion. The quantization unit 103, the entropy coding unit 104, the inverse conversion/inverse quantization unit 105, the adder 106, and the reference video buffer 108 are controlled by the coding control unit 109, and the prediction signal generation unit 101 and the subtractor 102 are controlled. Conversion. The quantization unit 103, the entropy coding unit 104, and the inverse conversion. The inverse quantization unit 105, the adder 106, the reference video buffer 108, and the encoding control unit 109 are configured by the same numbered components in the animation encoding device 1000 of Fig. 1 described in the first embodiment. The same operation, so the description is omitted here. The loop filter processing unit 501 receives the local decoded video signal 16 and the input video signal 1 〇, and outputs the reference video signal 19 and the filter coefficient information 17. The detailed description of the loop filter processing unit 501 will be described later. Next, the loop filter processing unit 501 in the animation encoding apparatus according to the third embodiment will be described in detail with reference to Figs. 25 and 26 . Hereinafter, the components of FIGS. 25 and 26 will be described separately. -39- 200948092 The loop filter processing unit 501 shown in Fig. 25 includes a filter setting unit 110 and a switching information generation filter processing unit 502. Further, the switching information generation filter processing unit 502 is as shown in Fig. 26. The switching filter processing unit 111 and the switching information generating unit 503 are provided. Here, the filter setting unit 110 and the switching filter processing unit 111 perform the same operations as the components of the same number in the loop filter processing unit 107 of FIG. 2 described in the first embodiment. Description is omitted here. The switching information generating unit 503 is controlled by the encoding control unit 109, receives the locally decoded video signal 16, and generates the switching information 18 in accordance with the predetermined switching determination method. The generated switching information 18 is input to the switching filter processing unit 111. Details of the handover determination method will be described later. The above is the configuration of the animation encoding device according to the third embodiment. Next, the operation of the loop filter in the animation encoding device according to the third embodiment will be described in detail with reference to Figs. 25, 26 and 27. Fig. 27 is a flowchart showing the operation of the loop filter in the animation encoding device 5000 according to the third embodiment. First, the filter setting unit 110 shown in Fig. 25 acquires the local decoded video signal 16 and the input video signal 10, and sets the filter coefficient information 17 (step S5100). Here, the filter setting unit U0 uses a two-dimensional Wiener filter generally used in image restoration, and the filter coefficient is designed such that the image after the filter processing is applied to the locally decoded image signal 16 and the input image signal is 1〇. The average squared error is minimized, and the designed filter coefficient and the 表示 indicating the filter size are set as the filter coefficient information 17. The set filter coefficient information 17 is output to the switching information generation filter processing unit 502 shown in Figs. 25-40-200948092, and is also output to the entropy encoding unit 104. * The switching information generation unit 503 provided in the switching information generation filter processing unit 052 shown in Fig. 26 acquires the local decoded video signal 16 (step S5101). The switching information generating unit 503 calculates the difference between the target pixel in the partial decoded video signal 16 and the peripheral pixel and the SAD in the pixel unit (step S5102). 〇 SAD, if the coordinates of the pixels in the locally decoded image are x, y and the locally decoded image is F(x, y), it can be expressed by the following equation. [Equation 4] SAD = Σ Σ \F{x, y) - F(x + i, y + ;)| - Next, the following switching determination processing is performed using the SAD and the threshold 値T set in advance to the encoding control unit 109. (Step S5 103). If SAD is T or less, 0 Q is set to the switching information 18, i.e., l00p_filter_flag (step S5104). On the other hand, if SAD is 値 greater than T, 1 is set to l〇op_filter_flag (step S5105). Here, although the switching information 18 is obtained in units of pixels, as another embodiment, the switching determination processing may be performed in accordance with a frame unit or a slice unit or a macro block unit or a size different from a macro block. The block unit is used for the determination. In this case, the switching information 18 is also output in units corresponding thereto. In addition, although the absolute difference between the target pixel and the peripheral pixel is used here, as another embodiment, the difference square sum may be used as long as it is an index that can be calculated from the locally decoded image, and activity or space may be used. Frequency, edge strength, edge direction and other indicators. Again, here, although -41-.  In 200948092, the above-mentioned index is calculated from the locally decoded image. However, the restored image obtained by filtering the locally decoded image may be acquired. The above-mentioned index is calculated from the restored image. Further, as another embodiment, the switching determination process may be performed based on a quantization parameter, a block size, a prediction mode, a motion vector, a conversion coefficient, and the like belonging to a part of the encoded information. Next, the switching filter processing unit 111 shown in FIG. 26 acquires the locally decoded video signal 16 and the filter coefficient information 17 in addition to the generated switching information 18, and performs the processing according to the first embodiment. The same operation of steps S1107 to S1109 of FIG. 4 in the animation encoding apparatus 1000 outputs the reference video signal 19. In other words, the loop filter switching unit 1 of FIG. 3 acquires the generated switching information 18, that is, l〇op_filter_flag', based on l〇〇P_filter_fUg, switches the switch SW provided therein (step S5106). . When l〇〇p_filter_flag is 0, the loop filter switching unit 114 connects the switch SW to the terminal A, and temporarily stores the local decoded video signal 16 as the reference video signal 19 in the reference image buffer 108 ( Step S5107). On the other hand, when 1〇(^_^^1"_0&§ is 1, the loop filter switching unit 114 connects the switch SW to the terminal B, and the restored video signal 20 is regarded as the reference image signal 19. Temporarily stored in the reference image buffer 108 (step S5108). Finally, the filter coefficient information 17 generated by the filter setting unit 1 1 is encoded by the entropy encoding unit 104, together with the quantized conversion coefficient 13 The prediction mode information, the block size switching information, the motion vector, the quantization parameter, and the like - 42 - 200948092 are multiplexed into a bit stream, and then transmitted to the animation decoding device 6000 described later (step S5109). The mirror coefficient information 17, which is included in the slice filter syntax (1906) in the slice level syntax (1903) in the syntax structure of Fig. 31, is described as shown in Fig. 32(b). The coefficient information 17 is also the filter_coeff [cy][cx] of Fig. 32 (b) is the coefficient of the 2D filter, and the filter_size_y and filter_siZe_x are the parameters determining the size of the filter. Here, the size of the filter φ mirror will be indicated. 'Described in grammar, but as other implementations The state may be used in the grammar instead of the predetermined fixed 値 as the filter size. However, if the filter size is set to fixed 値, the animation encoding device 5000 and the animation decoding device described later are used. The same enthalpy must be used in 6000. This must be noted. The above is a description of the actions involved in the loop mirror in the animation encoding device 5000. Next, the animation decoding device Q with respect to the animation encoding device 6000 will be described. The configuration of the animation decoding device according to the third embodiment will be described with reference to Fig. 28'. Hereinafter, the components of Fig. 28 will be described. The animation decoding device 6000 shown in Fig. 28 has switching information generation. The filter processing unit 601, the entropy decoding unit 201, the inverse conversion/inverse quantization unit 202, the prediction signal generation unit 203, the adder 204, and the reference video buffer 206 are controlled by the decoding control unit 207. The 'entropy decoding unit 201, the inverse conversion/inverse quantization unit 202, the prediction signal generation unit 203, the adder 204, the reference video buffer 206, and the decoding control unit 2 0 7 ' are In the same manner as the components of the same number in the animation decoding apparatus 2000 of the embodiment of the present invention, the same operation is omitted here. The switching information generation filter processing unit 601 obtains the decoded video signal 21 and The filter coefficient information 17 outputs the reference video signal 19 and the output video signal 22. The detailed description of the switching information generation filter processing unit 601 will be described later. Next, the switching information generation filter processing unit 601 in the animation decoding device according to the third embodiment will be described in detail with reference to Fig. 29 . Hereinafter, the components of Fig. 29 will be described separately. The switching information generation filter processing unit 601 shown in Fig. 29 includes a switching information generation unit 602 and a switching filter processing unit 205. Here, the switching filter processing unit 205 performs the same operation as the components of the same number in the animation decoding device 2000 of Fig. 5 described in the first embodiment, and thus the description thereof will be omitted. The switching information generating unit 602 is controlled by the decoding control unit 207 to obtain the decoded video signal 21, and generates the switching information 18 in accordance with the predetermined switching determination method. The generated switching information 18 is input to the switching filter processing unit 205. Details of the handover determination method will be described later. The above is the configuration of the animation decoding device according to the third embodiment. Next, the operation of the animation decoding apparatus according to the third embodiment will be described in detail with reference to Figs. 28, 29 and 30. Fig. 30 is a flowchart showing the operation of the loop filter in the animation decoding device 6000 according to the third embodiment. First, the entropy decoding unit 201 decodes the filter coefficient information 17 in accordance with the syntax structure of Fig. 31 (step S6100). The loop filter data syntax '(1 906) in the slice level syntax (1 903) in the syntax structure of Fig. 31 is as shown in Fig. 32 (b), and the filter coefficient information 17 is described. Here, the filter coefficient information 17 is that the filter_coeff[cy][cx] of Fig. 32 (b) is the coefficient of the 2-dimensional filter, and the filter_size_y& filter_size_x is the 决定 which determines the size of the filter. Here, the 表示 indicating the size of the filter is described in the grammar. However, as another embodiment, the predetermined 値 may be used instead of the grammar. However, if the filter size is set to fixed 値, the same 値 must be used in the animation encoding device 5000 and the animation decoding device 6000, which must be noted. The switching information generation filter processing unit 601 obtains the decoded filter 'coefficient information 17 (step S6101). Further, the switching information generation filter processing unit 601' obtains the decoded video signal 21 obtained from the adder 024 (step S6102). The switching information generating unit 602 included in the switching information generation filter processing unit 601 shown in FIG. 29 sets the absolute difference between the pixel of interest in the decoded video signal 21 and its peripheral pixels, and SAD, in units of pixels. And it is calculated (step S6103). SAD, if the coordinates of the pixels in the decoded image are x, y, and the decoded image is ', can be expressed as [4]. The following switching determination processing is performed using the SAD and the threshold 値T set in advance to the decoding control unit 207 (step S 6 1 〇 4 ). Here, the threshold 値T is the same as the threshold 値τ set in the animation coding apparatus 5〇〇〇, and the forced point must be noticed. If s AD is Τ or less, the switching information 18 is also -45- 200948092, that is, loop_filter_flag is set to 0 (step S6105). On the other hand, if SAD is 値 greater than T, 1 is set for loop_filter_flag (step S6106). Here, although the switching information 18' is obtained in units of pixels, as another embodiment, the switching determination process may be in accordance with the frame unit or the slice unit or the macro block unit or the size of the macro block. The block unit is used for the determination. In this case, the switching information 18 is also decoded in units corresponding thereto. Further, although the absolute difference between the target pixel and the peripheral pixel is used here, as another embodiment, a difference square sum may be used, and any index that can be calculated from the decoded image may be used, and an active or spatial frequency may be used. , edge strength, edge direction and other indicators. Here, although the above-mentioned index is calculated from the decoded image, the restored image obtained by performing the filter processing on the decoded image may be acquired, and the above-mentioned index may be calculated from the restored image. Further, as another embodiment, the switching determination process may be performed based on a quantization parameter, a block size, a prediction mode, a motion vector, a conversion coefficient, and the like belonging to a part of the encoded information. In any case, the switching information generating unit 602 in the animation decoding device 6 000 must perform the same switching determination processing as the switching information generating unit 503 in the animation encoding processing 5,000, which must be noted. The switching filter processing unit 205 shown in FIG. 29 acquires the decoded switching information 18 and acquires the decoded video signal 21 and the filter coefficient information 17' by performing the animation decoding device 2 according to the first embodiment. In the same operation as step S2103 to step S2105 of FIG. 7, the reference video signal 19 is output. That is, the loop filter switching unit 205A of Fig. 6 obtains the generated switching information 18, i.e., l〇〇P_filter_flag, and switches the switch SW provided therein based on the l〇〇p_filter_flag.  (Step S6107). When l〇〇p_filter_flag is 0, the loop filter switching unit 205A connects the switch SW to the terminal A, and temporarily stores the decoded video signal 21 as the reference video signal 19 in the reference image buffer 206 ( Step S6108). On the other hand, when loop_filter_flag is 1, the loop filter switching unit 205A connects the switch SW to the terminal φ sub B, and temporarily restores the restored video signal 20 as the reference video signal 19 to the reference image buffer. 206 (step S6109). As described above, the switching information generation filter processing unit 601 uses the acquired decoded video signal 21, and is similar to the switching information generation filter processing unit 502 of FIG. 26 in the animation encoding apparatus according to the third embodiment. 'Create, to generate the switching information 18, and output the reference image signal 19. The above is an explanation of the operation of the loop mirror in the animation decoding device 6000. As described above, according to the animation encoding apparatus according to the third embodiment, by performing filter processing by setting filter coefficient information of the loop filter that minimizes the error between the input image and the predicted image, the filter processing can be promoted. The image quality of the reference image is improved. Further, the loop filter switching unit 114 switches between the local decoded video signal 16 and the restored video signal 18 as a reference video for each local area switching, and is calculated from the local decoded video 16 The indicator is used to switch between each partial field, thereby preventing the spread of the image quality caused by the filter, which can promote the coding efficiency. Further, according to the animation decoding apparatus according to the third embodiment, the filter processing is performed by using the same filter coefficient information as that of the animation encoding apparatus, and the same switching determination processing is performed by the -47-200948092. The reference image in the animation encoding device is synchronized with the reference image in the animation decoding device. Further, according to the animation encoding apparatus and the animation/decoding apparatus according to the third embodiment, the encoding side generates switching information based on an index that can be calculated from the locally decoded video image, thereby enabling decoding from the decoding side. The image calculates the same switching information. Therefore, the amount of coding when the switching information is encoded can be reduced. In addition, in the animation decoding device 600 0 according to the third embodiment, the switching filter processing unit 205 may be designed as shown in FIG. 8 or in the same manner as the animation decoding device 2000 according to the first embodiment. The configuration shown in FIG. 9 is to output the decoded video 21 or the reference video 19 as the output video signal 2 2 . Further, the switching filter processing unit 205 may be configured as shown in FIG. 10, and the switching information generating unit newly generates the switching information for the rear-end filter, thereby providing the rear-end filter switching unit 210 of FIG. The switch SW2 Q is switched to switch the output image signal 22. Further, in the animation encoding apparatus 5000 and the animation decoding apparatus 6000 according to the third embodiment, although the local decoded video signal 16 is subjected to filter processing, the local decoded video signal 16 may be subjected to the previous execution of the past block filtering. Mirrored image. Further, the animation encoding device 5000 and the animation decoding device 6000 can be realized by, for example, using a general-purpose computer device as a basic hardware. In other words, the loop filter processing unit 501, the switching information generation filter processing unit -48-200948092 5 02, the switching information generation unit 503, the prediction signal generation unit 1〇1, 102, the conversion/quantization unit 103, and the entropy coding unit 104, inverse conversion.  The section 105, the adder 106, the reference video buffer 1〇8, the code•1〇9, the filter setting unit 110, the switching filter processing unit 111, the switching to the filter processing unit 601, the switching information generating unit 602, The entropy decoding inverse transforming unit 202, the prediction signal generating unit 203, the dancing, switching filter processing unit 205, the reference video buffer 206, and the buffer unit 207 can be mounted on the computer device. It is implemented by executing the program. In this case, the animation encoding device 5 000 decoding device 6000 is implemented by pre-installing the above-mentioned program to a computer, or may be stored in a memory medium such as a CD-ROM, and the network distributes the writing program to appropriately program the program. It is implemented by installing it to electricity. Further, the reference video buffer 108 and the reference buffer 206 can be suitably used for expansion, hard disk or CD-R, CD-RW, DVD-RAM, DVD-R, etc. built in a computer device, etc. To achieve it. (Fourth Embodiment) In the first embodiment, the example in which the macro block unit of the information 16 pixels is switched is set. On the other hand, the setting unit of the switching information in the form is not limited to the upper block, but may be a sequence, a frame, or a division into a pixel block or the like. In the present embodiment, the first embodiment of the inverse quantizer control unit information generation 201, the 204 decoding controller and the animation device, or the brain device image storage medium is used. 6 X The slice mode of the macro area is described in the encoding and decoding method of -49-200948092, which is a method for setting the switching information to each pixel block and adaptively switching the size of the pixel block. . The switching filter processing unit 111 in the encoding processing and decoding apparatus according to the fourth embodiment will be described with reference to Figs. 37 and 38. The switching filter processing unit 111 of FIGS. 37 and 38 is a modification of the switching filter processing unit 111 of FIGS. 3 and 6, and is a case where only the restored video signal 20 is regarded as a reference image by the switch SW. The composition of the filter processing is performed. That is, in FIG. 37, the switch SW is provided in the front stage of the filter processing unit 113, and the terminal A of the switch SW is directly derived to the reference image buffer 108, and the local decoded video signal 16 is directly used as the reference video signal. Stored in buffer 108. The terminal B of the switch SW is transmitted through the filter processing unit 113 to the reference video buffer 1〇8, and the local decoded video signal 16 is filtered in the filter processing unit 113 in accordance with the filter coefficient information 17. The mirror processing is then stored in the buffer 1〇8 as a reference image signal. Further, in Fig. 38, a switch SW is provided in the front stage of the filter processing unit 208, and the terminal A of the switch SW is directly led to the output line, and the decoded video signal 21 is directly output to the output line. The terminal B of the switch SW is passed through the filter processing unit 208 and is derived to the output line. The decoded video signal is filtered by the filter processing unit 208 in response to the filter coefficient information 17, and then treated as a filter. The reference image signal 19 and the output image signal 22 are output to the output line. According to this configuration, only when the loop information_flag_flag is 1 for the switching information 18, the filter processing unit 113 or 208 can perform the filter processing. Therefore, the processing cost can be further reduced as compared with the configuration of FIGS. 3 and 6. . The modifications of Figs. 37 and 38 can of course also be applied to the first

-50- 200948092 1乃至第3實施形態。 本實施形態所述之圖37及圖38的切換濾鏡處理部 " HI,係和上述第1實施形態中的切換濾鏡處理部111同 • 樣地,在內部具有濾鏡處理部113及迴圈濾鏡切換部114 ,會取得局部解碼影像訊號16或解碼影像訊號21、濾鏡 係數資訊17、及切換資訊18,輸出參照影像訊號19。又 ’在本實施形態係同時會取得領域設定資訊23,輸入至迴 〇 圈濾鏡切換部114。 此處,領域設定資訊23係爲隨應於區塊尺寸來控制 切換開關SW之時序用的資訊,係表示了當畫面分割成矩 形區塊時的1個區塊之大小。領域設定資訊23,係可直接 使用區塊尺寸的値,也可爲預先備妥之決定區塊尺寸用的 '領域尺寸規定表中的指數。 上記領域尺寸規定表之一例,示於圖39(a)。圖39 (a)的領域尺寸規定表中,將最小的區塊尺寸亦即縱橫4 © 像素的正方形區塊的指數設定成〇,每次將單邊的像素數 增加2倍,而準備到512x5 12像素爲止的8種區塊尺寸。 藉由在編碼及解碼側保持同樣的領域尺寸規定表,則在編 碼側係將圖39(a)的領域尺寸規定表中所決定之指數當 作區塊資訊而加以編碼,在解碼側則使用同樣的領域尺寸 規定表而根據已解碼之區塊資訊來決定區塊尺寸。此外, 領域尺寸規定表(a )係被設置於編碼控制部及解碼控制 部。 接著,針對本實施形態所述之含有領域設定資訊23 -51 - 200948092 的語法,使用圖42來加以說明。本實施形態所述之圖3 1 的語法結構中的迴圈濾鏡資料語法(1 9 03 ) ’係被記述成 如圖42所示。圖42(a)的filter_block_size係表示領域 設定資訊 23,NumOfBlock 係爲 filter_block_size 所示之 根據區塊尺寸所決定之1切片內的區塊總數。例如,當使 用了圖39(a)的領域尺寸規定表時,一旦領域設定資訊 亦即指數被設定成0,則將切片以4x4像素區塊作分割, 對每4x4像素區塊,將l〇〇p_filter_flag予以編碼。藉此 ,在切片中,在編碼側及解碼側雙方,就可將切換濾鏡處 理部111中的開關SW,依照每4x4像素區塊來進行切換 〇 又,作爲其他實施形態,亦可將對基本的區塊尺寸之 分割方法,當作領域設定資訊來使用。例如,如圖39(b )的領域尺寸規定表所示,備妥了「無分割」、「橫2分 割」、「縱2分割」、「縱橫2分割」的4種區塊分割方 法,並分別賦予指數編號。藉此,當將基本的區塊尺寸設 成16x16像素區塊及8x8像素區塊時,就可分別採取如圖 40所示的區塊形狀。關於這些的語法,係如圖42(b)所 示,在基本的區塊尺寸的迴圈中記述了領域設定資訊亦即 filter block size » 將由 filter_block_size 所決定之子區塊 數亦即 NumOfSubblock 之個數的 loop_filter_flag,予以 編碼。作爲一例,將基本區塊尺寸設成16x16及8x8時的 參照影像之例子,示於圖41。此外,領域尺寸規定表(b )係被設置於編碼控制部及解碼控制部。 200948092 如此,若依據第4實施形態 法,則藉由將用來把畫面內作領 . 訊加以設定以及進行編碼,就可 • 多樣性,藉由對已分割之各區塊 照畫格或畫格內的每一局部領域 之切換時序。 此處,當將區塊尺寸作適應 φ 以編碼所必須之編碼量,係區塊 得越細緻,則會越爲增加,而可 於是,亦可準備複數個上述的領 用迴圈濾鏡的影像的編碼量,且 方所獲得之所定之資訊,來切換 表。例如,可將如圖3 9 ( c )所 表,例如準備於編碼控制部及解 像尺寸、圖像類型、決定量化粗 〇 換領域尺寸規定表。關於影像尺 性越大,則使用準備了越大尺寸 又,關於圖像類型,係在被 於編碼量係有I圖像>P圖像> 較少編碼量來編碼的B圖像等時 區塊的領域尺寸規定表。 又,關於量化參數則是,由 換係數的編碼量會越小,因此使 區塊的領域尺寸規定表。 ΐ所述之動畫編碼及解碼方 域分割所需之領域設定資 使畫面的區塊分割上具有 來設定切換資訊,就可依 而適應性地控制濾鏡處理 性切換時,將切換資訊予 尺寸越小、亦即畫面分割 能導致編碼效率的降低。 域尺寸規定表,來改進適 基於在編碼側及解碼側雙 使用不同的領域尺寸規定 示的複數個領域尺寸規定 碼控制部中,可隨應於影 細度用的量化參數,來切 寸係爲,若影像尺寸相對 區塊的領域尺寸規定表。 設成一般所使用者時,由 Β圖像之傾向,所以在以 ,係使用準備了較大尺寸 於量化參數的値越大則轉 用準備了相對較大尺寸之 -53- 200948092 藉由如上記般地切換區塊尺寸不同之複數個領域尺寸 規定表,就可使用有限的指數編號,來更加適應性地選擇 區塊尺寸。又,可有效率地控制轉換係數或編碼參數之編 碼量與切換資訊所必須之編碼量之間的平衡。 作爲又再另一實施形態,區塊資訊係亦可使用,與在 生成局部解碼影像或解碼影像之際的編碼及解碼上曾經用 過的運動補償區塊尺寸及轉換區塊尺寸等同步的區塊尺寸 ,此情況下,在迴圈濾鏡資料語法(1906 )中就可不必記 述區塊資訊,仍可變更濾鏡處理的切換時序,可削減區塊 資訊所涉及之編碼量。 又,此處雖然說明了將畫面內分割成矩形區塊的情形 ,但只要是能夠在編碼裝置及解碼裝置雙方當中實現相同 領域分割的分割方法即可,不限定於矩形區塊。 (第5實施形態) 接著,參照圖43及圖44,說明第5實施形態所述之 動畫編碼方法。 圖43的迴圈濾鏡處理部1〇7,係具備與上述第1實施 形態所述之圖2的迴圈濾鏡處理部1〇7同樣之構成要素, 在內部具有濾鏡設定部110、切換濾鏡處理部111及切換 資訊生成部112,會取得局部解碼影像訊號16及輸入影像 訊號10,並輸出濾鏡係數資訊17、切換資訊18、及參照 影像訊號1 9。 另一方面,在本實施形態中係設計成,對畫面內分割 -54- 200948092 成的每一局部領域來設定切換資訊18,將切換資訊18輸 入至濾鏡設定部110,藉此,基於切換資訊18,選擇性地 ' 取得濾鏡設定時所用的領域。 * 關於第5實施形態所述之動畫編碼裝置中的迴圏瀘鏡 所相關之動作,係使用圖44的流程圖來詳細說明。 圖44的流程圖中,R係爲濾鏡係數資訊的設定次數 之最大値,N係爲畫面分割成的局部領域之總數。首先, Q 對濾鏡係數資訊設定次數r設定0 (步驟S7100),將切 換資訊亦即l〇〇p_filter_flag,在全部的領域中設定成1( 步驟S7101)。接著,將r增値1(步驟S7102)。 其後,於圖43的濾鏡設定部110中,進行濾鏡係數 資訊之設定(步驟S1100)。此處,係和第1實施形態同 樣地,使用影像復原中一般會採用的2維Wiener filter, 將濾鏡係數設計成會使得對局部解碼影像訊號16施加濾 鏡處理而得到的影像(參照影像訊號19)與輸入影像訊號 〇 ίο的平均平方誤差成爲最小,將已設計之濾鏡係數及表示 濾鏡尺寸的値,當成濾鏡係數資訊1 7而進行設定。此處 ,在第5實施形態所述之圖43的濾鏡設定部11〇中係設 計成,僅使用已被輸入之切換資訊亦即lo〇P-filter-flag 是被設定成1的領域,來算出平均平方誤差。此處’使 r=l的初次濾鏡係數資訊之設定時,係於步驟S7101中將 全部領域的l〇〇P_filter^_flag都設定成1,因此和第1實施 形態同樣地,已被設計成會使畫面全體的平均平方誤差呈 現最小的濾鏡係數資訊,會被生成。 -55- 200948092 接著,圖43的切換濾鏡處理部111,係對畫面分割成 之每一局部領域,設定切換資訊18。亦即,若假設領域編 號爲η,則首先將η設定成0(步驟S7103),並將η增 値1(步驟S7104)。接著,對第η號領域進行和第1實 施形態相同之步驟S1 101〜步驟S1 109之處理。重複以上 處理直到η達到總數Ν (步驟S 7 1 0 5 )。 對畫面內的所有領域都設定好l〇〇P_filter_flag後, 反覆進行上記一連串處理直到濾鏡係數資訊設定次數r達 到預先決定之濾鏡係數資訊之數定次數的最大値R(步驟 S71 06 )。如此,若依據第5實施形態所述之動畫編碼方 法,則第2次以後的濾鏡係數資訊之設定,係可限定於 l〇〇p_filter_flag已被設定成1之領域而設定使平均平方誤 差最小化的濾鏡。例如,當藉由第1次所被設定之切換資 訊所得到之參照影像是如圖3 3所示的情況下,第2次的 濾鏡係數資訊之設定,係可僅限定於「適用了迴圈濾鏡的 巨集區塊」而設定使平均平方誤差最小化的濾鏡。藉此, 對於「適用了迴圈濾鏡的巨集區塊」,係可設定畫質改善 效果更大的濾鏡。 此外,本發明係不限定於上記第1乃至第3實施形態 等的原樣,在實施階段可在不脫離其宗旨的範圍內,對構 成要素加以變形而具體化。的原樣,在實施階段可在不脫 離其宗旨的範圍內,對構成要素加以變形而具體化。又, 上記實施形態中所揭露的複數構成要素可做適宜組合’來 形成各種發明。例如,亦可將實施形態所示的所有構成要 200948092 素中刪除數個構成要素。甚至,亦可跨越不同實施形態將 構成要素加以組合。 • (第6實施形態) 接著,說明關於第6實施形態的動畫編碼裝置及動畫 解碼裝置。圖45係圖示了動畫編碼處理及動畫解碼裝置 之雙方中所存在的濾鏡處理部,係圖示了相當於第1及第 ❹ 2實施形態之濾鏡處理部1 1 3以及第3實施形態與第4實 施形態之濾鏡處理部208的濾鏡處理部。 對圖45的局部解碼影像濾鏡處理部701,係若爲動畫 編碼裝置時,則輸入著局部解碼影像訊號、切換資訊、濾 鏡資訊、編碼資訊,若爲動畫解碼裝置時,則輸入著解碼-_ 影像訊號、切換資訊、濾鏡資訊、編碼資訊,生成參照影 像訊號。濾鏡處理部701,係由濾鏡交界判定處理部702 、開關703、去區塊濾鏡處理部704、開關705、影像復原 Q 濾鏡處理部706所構成。 濾鏡交界判定處理部7〇2’係使用來自局部解碼影像 訊號或解碼影像訊號的編碼資訊,判定應該進行轉換處理 或運動補償單位亦即區塊之交界部分的像素當中應該進行 去區塊濾鏡處理之像素,而控制著開關703與開關705。 此時’濾鏡交界判定處理部702’係當判定輸入影像是屬 於區塊交界像素時,則將開關703及開關705連接至端子 A ’以去區塊濾鏡處理部704對像素訊號進行去區塊濾鏡 處理;當判定輸入影像並非區塊交界像素時,則將開關 -57- 200948092 7 03及開關70 5連接至端子B’以影像復原濾鏡處理部 706對輸入影像進行影像復原濾鏡處理。 去區塊濾鏡處理部704,係對於被區塊交界判定處理 部702判定爲區塊交界的像素,使用預先作成的濾鏡係數 或從局部解碼影像濾鏡處理部外面所給予的濾鏡係數 資訊,來進行轉換處理或用來抵消運動補償預測所產生之 區塊失真的濾鏡處理(例如像素訊號的平均化處理)。 影像復原濾鏡處理部706 ’係對被區塊交界判定處理 部7 02判定爲不是區塊交界的像素,藉由從局部解碼影像 濾鏡處理部701外部所給予的濾鏡係數,進行局部解碼影 像的復原處理。關於影像復原濾鏡處理部706,係假設是 將第1實施形態的圖3及圖6及圖8及圖9及圖10的切 換濾鏡處理部111、205A、205B、205C、205D、第2實施 形態的圖14及圖16的濾鏡處理部1 13、208、第2實施形 態的圖21及圖23的參照切換預測部305B、401B、第3 實施形態的圖26及圖29的切換資訊生成濾鏡處理部502 、601、第4實施形態的圖37及圖38的切換濾鏡處理部 1 1 1、205A,直接加以置換。 若依據如此的第6實施形態,則區塊交界部分,係可 考慮因轉換處理或運動補償處理導致產生區塊失真所造成 的解碼影像訊號之性質差異來進行濾鏡處理,可提升影像 全體的復原性能。 (第7實施形態)-50- 200948092 1 to the third embodiment. The switching filter processing unit & HI of Figs. 37 and 38 of the present embodiment has the filter processing unit 113 and the switching filter processing unit 111 in the first embodiment. The loop filter switching unit 114 obtains the local decoded video signal 16 or the decoded video signal 21, the filter coefficient information 17, and the switching information 18, and outputs the reference video signal 19. Further, in the present embodiment, the field setting information 23 is acquired at the same time, and is input to the loop filter switching unit 114. Here, the field setting information 23 is information for controlling the timing of the switch SW in accordance with the block size, and indicates the size of one block when the screen is divided into rectangular blocks. The field setting information 23 is a 可 which can directly use the block size, and can also be an index in the 'domain size specification table for the predetermined block size. An example of the above-mentioned field size specification table is shown in Fig. 39 (a). In the field size specification table of Fig. 39 (a), the smallest block size, that is, the index of the square block of the vertical and horizontal 4 © pixels is set to 〇, and the number of pixels of one side is doubled each time, and the number of pixels is 512x5. 8 block sizes up to 12 pixels. By maintaining the same field size specification table on the encoding and decoding side, the code determined by the field size specification table of FIG. 39(a) is encoded as block information on the encoding side, and is used on the decoding side. The same field size specification table determines the block size based on the decoded block information. Further, the field size specification table (a) is provided in the encoding control unit and the decoding control unit. Next, the syntax of the field-containing setting information 23 - 51 - 200948092 described in the present embodiment will be described using FIG. 42 . The loop filter data syntax (1 9 03 )' in the syntax structure of Fig. 31 described in the present embodiment is described as shown in Fig. 42. The filter_block_size of Fig. 42(a) indicates the area setting information 23, and the NumOfBlock is the total number of blocks in one slice determined by the block size as indicated by filter_block_size. For example, when the field size specification table of Fig. 39 (a) is used, once the field setting information, that is, the index is set to 0, the slice is divided into 4x4 pixel blocks, and for every 4x4 pixel block, l〇 〇p_filter_flag is encoded. Thereby, in the slicing, both the encoding side and the decoding side can switch the switch SW in the switching filter processing unit 111 every 4×4 pixel block, and as another embodiment, the pair may be used. The basic method of dividing the block size is used as the field setting information. For example, as shown in the field size specification table of Fig. 39 (b), four types of block division methods of "no division", "horizontal division", "longitudinal division", and "vertical division 2 division" are prepared, and Give the index number separately. Thereby, when the basic block size is set to a 16x16 pixel block and an 8x8 pixel block, the block shape as shown in Fig. 40 can be taken separately. As for the grammar of these, as shown in FIG. 42(b), the field setting information, that is, the filter block size, is described in the loop of the basic block size. The number of sub-blocks determined by filter_block_size, that is, the number of NumOfSubblocks. The loop_filter_flag is encoded. As an example, an example of a reference image when the basic block size is set to 16x16 and 8x8 is shown in Fig. 41. Further, the field size specification table (b) is provided in the encoding control unit and the decoding control unit. 200948092 In this way, according to the fourth embodiment, by using the image to be set and encoded, the diversity can be achieved by drawing a picture or drawing on the divided blocks. The switching timing of each local area within the grid. Here, when the block size is adapted to φ to encode the amount of coding necessary, the finer the block is, the more it will be increased, and thus, a plurality of the above-mentioned loop filters can be prepared. Switch the table by the amount of image coding and the information obtained by the party. For example, as shown in Fig. 39 (c), for example, the encoding control unit and the resolution size and image type can be determined, and the quantization size field size specification table can be determined. The larger the image size, the larger the size is prepared, and the image type is a B image that is encoded by the I image >P image > The field size specification table for the time block. Further, regarding the quantization parameter, the coding amount of the conversion coefficient is smaller, so that the domain size specification table of the block is made. The field setting required for the animation coding and decoding domain segmentation described above has the switching information set on the block division of the screen, and the information can be switched to the size when the filter processing switching is adaptively controlled. The smaller, that is, the picture segmentation can lead to a reduction in coding efficiency. The domain size specification table is improved in a plurality of domain size specification code control sections based on different domain size specifications on the encoding side and the decoding side, and can be used in accordance with the quantization parameter for the shadow size. For example, if the image size is relative to the field size of the block, the table is specified. When it is set as a general user, due to the tendency of the image, the larger the size of the quantization parameter is prepared, the larger the size is used. -53-200948092 By switching a plurality of field size specification tables having different block sizes as above, a limited index number can be used to more adaptively select the block size. Moreover, the balance between the coding amount of the conversion coefficient or the coding parameter and the amount of coding necessary for switching the information can be efficiently controlled. In still another embodiment, the block information system can also be used, and the area of the motion compensation block size and the conversion block size that have been used in encoding and decoding at the time of generating the locally decoded image or the decoded image can be used. The block size. In this case, in the loop filter data syntax (1906), it is not necessary to describe the block information, and the switching timing of the filter processing can be changed, and the amount of coding involved in the block information can be reduced. Further, although the case where the intra-screen is divided into rectangular blocks has been described here, the segmentation method in which the same field division can be realized in both the encoding device and the decoding device is not limited to the rectangular block. (Fifth Embodiment) Next, an animation coding method according to a fifth embodiment will be described with reference to Figs. 43 and 44. The loop filter processing unit 1A of FIG. 43 includes the same components as the loop filter processing unit 1〇7 of FIG. 2 described in the first embodiment, and has a filter setting unit 110 therein. The switching filter processing unit 111 and the switching information generating unit 112 obtain the local decoded video signal 16 and the input video signal 10, and output the filter coefficient information 17, the switching information 18, and the reference video signal 19. On the other hand, in the present embodiment, it is designed to set the switching information 18 for each partial field of the intra-screen division -54-200948092, and input the switching information 18 to the filter setting unit 110, thereby switching based on Information 18, Selectively 'Get the field used for filter settings. * The operation related to the reticle mirror in the animation encoding device according to the fifth embodiment will be described in detail using the flowchart of Fig. 44. In the flowchart of Fig. 44, R is the maximum number of times the filter coefficient information is set, and N is the total number of partial fields into which the screen is divided. First, Q sets the filter coefficient information setting frequency r to 0 (step S7100), and sets the switching information, i.e., l〇〇p_filter_flag, to 1 in all fields (step S7101). Next, r is incremented by 1 (step S7102). Thereafter, the filter setting unit 110 of Fig. 43 sets the filter coefficient information (step S1100). Here, similarly to the first embodiment, a two-dimensional Wiener filter generally used in image restoration is used, and the filter coefficient is designed to be an image obtained by applying a filter to the locally decoded video signal 16 (refer to the image). The average squared error of the signal 19) and the input image signal 〇ίο is minimized, and the designed filter coefficient and the 表示 indicating the filter size are set as the filter coefficient information 17. Here, in the filter setting unit 11 of FIG. 43 described in the fifth embodiment, it is designed to use only the switching information that has been input, that is, the field in which the lo〇P-filter-flag is set to 1. To calculate the average squared error. Here, when the initial filter coefficient information of r = 1 is set, the l〇〇P_filter^_flag of all the fields is set to 1 in step S7101. Therefore, similarly to the first embodiment, it has been designed. A filter coefficient information that minimizes the average squared error of the entire picture will be generated. -55- 200948092 Next, the switching filter processing unit 111 of Fig. 43 sets the switching information 18 for each partial field in which the screen is divided. That is, if the field number is assumed to be η, η is first set to 0 (step S7103), and η is incremented by 1 (step S7104). Next, the processing of steps S1 101 to S1 109 which are the same as the first embodiment is performed on the nth field. The above process is repeated until η reaches the total number Ν (step S 7 1 0 5 ). After setting l〇〇P_filter_flag for all fields in the screen, the series of processing is repeated until the filter coefficient information setting number r reaches the maximum number R of predetermined filter coefficient information (step S71 06). As described above, according to the animation coding method according to the fifth embodiment, the setting of the filter coefficient information after the second time or later can be limited to the field in which l〇〇p_filter_flag has been set to 1 and the average square error is minimized. Filter. For example, when the reference image obtained by the switching information set for the first time is as shown in FIG. 3, the setting of the second filter coefficient information may be limited to "applicable back". A macroblock of the circle filter is set to set a filter that minimizes the average squared error. Therefore, for the "Macro block to which the loop filter is applied", it is possible to set a filter with a larger image quality improvement effect. In addition, the present invention is not limited to the above-described first embodiment to the third embodiment, and the constituent elements may be modified and embodied in the implementation stage without departing from the spirit and scope of the invention. As it is, in the implementation stage, the constituent elements can be deformed and embodied without departing from the scope of the object. Further, the plural constituent elements disclosed in the above embodiments can be combined as appropriate to form various inventions. For example, it is also possible to delete a plurality of constituent elements from all the constituents shown in the embodiment. It is even possible to combine constituent elements across different embodiments. (Sixth embodiment) Next, an animation coding apparatus and an animation decoding apparatus according to a sixth embodiment will be described. 45 is a view showing a filter processing unit existing in both the animation encoding processing and the animation decoding apparatus, and the filter processing unit 1 1 3 and the third implementation corresponding to the first and second embodiments are illustrated. The filter processing unit of the filter processing unit 208 of the fourth embodiment. The local decoded video filter processing unit 701 of FIG. 45 inputs a locally decoded video signal, switching information, filter information, and encoding information if it is an animation encoding device, and inputs decoding if it is an animation decoding device. -_ Image signal, switching information, filter information, encoding information, and generating reference image signals. The filter processing unit 701 is composed of a filter boundary determination processing unit 702, a switch 703, a deblocking filter processing unit 704, a switch 705, and a video restoration Q filter processing unit 706. The filter boundary determination processing unit 7〇2′ determines whether the pixel to be converted or the motion compensation unit, that is, the pixel boundary portion, should be deblocked using the coded information from the locally decoded video signal or the decoded video signal. The pixels are mirrored and the switch 703 and switch 705 are controlled. At this time, the 'filter boundary determination processing unit 702' connects the switch 703 and the switch 705 to the terminal A' when the input image belongs to the block boundary pixel, and the deblocking filter processing unit 704 goes to the pixel signal. Block filter processing; when it is determined that the input image is not a block boundary pixel, the switch -57-200948092 07 and the switch 70 5 are connected to the terminal B', and the image restoration filter processing unit 706 performs image restoration on the input image. Mirror processing. The deblocking filter processing unit 704 uses the filter coefficient prepared in advance or the filter coefficient given from the outside of the local decoded image filter processing unit for the pixel determined by the block boundary determination processing unit 702 as the block boundary. Information to perform conversion processing or filter processing to offset block distortion caused by motion compensated prediction (eg, averaging of pixel signals). The image restoration filter processing unit 706' determines that the pixel boundary determination processing unit 702 determines that the pixel is not the block boundary, and performs local decoding by the filter coefficient given from the outside of the local decoded image filter processing unit 701. Image restoration processing. The video restoration filter processing unit 706 assumes that the switching filter processing units 111, 205A, 205B, 205C, and 205D and the second embodiment of FIGS. 3 and 6 and FIGS. 8 and 9 and FIG. 10 of the first embodiment are assumed. The filter processing units 1 13 and 208 of FIGS. 14 and 16 of the embodiment, the reference switching prediction units 305B and 401B of FIGS. 21 and 23 of the second embodiment, and the switching information of FIGS. 26 and 29 of the third embodiment. The filter processing units 502 and 601 and the switching filter processing units 1 1 and 205A of Figs. 37 and 38 of the fourth embodiment are directly replaced. According to the sixth embodiment, the block boundary portion can be subjected to filter processing in consideration of the difference in the nature of the decoded video signal caused by the block distortion caused by the conversion processing or the motion compensation processing, thereby improving the overall image. Restore performance. (Seventh embodiment)

-58- 200948092 接著使用圖46至圖49,說明關於第7實施形態的動 畫編碼裝置及解碼裝置。首先,參照圖46’說明第7實施 ' 形態所述之動畫編碼裝置的構成。以下’針對圖46的構 ' 成要素,分別予以說明。 圖46所示的動畫編碼裝置70 00,係具有:減算器 102、轉換.量化部103、熵編碼部104、逆轉換.逆量化 部105、加算器106、Deblocking濾鏡部801、濾鏡設定· ❹ 切換資訊生成部802、解碼影像用緩衝區803、預測影像 作成部804、運動向量生成部805,是被編碼控制部1〇9 所控制。 此處,減算器102、轉換·量化部103、熵編碼部104 、逆轉換•逆量化部105、加算器106及編碼控制部1〇9 ,係由於是和第2實施形態所述之圖11的動畫編碼裝置 3 000中的同編號之構成要素進行同樣的動作,因此這裡省 略說明。 Ο 又’濾鏡設定.切換資訊生成部802,係和第1實施 形態所述之圖2的迴圈濾鏡處理部107內的濾鏡設定部 110與切換資訊生成部112進行同樣的動作。 具體的動作係爲,和先前的混合式編碼或第1實施形 態所述之動畫編碼裝置1 000或第2實施形態所述之動畫 編碼裝置3000同樣地,動畫編碼裝置700〇,係對預測誤 差進行轉換、量化、熵編碼化,並進行逆量化、逆轉換所 戒之局部解碼。對於局部解碼訊號,進行了以Deblocking 濾鏡部去除區塊交界之失真的濾鏡處理後,將該濾鏡處理 -59- 200948092 後的局部解碼訊號,暫時保存在解碼影像用緩衝區8 03。 滴鏡設定•切換資訊生成部8 02,係接受已濾鏡處理之局 部解碼影像訊號及輸入影像訊號,生成濾鏡係數及切換資 訊。濾鏡設定•切換資訊生成部8 02,係將已生成之濾鏡 係數資訊,輸出至解碼影像用緩衝區803,同時也輸出至 熵編碼部104。熵編碼部1〇4係將濾鏡設定•切換資訊生 成部8 02所生成的濾鏡係數、切換資訊予以編碼,又,將 量化後之轉換係數,與預測模式資訊、區塊尺寸切換資訊 、運動向量、量化參數等一起多工化成位元串流,發送至 後述的動畫解碼裝置8000。此時,濾鏡係數資訊、切換資 訊’係被當成輸入影像的編碼資訊,依照圖32等之語法 而被發送。 解碼影像用緩衝區803中,係積存著將被預測影像作 成部804所參照之局部解碼影像及該局部解碼影像所對應 之濾鏡係數、切換資訊。 預測影像作成部804,係使用被解碼影像用緩衝區 8 03所管理之局部解碼影像、濾鏡係數、切換資訊,及運 動向量生成部805所生成之運動向量資訊,來作成經過運 動補償的預測影像。減算器1 0 2係生成已作成之預測影像 與輸入影像之間的預測誤差訊號。另一方面,運動向量資 訊,係被熵編碼部104所編碼,並與其他資訊進行多工化 〇 參照圖47’說明第7實施形態所述之動畫解碼裝置的 構成。以下’針對圖4 7的構成要素,分別予以說明。圖 -60- 200948092 47所示的動畫解碼裝置8000,係具有:熵解碼部20.1、逆 轉換·逆量化部202、加算器204、Deblocking濾鏡部81 1 、解碼影像用緩衝區813、預測影像作成部814,是被解 • 碼控制部2 0 7所控制。 熵解碼部201、逆轉換.逆量化部202、加算器204 及解碼控制部207,係由於是和第1實施形態所述之動畫 解碼裝置2000中的同編號之構成要素進行同樣的動作, ❹ 因此這裡省略說明。 作爲具體的動作係爲,和先前之混合式編碼之解碼或 第1實施形態所述之動畫解碼裝置2000或第2實施形態 所述之動畫編碼裝置4000同樣地,將熵解碼化部201上 所解碼出來訊號進行逆量化、逆轉換,並生成預測誤差訊 號,在加算器204中與預測影像進行加算,藉此以生成解 碼影像。對該解碼訊號以Deblocking濾鏡部811進行用來 去除區塊交界失真的濾鏡處理後,輸出解碼影像並使其被 〇 積存在解碼影像用緩衝區813中。 在解碼影像用緩衝區813中’係積存著將被預測影像 作成部814所參照之解碼影像及已被熵解碼部201所解碼 之該解碼影像所對應之濾鏡係數、切換資訊。在預測影像 作成部813中,根據已被熵解碼部201所解碼之運動向量 資訊與來自解碼影像用緩衝區402的解碼影像、濾鏡係數 及切換資訊,作成已進行過復原濾鏡處理與運動補償的預 測影像。 藉由如圖46及圖47所示的動畫編碼處理及動畫解碼 -61 - 200948092 裝置之構成,而將適應的影像復原處理適用於運動補償預 測中’以實現編碼效率之改善。圖48係圖46與圖47的 預測影像作成部814的具體實施之一實施形態。以下,針 對圖48的構成要素,分別予以說明。 圖4 8所示的預測影像作成部8 0 4,係由開關8 2 1、復 原影像作成部822、內插影像作成部823所構成。開關 821 ’係對基於運動向量資訊而被參照之解碼影像,切換 是否進行復原濾鏡處理的切換開關,會基於已被濾鏡設定 .切換資訊生成部802所生成的切換資訊,而被切換。當 開關821是被切換成端子A時,復原影像作成部822,係 使用已被濾鏡設定•切換資訊生成部802所設定的濾鏡係 數’對基於運動向量資訊而被參照之解碼影像,進行復原 濾鏡處理。當開關821是被切換成端子B時,解碼影像係 直接成爲內插影像作成部823的輸入。內插影像作成部 8 23,係基於運動向量資訊,生成小數點像素位置的內插 影像’並當作預測影像。藉由如此構成,就可實現適應性 的影像復原處理與運動補償預測之組合。 圖49係圖46與圖47的預測影像作成部的具體實施 之另一實施形態。以下’針對圖49的構成要素,分別予 以說明。 圖49所示的預測影像作成部804,係由:開關831與 整數像素濾鏡部832、位元擴充部833、內插影像作成部 8 3 4、開關83 5、加權預測影像作成部836、位元縮減部 8 3 7所構成。開關831,係對基於運動向量資訊而被參照 200948092 之解碼影像’切換是否以整數像素單位進行復原濾鏡處理 的切換開關’會基於已被圖46的濾鏡設定•切換資訊生 成部802所生成的切換資訊,而被切換。當開關831是被 ' 切換成端子A時,整數像素濾鏡部832,係使用已被濾鏡 設定•切換資訊生成部802所設定的濾鏡係數,對基於運 動向量資訊而被參照之解碼影像,進行復原濾鏡處理。此 時的特徵爲,當解碼影像的像素位元長度是N-bit時,整 〇 數像 素濾鏡部832的輸出的像素位元長度,係會成爲M2 N的M-bit。當開關83 1是被切換成端子B時,解碼影像 係被輸入至位元擴充部83 3,N-bit的解碼影像就會被擴張 成爲M2N的M-bit。具體而言,位元擴充部83 3係將像 素値V往左方進行(M-N)位元的算術平移處理。 '內插影像作成部834,係基於運動向量資訊,生成小 數點像素位置的內插影像。此時的特徵爲,對於輸入M-bit的像素位元長度,使輸出成爲L2N的L-bit之像素位 〇 元長度。開關83 5,係爲被編碼控制部109,或基於編碼 資訊而被解碼控制部207所控制的開關,切換著是否進行 加權預測。當開關8 3 5是被切換至端子A時,加權預測影 像作成部8 3 6,係基於H.264/AVC等中所被進行的加權預 測式,來作成預測影像。此時的特徵爲,對於L-bit像素 位元長度的輸入,進行處理而使其成爲N-bit像素位元長 度的輸出。當開關835被切換成端子B時,位元縮減部 847,係進行捨入處理以使L-bit之輸入成爲L2N的N-bit。 -63- 200948092 藉由如此使得像素位元長度大於解碼影像的像素位元 長度,來構成預測影像作成部8 04的內部,就可減少復原 處理與內插處理與加權預測處理時的演算的捨入誤差,可 實現編碼效率更佳的預測影像作成。 (第8實施形態) 在第4實施形態中係說明,將設定切換資訊之單位設 成每一像素區塊,使用表示像素區塊之大小或分割方法的 領域設定資訊,來適應性地切換像素區塊尺寸之方法。在 本實施形態中,作爲在畫面內適應性切換像素區塊尺寸的 方法,是將所定大小之母區塊內部,階層式地分割成小尺 寸的子區塊的方法,使用圖50至圖55來加以說明。 此處,使用4分樹構造來說明將母區塊階層式分割成 區塊的方法。圖50係表示4分樹構造的階層式區塊分割 。圖50中的階層0之母區塊Bq,。,係在階層1中被分割 成子區塊Βι,〇〜Βι, 3,在階層2中被分割成子區塊Β2, 〇 〜B2, 1 5。 接著,說明4分樹構造的分割樹的表現方法。圖51 係使用以0或1所表示的區塊分割資訊,表現至階層3爲 止的分割樹。區塊分割資訊爲1時係表示要分割成下個階 層的子區塊,爲0時係表示不分割。如圖51所設定的區 塊分割資訊所表示的區塊,係如圖52般地由母區塊被分 割成4個子區塊,該子區塊又再被分割成4個子(孫)區 塊,該子(孫)區塊又再被分割成4個子(曾孫)區塊。 -64- 200948092 對如此被分割成大小互異之區塊的每一者,分別設定有切 換資訊。亦即,等同於是對圖51的分割樹中的區塊分割 ' 資訊爲〇時所對應之區塊,設定有切換資訊。 • 母區塊內的子區塊的尺寸,係由母區塊尺寸與階層的 深度來決定。圖53係使用4分樹構造時的對於母區塊尺 寸的階層〇〜4之子區塊之尺寸的圖示。於圖53中,例如 當母區塊尺寸爲32x3 2時,則階層4的子(曾孫)區塊尺 @ 寸就爲2x2。畫面內的局部領域亦即母區塊的內部,係被 分割成依照階層的深度而依序變小的領域亦即子區塊。 接著,針對本實施形態所述之含有領域設定資訊的語 法,使用圖54來加以說明。本實施形態所述之圖3 1的語 法結構中的迴圈濾鏡資料語法(1 9 03 ),係被記述成如圖 54所示。圖54的filter_bl〇Ck_siZe,係和第4實施形態 同樣地,是決定區塊尺寸的値,此處係代表母區塊尺寸。 接著,max_layer_level,係表示所取得之階層的深度 〇 的最大値。在本實施形態中,雖然是如圖54的語法所示 ,將maxjayerjevel以切片單位進行編碼,但作爲其他 實施形態,亦可以序列、畫格或母區塊單位來進行編碼。 圖55係圖示了以母區塊單位來進行編碼之際的語法。又 ’若編碼裝置及解碼裝置雙方使用表示同樣階層的深度之 最大値的値,則亦可不將max_layer_level包含在語法中 ,而是採用預先設定好的値。 又’在本實施形態中雖然係以filter_block_size來表 示母區塊尺寸’但亦可爲表示最小區塊尺寸的値,此情況 -65- 200948092 下則是使用 max_layer_level 與 filter_block_size,藉由所 定之計算式來導出母區塊尺寸。例如,若爲4分樹構造時 ,若令 filter_block_size 爲 B,max_layer_level 爲 L,則 母區塊尺寸P係可以P = Bx2L來表示。 接著,NumOfParentBlock係1切片內的母區塊之總數 ,NumOfChildBlock[丨ayer]係1個母區塊內的layer所表示 之階層的子區塊之總數。舉例來說,若爲4分樹構造時, 則 NumOfChildBlock[0] = l 、 NumOfChildBlock[l] = 4 、-58- 200948092 Next, an animation coding apparatus and a decoding apparatus according to a seventh embodiment will be described with reference to Figs. 46 to 49. First, the configuration of the animation encoding apparatus according to the seventh embodiment will be described with reference to Fig. 46'. The following description of the constituent elements of Fig. 46 will be respectively described. The animation encoding device 70 00 shown in Fig. 46 includes a subtractor 102, a conversion and quantization unit 103, an entropy encoding unit 104, an inverse conversion, an inverse quantization unit 105, an adder 106, a Deblocking filter unit 801, and a filter setting. The switching information generating unit 802, the decoded video buffer 803, the predicted video creating unit 804, and the motion vector generating unit 805 are controlled by the encoding control unit 1〇9. Here, the subtractor 102, the conversion/quantization unit 103, the entropy coding unit 104, the inverse conversion/inverse quantization unit 105, the adder 106, and the coding control unit 1〇9 are shown in Fig. 11 according to the second embodiment. The components of the same number in the animation coding device 3,000 perform the same operations, and thus the description thereof is omitted here. In addition, the filter setting unit 802 of the filter setting unit 802 performs the same operation as the switching information generating unit 112 in the loop filter processing unit 107 of Fig. 2 described in the first embodiment. The specific operation is that, similar to the conventional hybrid coding or the animation coding apparatus 1000 according to the first embodiment or the animation coding apparatus 3000 according to the second embodiment, the animation coding apparatus 700 is corrective for prediction errors. Perform conversion, quantization, entropy coding, and perform local decoding by inverse quantization and inverse conversion. For the local decoded signal, after the filter processing is performed by the Deblocking filter section to remove the distortion of the block boundary, the local decoded signal after the filter processing -59-200948092 is temporarily stored in the decoded image buffer 8 03. The dropper setting/switching information generating unit 8 02 receives the decoded image signal and the input image signal of the localized portion processed by the filter, generates filter coefficients, and switches the information. The filter setting/switching information generating unit 902 outputs the generated filter coefficient information to the decoded image buffer 803 and also outputs it to the entropy encoding unit 104. The entropy coding unit 1〇4 encodes the filter coefficient and the switching information generated by the filter setting/switching information generating unit 802, and further converts the quantized conversion coefficient with the prediction mode information and the block size switching information. The motion vector, the quantization parameter, and the like are multiplexed into a bit stream and transmitted to the animation decoding device 8000, which will be described later. At this time, the filter coefficient information and the switching information are transmitted as the encoded information of the input image, and are transmitted in accordance with the syntax of Fig. 32 and the like. In the decoded video buffer 803, the local decoded video to be referred to by the predicted video generating unit 804 and the filter coefficients and switching information corresponding to the locally decoded video are stored. The predicted image creating unit 804 uses the locally decoded video, the filter coefficient, the switching information, and the motion vector information generated by the motion vector generating unit 805 managed by the decoded video buffer 803 to create a motion compensated prediction. image. The subtractor 1 0 2 generates a prediction error signal between the predicted image and the input image that has been created. On the other hand, the motion vector information is encoded by the entropy coding unit 104 and multiplexed with other information. The configuration of the animation decoding apparatus according to the seventh embodiment will be described with reference to Fig. 47'. Hereinafter, the components of Fig. 47 will be described separately. The animation decoding apparatus 8000 shown in Fig.-60-200948092 47 includes an entropy decoding unit 20.1, an inverse conversion/inverse quantization unit 202, an adder 204, a Deblocking filter unit 81 1 , a decoded image buffer 813, and a predicted image. The preparation unit 814 is controlled by the solution control unit 207. The entropy decoding unit 201, the inverse transform, the inverse quantization unit 202, the adder 204, and the decoding control unit 207 perform the same operations as the components of the same number in the animation decoding device 2000 according to the first embodiment. Therefore, the description is omitted here. The specific operation is performed by the entropy decoding unit 201 in the same manner as the decoding of the previous hybrid code or the animation decoding device 2000 according to the first embodiment or the animation encoding device 4000 according to the second embodiment. The decoded signal is subjected to inverse quantization and inverse conversion, and a prediction error signal is generated, and is added to the predicted image in the adder 204 to generate a decoded image. The deblocking filter unit 811 performs filter processing for removing the block boundary distortion, and outputs the decoded image and stores it in the decoded image buffer 813. In the decoded video buffer 813, the filter coefficients and the switching information corresponding to the decoded video referred to by the predicted video generating unit 814 and the decoded video decoded by the entropy decoding unit 201 are stored. The predicted image creating unit 813 creates the restored filter processing and motion based on the motion vector information decoded by the entropy decoding unit 201 and the decoded video, filter coefficient, and switching information from the decoded video buffer 402. The predicted image of the compensation. The adaptive image restoration processing is applied to the motion compensation prediction by the composition of the animation encoding processing and the animation decoding -61 - 200948092 as shown in Figs. 46 and 47 to achieve an improvement in encoding efficiency. Fig. 48 is a view showing an embodiment of the specific embodiment of the predicted image creating unit 814 of Figs. 46 and 47. Hereinafter, the components of Fig. 48 will be described separately. The predicted image creating unit 804 shown in Fig. 4 is composed of a switch 8 2 1 , a restoration image creating unit 822, and an interpolation image creating unit 823. The switch 821' is a switch for switching whether or not to perform the restoration filter processing on the decoded video referred to based on the motion vector information, and is switched based on the switching information generated by the filter setting information switching unit 802. When the switch 821 is switched to the terminal A, the restored image creating unit 822 performs the decoded image referred to by the motion vector information using the filter coefficient set by the filter setting/switching information generating unit 802. Restore filter processing. When the switch 821 is switched to the terminal B, the decoded video is directly input to the interpolated video creating unit 823. The interpolated image creating unit 8 23 generates an interpolated image 'of a decimal point pixel position based on the motion vector information and uses it as a predicted image. With this configuration, a combination of adaptive image restoration processing and motion compensation prediction can be realized. Fig. 49 is a view showing another embodiment of the specific embodiment of the predicted image forming unit of Figs. 46 and 47. The following description of the components of Fig. 49 will be given. The predicted image creating unit 804 shown in FIG. 49 is composed of a switch 831, an integer pixel filter unit 832, a bit expansion unit 833, an interpolation image creating unit 843, a switch 835, and a weighted prediction image creating unit 836. The bit reduction unit 837 is composed of. The switch 831 is based on the decoded video of the 200948092 based on the motion vector information, and the switching switch that performs the restoration filter processing in integer pixel units is generated based on the filter setting/switching information generating unit 802 of FIG. Switching information while being switched. When the switch 831 is 'switched to the terminal A, the integer pixel filter unit 832 uses the filter coefficient set by the filter setting/switching information generating unit 802 to decode the decoded image based on the motion vector information. , perform restoration filter processing. In this case, when the pixel bit length of the decoded video image is N-bit, the pixel bit length of the output of the integer pixel filter unit 832 becomes the M-bit of M2 N. When the switch 83 1 is switched to the terminal B, the decoded image is input to the bit expansion unit 83 3 , and the N-bit decoded image is expanded to become the M-bit of the M2N. Specifically, the bit expansion unit 83 3 performs arithmetic shift processing of (M-N) bits to the left of the pixel 値V. The interpolation image creating unit 834 generates an interpolated image of the decimal point pixel position based on the motion vector information. In this case, the length of the pixel bit of the input M-bit is such that the output becomes the L-bit pixel bit length of L2N. The switch 83 5 is switched by the encoding control unit 109 or the switch controlled by the decoding control unit 207 based on the encoding information, and switches whether or not to perform weighted prediction. When the switch 835 is switched to the terminal A, the weighted prediction image creating unit 836 creates a predicted image based on the weighted prediction equation performed in H.264/AVC or the like. At this time, the input of the L-bit pixel bit length is processed to be an output of the N-bit pixel bit length. When the switch 835 is switched to the terminal B, the bit reduction unit 847 performs rounding processing so that the input of the L-bit becomes the N-bit of L2N. -63- 200948092 By constructing the inside of the predicted image creating unit 804 by making the pixel bit length larger than the pixel bit length of the decoded image, the calculation of the restoration processing, the interpolation processing, and the weighting prediction processing can be reduced. Into the error, it is possible to achieve predictive image creation with better coding efficiency. (Eighth Embodiment) In the fourth embodiment, the unit for setting switching information is set to each pixel block, and the field setting information indicating the size of the pixel block or the dividing method is used to adaptively switch pixels. The method of block size. In the present embodiment, as a method of adaptively switching the pixel block size in the screen, a method of hierarchically dividing the inside of the parent block of a predetermined size into small-sized sub-blocks is used, and FIG. 50 to FIG. 55 are used. To explain. Here, a method of hierarchically dividing a parent block into blocks is explained using a 4-point tree structure. Fig. 50 is a diagram showing hierarchical block division of a 4-point tree structure. The parent block Bq of level 0 in Fig. 50. It is divided into sub-blocks Βι, 〇~Βι, 3 in the hierarchy 1, and is divided into sub-blocks Β2, 〇~B2, 1 5 in the hierarchy 2. Next, a description will be given of a method of expressing a segmentation tree of a 4-point tree structure. Figure 51 shows the split tree represented by level 3 using the block split information represented by 0 or 1. When the block division information is 1, it indicates that the sub-block is to be divided into the next layer, and when it is 0, it indicates that it is not divided. The block represented by the block splitting information set as shown in FIG. 51 is divided into four sub-blocks by the parent block as shown in FIG. 52, and the sub-block is further divided into four sub-blocks. The child (sun) block is again divided into 4 sub-children. -64- 200948092 Switching information is set for each of the blocks thus divided into different sizes. That is, it is equivalent to setting the switching information for the block corresponding to the block division 'in the split tree of FIG. 51'. • The size of the sub-blocks within the parent block is determined by the size of the parent block and the depth of the hierarchy. Fig. 53 is a view showing the sizes of the sub-blocks of the hierarchy 〇 ~ 4 for the parent block size when the 4-point tree structure is used. In Fig. 53, for example, when the parent block size is 32x3 2, the sub-dimension of the level 4 is 2x2. The local area within the picture, that is, the inside of the parent block, is divided into sub-blocks that are sequentially smaller in accordance with the depth of the hierarchy. Next, the syntax including the field setting information described in the present embodiment will be described with reference to Fig. 54. The loop filter data syntax (1 903 ) in the syntax structure of Fig. 31 described in the present embodiment is described as shown in Fig. 54. The filter_bl 〇 Ck_siZe of Fig. 54 is the 决定 which determines the block size as in the fourth embodiment, and here represents the parent block size. Next, max_layer_level is the maximum value of the depth 〇 of the obtained hierarchy. In the present embodiment, maxjayerjevel is encoded in slice units as shown in the syntax of Fig. 54, but as another embodiment, encoding may be performed in a sequence, a frame or a parent block unit. Fig. 55 is a diagram showing the syntax at the time of encoding in parent block units. Further, if both the encoding device and the decoding device use 値 which indicates the maximum depth of the same level, the max_layer_level may not be included in the grammar, but a predetermined 値 may be used. Further, in the present embodiment, although the parent block size is represented by filter_block_size, it may be a parameter indicating the minimum block size. In this case, -65-200948092, max_layer_level and filter_block_size are used, by the predetermined calculation formula. To derive the parent block size. For example, in the case of a 4-point tree structure, if filter_block_size is B and max_layer_level is L, the parent block size P can be represented by P = Bx2L. Next, NumOfParentBlock is the total number of parent blocks in one slice, and NumOfChildBlock[丨ayer] is the total number of child blocks in the hierarchy indicated by the layer in one parent block. For example, if it is a 4-point tree construct, then NumOfChildBlock[0] = l , NumOfChildBlock[l] = 4 ,

NumOfChi 1 dB 1 o ck [2] = 1 6 、 NumOfChi 1 dB 1 ock[3 ] = 64 ,NumOfChi 1 dB 1 o ck [2] = 1 6 , NumOfChi 1 dB 1 ock[3 ] = 64 ,

NumOfChildBlock[layer]係可以 4lay“來表示。 接著,說明 valid_block_flag 及 block partitioning flag 。valid_block_flag 係取 0 或 1 之値,valid_block_flag 的 初期値係被設定爲〇。block_partitioning_flag係區塊分割 資訊,如上記所述,有分割時則被設定爲1,無分割時則 被設定爲〇。 如此,若依據第8實施形態所述之動畫編碼及解碼方 法,則藉由使用階層區塊分割資訊來作爲領域分割資訊, 就可將基準的母區塊進行階層式分割成子區塊,藉由對已 分割之各子區塊來設定切換資訊’就可依照畫格內的每一 局部領域而適應性地控制濾鏡處理之切換時序。又,藉由 如此將設定切換資訊之單位’設計成在畫面內爲可變,迴 圈濾鏡對畫質影響較大之領域中係進行較細緻的領域分割 ,迴圈濾鏡對畫質影響較小之領域中係進行較粗糙的領域 分割,藉此,就可效率良好地設定切換資訊。 -66 - 200948092 又’此處雖然作爲母區塊之分割方法是說明了 4分樹 構造所致之分割方法’但只要是可執行在編碼裝置及解碼 裝置雙方均相同之領域分割的分割方法,則不限於4分樹 構造。 又,此處雖然說明了將母區塊內階層式分割成矩形區 塊之情形,但只要是可執行在編碼裝置及解碼裝置雙方均 相同之領域分割的分割方法,則不限於矩形區塊。 Q 若依據本發明,則將編碼側上所設定之濾鏡係數資訊 加以編碼,並將濾鏡係數資訊在解碼側加以解碼而使用之 動畫編碼/解碼中,藉由編碼側與解碼側是以同樣處理來 切換迴圏濾鏡處理,可一面抑制畫質劣化之傳播’ 一面促 使預測時所用之參照影像的畫質提升’使得編碼效率之提 "升,成爲可能。 〔產業上利用之可能性〕 © 本發明所述之影像編碼及解碼方法及裝置’係可使用 在通訊媒體、儲存媒體及播送媒體等中的影像壓縮處理上 【圖式簡單說明】 〔圖1〕圖1係第1實施形態所述之動畫編碼裝置的 區塊圖。 〔圖2〕圖2係第1實施形態所述之動畫編碼裝置中 的迴圈濾鏡處理部的區塊圖。 -67- 200948092 〔圖3〕圖3係第1實施形態所述之動畫編碼裝置中 的切換濾鏡處理部的區塊圖。 〔圖4〕圖4係第1實施形態所述之動畫編碼裝置之 -動作的流程圖。 - 〔圖5〕圖5係第1實施形態所述之動畫解碼裝置的 區塊圖。 〔圖6〕圖6係第1實施形態所述之動畫解碼裝置中 的第1切換濾鏡處理部的區塊圖。 @ 〔圖7〕圖7係第1實施形態所述之動畫解碼裝置之 動作的流程圖。 〔圖8〕圖8係第1實施形態所述之動畫解碼裝置中 的第2切換濾鏡處理部的區塊圖。 〔圖9〕圖9係第1實施形態所述之動畫解碼裝置中 的第3切換濾鏡處理部的區塊圖。 〔圖10〕圖10係第1實施形態所述之動畫解碼裝置 中的第4切換濾鏡處理部的區塊圖。 〇 〔圖1 1〕圖1 1係第2實施形態所述之第1動畫編碼 裝置的區塊圖。 〔圖12〕圖12係第2實施形態所述之第1動畫編碼 裝置中的切換資訊生成預測部的區塊圖。 〔圖1 3〕圖1 3係第2實施形態所述之第1動畫編碼 裝置中的參照切換預測部的區塊圖。 〔圖1 4〕圖1 4係第2實施形態所述之第1動畫編碼 裝置中的迴圈濾鏡處理部的區塊圖。 -68- 200948092 〔圖1 5〕圖15係第2實施形態所述之第1動畫編碼 裝置之動作的流程圖。 〔圖1 6〕圖1 6係第2實施形態所述之第1動畫解碼 • 裝置的區塊圖。 〔圖1 7〕圖1 7係第2實施形態所述之第1動畫解碼 裝置中的參照切換預測部的區塊圖。 〔圖18〕圖18係第2實施形態所述之動畫解碼裝置 φ 之動作的流程圖。 〔圖19〕圖19係第2實施形態所述之第2動畫編碼 裝置的區塊圖。 〔圖20〕圖20係第2實施形態所述之第2動畫編碼 裝置中的切換資訊生成預測部的區塊圖。 〔圖21〕圖21係第2實施形態所述之第2動畫編碼 裝置中的參照切換預測部的區塊圖。 〔圖22〕圖22係第2實施形態所述之第2動畫解碼 〇 裝置的區塊圖。 〔圖23〕圖23係第2實施形態所述之第2動畫解碼 裝置中的參照切換預測部的區塊圖。 〔圖24〕圖24係第3實施形態所述之動畫編碼裝置 的區塊圖。 〔圖25〕圖25係第3實施形態所述之動畫編碼裝置 中的迴圈濾鏡處理部的區塊圖。 〔圖26〕圖26係第3實施形態所述之動畫編碼裝置 中的切換資訊生成濾鏡處理部的區塊圖° -69- 200948092 〔圖27〕圖27係第3實施形態所述之動畫編碼裝置 之動作的流程圖。 〔圖28〕圖28係第3實施形態所述之動畫解碼裝置 的區塊圖。 〔圖29〕圖29係第3實施形態所述之動畫解碼裝置 中的切換資訊生成濾鏡處理部的區塊圖。 〔圖30〕圖30係第3實施形態所述之動畫解碼裝置 之動作的流程圖。 〔圖31〕圖31係第1、第2、第3實施形態所述之語 法結構之圖示。 〔圖32〕圖32係第1、第2、第3實施形態所述之迴 圈濾鏡資料語法之圖示。 〔圖33〕圖33係對每一巨集區塊進行迴圈濾鏡之切 換時的參照影像之例子。 〔圖34〕圖34係非專利文獻1中的編碼/解碼裝置的 區塊圖。 〔圖35〕圖35係專利文獻1中的編碼/解碼裝置的區 塊圖。 〔圖36〕圖36係非專利文獻2中的編碼/解碼裝置的 區塊圖。 〔圖37〕圖37係第4實施形態所述之動畫編碼裝置 中的切換濾鏡處理部的區塊圖。 〔圖38〕圖38係桌4實施形態所述之動畫解碼裝置 中的切換濾鏡處理部的區塊圖。 -70- 200948092 〔圖39〕圖39係第4實施形態所述之用來決定區塊 尺寸及區塊分割方法的參照表的圖示。 〔圖40〕圖40係第4實施形態所述之區塊分割之例 • 子的圖示。 〔圖41〕圖41係第4實施形態所述之區塊分割方法 有切換時的參照影像之例子。 〔圖42〕圖42係第4實施形態所述之迴圈濾鏡資料 φ 語法的圖示。 〔圖43〕圖43係第5實施形態所述之動畫編碼裝置 中的迴圈濾鏡處理部的區塊圖。 〔圖44〕圖44係第5實施形態所述之動畫編碼裝置 之動作的流程圖。 '〔圖45〕圖45係第6實施形態所述之局部解碼影像 濾鏡處理部的區塊圖。 〔圖46〕圖46係第7實施形態所述之動畫編碼裝置 Q 的區塊圖。 〔圖47〕圖47係第8實施形態所述之動畫解碼裝置 的區塊圖。 〔圖4 8〕圖4 8係第7實施形態所述之預測影像作成 部之一例的區塊圖。 〔圖49〕圖49係第7實施形態所述之預測影像作成 部之另一例的區塊圖。 〔圖5 0〕圖5 0係第9實施形態所述之階層式區塊分 割之例子的圖不。 -71 - 200948092 〔圖51〕圖51係第9實施形態所述之分割樹及已被 分割之區塊之例子的圖示。 〔圖52〕圖52係第9實施形態所述之已被分割之區 · 塊之例子的圖示。 . 〔圖53〕圖53係第9實施形態所述之各階層的區塊 尺寸之例子的圖示。 〔圖54〕圖54係第9實施形態所述之含有區塊分割 資訊的語法的圖示。 @ 〔圖5 5〕圖5 5係第9實施形態所述之含有區塊分割 資訊的另一語法的圖示。 【主要元件符號說明】 1 〇 :輸入影像訊號 1 1 :預測影像訊號 1 2 :預測誤差影像訊號 1 3 :量化後之轉換係數 0 14 :編碼資料 1 5 :預測誤差影像訊號 16:局部解碼影像訊號 1 7 :濾鏡係數資訊 1 8 :切換資訊 1 9 :參照影像訊號 2〇 :復原影像訊號 21 :解碼影像訊號 -72- 200948092 22 :輸出影像訊號 23 :領域設定資訊 101 : • 102 : 103 : 104 : 105 : © 106 : 107 : 108 ·_ 109 : 110: 111: 112: 113: 〇 114: 201 : 202 : 203 : 204 :NumOfChildBlock[layer] can be represented by 4lay. Next, explain valid_block_flag and block partitioning flag. After valid_block_flag is 0 or 1, the initial value of valid_block_flag is set to 〇. block_partitioning_flag is block partitioning information, as described above. When there is a division, it is set to 1, and when there is no division, it is set to 〇. Thus, according to the animation coding and decoding method according to the eighth embodiment, the hierarchical division information is used as the domain division information. , the primary block of the reference can be hierarchically divided into sub-blocks, and by setting the switching information for each divided sub-block, the filter can be adaptively controlled according to each local area in the frame. The switching timing of the processing. Further, by setting the unit for setting the switching information so as to be variable in the screen, the loop filter performs fine-grained domain segmentation in the field where the image quality is greatly affected, and the loop filtering In the field where the mirror has less influence on the image quality, the rougher domain segmentation is performed, whereby the switching information can be set efficiently. -66 - 200948092 In addition, although the division method as the parent block is a description of the division method by the 4-point tree structure, the division method that can be divided in the same area as both the encoding device and the decoding device can be performed. The present invention is not limited to the four-point tree structure. Here, although the case where the hierarchical structure in the parent block is divided into rectangular blocks has been described, the segmentation method that can be divided in the same domain as both the encoding device and the decoding device can be performed. It is not limited to a rectangular block. Q According to the present invention, the filter coefficient information set on the encoding side is encoded, and the filter coefficient information is decoded on the decoding side to be used in the animation encoding/decoding. The encoding side and the decoding side switch to the 圏 filter processing in the same process, and it is possible to suppress the propagation of the image quality deterioration while promoting the image quality improvement of the reference image used in the prediction, so that the coding efficiency is improved. [Effect of industrial use] The image encoding and decoding method and apparatus of the present invention can be used in communication media and storage media. [Fig. 1] Fig. 1 is a block diagram of the animation coding apparatus according to the first embodiment. [Fig. 2] Fig. 2 is a first embodiment. A block diagram of the loop filter processing unit in the animation coding apparatus described above - 67 - 200948092 [Fig. 3] Fig. 3 is a block diagram of the switching filter processing unit in the animation encoding apparatus according to the first embodiment Fig. 4 is a flowchart showing the operation of the animation encoding apparatus according to the first embodiment. Fig. 5 is a block diagram of the animation decoding apparatus according to the first embodiment. [Fig. 6] Fig. 6 is a block diagram of a first switching filter processing unit in the animation decoding device according to the first embodiment. [ Fig. 7] Fig. 7 is a flowchart showing the operation of the animation decoding apparatus according to the first embodiment. [Fig. 8] Fig. 8 is a block diagram showing a second switching filter processing unit in the animation decoding device according to the first embodiment. [Fig. 9] Fig. 9 is a block diagram of a third switching filter processing unit in the animation decoding device according to the first embodiment. [ Fig. 10] Fig. 10 is a block diagram showing a fourth switching filter processing unit in the animation decoding device according to the first embodiment. Fig. 11 is a block diagram of a first animation encoding apparatus according to a second embodiment. [Fig. 12] Fig. 12 is a block diagram of a switching information generation prediction unit in the first animation encoding device according to the second embodiment. [Fig. 13] Fig. 1 is a block diagram of a reference switching prediction unit in the first animation encoding device according to the second embodiment. Fig. 14 is a block diagram of a loop filter processing unit in the first animation encoding device according to the second embodiment. -68- 200948092 [Fig. 15] Fig. 15 is a flowchart showing the operation of the first animation encoding apparatus according to the second embodiment. Fig. 16 is a block diagram of the first animation decoding device according to the second embodiment. [Fig. 17] Fig. 1 is a block diagram of a reference switching prediction unit in the first animation decoding device according to the second embodiment. [Fig. 18] Fig. 18 is a flowchart showing the operation of the animation decoding device φ according to the second embodiment. Fig. 19 is a block diagram showing a second animation encoding apparatus according to the second embodiment. [Fig. 20] Fig. 20 is a block diagram showing a switching information generation prediction unit in the second animation encoding device according to the second embodiment. [Fig. 21] Fig. 21 is a block diagram of a reference switching prediction unit in the second animation encoding device according to the second embodiment. Fig. 22 is a block diagram showing a second animation decoding apparatus according to the second embodiment. [Fig. 23] Fig. 23 is a block diagram of a reference switching prediction unit in the second animation decoding device according to the second embodiment. Fig. 24 is a block diagram showing an animation encoding apparatus according to a third embodiment. [Fig. 25] Fig. 25 is a block diagram showing a loop filter processing unit in the animation encoding device according to the third embodiment. [ Fig. 26] Fig. 26 is a block diagram of a switching information generation filter processing unit in the animation encoding apparatus according to the third embodiment. - 69 - 200948092 [Fig. 27] Fig. 27 is an animation described in the third embodiment. A flow chart of the operation of the encoding device. [Fig. 28] Fig. 28 is a block diagram of the animation decoding apparatus according to the third embodiment. [Fig. 29] Fig. 29 is a block diagram showing a switching information generation filter processing unit in the animation decoding device according to the third embodiment. Fig. 30 is a flow chart showing the operation of the animation decoding apparatus according to the third embodiment. Fig. 31 is a view showing the syntax structure of the first, second, and third embodiments. Fig. 32 is a view showing the syntax of the loop filter data described in the first, second, and third embodiments. [Fig. 33] Fig. 33 is an example of a reference image when a loop filter is switched for each macroblock. [Fig. 34] Fig. 34 is a block diagram of the encoding/decoding apparatus in Non-Patent Document 1. [Fig. 35] Fig. 35 is a block diagram of an encoding/decoding apparatus in Patent Document 1. [Fig. 36] Fig. 36 is a block diagram of the encoding/decoding apparatus in Non-Patent Document 2. [Fig. 37] Fig. 37 is a block diagram showing a switching filter processing unit in the animation encoding device according to the fourth embodiment. [Fig. 38] Fig. 38 is a block diagram showing a switching filter processing unit in the animation decoding apparatus according to the embodiment of Table 4. [0039] Fig. 39 is a diagram showing a reference table for determining a block size and a block dividing method according to the fourth embodiment. [Fig. 40] Fig. 40 is a diagram showing an example of block division according to the fourth embodiment. [Fig. 41] Fig. 41 is a diagram showing an example of a reference image when switching is performed in the block division method according to the fourth embodiment. [Fig. 42] Fig. 42 is a diagram showing the syntax of the loop filter data φ according to the fourth embodiment. [Fig. 43] Fig. 43 is a block diagram showing a loop filter processing unit in the animation encoding device according to the fifth embodiment. Fig. 44 is a flow chart showing the operation of the animation encoding apparatus according to the fifth embodiment. Fig. 45 is a block diagram showing a partial decoded video filter processing unit according to the sixth embodiment. [ Fig. 46] Fig. 46 is a block diagram of the animation encoding apparatus Q according to the seventh embodiment. Fig. 47 is a block diagram showing an animation decoding apparatus according to an eighth embodiment. Fig. 4 is a block diagram showing an example of a predicted image creating unit according to the seventh embodiment. [Fig. 49] Fig. 49 is a block diagram showing another example of the predicted image creating unit according to the seventh embodiment. Fig. 50 is a diagram showing an example of hierarchical block division described in the ninth embodiment. [71] Fig. 51 is a diagram showing an example of a segmentation tree and a divided block according to the ninth embodiment. [Fig. 52] Fig. 52 is a view showing an example of a divided block in the ninth embodiment. Fig. 53 is a view showing an example of the block size of each level described in the ninth embodiment. Fig. 54 is a diagram showing the syntax of the block division information according to the ninth embodiment. @ [Fig. 5 5] Fig. 5 is a diagram showing another syntax of the block division information described in the ninth embodiment. [Main component symbol description] 1 〇: Input video signal 1 1 : Predicted video signal 1 2 : Prediction error video signal 1 3 : Quantized conversion coefficient 0 14 : Coded data 1 5 : Prediction error video signal 16: Local decoded image Signal 1 7 : Filter coefficient information 1 8 : Switching information 1 9 : Reference video signal 2 〇: Restoring video signal 21 : Decoding video signal - 72 - 200948092 22 : Output video signal 23 : Field setting information 101 : • 102 : 103 : 104 : 105 : © 106 : 107 : 108 ·_ 109 : 110: 111: 112: 113: 〇114: 201 : 202 : 203 : 204 :

205 : 20 5 A 205B 20 5 C 預測訊號生成部 減算器 轉換·量化部 熵編碼部 逆轉換•逆量化部 加算器 迴圈濾鏡處理部 參照影像用緩衝區 編碼控制部 濾鏡設定部 切換濾鏡處理部 切換資訊生成部 爐鏡處理部 迴圈濾鏡切換部 熵解碼部 逆轉換•逆量化部 預測訊號生成部 加算器 切換濾鏡處理部 :切換濾鏡處理部 :切換濾鏡處理部 :切換濾鏡處理部 -73 200948092 205D :切換濾鏡處理部 206 :參照影像用緩衝區 2 0 7 :解碼控制部 208 :濾鏡處理部 209 :迴圈濾鏡切換部 2 1 0 :後段濾鏡切換部 3 0 1 :切換資訊生成預測部 3 0 1 A :切換資訊生成預測部 3 0 1 B :切換資訊生成預測部 302:迴圈濾鏡處理部 3 03 :局部解碼影像用緩衝區 304:復原影像用緩衝區 3 05 :參照切換預測部 3 05A :參照切換預測部 3 0 5 B :參照切換預測部 401 :參照切換預測部 4 0 1 A :參照切換預測部 4 0 1 B :參照切換預測部 4 02 :解碼影像用緩衝區 403:復原影像用緩衝區 501 :迴圈濾鏡處理部 5 02 :切換資訊生成濾鏡處理部 5 03:切換資訊生成部 601 :切換資訊生成濾鏡處理部 -74- 200948092 602 :切換資訊生成部 701 :局部解碼影像濾鏡處理部 ' 702 :濾鏡交界判定處理部 ' 7 〇 3 :開關 704:去區塊爐鏡處理部 7 0 5 :開關 706 :影像復原濾鏡處理部 ❺ 801 : Deblocking 濾鏡部 8 02 :濾鏡設定·切換資訊生成部 803:解碼影像用緩衝區 804 :預測影像作成部 805 :運動向量生成部 811 : Deblocking 濾鏡部 813:解碼影像用緩衝區 8 1 4 :預測影像作成部 Q 821 :開關 822 :復原影像作成部 823:內插影像作成部 8 3 1 :開關 8 3 2 :整數像素濾鏡部 8 3 3 :位元擴充部 834 :內插影像作成部 8 3 5 :開關 83 6 :加權預測影像作成部 -75- 200948092 8 3 7 :位元縮減部 847 :位元縮減部 901 :去區塊濾鏡處理部 902:去區塊濾鏡處理部 903 :切換部 904 :編碼參數抽出部 905 :後段濾鏡設定部 906 :後段濾鏡處理部 1 000 :動畫編碼裝置 1900:筒等級語法 1901 :序列參數集語法 1 902 :圖像參數集語法 1 903 :切片等級語法 1 904 :切片標頭語法 1 905 :切片資料語法 1 906 :迴圈濾鏡資料語法 1 907 :巨集區塊等級語法 1908:巨集區塊層語法 1909 :巨集區塊預測語法 2000 :動畫解碼裝置 3 000 :動畫編碼裝置 3 00 1 :動畫編碼裝置 4000 :動畫解碼裝置 400 1 :動畫解碼裝置 -76- 200948092 5000:動畫編碼裝置 6000 :動畫解碼裝置 7000 :動畫編碼裝置 8000 :動畫解碼裝置 SW :開關 A,B :端子。205 : 20 5 A 205B 20 5 C Predictive signal generation unit reducer conversion/quantization unit entropy coding unit inverse conversion • inverse quantization unit adder loop filter processing unit reference image buffer coding control unit filter setting unit switching filter Mirror processing unit switching information generation unit, hob mirror processing unit, loop filter switching unit, entropy decoding unit, inverse conversion, inverse quantization unit, prediction signal generation unit, addition unit switching filter processing unit: switching filter processing unit: switching filter processing unit: Switching filter processing unit-73 200948092 205D : switching filter processing unit 206 : reference video buffer 2 0 7 : decoding control unit 208 : filter processing unit 209 : loop filter switching unit 2 1 0 : rear filter Switching unit 3 0 1 : switching information generation prediction unit 3 0 1 A : switching information generation prediction unit 3 0 1 B : switching information generation prediction unit 302: loop filter processing unit 3 03 : local decoded video buffer 304: The restored video buffer 3 05 : Reference switching prediction unit 3 05A : Reference switching prediction unit 3 0 5 B : Reference switching prediction unit 401 : Reference switching prediction unit 4 0 1 A : Reference switching prediction unit 4 0 1 B : Reference switching Prediction section 4 0 2: Decoded video buffer 403: restored video buffer 501: loop filter processing unit 5 02: switching information generation filter processing unit 03: switching information generation unit 601: switching information generation filter processing unit - 74 - 200948092 602 : switching information generation unit 701 : local decoded video filter processing unit '702 : filter boundary determination processing unit ' 7 〇 3 : switch 704 : deblocking furnace mirror processing unit 7 0 5 : switch 706 : image restoration Filter processing unit 801 : Deblocking filter unit 8 02 : Filter setting/switching information generating unit 803 : Decoded video buffer 804 : Predicted video creating unit 805 : Motion vector generating unit 811 : Deblocking filter unit 813 : Decoding Video buffer 8 1 4 : Predicted video preparation unit Q 821 : Switch 822 : Restored video creation unit 823 : Interpolated video creation unit 8 3 1 : Switch 8 3 2 : Integer pixel filter unit 8 3 3 : Bit expansion Part 834: Interpolation image creation unit 8 3 5 : Switch 83 6 : Weighted prediction image creation unit -75 - 200948092 8 3 7 : Bit reduction unit 847 : Bit reduction unit 901 : Deblocking filter processing unit 902: Deblocking filter processing unit 903: switching unit 904: encoding parameters Output section 905: Rear section filter setting section 906: Rear section filter processing section 1 000: Animation encoding apparatus 1900: Cartridge level syntax 1901: Sequence parameter set syntax 1 902: Image parameter set syntax 1 903: Slice level syntax 1 904: Slice header syntax 1 905 : Slice data syntax 1 906 : Loop filter data syntax 1 907 : Macro block level syntax 1908: Macro block layer syntax 1909 : Macro block prediction syntax 2000 : Animation decoding device 3 000 : animation encoding device 3 00 1 : animation encoding device 4000 : animation decoding device 400 1 : animation decoding device - 76 - 200948092 5000: animation encoding device 6000 : animation decoding device 7000 : animation encoding device 8000 : animation decoding device SW : switch A, B: terminal.

-77--77-

Claims (1)

200948092 七、申請專利範圍: 1. 一種動畫編碼方法,係屬於將已編碼之影像當作參 照影像而使用於下個進行編碼之影像之預測上的動畫編碼 方法,其特徵爲,具備: 對已編碼之影像的局部解碼影像’適用濾鏡’以生成 復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以設定之步驟;和 將前記濾鏡係數資訊加以編碼之步驟;和 將用來表示把前記局部解碼影像或前記復原影像當作 參照影像來使用之意旨的特定資訊,加以編碼之步驟;和 基於前記特定資訊,而將前記局部解碼影像或前記復 原影像當作參照影像而保存在記憶體中之步驟。 2. —種動畫解碼方法,係屬於將已解碼之影像當作參 照影像而使用於下個進行解碼之影像之預測上的動畫解碼 方法,其特徵爲,具備: 對解碼影像適用濾鏡以生成復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以解碼之步驟;和 將用來表示要當作參照影像而使用之前記解碼影像或 前記復原影像的特定資訊,加以解碼之步驟;和 基於前記特定資訊,而將前記解碼影像或前記復原影 像當作參照影像而保存在記憶體中之步驟。 3. —種動畫編碼方法,係屬於將已編碼之影像當作參 照影像而使用於下個進行編碼之影像之預測上的動畫編碼 方法,其特徵爲,具備: -78- 200948092 對已編碼之影像的局部解碼影像,適用濾鏡,以生成 復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以設定之步驟;和 * 將前記濾鏡係數資訊加以編碼之步驟;和 將用來表示把前記局部解碼影像或前記復原影像當作 參照影像來使用之意旨的特定資訊,加以編碼之步驟;和 基於前記特定資訊,而將前記局部解碼影像或前記復 φ 原影像當作參照影像而在預測時加以使用之步驟。 4. 一種動畫解碼方法,係屬於將已解碼之影像當作參 照影像而使用於下個進行解碼之影像之預測上的動畫解碼 方法,其特徵爲,具備: 對解碼影像適用濾鏡以生成復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以解碼之步驟;和 將用來表示把前記局部解碼影像或前記復原影像當作 參照影像來使用之意旨的特定資訊,加以解碼之步驟;和 〇 基於前記特定資訊,而將前記解碼影像或前記復原影 像當作參照影像而在預測時加以使用之步驟。 5. —種動畫編碼方法,係屬於將已編碼之影像當作參 照影像而使用於下個進行編碼之影像之預測上的動畫編碼 方法,其特徵爲’具備: 對已編碼之影像的局部解碼影像,適用濾鏡,以生成 復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以設定之步驟;和 將前記濾鏡係數資訊加以編碼之步驟;和 -79- 200948092 基於用來表示把前記局部解碼影像或前記復原影像當 作參照影像來使用之意旨的特定資訊’而將前記局部解碼 影像或前記復原影像當作參照影像而保存在記憶體中之步 驟。 6.—種動畫解碼方法,係屬於將已解碼之影像當作參 照影像而使用於下個進行解碼之影像之預測上的動畫解碼 方法,其特徵爲,具備: 對解碼影像適用濾鏡以生成復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以解碼之步驟;和 基於用來表示把前記局部解碼影像或前記復原影像當 作參照影像來使用之意旨的特定資訊,而將前記解碼影像 或前記復原影像當作參照影像而保存在記憶體中之步驟。 7 ·如申請專利範圍第1項或第3項所記載之動畫編碼 方法,其中,具備: 將用來表示在解碼側將前記解碼影像或前記復原影像 當作輸出影像來使用之意旨的特定資訊,在編碼側上加以 編碼之步驟。 8. 如申請專利範圍第2項或第4項所記載之動畫解碼 方法,其中,具備: 將用來表示把前記解碼影像或前記復原影像當作輸出 影像來使用之意旨的特定資訊,加以解碼之步驟;和 基於目ij記特定資訊’而將前記解碼影像或前記復原影 像當作輸出影像而加以輸出之步驟。 9. 如申請專利範圍第6項所記載之動畫解碼方法,其 -80- 200948092 中,具備: 將用來表示把前記解碼影像或前記復原影像當作輸出 影像來使用之意旨的特定資訊’加以生成之步驟;和 ' 基於前記特定資訊,而將前記解碼影像或前記復原影 像當作輸出影像而加以輸出之步驟。 1 〇·如申請專利範圍第1、3、7項之任1項所記載之 動畫編碼方法,其中,將前記特定資訊,依照畫格或畫格 〇 內的每一局部領域而加以編碼。 1 1 ·如申請專利範圍第2、4、8項之任1項所記載之 動畫解碼方法,其中,將前記特定資訊,依照畫格或畫格 內的每一局部領域而加以解碼。 12.如申請專利範圍第1、3、7項之任1項所記載之 _ 動畫編碼方法,其中,含有: 計算出前記局部解碼影像及前記復原影像與輸入原影 像之間的誤差之步驟;和 〇 生成用來表示把誤差較小之影像當作參照影像之意旨 的前記特定資訊之步驟。 1 3 .如申請專利範圍第5項所記載之動畫編碼方法, 其中,含有:使用可從前記局部解碼影像或前記解碼影像 中取得出來之指標亦即與周邊像素之差分絕對値和、差分 平方和、活性(activity )、空間頻率、邊緣強度、邊緣 方向之任一者以上,來生成前記特定資訊之步驟。 14.如申請專利範圍第6項或第9項所記載之動畫解 碼方法,其中,含有:使用可從前記局部解碼影像或前記 -81 - 200948092 解碼影像中取得出來之指標亦即與周邊像素之差分絕對値 和、差分平方和、活性(activity)、空間頻率、邊緣強 度、邊緣方向之任一者以上,來生成前記特定資訊之步驟 〇 15. 如申請專利範圍第5項所記載之動畫編碼方法’ 其中,含有:使用編碼資訊的一部分亦即量化參數、區塊 尺寸、預測模式、運動向量、轉換係數資訊之任一者以上 ,來生成前記特定資訊之步驟。 16. 如申請專利範圍第6項或第9項所記載之動畫解 碼方法,其中,含有:使用編碼資訊的一部分亦即量化參 數、區塊尺寸、預測模式、運動向量、轉換係數資訊之任 一者以上,來生成前記特定資訊之步驟。 17. —種動畫編碼裝置,係屬於將已編碼之影像當作 參照影像而使用於下個進行編碼之影像之預測上的動畫編 碼裝置,其特徵爲,具備: 復原影像生成部,係用以對已編碼之影像的局部解碼 影像,適用濾鏡,以生成復原影像;和 設定部,係用以設定前記濾鏡的濾鏡係數資訊;和 解碼部,係用以將前記瀘鏡係數資訊加以編碼;和 編碼部,係用以將用來表示把前記局部解碼影像或前 記復原影像當作參照影像來使用之意旨的特定資訊’加以 編碼;和 保存部,係基於前記特定資訊,而將前記局部解碼影 像或前記復原影像當作參照影像而保存在記憶體中。 -82- 200948092 18·—種動畫解碼裝置,係屬於將已解碼之影像當作 參照影像而使用於下個進行解碼之影像之預測上的動畫解 碼裝置,其特徵爲,具備: ' 復原影像生成部,係用以對解碼影像適用濾鏡以生成 復原影像;和 第1解碼部,係用以將前記濾鏡的濾鏡係數資訊,加 以解碼;和 Φ 第2解碼部,係用以將用來表示把前記局部解碼影像 或前記復原影像當作參照影像來使用之意旨的特定資訊, 加以解碼;和 保存部,係基於前記特定資訊,而將前記解碼影像或 前記復原影像當作參照影像而保存在記憶體中。 _ 19. 一種動畫編碼方法,係屬於將已編碼之影像當作 參照影像而使用於下個進行編碼之影像之預測上的動畫編 碼方法,其特徵爲,具備: 〇 特定資訊編碼步驟,係用以將特定資訊加以編碼,其 係用來表示是否把對已編碼之影像的局部解碼影像適用濾 鏡所生成之復原影像當作前記參照影像來使用,或者是否 把前記局部解碼影像當作參照影像來使用;和 保存步驟,係基於前記特定資訊,而將前記局部解碼 影像或前記復原影像當作參照影像而保存在記億體中。 2 0 .如申請專利範圍第1 9項所記載之動畫編碼方法, 其中,前記特定資訊編碼步驟,係爲了將前記特定資訊按 照畫格內的每一局部領域而加以設定並進行編碼,而含有 -83- 200948092 將決定前記局部領域之大小用的領域設定資訊加以編碼之 步驟。 21. 如申請專利範圍第19項所記載之動畫編碼方法, 其中,前記特定資訊編碼步驟,係爲了將前記特定資訊按 照畫格內的每一局部領域進行編碼,而含有基於前記特定 資訊來設定前記濾鏡的濾鏡係數資訊之步驟。 22. 如申請專利範圍第19項所記載之動畫編碼方法, 其中,前記特定資訊編碼步驟,係將前記領域設定資訊, 依照畫格或畫格內的每一局部領域而加以編碼。 23. 如申請專利範圍第20項或第22項所記載之動畫 編碼方法,其中,前記領域設定資訊,係用來表示前記局 部領域之大小的値。 24. 如申請專利範圍第20項或第22項所記載之動畫 編碼方法,其中,前記領域設定資訊,係用來表示將所定 大小之局部領域內部予以分割所需之分割方法。 25. 如申請專利範圍第24項所記載之動畫編碼方法, 其中,前記領域設定資訊係用來表示,依照預先備妥之領 域尺寸規定表所決定的前記局部領域之大小或是表示前記 分割方法的指數。 26. 如申請專利範圍第25項所記載之動畫編碼方法, 其中,準備複數個前記領域尺寸規定表,基於影像尺寸、 圖像類型及用來規定量化粗細度之量化參數的1者以上, 來切換之。 27. —種動畫解碼方法,係屬於將已解碼之影像當作 -84- 200948092 參照影像而使用於下個進行解碼之影像之預測上的動畫解 碼方法,其特徵爲,含有: * 特定資訊解碼步驟,係用以將特定資訊加以解碼’其 - 係用來表示是否把對解碼影像適用濾鏡所生成之復原影像 當作參照影像來使用’或者是否把前記解碼影像當作參照 影像來使用;和 保存步驟,係基於前記特定資訊,而將前記解碼影像 0 或前記復原影像當作參照影像而保存在記憶體中。 28. 如申請專利範圍第27項所記載之動畫解碼方法, 其中,前記特定資訊解碼步驟,係爲了將前記特定資訊予 以解碼並按照畫格內的每一局部領域而加以設定,而含有 領域設定資訊解碼步驟,其係用以將決定前記局部領域之 大小用的領域設定資訊加以解碼。 29. 如申請專利範圍第27項所記載之動畫解碼方法, 其中,前記領域設定資訊解碼步驟,係將前記領域設定資 〇 訊,依照畫格或畫格內的每一局部領域而加以解碼。 30·如申請專利範圍第28項或第29項所記載之動畫 解碼方法,其中’前記領域設定資訊,係用來表示前記局 部領域之大小的値。 3 1.如申請專利範圍第28項或第29項所記載之動畫 解碼方法,其中’前記領域設定資訊,係用來表示將所定 大小之局部領域內部予以分割所需之分割方法。 32.如申請專利圍第28項或第29項所記載之動書 解碼方法’其中’ BUg己領域設定資訊係爲,依照預先備妥 -85- 200948092 之領域尺寸規定表所決定的前記領域之大小或是表示前記 分割方法的指數。 33. 如申請專利範圍第32項所記載之動畫解碼方法, 其中,準備複數個前記領域尺寸規定表’基於影像尺寸、 圖像類型、用來規定量化粗細度之量化參數的1者以上’ 來切換之。 34. —種動畫編碼方法,其特徵爲,對申請項i、3、5 、13、15、19、20、21、22、及24之任1項所記載之已 編碼之局部解碼影像’適用濾鏡以生成復原影像的步驟’ 係 含有:判定局部解碼影像是否爲區塊交界之步驟;和 對於已將前記局部解碼影像判定爲區塊交界的像素’進行 用來去除區塊失真的濾鏡處理之步驟。 3 5.—種動畫解碼方法,其特徵爲,對申請項2、4、6 、9、27、28及29之任1項所記載之解碼影像’適用濾鏡 以生成復原影像的步驟,係 含有:判定局部解碼影像是否爲區塊交界之步驟;和 對於已將前記局部解碼影像判定爲區塊交界的像素’進行 用來去除區塊失真的濾鏡處理之步驟。 36.—種動畫編碼方法,係屬於將已編碼之影像當作 參照影像而使用於運動補償預測上的動畫編碼方法’其特 徵爲,具備: 對已編碼之影像的局部解碼影像,適用濾鏡’以生成 復原影像之步驟;和 -86- 200948092 將前記濾鏡的濾鏡係數資訊加以設定之步驟;和 將前記濾鏡係數資訊加以編碼之步驟;和 ' 將用來表示把前記局部解碼影像或前記復原影像當作 • 參照影像來使用之意旨的特定資訊,加以編碼之步驟;和 基於前記特定資訊,而將前記局部解碼影像或前記復 原影像當作參照影像而在運動補償預測時加以使用之步驟 0 ❹ 37.—種動畫解碼方法,係屬於將已解碼之影像當作 參照影像而使用於運動補償預測上的動畫解碼方法,其特 徵爲,具備: 對解碼影像適用濾鏡以生成復原影像之步驟;和 將前記濾鏡的濾鏡係數資訊加以解碼之步驟;和 將用來表示把前記局部解碼影像或前記復原影像當作 參照影像來使用之意旨的特定資訊,加以解碼之步驟;和 基於前記特定資訊,而將前記解碼影像或前記復原影 © 像當作參照影像而在運動補償預測時加以使用之步驟。 38.如申請專利範圍第20項或第22項所記載之動畫 編碼方法,其中,前記領域設定資訊,係用來將前記局部 領域內部,按照階層越深則分割成越小領域所需之階層分 割資訊。 39·如申請專利範圍第38項所記載之動畫編碼方法, 其中,前記階層分割資訊,係爲用來表示是否要將各階層 中的每個領域進行分割的領域分割資訊。 4〇.如申請專利範圍第38項所記載之動畫編碼方法, -87- 200948092 其中,前記階層分割資訊’係含有最大階層資訊’其係用 來表示階層之深度之最大値。 4 1.如申請專利範圍第28項或第29項所記載之動畫 解碼方法,其中,前記領域設定資訊,係用來將前記局部 領域’按照階層越深則分割成越小領域所需之階層分割資 訊。 4 2.如申請專利範圍第41項所記載之動畫解碼方法’ 其中,前記階層分割資訊,係爲用來表示是否要將各階層 中的每個領域進行分割的領域分割資訊。 4 3.如申請專利範圍第41項所記載之動畫解碼方法’ 其中,前記階層分割資訊’係含有最大階層資訊’其係用 $表示階層之深度之最大値。 -88-200948092 VII. Patent application scope: 1. An animation coding method belongs to an animation coding method for predicting an image to be encoded by using the encoded image as a reference image, which is characterized by: a step of locally decoding the image of the encoded image 'applicable filter' to generate a reconstructed image; and a step of setting filter coefficient information of the predescriptor filter; and encoding the preamble filter coefficient information; and a step of encoding the specific information of the pre-recorded local decoded image or the pre-recovered image as a reference image, and encoding the pre-recorded local decoded image or the pre-recovered image as a reference image based on the pre-recorded specific information. The steps in the memory. 2. An animation decoding method, which is an animation decoding method for predicting an image to be decoded by using a decoded image as a reference image, and is characterized in that: a filter is applied to the decoded image to generate a step of restoring the image; and a step of decoding the filter coefficient information of the pre-filter; and decoding the specific information to be used as the reference image to use the previously decoded image or the pre-recovered image; and The step of storing the pre-decoded image or the pre-recovered image as a reference image in the memory based on the pre-recorded specific information. 3. An animation coding method, which is an animation coding method for predicting an image to be encoded by using a coded image as a reference image, and is characterized in that it has: -78- 200948092 a locally decoded image of the image, a filter for generating a reconstructed image; and a step of setting a filter coefficient information of the front filter; and * a step of encoding the front filter coefficient information; and The step of encoding the specific information of the pre-recorded local decoded image or the pre-recovered image as the reference image, and encoding the pre-recorded local decoded image or the pre-recorded original image as the reference image based on the pre-recorded specific information The steps to use when predicting. 4. An animation decoding method, which is an animation decoding method for predicting an image to be decoded by using a decoded image as a reference image, and is characterized in that: a filter is applied to the decoded image to generate a restoration a step of decoding; and a step of decoding the filter coefficient information of the pre-filter; and decoding the specific information for indicating that the pre-recorded local decoded image or the pre-recovered image is used as the reference image; And 〇 based on the pre-recorded specific information, the pre-recorded decoded image or the pre-recorded restored image is used as a reference image and used in the prediction. 5. An animation coding method, which is an animation coding method for predicting an image to be encoded by using the encoded image as a reference image, and is characterized by: having: local decoding of the encoded image Image, applying a filter to generate a reconstructed image; and setting the filter coefficient information of the pre-filter; and encoding the pre-filter coefficient information; and -79-200948092 based on The pre-recorded partial decoded image or the pre-recovered image is used as a reference image to be used as a reference image, and the pre-recorded partial decoded image or the pre-recorded restored image is stored as a reference image in the memory. 6. An animation decoding method, which is an animation decoding method for predicting an image to be decoded by using a decoded image as a reference image, and is characterized in that: a filter is applied to the decoded image to generate a step of restoring the image; and decoding the filter coefficient information of the pre-filter; and decoding the pre-record based on the specific information used to indicate that the pre-recorded local decoded image or the pre-recovered image is used as the reference image The image or the pre-recovered image is stored as a reference image in the memory. 7. The animation encoding method according to the first or third aspect of the patent application, comprising: specific information for indicating that the pre-decoded video or the pre-reconstructed video is used as an output image on the decoding side. , the step of encoding on the encoding side. 8. The animation decoding method according to the second or fourth aspect of the patent application, wherein: the specific information for indicating that the pre-decoded video or the pre-reconstructed video is used as an output video is used for decoding And the step of outputting the pre-decoded image or the pre-recovered image as an output image based on the specific information of the object ij. 9. In the animation decoding method described in the sixth paragraph of the patent application, the -80-200948092 includes: a specific information for indicating that the pre-decoded video or the pre-reconstructed video is used as an output image. The step of generating; and the step of outputting the pre-decoded image or the pre-recovered image as an output image based on the pre-recorded specific information. 1 〇 · An animation coding method as described in any of items 1, 3, and 7 of the patent application, wherein the pre-recorded specific information is coded according to each partial field in the frame or the frame 。. 1 1 The animation decoding method according to any one of claims 2, 4, and 8, wherein the pre-recording specific information is decoded in accordance with each partial field in the frame or the frame. 12. The method of encoding an animation according to any one of claims 1, 3, and 7, wherein: the step of: calculating a difference between a pre-recorded local decoded image and a pre-recovered image and an input original image; And 〇 generates a step for indicating the pre-recorded specific information of the image having the smaller error as the reference image. The animation coding method described in claim 5, wherein the method includes: using an index that can be obtained from a previously recorded partial decoded image or a pre-recorded decoded image, that is, an absolute sum of difference from a peripheral pixel, and a difference squared Any of the above, activity, spatial frequency, edge strength, and edge direction to generate the pre-recorded specific information. 14. The animation decoding method according to claim 6 or claim 9, wherein the method includes: using an indicator that can be obtained from a previously decoded partial decoded image or a pre-recorded -81 - 200948092 decoded image, that is, a peripheral pixel The steps of generating the pre-recorded specific information by using either absolute difference sum, difference sum of squares, activity, spatial frequency, edge intensity, or edge direction 〇 15. The animation code described in item 5 of the patent application scope The method s includes the step of generating a pre-recorded specific information using any of a part of the encoded information, that is, a quantization parameter, a block size, a prediction mode, a motion vector, and a conversion coefficient information. 16. The animation decoding method as described in claim 6 or claim 9, wherein the method includes: using a part of the encoded information, that is, a quantization parameter, a block size, a prediction mode, a motion vector, and a conversion coefficient information. Above, to generate the steps of the pre-recorded specific information. 17. An animation coding apparatus, which is an animation coding apparatus that uses a coded video as a reference image and is used for prediction of a next coded image, and includes: a restoration video generation unit; For the locally decoded image of the encoded image, a filter is applied to generate the restored image; and a setting unit is used to set the filter coefficient information of the front filter; and a decoding unit is used to add the information of the front mirror coefficient The encoding unit and the encoding unit are configured to encode the specific information used to indicate that the pre-recorded local decoded image or the pre-recorded restored image is used as a reference image; and the storage unit is based on the pre-recorded specific information. The locally decoded image or the pre-recovered image is stored as a reference image in the memory. -82- 200948092 18--Animated decoding device is an animation decoding device belonging to the prediction that the decoded image is used as a reference image for the next decoded image, and is characterized by: 'Reconstructed image generation a portion for applying a filter to the decoded image to generate a restored image; and a first decoding unit for decoding filter coefficient information of the pre-filter; and a second decoding unit for use in the second decoding unit The specific information for the purpose of using the pre-recorded partial decoded image or the pre-recovered restored image as a reference image is decoded, and the storage unit uses the pre-recorded decoded image or the pre-reposted restored image as the reference image based on the pre-recorded specific information. Save in memory. _ 19. An animation coding method is an animation coding method for predicting an image to be encoded by using the encoded image as a reference image, and is characterized in that: 〇 a specific information encoding step is used To encode specific information, which is used to indicate whether the restored image generated by applying the filter to the locally decoded image of the encoded image is used as a pre-referenced image, or whether the pre-recorded locally decoded image is used as a reference image. The use and storage steps are based on the pre-recorded specific information, and the pre-recorded local decoded image or the pre-recovered restored image is stored as a reference image in the cell. The animation coding method described in claim 19, wherein the pre-recording specific information coding step is to set and encode the pre-recorded specific information according to each partial field in the frame. -83- 200948092 The procedure for encoding the field setting information for the size of the local area beforehand is determined. 21. The animation encoding method as recited in claim 19, wherein the pre-recording specific information encoding step is configured to encode the pre-recorded specific information according to each partial field in the frame, and includes setting based on the pre-recorded specific information. The steps for the filter coefficient information of the pre-filter. 22. The animation encoding method as recited in claim 19, wherein the pre-recording specific information encoding step encodes the pre-recorded field information according to each partial field in the frame or the frame. 23. The method of encoding an animation as described in claim 20 or 22, wherein the pre-recorded field setting information is used to indicate the size of the field of the predecessor. 24. The method of encoding an animation as recited in claim 20 or 22, wherein the pre-recorded field setting information is used to indicate a segmentation method required to divide a local region of a predetermined size. 25. The method of encoding an animation according to claim 24, wherein the pre-recording field setting information is used to indicate a size of a pre-recorded partial field determined according to a pre-prepared domain size specification table or a pre-recording method. Index. 26. The animation coding method according to claim 25, wherein a plurality of pre-registration field size specification tables are prepared, based on one or more of an image size, an image type, and a quantization parameter for specifying a quantization thickness. Switch it. 27. An animation decoding method is an animation decoding method for predicting an image to be decoded by using the decoded image as a -84-200948092 reference image, which is characterized by: * specific information decoding The step is to decode the specific information, which is used to indicate whether the restored image generated by applying the filter to the decoded image is used as a reference image or whether the pre-recorded decoded image is used as a reference image; And the saving step is based on the pre-recorded specific information, and the pre-decoded image 0 or the pre-recovered image is stored as a reference image in the memory. 28. The animation decoding method according to claim 27, wherein the pre-recording specific information decoding step is to decode the pre-recorded specific information and set according to each partial field in the frame, and includes the domain setting. The information decoding step is used to decode the field setting information for determining the size of the pre-recorded local area. 29. The animation decoding method as recited in claim 27, wherein the pre-recording field setting information decoding step is to set the pre-recording field information to be decoded according to each partial field in the frame or the frame. 30. An animation decoding method as described in claim 28 or claim 29, wherein the pre-recording field setting information is used to indicate the size of the field of the preceding office. 3 1. The animation decoding method as described in claim 28 or claim 29, wherein the 'previous field setting information' is used to indicate a division method required to divide the internal area of the predetermined size. 32. If the method of decoding the mobile document described in the 28th or 29th paragraph of the patent application is 'the 'Bug field setting information', the pre-recorded field determined according to the pre-prepared field size specification table of -85- 200948092 Size or an index that represents the pre-segmentation method. 33. The animation decoding method according to claim 32, wherein a plurality of pre-registration field size specification tables are prepared based on one or more of image size, image type, and quantization parameter for specifying quantization thickness. Switch it. 34. An animation coding method, characterized in that the encoded local decoded image described in any one of the applications i, 3, 5, 13, 15, 19, 20, 21, 22, and 24 is applicable. The step of generating a reconstructed image by the filter includes: a step of determining whether the locally decoded image is a block boundary; and a filter for removing the block distortion for the pixel that has determined that the pre-recorded locally decoded image is a block boundary The steps of processing. 3 5. An animation decoding method, characterized in that the step of applying a filter to generate a reconstructed image for the decoded image described in any one of claims 2, 4, 6, 9, 27, 28, and 29 is And including: a step of determining whether the locally decoded image is a block boundary; and a step of performing filter processing for removing the block distortion for the pixel that has determined that the pre-recorded locally decoded image is a block boundary. 36. An animation coding method is an animation coding method used for motion compensation prediction using a coded image as a reference image. The feature is: a method for: locally decoding an image of the encoded image, applying a filter 'To generate a reconstructed image; and -86- 200948092 to set the filter coefficient information of the pre-filter; and to encode the pre-filter coefficient information; and 'will be used to represent the pre-recorded local decoded image Or the pre-recovery image is used as a specific information for the purpose of the reference image, and the step of encoding; and based on the pre-recorded specific information, the pre-recorded local decoded image or the pre-recovered image is used as the reference image for motion compensation prediction. Step 0 ❹ 37. The animation decoding method belongs to an animation decoding method for using a decoded image as a reference image for motion compensation prediction, and is characterized in that: a filter is applied to the decoded image to generate a restoration. The steps of the image; and the step of decoding the filter coefficient information of the pre-filter; and a step of decoding the specific information used to use the pre-recorded local decoded image or the pre-recovered image as a reference image, and decoding the pre-decoded image or the pre-recovered image as a reference image based on the pre-recorded specific information. The steps to be used in motion compensation prediction. 38. The animation coding method described in claim 20 or 22, wherein the pre-recorded field setting information is used to divide the pre-recorded local area into a smaller class according to the deeper the hierarchy. Split the information. 39. The animation coding method according to claim 38, wherein the pre-hierarchical segmentation information is field segmentation information indicating whether or not each domain in each hierarchy is to be divided. 4. The animation coding method described in claim 38, -87- 200948092 wherein the pre-hierarchical segmentation information ' contains the largest class information' which is used to indicate the maximum depth of the hierarchy. 4 1. The animation decoding method described in claim 28 or 29, wherein the pre-recording field setting information is used to divide the pre-recorded local area into the smaller required segments according to the deeper level of the hierarchy. Split the information. 4 2. The animation decoding method as described in claim 41, wherein the pre-hierarchical segmentation information is field segmentation information indicating whether or not each domain in each hierarchy is to be divided. 4 3. The animation decoding method as recited in claim 41, wherein the pre-hierarchical segmentation information' contains the largest class information' and the system uses $ to indicate the maximum depth of the hierarchy. -88-
TW098106980A 2008-03-07 2009-03-04 Dynamic image encoding/decoding method and device TW200948092A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008058375 2008-03-07
PCT/JP2008/073636 WO2009110160A1 (en) 2008-03-07 2008-12-25 Dynamic image encoding/decoding method and device

Publications (1)

Publication Number Publication Date
TW200948092A true TW200948092A (en) 2009-11-16

Family

ID=44870484

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098106980A TW200948092A (en) 2008-03-07 2009-03-04 Dynamic image encoding/decoding method and device

Country Status (1)

Country Link
TW (1) TW200948092A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI629893B (en) * 2012-01-06 2018-07-11 日商新力股份有限公司 Image processing device and method
TWI703855B (en) * 2011-11-25 2020-09-01 南韓商三星電子股份有限公司 Apparatus of decoding image

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI703855B (en) * 2011-11-25 2020-09-01 南韓商三星電子股份有限公司 Apparatus of decoding image
TWI629893B (en) * 2012-01-06 2018-07-11 日商新力股份有限公司 Image processing device and method

Similar Documents

Publication Publication Date Title
WO2009110160A1 (en) Dynamic image encoding/decoding method and device
JP6950055B2 (en) Video information coding / decoding method and equipment
US10455232B2 (en) Method and device for encoding/decoding image
CN106331709B (en) Method and apparatus for processing video using in-loop processing
JP6356346B2 (en) Deblock filtering for intra block copy
KR101677406B1 (en) Video codec architecture for next generation video
JP6165840B2 (en) Chroma slice level QP offset and deblocking
TWI751623B (en) Method and apparatus of cross-component adaptive loop filtering with virtual boundary for video coding
JP2018530246A (en) Improved video intra prediction using position-dependent prediction combinations for video coding
WO2010143583A1 (en) Image processing device and method
WO2010001614A1 (en) Video image encoding method, video image decoding method, video image encoding apparatus, video image decoding apparatus, program and integrated circuit
JP2017513342A (en) System and method for low complex forward transformation using zeroed out coefficients
JPWO2008120577A1 (en) Image encoding and decoding method and apparatus
KR20160075556A (en) Adaptive inter-color component residual prediction
JPWO2010021108A1 (en) Interpolation filtering method, image encoding method, image decoding method, interpolation filtering device, program, and integrated circuit
JPWO2009133844A1 (en) Video encoding / decoding method and apparatus having filtering function considering edge
EP3039871A1 (en) Residual prediction for intra block copying
KR20130119494A (en) Pixel level adaptive intra-smoothing
EP2774368A1 (en) Secondary boundary filtering for video coding
KR20140110771A (en) Method and apparatus for scalable video encoding using switchable de-noising filtering, Method and apparatus for scalable video decoding using switchable de-noising filtering
JP7195348B2 (en) Apparatus and method for filtering in video coding
KR20120010177A (en) Deblocking filtering method and apparatus, method and apparatus for encoding and decoding using deblocking filtering
WO2009133845A1 (en) Video encoding/decoding device and method
JP2017514353A (en) System and method for low-complex forward transformation using mesh-based computation
TW200948092A (en) Dynamic image encoding/decoding method and device