JP4763241B2 - Motion prediction information detection device - Google Patents

Motion prediction information detection device

Info

Publication number
JP4763241B2
Authority
JP
Japan
Prior art keywords
motion prediction
prediction information
weight information
derivation formula
weight
Prior art date
Legal status
Expired - Fee Related
Application number
JP2004021328A
Other languages
Japanese (ja)
Other versions
JP2005217746A (en)
Inventor
Haruhisa Kato
Yasuyuki Nakajima
Current Assignee
KDDI Corp
Original Assignee
KDDI Corp
Priority date
Filing date
Publication date
Application filed by KDDI Corp
Priority to JP2004021328A
Publication of JP2005217746A
Application granted
Publication of JP4763241B2
Anticipated expiration
Expired - Fee Related (current status)

Description

The present invention relates to a motion prediction information detection device, and more particularly to a motion prediction information detection device that, for weighted motion compensation, determines the weight information that minimizes the error of an approximate image with respect to a processing target image.

One coding scheme for encoding a continuously input moving-image signal is inter-frame predictive coding. In inter-frame predictive coding, motion prediction information is used to perform motion compensation with improved prediction efficiency based on temporal correlation.

In a fade scene, where the brightness changes over time, and in a cross-fade (dissolve) scene, where two scenes are blended, conventional simple motion compensation cannot compress efficiently, and several countermeasures have been proposed.

For example, Patent Documents 1 and 2 describe correction based on the frame-average luminance difference, and Patent Documents 3, 4, and 5 propose refinements of the coding process in fade sections that prevent degradation of coding efficiency in fade scenes by adaptively changing, respectively, the bit allocation, the encoder itself, and the error evaluation function of the motion search. Patent Document 6 proposes suppressing the generation of random motion vectors by forcing the motion vector to zero in scenes judged to be fades.

The international video compression standard MPEG-4 AVC/H.264 (ISO/IEC 14496-10) adopts a weighted motion compensation scheme in addition to the conventional motion compensation scheme, and Patent Document 7 shows that weighted motion compensation is effective for compressing fade scenes. However, since MPEG-4 AVC/H.264 (ISO/IEC 14496-10) does not specify how the weighting factors are to be determined, Patent Document 7 restricts the weighting factor to 0, 0.5, 1, or 2 and determines it from the transition of the frame-average difference.
Patent Document 1: JP-A-6-46412
Patent Document 2: JP-A-8-65684
Patent Document 3: JP-A-10-336641
Patent Document 4: JP-A-2001-510964
Patent Document 5: JP-A-2003-87795
Patent Document 6: JP-A-2002-51341
Patent Document 7: JP-A-2003-284075

The technique of Patent Document 1 requires additional information and is therefore incompatible with existing compression schemes, and the technique of Patent Document 2 processes each macroblock (MB) individually, which increases the amount of computation. Schemes based on the frame-average luminance difference may fail to work correctly when the global frame-average luminance difference does not match local changes. For example, when pixel values near the start or end of a fade would leave their representable range, clipping at the minimum or maximum value introduces an effect into the frame-average luminance difference that differs from the change caused by the fade itself.

The techniques of Patent Documents 3, 4, and 5 target simple motion compensation that assumes only translation, which inherently limits the achievable compression ratio. Patent Document 6 aims only at preventing wasteful information caused by erroneous motion vectors, which does not necessarily improve compression efficiency.

Furthermore, the technique of Patent Document 7 restricts the weighting factor, which should in principle be allowed to take any value, and therefore cannot handle non-linear fades that transition exponentially or logarithmically. Moreover, its method of determining the optimum weighting factor depends on fade detection based on the transition of the frame-average luminance, and thus suffers from the same problems as schemes based on the frame-average luminance difference.

An object of the present invention is to solve the above problems and to provide a motion prediction information detection device that can determine the weight information of an arbitrary weighted motion prediction scheme quickly and accurately.

To solve the above problems, the present invention is basically characterized by a motion prediction information detection device that determines weight information used to weight an approximate image motion-compensated from an arbitrary number (one or more) of reference images, improving the accuracy of the approximate image so as to minimize its error from the processing target image, the device comprising: derivation formula determination means that, given for each partial region the arbitrary number of reference images referred to by the processing target image, derives the derivation formulas needed to simultaneously determine the weight information for every reference image under the weighted compensation scheme corresponding to that number; and motion prediction information derivation means that determines the weight information for each partial region from computation results on combinations of pixel values of the processing target image and pixel values of the reference images according to the derivation formulas derived by the derivation formula determination means; wherein the weighted compensation scheme and the number of elements of the weight information are given arbitrarily, the derivation formula determination means derives in advance a plurality of derivation formulas needed to determine the weight information of the weighted compensation scheme corresponding to each number of reference images, and the motion prediction information derivation means selects, as appropriate, among the plurality of derivation formulas derived in advance to determine the weight information for each partial region.

According to the present invention, a derivation formula corresponding to the weighted motion compensation scheme is derived and the weight information is determined according to that formula, so that optimum weight information can be determined from the pixel values of the reference images and of the processing target image for any number of reference images, independently of the number of reference images used for weighted motion compensation. In addition, by devising the computation used to determine the weight information, the amount of calculation can be greatly reduced and the processing load lightened.

First, the principle of the present invention will be described. In the process of compressing each frame of a moving image, let P_c denote the processing target image. When the pixels of P_c are arranged in scanning order, the value of the i-th pixel is written p(i;c). The pixel values of an approximate image P'_c, reconstructed from an arbitrary reference image P_{c+s} located s frames away from P_c using conventional motion prediction information u_s, are then given by p(i+u_s; c+s).

On the other hand, with j = 1, 2, ..., m as a parameter, let p"(i;c) denote the pixel value transformed by the weight information w_j. The pixel values of the reconstructed image obtained by weighted motion compensation are then expressed by equation (1) using a weight function f( ); the weight function defines the weighted motion compensation scheme.

p"(i;c) = f(p(i+u_s; c+s), w_j)   (1)

When a plurality of reference images are used, with the pixel values p(i; c+r) of another reference image P_{c+r}, located r frames away from the processing target image P_c, and motion prediction information u_r, the pixel values of the reconstructed image obtained by weighted motion compensation are expressed by equation (2) using a weight function g( ).

p"(i;c) = g(p(i+u_s; c+s), p(i+u_r; c+r), w_j)   (2)

Determining the weight information means finding the weight information w_1, w_2, ..., w_m that makes the pixel values p"(i;c) obtained by the above procedure match the pixel values p(i;c) of the original image.

Therefore, as a measure of the similarity between the pixel values p(i;c) of the processing target image P_c and the weighted motion-compensated pixel values p"(i;c), the sum e of the squared errors of the two is defined by equation (3), and the w_j that minimizes e is taken as the optimum weight information.

e = Σ_i { p(i;c) − p"(i;c) }^2   (3)

For the weight information w_j to minimize the sum of squared errors e, each of the partial derivatives of e with respect to the individual weight information w_j, expression (4), must be zero.

∂e/∂w_j = 0   (j = 1, 2, ..., m)   (4)

Here, abbreviating p_0i = p(i;c), p_1i = f(p(i+u_s;c+s), w_j), and p_gi = g(p(i+u_s;c+s), p(i+u_r;c+r), w_j), expression (4) is written as equation (5) or equation (6).

Σ_i (∂p_1i/∂w_j) (p_0i − p_1i) = 0   (5)

Σ_i (∂p_gi/∂w_j) (p_0i − p_gi) = 0   (6)

In general, the weight functions f( ) and g( ) are linear in the unknown weight information w_j, and there are m constraint equations for the m unknowns w_j, so the weight information w_j can be derived algebraically.

Equation (5) or (6) is the derivation formula for the weight prediction information w_j: the sum of the products of the term obtained by partially differentiating the weight function with respect to the weight information and the error-component term of the approximate image obtained by weighted motion compensation equals zero.

Equation (5) or (6) can be separated, for example, into terms multiplied by the unknown weight information w_j and terms that are not. Define ⟨A⟩ as the matrix whose elements are the sums obtained by substituting the individual pixel values into the products of the derivatives of the weight function f( ) (or g( )) and the weight function itself, and ⟨b⟩ as the vector whose elements are the sums obtained by substituting the individual pixel values into the products of the derivatives of the weight function f( ) (or g( )) and the pixels to be processed. The weight information ⟨w⟩ can then be obtained as ⟨A⟩^(-1)⟨b⟩. Lowercase letters in ⟨ ⟩ denote vectors and uppercase letters in ⟨ ⟩ denote matrices.

Next, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a flowchart showing the processing procedure in the motion prediction information detection device according to the present invention, based on the above principle.

The processing procedure of this example consists of a step (S1) of computing the matrix ⟨A⟩ formed from sums of products of the derivatives of the weight function and the weight function itself, a step (S2) of computing the vector ⟨b⟩ formed from sums of products of the derivatives of the weight function and the pixel values, and a step (S3) of computing the weight information ⟨w⟩.

In S1, the matrix ⟨A⟩ is computed, whose elements are the sums obtained by substituting the individual pixel values into the products of the derivatives of the weight function f( ) and the weight function f( ) itself, or of the derivatives of the weight function g( ) and the weight function g( ) itself.

In S2, the vector ⟨b⟩ is computed, whose elements are the sums obtained by substituting the individual pixel values into the products of the derivatives of the weight function f( ) and the pixels to be processed, or of the derivatives of the weight function g( ) and the pixels to be processed. The pixels processed in S1 and S2 need not be all the pixels of the processing target image; a subset may be used. The weight function and its parameters are arbitrary, and the matrices ⟨A⟩ and vectors ⟨b⟩ for a plurality of weight functions and parameters may be computed in advance so that they can be selected as appropriate.

In S3, ⟨A⟩^(-1)⟨b⟩ is computed from the matrix ⟨A⟩ and the vector ⟨b⟩ obtained in S1 and S2 to obtain the weight information ⟨w⟩.
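A minimal sketch of steps S1-S3 for a linear weight function (of the form used in equations (8) and (19)), assuming the motion-compensated pixel values from each reference image have already been gathered into arrays; the helper name and NumPy usage are illustrative, not part of the patent.

```python
import numpy as np

def estimate_weights(p0, refs):
    """Steps S1-S3 for a linear weight function p'' = w_1*p_1 + ... + w_k*p_k + offset.

    p0   -- 1-D array of processing-target pixel values p_0i
    refs -- list of 1-D arrays of motion-compensated pixel values, one per reference image
    """
    p0 = np.asarray(p0, dtype=float)
    # One column per reference image plus a constant column for the offset term.
    X = np.column_stack([np.asarray(r, dtype=float) for r in refs] + [np.ones(p0.size)])
    A = X.T @ X    # S1: matrix <A> of sums of products (derivative of f times f itself)
    b = X.T @ p0   # S2: vector <b> of sums of products (derivative of f times target pixels)
    return np.linalg.solve(A, b)   # S3: w = <A>^(-1) <b>, solved without forming the inverse
```

With one reference array this reproduces the 2x2 system of equation (11); with two, the 3x3 system of equation (22).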

Next, the procedure for determining the weight information using reference images will be described concretely. The cases of one and of two reference images are described below, but the number of reference images is arbitrary, and the same procedure applies in any case.

First, the case of a single reference image will be described. Abbreviating the motion-compensated pixel values of an arbitrary reference image P_{c+s} of the processing target image P_c as in equation (7), the pixel values p"(i;c) obtained by weighted motion compensation are given by equation (8) using the weight function f( ). Here the weight information is defined as a weighting coefficient w_1 and an offset coefficient w_2.

p_1i = p(i+u_s; c+s)   (7)

p"(i;c) = f(p_1i, w_j)
        = w_1 p_1i + w_2   (1 ≤ i ≤ n)   (8)

At this time, the sum e of the squared errors between the pixel values p_0i of the processing target image P_c and the pixel values obtained by weighted motion compensation is expressed by equation (9), where n is the number of pixels to be processed; the processed pixels are taken from a part of the processing target image.

e = Σ_{i=1}^{n} (p_0i − w_1 p_1i − w_2)^2   (9)

The partial derivatives of the weight function f( ) with respect to the weight information w_j are given by equation (10), and the partial derivatives of the sum of squared errors e with respect to the unknowns (w_1, w_2) are given by equation (11). In the coefficients of equation (11), [ ] denotes the Gaussian sum notation; for example, [p_0] denotes the sum of the n pixel values p_0i, as expressed by equation (12).

Equation (11) is a system of simultaneous linear equations whose coefficients are the sum of the pixel values of the processing target image, the sum and the sum of squares of the pixel values of the reference image, the sum of products of the pixel values of the two images, and the number of processed pixels; solving it yields the weight information w_j.

∂f/∂w_1 = p_1i,   ∂f/∂w_2 = 1   (10)

[p_1 p_1] w_1 + [p_1] w_2 − [p_0 p_1] = 0
[p_1] w_1 + n w_2 − [p_0] = 0   (11)

[p_0] = Σ_{i=1}^{n} p_0i   (12)

Writing ⟨w⟩ = (w_1, w_2)^t and expressing equation (11) in vector and matrix form gives equation (13).

⟨A_1⟩⟨w⟩ − ⟨b_1⟩ = ⟨0⟩   (13)

Here, the matrix ⟨A_1⟩ is given by equation (14) and the vector ⟨b_1⟩ by equation (15).

⟨A_1⟩ = ( [p_1 p_1]  [p_1]
          [p_1]      n    )   (14)

⟨b_1⟩ = ( [p_0 p_1],  [p_0] )^t   (15)

As is clear from equations (14) and (15), each element of the matrix ⟨A_1⟩ and the vector ⟨b_1⟩ can be obtained from the individual pixel values p_0i of the processing target image P_c, the individual pixel values p_1i obtained from the reference image by motion compensation, their products, and the number n of processed pixels. The weight information vector ⟨w⟩ = (w_1, w_2)^t for a single reference image is then obtained by computing ⟨A_1⟩^(-1)⟨b_1⟩.

Since ⟨A_1⟩ is a symmetric matrix, the weight information vector ⟨w⟩ can also be obtained without computing the inverse ⟨A_1⟩^(-1): ⟨A_1⟩ is factored into a lower triangular matrix and an upper triangular matrix by Cholesky decomposition, and the resulting triangular systems are solved against ⟨b_1⟩ by substitution, step by step.

The lower triangular matrix ⟨L_1⟩ is computed from ⟨A_1⟩ = ⟨L_1⟩⟨L_1⟩^t, so ⟨A_1⟩⟨w⟩ − ⟨b_1⟩ = ⟨0⟩ can be rewritten as equation (16). First ⟨L_1⟩^t⟨w⟩ is obtained by forward substitution, and then ⟨w⟩ is obtained by back substitution.

⟨L_1⟩(⟨L_1⟩^t⟨w⟩) = ⟨b_1⟩   (16)
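A sketch of this Cholesky route, under the assumption that NumPy and SciPy are available (the function name is illustrative): the system ⟨A⟩⟨w⟩ = ⟨b⟩ is solved with one factorization and two triangular substitutions, avoiding the explicit inverse.

```python
import numpy as np
from scipy.linalg import solve_triangular

def solve_weights_cholesky(A, b):
    # A: symmetric positive-definite coefficient matrix <A_1> (or <A_2>); b: vector <b_1> (or <b_2>).
    L = np.linalg.cholesky(A)                  # A = L L^t, with L lower triangular
    y = solve_triangular(L, b, lower=True)     # forward substitution: L y = b, so y = L^t w
    w = solve_triangular(L.T, y, lower=False)  # back substitution: L^t w = y
    return w
```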

Alternatively, solving equation (11) or (13) algebraically yields equation (17), which can also be used to obtain the weight information w_j.

w_1 = ( n [p_0 p_1] − [p_0][p_1] ) / ( n [p_1 p_1] − [p_1]^2 )
w_2 = ( [p_0][p_1 p_1] − [p_0 p_1][p_1] ) / ( n [p_1 p_1] − [p_1]^2 )   (17)
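A direct computation of the single-reference weights from the five sums, following the closed form of equation (17); a sketch only, with illustrative variable names.

```python
import numpy as np

def weights_one_reference(p0, p1):
    # p0: target pixel values p_0i; p1: motion-compensated reference pixel values p_1i.
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    n   = p0.size
    s0  = p0.sum()          # [p0]
    s1  = p1.sum()          # [p1]
    s11 = (p1 * p1).sum()   # [p1 p1]
    s01 = (p0 * p1).sum()   # [p0 p1]
    det = n * s11 - s1 * s1           # determinant of <A_1>
    w1 = (n * s01 - s0 * s1) / det
    w2 = (s0 * s11 - s01 * s1) / det
    return w1, w2
```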

Next, the case of two reference images will be described. Abbreviating the motion-compensated pixel values p_1i and p_2i of arbitrary reference images P_{c+s} and P_{c+r} of the processing target image P_c as in equation (18), the pixel values p"(i;c) obtained by weighted motion compensation are given by equation (19) using the weight function g( ). Here the weight information is defined as weighting coefficients w_1 and w_2 and an offset coefficient w_3.

p_1i = p(i+u_s; c+s)
p_2i = p(i+u_r; c+r)   (18)

p"(i;c) = g(p_1i, p_2i, w_j)
        = w_1 p_1i + w_2 p_2i + w_3   (1 ≤ i ≤ n)   (19)

At this time, the sum e of the squared errors between the pixel values p_0i of the processing target image P_c and the pixel values obtained by weighted motion compensation is expressed by equation (20), where n is the number of pixels to be processed; the processed pixels consist of all the pixels of the processing target image or a subset of them.

e = Σ_{i=1}^{n} (p_0i − w_1 p_1i − w_2 p_2i − w_3)^2   (20)

The partial derivatives of the weight function g( ) with respect to the weight information w_j are given by equation (21). The partial derivatives of the sum of squared errors e with respect to the unknowns (w_1, w_2, w_3) are therefore given by equation (22). In the coefficients of equation (22), [ ] denotes the Gaussian sum notation; for example, [p_0] denotes the sum of the n pixel values p_0i, as expressed by equation (23).

Equation (22) is a system of simultaneous linear equations whose coefficients are the sum of the pixel values of the processing target image, the sums and sums of squares of the pixel values of the reference images, the sums of products of the pixel values of pairs of images, and the number of processed pixels; solving it yields the weight information w_j.

∂g/∂w_1 = p_1i,   ∂g/∂w_2 = p_2i,   ∂g/∂w_3 = 1   (21)

[p_1 p_1] w_1 + [p_1 p_2] w_2 + [p_1] w_3 − [p_0 p_1] = 0
[p_1 p_2] w_1 + [p_2 p_2] w_2 + [p_2] w_3 − [p_0 p_2] = 0
[p_1] w_1 + [p_2] w_2 + n w_3 − [p_0] = 0   (22)

[p_0] = Σ_{i=1}^{n} p_0i   (23)

Writing ⟨w⟩ = (w_1, w_2, w_3)^t and expressing equation (22) in vector and matrix form gives equation (24).

⟨A_2⟩⟨w⟩ − ⟨b_2⟩ = ⟨0⟩   (24)

Here, the matrix ⟨A_2⟩ is given by equation (25) and the vector ⟨b_2⟩ by equation (26).

⟨A_2⟩ = ( [p_1 p_1]  [p_1 p_2]  [p_1]
          [p_1 p_2]  [p_2 p_2]  [p_2]
          [p_1]      [p_2]      n    )   (25)

⟨b_2⟩ = ( [p_0 p_1],  [p_0 p_2],  [p_0] )^t   (26)

As is clear from equations (25) and (26), each element of the matrix ⟨A_2⟩ and the vector ⟨b_2⟩ can be obtained from the individual pixel values p_0i of the processing target image P_c, the individual pixel values p_1i and p_2i obtained from the reference images by motion compensation, the products of these pixel values, and the number n of processed pixels. The weight information vector ⟨w⟩ = (w_1, w_2, w_3)^t for two reference images is then obtained by computing ⟨A_2⟩^(-1)⟨b_2⟩.
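A sketch of the two-reference case (illustrative names, assuming NumPy): the sums of equations (25) and (26) are assembled into ⟨A_2⟩ and ⟨b_2⟩ and the 3x3 system is solved.

```python
import numpy as np

def weights_two_references(p0, p1, p2):
    # p0: target pixel values; p1, p2: motion-compensated pixel values from the two reference images.
    p0, p1, p2 = (np.asarray(a, dtype=float) for a in (p0, p1, p2))
    n = p0.size
    A2 = np.array([[(p1 * p1).sum(), (p1 * p2).sum(), p1.sum()],
                   [(p1 * p2).sum(), (p2 * p2).sum(), p2.sum()],
                   [p1.sum(),        p2.sum(),        n       ]])
    b2 = np.array([(p0 * p1).sum(), (p0 * p2).sum(), p0.sum()])
    return np.linalg.solve(A2, b2)   # (w1, w2, w3)
```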

Since ⟨A_2⟩ is a symmetric matrix, the weight information vector ⟨w⟩ can also be obtained without computing the inverse ⟨A_2⟩^(-1): ⟨A_2⟩ is factored into a lower triangular matrix and an upper triangular matrix by Cholesky decomposition, and the resulting triangular systems are solved against ⟨b_2⟩ by substitution, step by step.

The lower triangular matrix ⟨L_2⟩ is computed from ⟨A_2⟩ = ⟨L_2⟩⟨L_2⟩^t, so ⟨A_2⟩⟨w⟩ − ⟨b_2⟩ = ⟨0⟩ can be rewritten as equation (27). First ⟨L_2⟩^t⟨w⟩ is obtained by forward substitution, and then ⟨w⟩ is obtained by back substitution.

⟨L_2⟩(⟨L_2⟩^t⟨w⟩) = ⟨b_2⟩   (27)

Alternatively, solving equation (22) or (24) algebraically yields equation (28), which can also be used to obtain the weight information w_j.

(Equation (28): the closed-form expressions for w_1, w_2, and w_3 obtained by solving the 3x3 system of equation (22) or (24) algebraically, i.e., by Cramer's rule, in terms of n, [p_0], [p_1], [p_2], [p_1 p_1], [p_2 p_2], [p_1 p_2], [p_0 p_1], and [p_0 p_2].)

According to the present invention, optimum weight information can be determined from the pixel values of the reference images and of the processing target image for any number of reference images, independently of the number of reference images used for weighted motion compensation. Using the weight information determined in this way, MPEG-4 AVC/H.264 can achieve better compression efficiency than conventional motion compensation. Moreover, since in fade and dissolve scenes the weight information determined by the present invention improves compression efficiency relative to conventional motion compensation, the present invention can also be applied to detecting fade and dissolve scenes by examining where the compression efficiency has improved.

FIG. 1 is a flowchart showing the processing procedure in the motion prediction information detection device according to the present invention.

Explanation of symbols

S1: step of computing the matrix ⟨A⟩; S2: step of computing the vector ⟨b⟩; S3: step of computing the weight information ⟨w⟩

Claims (8)

1. A motion prediction information detection device that determines weight information used to weight an approximate image motion-compensated from an arbitrary number (one or more) of reference images, the weight information improving the accuracy of the approximate image so as to minimize its error from a processing target image, the device comprising:
derivation formula determination means that, given for each partial region the arbitrary number of reference images referred to by the processing target image, derives the derivation formulas needed to simultaneously determine the weight information for every reference image under the weighted compensation scheme corresponding to that number; and
motion prediction information derivation means that determines the weight information for each partial region from computation results on combinations of pixel values of the processing target image and pixel values of the reference images, in accordance with the derivation formulas derived by the derivation formula determination means;
wherein the weighted compensation scheme and the number of elements of the weight information are given arbitrarily, the derivation formula determination means derives in advance a plurality of derivation formulas needed to determine the weight information of the weighted compensation scheme corresponding to each number of reference images, and the motion prediction information derivation means selects, as appropriate, among the plurality of derivation formulas derived in advance by the derivation formula determination means to determine the weight information for each partial region.

2. The motion prediction information detection device according to claim 1, wherein the derivation formula determination means uses, as the accuracy criterion of the weight information, the squared error between the pixel values of the processing target image and the pixel values obtained by weighted motion compensation.

3. The motion prediction information detection device according to claim 2, wherein the derivation formula determination means derives a plurality of derivation formulas in which the sum of the products of a term obtained by partially differentiating the weight function of the weighted compensation scheme with respect to the weight information and an error-component term of the approximate image obtained by weighted motion compensation equals zero.

4. The motion prediction information detection device according to claim 2, wherein the derivation formula determination means determines derivation formulas consisting of simultaneous linear equations whose coefficients are the sum of the pixel values of the processing target image, the sums and sums of squares of the pixel values of the reference images, the sums of products of the pixel values of any two of the processing target image and the reference images, and the number of pixels of the processing target image.

5. The motion prediction information detection device according to claim 4, wherein the motion prediction information derivation means comprises derivation means that determines the weight information by solving the derivation formulas configured as the simultaneous linear equations with those coefficients.

6. The motion prediction information detection device according to claim 1, wherein the derivation formula determination means derives the derivation formulas from the weight function of the weighted motion compensation scheme, and the motion prediction information derivation means computes the coefficients of the derivation formulas derived by the derivation formula determination means from actual pixel values and determines the weight information by solving the derivation formulas.

7. The motion prediction information detection device according to claim 6, wherein, as the solution of the derivation formulas, the motion prediction information derivation means determines the weight information by multiplying the inverse of the coefficient matrix, whose elements are the coefficients applied to the weight information in the derivation formulas, by the coefficient vector consisting of the remaining coefficients.

8. A motion prediction information detection device in which, in place of the motion prediction information derivation means of claim 7, the coefficient matrix is decomposed into a lower triangular matrix and an upper triangular matrix based on Cholesky decomposition, and the weight information is determined by applying substitution with the lower triangular matrix and substitution with the upper triangular matrix to the coefficient vector step by step, thereby omitting the computation of the inverse of the coefficient matrix.
JP2004021328A 2004-01-29 2004-01-29 Motion prediction information detection device Expired - Fee Related JP4763241B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004021328A JP4763241B2 (en) 2004-01-29 2004-01-29 Motion prediction information detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2004021328A JP4763241B2 (en) 2004-01-29 2004-01-29 Motion prediction information detection device

Publications (2)

Publication Number Publication Date
JP2005217746A (en) 2005-08-11
JP4763241B2 (en) 2011-08-31

Family

ID=34905003

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004021328A Expired - Fee Related JP4763241B2 (en) 2004-01-29 2004-01-29 Motion prediction information detection device

Country Status (1)

Country Link
JP (1) JP4763241B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005051091A1 (en) * 2005-10-25 2007-04-26 Siemens Ag Methods and apparatus for determining and reconstructing a predicted image area
JP2008219100A (en) * 2007-02-28 2008-09-18 Oki Electric Ind Co Ltd Predictive image generating device, method and program, and image encoding device, method and program
WO2008108372A1 (en) 2007-03-05 2008-09-12 Nec Corporation Weighted prediction information calculation method, device, program, dynamic image encoding method, device, and program

Also Published As

Publication number Publication date
JP2005217746A (en) 2005-08-11

Similar Documents

Publication Publication Date Title
US10448022B2 (en) Method and system to improve the performance of a video encoder
JP5281891B2 (en) Adaptive motion search range
CN101507277B (en) Image encoding/decoding method and apparatus
JP5714042B2 (en) Video information processing
EP1729521A2 (en) Intra prediction video encoding and decoding method and apparatus
US20060039470A1 (en) Adaptive motion estimation and mode decision apparatus and method for H.264 video codec
KR20090028441A (en) Coding tool selection in video coding based on human visual tolerance
JP4235162B2 (en) Image encoding apparatus, image encoding method, image encoding program, and computer-readable recording medium
EP2670143A1 (en) Video encoding device, video encoding method and video encoding program
JP6052319B2 (en) Video encoding device
Lan et al. Exploiting non-local correlation via signal-dependent transform (SDT)
KR100742762B1 (en) Detector for predicting of movement
Kim et al. Two-bit transform based block motion estimation using second derivatives
JP5178616B2 (en) Scene change detection device and video recording device
JP5639444B2 (en) Motion vector generation apparatus, motion vector generation method, and computer program
EP2187647A1 (en) Method and device for approximating a DC coefficient of a block of pixels of a frame
JP4763241B2 (en) Motion prediction information detection device
EP2587803A1 (en) Methods for coding and reconstructing a pixel block and corresponding devices.
KR100711196B1 (en) Detector for predicting of movement
Kim et al. An efficient inter-frame coding with intra skip decision in H. 264/AVC
RU2493668C1 (en) Method of encoding/decoding multi-view video sequence based on local adjustment of brightness and contract of reference frames without transmitting additional service data
Saeedi et al. Content adaptive pre-filtering for video compression
JP4819855B2 (en) Moving picture quantization method, moving picture quantization apparatus, moving picture quantization program, and computer-readable recording medium recording the program
Yang et al. Research on Video Quality Assessment.
KR20090084312A (en) Reliability evaluation and compensation method of motion vector

Legal Events

Date Code Title Description
2005-08-31  A621  Written request for application examination
2007-11-30  A977  Report on retrieval (intermediate code A971007)
2007-12-05  A131  Notification of reasons for refusal
2008-02-01  A521  Request for written amendment filed (intermediate code A523)
2008-02-27  A131  Notification of reasons for refusal
2008-04-22  A521  Request for written amendment filed (intermediate code A523)
2008-07-16  A02   Decision of refusal
2008-08-15  A521  Request for written amendment filed (intermediate code A523)
2008-09-26  A911  Transfer to examiner for re-examination before appeal (zenchi)
2009-03-19  A912  Re-examination (zenchi) completed and case transferred to appeal board
            A01   Written decision to grant a patent or to grant a registration (utility model)
2011-06-09  A61   First payment of annual fees (during grant procedure)
            FPAY  Renewal fee payment (payment until 2014-06-17; year of fee payment: 3)
            R150  Certificate of patent or registration of utility model (ref document number: 4763241; country: JP)
            LAPS  Cancellation because of no payment of annual fees