JP2008084171A - Method for improving sense of dynamic depth - Google Patents

Method for improving sense of dynamic depth

Info

Publication number
JP2008084171A
Authority
JP
Japan
Prior art keywords
moving object
depth
moving
image
background
Prior art date
Legal status
Pending
Application number
JP2006265566A
Other languages
Japanese (ja)
Inventor
Hiroyuki Sato
弘行 佐藤
Ritsuo Yoshida
律生 吉田
Current Assignee
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Priority to JP2006265566A priority Critical patent/JP2008084171A/en
Publication of JP2008084171A publication Critical patent/JP2008084171A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)
  • Studio Circuits (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a method for improving the sense of dynamic depth that detects a moving object within a frame by comparing a plurality of frames of a moving image and emphasizes the shadow around that moving object, thereby separating the moving object from the background and improving the sense of depth.
SOLUTION: When video is input, the current frame is compared with an internally retained previous frame, motion vectors are detected per pixel, and adjacent pixels having the same motion vector are determined to belong to the same moving object.
COPYRIGHT: (C)2008,JPO&INPIT

Description

The present invention relates to a method for improving the sense of dynamic depth, in which a moving object within a frame is detected by comparing a plurality of frames of a moving image and the shadow around that moving object is emphasized, thereby separating the moving object from the background and improving the sense of depth.

Conventionally, shading has been used to express the stereoscopic effect of still images, for example to shade buildings or grade-separated roads so that they appear three-dimensional.

For example, Patent Document 1 describes such methods.
Patent Document 1: JP 2003-141555 A

However, the conventional techniques described above do not present a sense of depth by separating a moving object from the background. Note that shading generally refers to a technique of adding shadows to a model based on the light source, the shape of the model, and the like.

The present invention has been made in view of the above circumstances, and its object is to provide a method for improving the sense of dynamic depth in which emphasizing shadows per moving object, rather than per pixel or per block, separates the moving object from the background and thereby improves the sense of depth.

To solve the above problem, in the method for improving the sense of dynamic depth according to claim 1, a plurality of video frames constituting a moving image composed of a background image and a moving object are compared, the moving object within the video frames is detected, and the shadow at the boundary between the detected moving object and the background image is emphasized, whereby the moving object is displayed three-dimensionally against the background image and the sense of depth of the moving image is improved.

Further, in the method for improving the sense of dynamic depth according to claim 2, the emphasis processing is applied to the portion surrounding the moving object.

Further, in the method for improving the sense of dynamic depth according to claim 3, the emphasis processing is applied to the inner portion of the moving object.

According to the present invention, by focusing on a moving object in the video, separating it from the background, and emphasizing shadows (changing luminance), the moving object is made to stand out from the picture and a sense of depth can be provided.

FIG. 1 is a conceptual diagram of a conventionally displayed frame. Here 101 is the background of the frame, 102 is the target moving object, and 103 is a graph of the luminance values of the frame. For comparison, the stepwise shading shown in FIG. 2 is not applied.

FIG. 2 is a conceptual diagram of the present invention. As with 101, 201 is the background of the frame, and as with 102, 202 is the moving object. By changing the luminance of the background 204 adjacent to 202 more steeply than that of 201, the shadow is emphasized and the moving object 202 is separated from the background 201, which improves the sense of depth. In FIG. 2 the background 204 schematically shows the shading in discrete steps, but in practice it may be rendered as the gradual transition shown by the graph 203.

FIG. 3 shows an example of the operation flow of the present invention. This embodiment assumes a single target moving object within the frame. The present invention is not limited to a single moving object: when several moving objects are present, they can be selected and detected by means such as setting a threshold on the motion vector, and processing similar to this flowchart is possible.

In this flowchart, a video input is first received (S1), the current frame is compared with the immediately preceding frame held internally (S2), motion vectors are detected, for example per pixel, and adjacent pixels having the same motion vector are judged to belong to the same moving object, thereby detecting moving objects (S3).
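As one illustration of steps S1 to S3, the sketch below groups adjacent pixels that share the same motion vector into moving-object regions. It assumes a per-pixel motion-vector field has already been estimated between the previous and current frames (for example by block matching); the function name, array layout, and the min_area threshold are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.ndimage import label  # connected-component labelling

def detect_moving_objects(motion_field: np.ndarray, min_area: int = 16):
    """Group adjacent pixels sharing the same motion vector into objects (S3).

    motion_field: (H, W, 2) integer array of per-pixel (dy, dx) vectors
                  estimated between the previous and current frames (S1, S2).
    Returns a list of boolean masks, one per detected moving object.
    """
    objects = []
    moving = np.any(motion_field != 0, axis=-1)        # pixels that moved at all
    # One connected-component pass per distinct motion vector value.
    for vec in np.unique(motion_field[moving], axis=0):
        same_vec = moving & np.all(motion_field == vec, axis=-1)
        labels, n = label(same_vec)                     # 4-connected regions
        for i in range(1, n + 1):
            mask = labels == i
            if mask.sum() >= min_area:                  # ignore tiny, noisy regions
                objects.append(mask)
    return objects
```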

When moving-object detection is complete, the moving object and its surrounding background are separated and their respective luminance values are measured (S4). Next, a luminance value for shadow emphasis is chosen arbitrarily (S5), depth-enhancement processing that separates the moving object from the background, as in the example below, is performed (S6), and the video is displayed (S7).
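A minimal sketch of the overall flow S1 to S7, assuming the grouping function above and treating the remaining helpers (estimate_motion, measure_luminance, choose_shadow_level, enhance_boundary) as hypothetical stand-ins for steps S2 and S4 to S6:

```python
def process_stream(frames, display):
    """Apply the S1-S7 flow to a frame sequence (helper names are hypothetical)."""
    prev = None
    for frame in frames:                                        # S1: receive video input
        if prev is not None:
            field = estimate_motion(prev, frame)                # S2: compare with previous frame
            for mask in detect_moving_objects(field):           # S3: moving-object detection
                y_obj, y_bg = measure_luminance(frame, mask)    # S4: object/background luminance
                y_shadow = choose_shadow_level(y_obj, y_bg)     # S5: pick the emphasis luminance
                frame = enhance_boundary(frame, mask, y_shadow) # S6: depth-enhancement processing
        display(frame)                                          # S7: display the result
        prev = frame
```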

An example of the depth-enhancement processing is described with reference to FIG. 2. In this example the moving object and the background are assumed to be already separated, and a drawn circle is used as the moving object; however, the present invention is applicable not only to rendered content such as CGI but also to video such as TV broadcasts.

As an example, as shown in FIG. 2, darkening the area 204 around 202 to emphasize the shadow makes 202 appear raised above 201. Expressing this luminance as a graph gives 203; compared with 103, the portion 205 can be seen to drop steeply, emphasizing the shadow.

As an example of a function expressing this, let Y0 be the luminance value of 202, Y1 the luminance value of 201, Y2 the shadow-emphasis luminance value of 204, r the radius of 202, (x, y) the center coordinates of 202, α the shadow density, and β the shadow steepness, with d denoting the distance from the center (x, y). The conditions on f(d) are as follows.

(1) As d grows toward ∞, f(d) = Y1.
(2) As d approaches r from ∞, f(d) = Y2.
These conditions are satisfied by
f(d) = Y1 (1 + ((α − 1) + (1 − α)(Y0/Y1)) (r/d)^β)   (r ≤ d).
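The sketch below implements f(d) as written above and applies it to the background around a circular moving object of radius r centered at (cx, cy), assuming, as in FIG. 2, a uniform background of luminance Y1. The default values for α and β are illustrative; the patent leaves both arbitrary. At the boundary d = r the formula reduces to Y2 = αY1 + (1 − α)Y0, and far from the object it returns to Y1, which corresponds to the steep drop 205 in graph 203.

```python
import numpy as np

def shadow_luminance(d, Y0, Y1, r, alpha, beta):
    """f(d): background luminance at distance d >= r from the object center."""
    d = np.asarray(d, dtype=float)
    return Y1 * (1.0 + ((alpha - 1.0) + (1.0 - alpha) * (Y0 / Y1)) * (r / d) ** beta)

def apply_circular_shadow(luma, cx, cy, r, Y0, Y1, alpha=0.6, beta=4.0):
    """Darken the background around the circle: Y2 = alpha*Y1 + (1-alpha)*Y0 at d = r,
    recovering smoothly to Y1 far from the object (regions 204/205 in FIG. 2)."""
    h, w = luma.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(xx - cx, yy - cy)                     # distance from the object center
    out = luma.astype(float).copy()
    outside = d >= r                                   # background only, not the object itself
    out[outside] = shadow_luminance(d[outside], Y0, Y1, r, alpha, beta)
    return np.clip(out, 0, 255).astype(luma.dtype)
```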

While the example above emphasizes the shadow around the moving object, the sense of depth can likewise be improved by emphasizing the shadow on the inner side of the moving object using the same technique.
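The patent does not spell out a formula for the inner-side case; one plausible mirrored form, given here only as an assumption, keeps the object's own luminance Y0 at the center and blends toward the background luminance at the boundary:

```python
def inner_shadow_luminance(d, Y0, Y1, r, alpha, beta):
    """Assumed mirror of f(d) for 0 <= d <= r: equals Y0 at the center and
    alpha*Y0 + (1 - alpha)*Y1 at the boundary d = r."""
    return Y0 * (1.0 + ((alpha - 1.0) + (1.0 - alpha) * (Y1 / Y0)) * (d / r) ** beta)
```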

As described above, it is possible to provide a method for improving the sense of dynamic depth in which a moving object within a frame is detected by comparing a plurality of frames of a moving image and the shadow around that moving object is emphasized, thereby separating the moving object from the background and improving the sense of depth.

FIG. 1 is an explanatory diagram of a conventionally displayed frame.
FIG. 2 is an explanatory diagram showing the method for improving the sense of dynamic depth according to an embodiment of the present invention.
FIG. 3 is a flowchart showing the method for improving the sense of dynamic depth according to an embodiment of the present invention.

Explanation of Symbols

201 ... Background
202 ... Moving object
203 ... Graph
204 ... Background showing the shadow

Claims (3)

1. A method for improving the sense of dynamic depth, comprising: comparing a plurality of video frames constituting a moving image composed of a background image and a moving object; detecting the moving object within the video frames; and emphasizing a shadow at a boundary between the detected moving object and the background image, whereby the moving object is displayed three-dimensionally against the background image and the sense of depth of the moving image is improved.
2. The method for improving the sense of dynamic depth according to claim 1, wherein the emphasis processing is applied to a portion surrounding the moving object.
3. The method for improving the sense of dynamic depth according to claim 1, wherein the emphasis processing is applied to an inner portion of the moving object.
JP2006265566A 2006-09-28 2006-09-28 Method for improving sense of dynamic depth Pending JP2008084171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006265566A JP2008084171A (en) 2006-09-28 2006-09-28 Method for improving sense of dynamic depth

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006265566A JP2008084171A (en) 2006-09-28 2006-09-28 Method for improving sense of dynamic depth

Publications (1)

Publication Number Publication Date
JP2008084171A true JP2008084171A (en) 2008-04-10

Family

ID=39354955

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006265566A Pending JP2008084171A (en) 2006-09-28 2006-09-28 Method for improving sense of dynamic depth

Country Status (1)

Country Link
JP (1) JP2008084171A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011170402A (en) * 2010-02-16 2011-09-01 Casio Computer Co Ltd Image processing apparatus and image processing program
US8736682B2 (en) 2010-02-16 2014-05-27 Casio Computer Co., Ltd. Image processing apparatus

Similar Documents

Publication Publication Date Title
CN102202224B (en) Caption flutter-free method and apparatus used for plane video stereo transition
CN109992226B (en) Image display method and device and spliced display screen
US9396590B2 (en) Image processing apparatus and method for three-dimensional image zoom
TWI517711B (en) Processing method of display setup and embedded system
EP2416582A1 (en) Video processing device, video processing method, and computer program
US8922622B2 (en) Image processing device, image processing method, and program
KR101295649B1 (en) Image processing apparatus, image processing method and storage medium
JP5047344B2 (en) Image processing apparatus and image processing method
JP2017191146A (en) Image display device and image display method
CN105574918A (en) Material adding method and apparatus of 3D model, and terminal
CN101848346A (en) Television and image display method thereof
JP2008090818A (en) Three-dimensional graphics rendering method and system for efficiently providing motion blur effect
KR20030069897A (en) Method and apparatus for improving picture sharpness
JP2014059691A (en) Image processing device, method and program
EP2479985A1 (en) Video display device
JP2008046437A (en) Image display controller, image display method, and program
CN111787240B (en) Video generation method, apparatus and computer readable storage medium
CN112700456A (en) Image area contrast optimization method, device, equipment and storage medium
CN109859303B (en) Image rendering method and device, terminal equipment and readable storage medium
JP2008084171A (en) Method for improving sense of dynamic depth
US10109077B2 (en) Image generation device and display device
CN109859328B (en) Scene switching method, device, equipment and medium
JP5655550B2 (en) Image processing apparatus, image processing method, and program
JP5377649B2 (en) Image processing apparatus and video reproduction apparatus
KR101437447B1 (en) Image proceesing apparatus and image processing method thereof