JPH11242753A - Method and device for three-dimensional plotting - Google Patents

Method and device for three-dimensional plotting

Info

Publication number
JPH11242753A
Authority
JP
Japan
Prior art keywords
pixel
value
luminance
dimensional
graphic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP4336298A
Other languages
Japanese (ja)
Inventor
Kenji Ando
健治 安藤
Masahiro Goto
正宏 後藤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP4336298A priority Critical patent/JPH11242753A/en
Publication of JPH11242753A publication Critical patent/JPH11242753A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an out-of-focus display automatically by repeating a process that equally divides a pixel's luminance value among the neighboring pixels at a copy position calculated from the deviation between the pixel's Z value and the depth of field. SOLUTION: The coordinates and luminance of a pixel are received, the luminance is reduced to 1/4, and the absolute value D1 of the difference between the Zf value of the corresponding screen, set in advance in a depth-of-field register, and the pixel's Z value is calculated; a copy pixel distance D2 having a nonlinear characteristic with respect to the deviation D1 is then computed using the arctan function and converted to an integer (S101 to S103). The coordinates of the neighboring pixels are calculated from the pixel's coordinates and D2, and it is determined whether the vertex accumulation bit of the pixel held in the frame memory is 1 (S104, S105). If so, the received pixel's luminance is added to the neighboring pixel's luminance, and the luminance value is written to the copy destination pixel (S108). The processing continues until all neighboring pixels of the received pixel have been processed (S109).

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

1. Field of the Invention

The present invention relates to a three-dimensional rendering apparatus and, more particularly, to a rendering method that defocuses (blurs) rendered figures easily and at high speed.

[0002]

2. Description of the Related Art

Conventionally, focusing or blurring a three-dimensional figure has been achieved by drawing the same figure, slightly shifted in position per a user's instruction, multiple times into a frame memory, accumulating the resulting image data in a separate memory (an accumulation memory), and finally copying the sum back to the frame memory. A known example of this approach is the scene-antialiasing method introduced in the "OpenGL Programming Guide" (OpenGL ARB, ISBN 4-7952-9645-6 C3055).

[0003]

Problem to Be Solved by the Invention: With the above prior art, displaying a three-dimensional figure out of focus requires the user to draw the figure multiple times at shifted positions, which is laborious and time-consuming and makes application to real-time CG and the like difficult.

[0004] An object of the present invention is to provide a three-dimensional figure rendering method and apparatus that automatically produce a defocused display of a three-dimensional figure without relying on user instructions.

[0005]

Means for Solving the Problem: The above object is achieved, in a three-dimensional rendering method that displays a three-dimensional figure on a two-dimensional screen by generating, for each pixel in the figure, display data consisting of coordinate data including depth information (a Z value) and a luminance value and drawing it into a frame memory, as follows: for each pixel of the target figure in turn, a copy position (D2) for the pixel is obtained according to the deviation (D1 = Z − Zf) between the pixel's Z value and a depth of field (Zf) set per screen or the like; a drawing process that equally divides the pixel's luminance value among the neighboring pixels corresponding to that copy position is repeated; and a blur region (a low-luminance region) whose width grows or shrinks with the deviation is thereby formed along the periphery of the target figure.

[0006] Alternatively, the object is achieved by, for each pixel in the target figure, calculating a copy pixel distance (D2) based on the deviation (D1) between its Z value and the depth of field (Zf) set per screen or the like; writing the pixel's luminance value, divided equally, into a plurality of neighboring pixels located the copy pixel distance away from the pixel in predetermined directions; adding to the current value whenever a neighboring pixel has already had a luminance value written to it; and repeating this for every pixel in the target figure.

[0007] The copy pixel distance is obtained by nonlinearly mapping the absolute value of the deviation into the integer range 0 to n. The predetermined directions are the four directions up, down, left, and right, and the divided luminance written to each neighboring pixel is 1/4 of the target pixel's luminance. In the embodiment, n = 3, so the four neighboring pixels lie at most three pixels away from the pixel being processed.

[0008] According to the present invention, when the figure to be drawn lies in front of or behind the depth of field (Zf) set by the user, a low-luminance blur region is formed outside the figure's periphery, and the width of that region (in pixels) increases or decreases with the deviation (D1) from Zf.

[0009]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

One embodiment of the present invention is described below. FIG. 2 shows the configuration of a computer system to which the present invention is applied. The CPU 202 executes a program stored in the memory 203 and, by referencing and updating the coordinate data of three-dimensional figures also stored in the memory 203, issues drawing commands to the drawing processing apparatus 204 over the data bus 201 and transfers the coordinate data of the three-dimensional figures.

[0010] The drawing processing apparatus 204 of this embodiment is characterized by its rendering processing unit 206. First, the geometry processing unit 205 converts the received three-dimensional coordinate data into a two-dimensional coordinate system and performs luminance calculations. Next, the rendering processing unit 206 computes span pixels from the vertex data of the figure expanded onto the two-dimensional coordinate system, performs the Z comparison and the blurring process described later for each pixel, and then writes the pixel values into the frame memory 207. The values held in the frame memory 207 are periodically converted into an analog signal by the digital-to-analog converter 208 and finally shown on the display.

[0011] FIG. 3 shows the configuration of the rendering processing unit in one embodiment. The rendering processing unit 206 consists of a depth-of-field register 301 set from the CPU 202, a span pixel calculation processing unit 302, and a luminance-correction pixel copy/write processing unit 303.

[0012] Here, the depth of field denotes the plane, perpendicular to the Z axis, on which perfectly focused objects lie among the objects inside the view volume constructed in the three-dimensional eye coordinate system. Put simply, it is the focal depth seen along the screen's depth direction (corresponding to a camera's focus position) and is normally set by the CPU 202 for each screen.

[0013] FIG. 4 is an explanatory diagram of the per-pixel data layout of the frame memory. The RGB buffer 401 occupies 24 bits per pixel, holding the red (R), green (G), and blue (B) values in 8 bits each. The Z buffer 402 holds the depth information (Z value) in 24 bits. The vertex accumulation bit (AccB) 403 is one bit per pixel; it is initially reset to 0 and is used as a processing flag by the luminance-correction pixel copy/write processing unit 303.
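As an illustration (not part of the patent; the function names are our own), the per-pixel frame-memory layout of FIG. 4 — a 24-bit RGB buffer, a 24-bit Z buffer, and the 1-bit AccB flag — can be sketched as bit-packing into a single integer:

```python
# Sketch of the FIG. 4 per-pixel layout: RGB buffer 401 (24 bits),
# Z buffer 402 (24 bits), vertex accumulation bit AccB 403 (1 bit).
# The packing order into one word is an assumption for illustration.

def pack_pixel(r, g, b, z, accb):
    """Pack 8-bit R/G/B, a 24-bit Z value, and the AccB flag into one int."""
    assert all(0 <= c <= 0xFF for c in (r, g, b))
    assert 0 <= z <= 0xFFFFFF and accb in (0, 1)
    rgb = (r << 16) | (g << 8) | b           # RGB buffer 401: 24 bits
    return (accb << 48) | (z << 24) | rgb    # AccB 403 above Z buffer 402

def unpack_pixel(word):
    """Recover (r, g, b, z, accb) from a packed pixel word."""
    rgb = word & 0xFFFFFF
    z = (word >> 24) & 0xFFFFFF
    accb = (word >> 48) & 1
    return ((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, z, accb)
```

A round trip through `pack_pixel`/`unpack_pixel` preserves all three fields, mirroring how the hardware buffers hold them side by side.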

[0014] Next, the processing of the span pixel calculation processing unit 302 is described. FIG. 5 is a schematic diagram showing the coordinate data and the depth of field in the three-dimensional device coordinate system. The X axis 501, Y axis 502, and Z axis 503 form the three-dimensional device coordinate system, on which the coordinate data of the vertices 505, 506, and 507 output by the geometry processing unit 205 as three-dimensional device coordinates, together with the depth of field 504, are represented. The plane 515 corresponds to the display surface of the display 209. The depth of field 504 is a plane perpendicular to the Z axis 503 at the distance Zf, set in the depth-of-field register 301, from the origin 500. The vertices 508, 509, and 510 are the orthogonal projections of the vertices 505, 506, and 507 onto the plane 515, and this projected figure is used as the processing target in the description below.

[0015] The span pixel calculation processing unit 302 receives the vertices 505, 506, and 507 after three-dimensional geometry processing from the geometry processing unit 205 and, as in the prior art, calculates the luminance and Z value of every pixel inside the figure formed by the vertices 508, 509, and 510.

[0016] As shown in FIG. 6, (a) DDA processing calculates the coordinates and luminance of the pixels forming the line segments that connect the vertices, and (b) rasterization calculates the luminance along the raster (X-axis) direction. The calculated coordinates and luminance of each pixel are transferred one by one to the luminance-correction pixel copy/write processing unit 303. The process ends once this has been done for every pixel inside the figure. The dashed grid in the figure indicates the pixels on the display screen 515.
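Step (a) above can be sketched as a simple DDA edge walk (our own simplified reconstruction, not the patent's implementation; the function name is an assumption): the segment between two vertices is stepped along its major axis, interpolating the minor-axis coordinate and the luminance by constant increments.

```python
# Minimal DDA sketch for one edge of FIG. 6(a): step along the longer axis
# and interpolate position and luminance linearly per step.

def dda_edge(p0, p1):
    """p0, p1 = (x, y, luminance); returns the pixels along the edge."""
    x0, y0, l0 = p0
    x1, y1, l1 = p1
    steps = max(abs(x1 - x0), abs(y1 - y0))   # number of unit steps on the major axis
    if steps == 0:
        return [(round(x0), round(y0), l0)]
    dx, dy, dl = (x1 - x0) / steps, (y1 - y0) / steps, (l1 - l0) / steps
    return [(round(x0 + i * dx), round(y0 + i * dy), l0 + i * dl)
            for i in range(steps + 1)]
```

Step (b), rasterization, would then interpolate luminance the same way between the left and right edge pixels of each raster line.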

[0017] FIG. 7 is a conceptual diagram of the luminance-correction pixel copy/write processing. On receiving the data of a pixel 514, the luminance-correction pixel copy/write processing unit 303 multiplies its luminance by 1/4 and writes this quarter luminance to the pixel 514 and to the four neighboring pixels 701 to 704 located an equal distance away on its four sides. The coordinate distance 705 between the received pixel 514 and a neighboring pixel is called the luminance-correction pixel copy distance (copy pixel distance). When writing the luminance of each pixel, the vertex accumulation bit (AccB) 403 held in the frame memory 207 is examined; if AccB = 1 is set, the value written is the sum of the pixel's luminance before the write and the luminance being written now.

[0018] Performing the above processing for every pixel computed by the span pixel calculation processing unit 302 means that pixels near the center of the figure accumulate many added contributions and approach their original luminance, while pixels near the figure's periphery stay close to 1/4 of their original luminance. This realizes the display of a defocused figure.

[0019] FIG. 1 shows the flow of the luminance-correction pixel copy/write processing in one embodiment. This processing is executed each time a pixel inside the figure, computed by the span pixel calculation processing unit 302, is received.

[0020] For example, on receiving the coordinates (X1, Y1, Z1) and luminance (R1, G1, B1) of the pixel 514, the unit multiplies the luminance by 1/4, giving (R1/4, G1/4, B1/4) (S101).

[0021] Next, the absolute value D1 of the difference between the Zf value of the current screen, set in advance in the depth-of-field register 301 by the three-dimensional figure display program running on the CPU 202, and the Z value of the pixel 514 (= Z1) is calculated by Equation 1 (S102).

[0022]

(Equation 1) D1 = |Z1 − Zf|

In terms of FIG. 5, D1 is the distance 515 between the coordinate position 512 on the original figure (vertices 505 to 507) corresponding to the pixel and the corresponding position 513 on the depth of field 504.

[0023] Next, the copy pixel distance D2, which has a nonlinear characteristic with respect to the deviation D1, is calculated using the arctan function, and the result is converted to an integer (S103).

[0024]

(Equation 2) D2 = |tan⁻¹(D1)| · 6/π

Here, tan⁻¹ takes values in the range −π/2 to π/2, so D2 takes values from 0 to 3.
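Equations 1 and 2 (steps S102–S103) can be sketched as follows. The patent only says the result is "converted to an integer"; rounding to the nearest integer is our assumption (it lets D2 reach 3 for large deviations, matching the stated range 0 to 3), and the function name is ours.

```python
import math

# Sketch of S102-S103: nonlinear mapping of the Z deviation into the
# integer copy pixel distance D2 in the range 0..3.

def copy_pixel_distance(z, zf):
    d1 = abs(z - zf)                        # Equation 1: D1 = |Z1 - Zf|
    d2 = abs(math.atan(d1)) * 6 / math.pi   # Equation 2: |tan^-1(D1)| * 6/pi
    return round(d2)                        # integerization (rounding assumed)
```

Because arctan saturates near π/2, D2 grows quickly for small deviations but never exceeds 3 however large D1 becomes, which is exactly the non-divergence property paragraph [0025] describes.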

[0025] The copy pixel distance D2 of Equation 2 is only one example. Equation 2 nonlinearly maps the deviation D1 between the depth of field Zf, which can become arbitrarily large, and the Z value of each pixel in the figure into the integer range 0 to 3, so that the neighborhood of a pixel with a large deviation does not diverge in the X and Y directions; other nonlinear functions such as a sigmoid could be substituted.

[0026] The subsequent processing is repeated per pixel and is performed for all four neighboring pixels of the pixel 514 (X1, Y1) shown in FIG. 7: 701 (X1, Y1+D2), 702 (X1−D2, Y1), 703 (X1, Y1−D2), and 704 (X1+D2, Y1).

[0027] First, the coordinates of a neighboring pixel such as 701 are calculated from the received coordinates (X1, Y1) of the pixel 514 and D2 (S104). Next, it is determined whether the vertex accumulation bit (AccB) of the pixel 701 held in the frame memory 207 is 1 (S105). Since the bit is initially reset to 0, the result is "NO", and AccB = 1 is then set for that pixel (S107). Next, the luminance value (R1/4, G1/4, B1/4) is written to the copy destination pixel (here, pixel 701) (S108). The processing from S104 is repeated until all neighboring pixels of the received pixel have been processed (S109).

[0028] If, on the other hand, the determination in S105 finds AccB = 1 ("YES"), a luminance value has already been written to that neighboring pixel at least once. The neighboring pixel's luminance (R0, G0, B0) is therefore read out and the luminance contributed by the pixel received this time (R1/4, G1/4, B1/4) is added to it, giving (R0+R1/4, G0+G1/4, B0+B1/4) (S106).
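The FIG. 1 flow for one received pixel (S101–S109) can be sketched as below. This is our reconstruction, not the patent's code: the frame memory is modeled as a dictionary, the Z comparison is omitted, and following paragraphs [0026]–[0028] the quartered luminance goes to the four neighbors at distance D2. Note that when D2 = 0 all four writes collapse onto the pixel itself and the AccB add path restores its full luminance, consistent with paragraph [0030].

```python
# Sketch of the luminance-correction pixel copy/write processing (FIG. 1).
# frame maps (x, y) -> [accb, r, g, b]; rgb is the received pixel's luminance.

def copy_write(frame, x, y, rgb, d2):
    quarter = tuple(c / 4 for c in rgb)                  # S101: luminance * 1/4
    for dx, dy in [(0, d2), (-d2, 0), (0, -d2), (d2, 0)]:
        px = (x + dx, y + dy)                            # S104: neighbor coordinates
        cell = frame.setdefault(px, [0, 0.0, 0.0, 0.0])
        if cell[0] == 1:                                 # S105: AccB already set?
            for i in range(3):
                cell[1 + i] += quarter[i]                # S106: add to current value
        else:
            cell[0] = 1                                  # S107: set AccB
            cell[1], cell[2], cell[3] = quarter          # S108: write quarter luminance
```

Calling `copy_write` once per span pixel (S109's outer loop) reproduces the accumulation behavior described in paragraphs [0027] and [0028].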

[0029] As a result, when the luminance-correction pixel copy processing of another pixel lands on the same neighboring pixel at its copy pixel distance D2, that pixel's luminance/4 is added as well. In general, luminance within a figure is continuous, so even though each interior pixel's luminance is first split four ways, the interior is filled back in by contributions from surrounding pixels and recovers nearly its original luminance. At the figure's periphery, however, there are no pixels outside to make up the difference, so the luminance drops and the figure appears blurred.
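This recovery-at-the-center, dimming-at-the-edge behavior can be checked numerically with a small experiment of our own (not in the patent): a uniform 5×5 figure of luminance 1.0 is processed with D2 = 1, each pixel scattering luminance/4 to its four neighbors as in the FIG. 1 flow.

```python
# Numerical check of paragraphs [0018]/[0029]: interior pixels recover the
# original luminance, periphery pixels and the one-pixel halo outside dim.

D2 = 1
figure = {(x, y): 1.0 for x in range(5) for y in range(5)}  # uniform 5x5 figure

frame = {}
for (x, y), lum in figure.items():
    for dx, dy in [(0, D2), (-D2, 0), (0, -D2), (D2, 0)]:
        p = (x + dx, y + dy)
        frame[p] = frame.get(p, 0.0) + lum / 4   # accumulate quarter luminance

center = frame[(2, 2)]   # interior: 4 contributions of 1/4 -> 1.0 (recovered)
corner = frame[(0, 0)]   # figure corner: only 2 contributions -> 0.5
halo = frame[(-1, 0)]    # blur ring outside the figure -> 0.25
```

The interior pixel gets exactly its original luminance back, while the corner keeps half and the halo ring outside the figure receives a quarter, which is the low-luminance blur region the invention forms along the periphery.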

[0030] Note that when the Z value of the received pixel equals (or approximates) the depth of field Zf, D2 = 0, so the luminance value may simply be written to the frame memory with the copy destination equal to the received pixel.

[0031] FIG. 8 shows figures displayed with the blurring processing of this embodiment: (a) shows a figure 801 lying on the depth of field, and (b) a figure 802 lying behind it. For the figure 801, the luminance-correction copy pixel distance D2 = 0, so every pixel is copied onto its original position and the figure is displayed in focus. For the figure 802, D2 = 1, so the luminance values along the figure's outline become small and the figure is displayed out of focus.

[0032]

Effects of the Invention: In the three-dimensional figure rendering of the present invention, each pixel's drawing position and luminance are distributed in four directions according to the deviation between the depth of field Zf, set per screen or the like, and the pixel's Z value, realizing a blurred display that lowers the luminance along the figure's periphery.

[0033] This makes it unnecessary for the user to specify positional shifts of the same figure, so a fast and easy-to-use three-dimensional figure drawing processing apparatus can be provided, enabling application to real-time CG and the like.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart showing the blur-display drawing process in a three-dimensional figure drawing method according to one embodiment of the present invention.

FIG. 2 is a system configuration diagram of a drawing processing apparatus to which the present invention is applied.

FIG. 3 is a configuration diagram of the rendering processing unit according to one embodiment.

FIG. 4 is an explanatory diagram showing the per-pixel data layout of the frame memory.

FIG. 5 is a schematic diagram showing coordinate data and the depth of field in the three-dimensional device coordinate system.

FIG. 6 is an explanatory diagram of the DDA processing and rasterization by which the span pixel calculation processing unit calculates the luminance and Z value of the pixels inside a figure.

FIG. 7 is a conceptual diagram of the luminance-correction pixel copy/write processing.

FIG. 8 is an explanatory diagram showing figures displayed by the drawing processing of the embodiment.

EXPLANATION OF REFERENCE NUMERALS

201: data bus; 202: CPU; 203: memory; 204: drawing processing apparatus; 205: geometry processing unit; 206: rendering processing unit; 207: frame memory; 208: digital-to-analog converter; 209: display; 301: depth-of-field register; 302: span pixel calculation processing unit; 303: luminance-correction pixel copy/write processing unit.

Claims (5)

[Claims]

Claim 1: A three-dimensional rendering method for displaying a three-dimensional figure on a two-dimensional screen by generating, for each pixel in the figure, display data consisting of coordinate data including depth information (a Z value) and a luminance value and drawing it into a frame memory, the method comprising: for each pixel of the target figure in turn, obtaining a copy position for the pixel according to the deviation between the pixel's Z value and a depth of field (Zf) set in advance per screen or the like; repeating a drawing process that equally divides the pixel's luminance value among the neighboring pixels corresponding to that copy position; and thereby forming, along the periphery of the target figure, a blur region that grows or shrinks with the deviation.
Claim 2: A three-dimensional rendering method for displaying a three-dimensional figure on a two-dimensional screen by generating, for each pixel in the figure, display data consisting of coordinate data including depth information (a Z value) and a luminance value and drawing it into a frame memory, the method comprising: for each pixel of the target figure, calculating a copy pixel distance for the pixel based on the deviation between its Z value and a depth of field (Zf) set in advance per screen or the like; writing the pixel's luminance value, divided equally, into a plurality of neighboring pixels located the copy pixel distance away from the pixel in predetermined directions, adding to the current value when a neighboring pixel has already had a luminance value written to it; and repeating this for all pixels in the target figure.
Claim 3: The three-dimensional rendering method according to claim 2, wherein the copy pixel distance is obtained by nonlinearly mapping the absolute value of the deviation into the integer range 0 to n.
Claim 4: The three-dimensional rendering method according to claim 2 or 3, wherein the predetermined directions are the four directions up, down, left, and right, and the divided luminance value written to each neighboring pixel is 1/4 of the target pixel's luminance value.
Claim 5: A three-dimensional rendering apparatus comprising a CPU that issues drawing commands for three-dimensional figures and a drawing processing apparatus that, based on the received drawing commands, converts three-dimensional coordinate data into a two-dimensional coordinate system, calculates luminance values, and writes pixel values into a frame memory, wherein the drawing processing apparatus includes: a depth-of-field register that stores a depth of field (Zf) set per screen from the CPU; a span pixel calculation processing unit that calculates, for each pixel, the luminance and the coordinate data including the Z value of the pixels inside the figure to be processed; and a luminance-correction pixel copy/write processing unit that performs the drawing processing according to any one of claims 1 to 4.
JP4336298A 1998-02-25 1998-02-25 Method and device for three-dimensional plotting Pending JPH11242753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4336298A JPH11242753A (en) 1998-02-25 1998-02-25 Method and device for three-dimensional plotting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4336298A JPH11242753A (en) 1998-02-25 1998-02-25 Method and device for three-dimensional plotting

Publications (1)

Publication Number Publication Date
JPH11242753A true JPH11242753A (en) 1999-09-07

Family

ID=12661758

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4336298A Pending JPH11242753A (en) 1998-02-25 1998-02-25 Method and device for three-dimensional plotting

Country Status (1)

Country Link
JP (1) JPH11242753A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001160153A (en) * 1999-12-03 2001-06-12 Namco Ltd Image generation system and information storage medium
US6927777B2 (en) 1999-12-17 2005-08-09 Namco, Ltd. Image generating system and program
US7042463B2 (en) 1999-12-17 2006-05-09 Namco Ltd. Image generating system and program
JP2001250127A (en) * 1999-12-31 2001-09-14 Square Co Ltd Computer-readable recording medium with recorded program for three-dimensional computer image processing, shading drawing method, and video game device
JP2001283246A (en) * 2000-03-30 2001-10-12 Konami Co Ltd Three-dimensional image compositing device, its method, information storage medium program distributing device and its method
JP2002032780A (en) * 2000-05-10 2002-01-31 Namco Ltd Game system, program and information storage medium
JP2002024849A (en) * 2000-07-10 2002-01-25 Konami Co Ltd Three-dimensional image processing device and readable recording medium with three-dimensional image processing program recorded thereon
US7167600B2 (en) 2000-12-27 2007-01-23 Sony Computer Entertainment Inc. Drawing method for drawing image on two-dimensional screen
US8022962B2 (en) 2007-06-27 2011-09-20 Nintendo Co., Ltd. Image processing program and image processing apparatus

Similar Documents

Publication Publication Date Title
JP6392370B2 (en) An efficient re-rendering method for objects to change the viewport under various rendering and rasterization parameters
JP4845147B2 (en) Perspective editing tool for 2D images
CN111243071A (en) Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction
JPH10302079A (en) Solid texture mapping processor and three-dimensional image generating device using the processor
EP1906359B1 (en) Method, medium and system rendering 3-D graphics data having an object to which a motion blur effect is to be applied
JPH07200867A (en) Image generating device
US6360029B1 (en) Method and apparatus for variable magnification of an image
JPH0771936A (en) Device and method for processing image
JP2006244426A (en) Texture processing device, picture drawing processing device, and texture processing method
JP5956756B2 (en) Video processing apparatus and control method thereof
CN108805849B (en) Image fusion method, device, medium and electronic equipment
JPH11161819A (en) Image processor, its method and recording medium recording image processing program
JPH11242753A (en) Method and device for three-dimensional plotting
KR20140010708A (en) Apparatus and method for generating texture for three dimensional mesh model of target object
JP2006350852A (en) Image generation system
JP4987890B2 (en) Stereoscopic image rendering apparatus, stereoscopic image rendering method, stereoscopic image rendering program
KR100305461B1 (en) Graphic processing device
JP3629243B2 (en) Image processing apparatus and method for rendering shading process using distance component in modeling
JP2016072691A (en) Image processing system, control method of the same, and program
JP4642431B2 (en) Map display device, map display system, map display method and program
JP7119081B2 (en) Projection data generation device, three-dimensional model, projection data generation method, neural network generation method and program
US20060104544A1 (en) Automatic image feature embedding
JP2002260003A (en) Video display device
JP3522714B2 (en) Image generation method
CN116188668B (en) Shadow rendering method, medium and electronic device based on IOS platform