US7030880B2: Information processor
Publication number: US7030880B2
Authority: United States
Legal status: Active
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T3/00—Geometric image transformation in the plane of the image
 G06T3/0006—Affine transformations
Abstract
A matrix A′ obtained by multiplying a matrix A for the affine transformation by λ(≠0) is stored in a memory section in advance. In the case of conducting calculation processing for transforming coordinates (x,y,z)^t into coordinates (x′,y′,z′)^t by means of the affine transformation, A′ and a matrix (t_1,t_2,t_3)^t are read from the memory section,
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
are calculated, and the coordinates (x′,y′,z′)^t are obtained.
Description
The present invention relates to a technology for the imaging of a three-dimensional image and, especially, to a technology for reducing the geometry calculation and the division processing conducted in imaging a three-dimensional image, thereby realizing the imaging of three-dimensional graphics even in an information processing apparatus which does not have an FPU (floating-point processing unit) or whose CPU has low throughput.
In the case of imaging three-dimensional graphics, an information processing terminal conducts:
 (1) coordinate transformation calculation (Transformation) for shifting a three-dimensional object;
 (2) light source calculation processing (Lighting) for calculating which parts are lit and which are shaded, assuming that light from a light source (the sun, for example) is shining upon an object;
 (3) processing (Rasterize) for dividing an object into columns of dots; and
 (4) processing (Texture mapping) for mapping texture onto the columns; and so forth.
Usually, the CPU of an information processing apparatus itself takes charge of what is called the geometry calculation of (1) and (2), and conducts the processing utilizing an FPU (floating-point processing unit) of the CPU.
Further, the processing of what is called rasterize in (3) and (4) is usually conducted by a 3D graphics accelerator.
However, although the geometry calculation of (1) and (2) is conducted by utilizing the FPU (floating-point processing unit) of the CPU, the FPU is designed to conduct not only the geometry calculation but also general floating-point calculation, and in addition the CPU conducts other processing, so it is not necessarily suitable for the processing of imaging three-dimensional graphics.
Accordingly, 3D graphics accelerators designed to conduct the geometry calculation by means of a graphics chip (in other words, in which a geometry engine is built) have appeared. These lower the load rate of the CPU, and above all the capacity for geometry calculation, in other words the 3D imaging capacity, can be drastically improved compared with the case where it is conducted by the CPU.
However, the 3D graphics accelerator is expensive, and not all information processing apparatuses are equipped with one.
Further, among information processing apparatuses there are some which have neither a 3D graphics accelerator nor an FPU (floating-point processing unit), such as a mobile telephone or a PDA (Personal Digital (Data) Assistant), for example.
In such an information processing apparatus, the capacity of the CPU is generally also low, and it is said that 3D graphics are hardly possible.
Further, the speed of division processing is much lower than that of multiplication processing, and in order to conduct calculation processing at a high speed, it is preferable to reduce division as much as possible.
Therefore, the objective of the present invention is to provide a technology for realizing the imaging of three-dimensional graphics even in an information processing apparatus which does not have an FPU (floating-point processing unit), by conducting integer calculation processing in the geometry calculation that is conducted in the imaging of the three-dimensional graphics.
Also, the objective of the present invention is to provide a technology for realizing the imaging of three-dimensional graphics at a high speed even in an information processing apparatus whose CPU has low processing capacity, by conducting rasterize processing without conducting division.
The first invention for accomplishing the above-described objective is an information processing apparatus for, in conducting Rasterize processing of the imaging of a three-dimensional image, calculating a quantity Δx of the change of the x coordinates
with respect to the y coordinates of a straight line passing through a point P_1(x_1,y_1) and a point P_2(x_2,y_2) on a two-dimensional plane, characterized in that the apparatus comprises:

 a memory in which a constant μ representative of μ = λ/(y_2 − y_1) (λ≠0)
is stored in association with the quantity of the change of the y coordinates; and

 calculation means for, in calculating Δx, reading μ corresponding to the quantity of the change of said y coordinates from said memory, and calculating
Δx′ = μ*(x_2 − x_1);
by means of multiplication, and calculating this result as Δx.
The second invention for accomplishing the above-described objective is, in the above-described first invention, characterized in that said λ is limited to 2^n (n≧1), and the apparatus further comprises means for calculating Δx by right-shifting said calculated Δx′ by n digits.
The third invention for accomplishing the above-described objective is, in the above-described first or second invention, wherein coordinate values are limited to integers, and transformation processing is applied to coordinates (x,y,z)^t to create coordinates (x′,y′,z′)^t by means of an affine transformation representative of (x′,y′,z′)^t = A(x,y,z)^t + (t_1,t_2,t_3)^t,
characterized in that the apparatus further comprises:

 a memory in which a matrix A′ = (a′_ij) that is obtained by multiplying an affine transformation matrix A = (a_ij)
by λ(≠0), a matrix (t_1,t_2,t_3)^t, and shape data are stored; and

 calculation means for, in transforming the coordinates (x,y,z)^t of said shape data into the coordinates (x′,y′,z′)^t by means of an affine transformation of the matrix A and the matrix (t_1,t_2,t_3)^t, reading the matrix A′ and the matrix (t_1,t_2,t_3)^t from said memory, and calculating
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
to calculate the coordinates (x′,y′,z′)^t.
The fourth invention for accomplishing the above-described objective is, in the above-described third invention, characterized in that said λ is limited to 2^n (n≧1), and
said calculation means is means for calculating the division in
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
by conducting right-shifting by n digits.
The fifth invention for accomplishing the above-described objective is, in the above-described fourth invention, characterized in that said calculation means is means for conducting the calculation by conducting right-shifting by n digits after adding a constant λ/2 to each number to be divided.
The sixth invention for accomplishing the abovedescribed objective is, in the abovedescribed third, fourth or fifth invention, characterized in that the apparatus comprises synthesis means for synthesizing two or more parameters that are multiplied by λ(≠0) in advance.
The seventh invention for accomplishing the above-described objective is an imaging processing method of, in conducting Rasterize processing of the imaging of a three-dimensional image, calculating a quantity Δx of the change of the x coordinates
with respect to the y coordinates of a straight line passing through a point P_1(x_1,y_1) and a point P_2(x_2,y_2) on a two-dimensional plane, characterized in that said method comprises the steps of:

 in calculating Δx, reading μ corresponding to the quantity of the change of the y coordinates of the Δx to be calculated, from a memory in which the constant μ representative of μ = λ/(y_2 − y_1) (λ≠0)
is stored in association with the quantity of the change of the y coordinates; and

 based on said μ that was read, calculating
Δx′ = μ*(x_2 − x_1);
by means of multiplication, and calculating this result as Δx.
The eighth invention for accomplishing the above-described objective is, in the above-described seventh invention, characterized in that said λ is limited to 2^n (n≧1), and the method further comprises a step of calculating Δx by right-shifting said calculated Δx′ by n digits.
The ninth invention for accomplishing the above-described objective is, in the above-described seventh or eighth invention, in which coordinate values are limited to integers, and transformation processing is applied to coordinates (x,y,z)^t to create coordinates (x′,y′,z′)^t by means of an affine transformation representative of (x′,y′,z′)^t = A(x,y,z)^t + (t_1,t_2,t_3)^t,
characterized in that the method comprises steps of:

 in transforming the coordinates (x,y,z)^t of shape data into the coordinates (x′,y′,z′)^t by means of an affine transformation of a matrix A and a matrix (t_1,t_2,t_3)^t, reading a matrix A′ = (a′_ij)
that is obtained by multiplying the matrix A by λ(≠0), and the matrix (t_1,t_2,t_3)^t, which are stored in a memory; and
 based on said read matrix A′ and matrix (t_1,t_2,t_3)^t, calculating
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
and calculating the coordinates (x′,y′,z′)^t.
The tenth invention for accomplishing the above-described objective is, in the above-described ninth invention, characterized in that said λ is limited to 2^n (n≧1), and

 said calculating step is a step of calculating the division in
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
by conducting right-shifting by n digits to calculate the coordinates (x′,y′,z′)^t.
The eleventh invention for accomplishing the above-described objective is, in the above-described tenth invention, characterized in that the method further comprises a step of adding a constant λ/2 to each number to be divided before conducting said right-shifting by n digits.
The twelfth invention for accomplishing the abovedescribed objective is, in the abovedescribed tenth or eleventh invention, characterized in that the method further comprises a step of synthesizing two or more parameters that are multiplied by λ(≠0) in advance.
The thirteenth invention for accomplishing the above-described objective is a program for, in conducting Rasterize processing of the imaging of a three-dimensional image, making an information processing apparatus conduct calculation processing for calculating a quantity Δx of the change of the x coordinates
with respect to the y coordinates of a straight line passing through a point P_1(x_1,y_1) and a point P_2(x_2,y_2) on a two-dimensional plane, characterized in that said program comprises the steps of:

 in calculating Δx, reading μ corresponding to the quantity of the change of the y coordinates of the Δx to be calculated, from a memory in which the constant μ representative of μ = λ/(y_2 − y_1) (λ≠0)
is stored in association with the quantity of the change of the y coordinates; and

 based on said μ that was read, calculating
Δx′ = μ*(x_2 − x_1);
by means of multiplication, and calculating this result as Δx.
The fourteenth invention for accomplishing the above-described objective is, in the above-described thirteenth invention, characterized in that, in the information processing apparatus, said λ is limited to 2^n (n≧1), and the program further comprises a step of calculating Δx by right-shifting said calculated Δx′ by n digits.
The fifteenth invention for accomplishing the above-described objective is, in the above-described thirteenth or fourteenth invention, in making an information processing apparatus apply transformation processing to coordinates (x,y,z)^t to create coordinates (x′,y′,z′)^t by means of an affine transformation representative of (x′,y′,z′)^t = A(x,y,z)^t + (t_1,t_2,t_3)^t,
wherein coordinate values are limited to integers, characterized in that the program comprises the steps of:

 in transforming the coordinates (x,y,z)^t of the shape data into the coordinates (x′,y′,z′)^t by means of an affine transformation of a matrix A and a matrix (t_1,t_2,t_3)^t, reading a matrix A′ = (a′_ij)
that is obtained by multiplying the matrix A by λ(≠0), and the matrix (t_1,t_2,t_3)^t, which are stored in a memory; and

 based on said read matrix A′ and matrix (t_1,t_2,t_3)^t, calculating
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
to calculate the coordinates (x′,y′,z′)^t.
The sixteenth invention for accomplishing the above-described objective is, in the above-described fifteenth invention, characterized in that, in the case that said λ is limited to 2^n (n≧1),

 said calculating step in said program is a step of, in the information processing apparatus, calculating the division in
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
by conducting right-shifting by n digits to calculate the coordinates (x′,y′,z′)^t.
The seventeenth invention for accomplishing the above-described objective is, in the above-described sixteenth invention, characterized in that, in the information processing apparatus, the program further comprises a step of adding a constant λ/2 to each number to be divided before conducting said right-shifting by n digits.
The eighteenth invention for accomplishing the above-described objective is, in the above-described fifteenth, sixteenth, or seventeenth invention, characterized in that, in the information processing apparatus, the program further comprises a step of synthesizing two or more parameters that are multiplied by λ(≠0) in advance.
The nineteenth invention for accomplishing the above-described objective is a record medium in which is recorded a program for, in conducting Rasterize processing of the imaging of a three-dimensional image, making an information processing apparatus conduct calculation processing for calculating a quantity Δx of the change of the x coordinates
with respect to the y coordinates of a straight line passing through a point P_1(x_1,y_1) and a point P_2(x_2,y_2) on a two-dimensional plane, characterized in that said program comprises the steps of:

 in calculating Δx, reading μ corresponding to the quantity of the change of the y coordinates of the Δx to be calculated, from a memory in which the constant μ representative of μ = λ/(y_2 − y_1) (λ≠0)
is stored in association with the quantity of the change of the y coordinates; and

 based on said μ that was read, calculating
Δx′ = μ*(x_2 − x_1);
by means of multiplication, and calculating this result as Δx.
The twentieth invention for accomplishing the above-described objective is, in the above-described nineteenth invention, characterized in that, in the information processing apparatus, said λ is limited to 2^n (n≧1), and the program further comprises a step of calculating Δx by right-shifting said calculated Δx′ by n digits.
The twenty-first invention for accomplishing the above-described objective is, in the above-described nineteenth or twentieth invention, in making an information processing apparatus apply transformation processing to coordinates (x,y,z)^t to create coordinates (x′,y′,z′)^t by means of an affine transformation representative of (x′,y′,z′)^t = A(x,y,z)^t + (t_1,t_2,t_3)^t,
wherein coordinate values are limited to integers, characterized in that the program comprises the steps of:

 in transforming the coordinates (x,y,z)^t of the shape data into the coordinates (x′,y′,z′)^t by means of an affine transformation of a matrix A and a matrix (t_1,t_2,t_3)^t, reading a matrix A′ = (a′_ij)
that is obtained by multiplying the matrix A by λ(≠0), and the matrix (t_1,t_2,t_3)^t, which are stored in a memory; and

 based on said read matrix A′ and matrix (t_1,t_2,t_3)^t, calculating
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
to calculate the coordinates (x′,y′,z′)^t.
The twenty-second invention for accomplishing the above-described objective is, in the above-described twenty-first invention, characterized in that, in the case that said λ is limited to 2^n (n≧1),
said calculating step in said program is a step of, in the information processing apparatus, calculating the division in
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
by conducting right-shifting by n digits to calculate the coordinates (x′,y′,z′)^t.
The twenty-third invention for accomplishing the above-described objective is, in the above-described twenty-second invention, characterized in that, in the information processing apparatus, said program further comprises a step of adding a constant λ/2 to each number to be divided before conducting said right-shifting by n digits.
The twenty-fourth invention for accomplishing the above-described objective is, in the above-described twenty-first, twenty-second, or twenty-third invention, characterized in that, in the information processing apparatus, said program further comprises a step of synthesizing two or more parameters that are multiplied by λ(≠0) in advance.
The best mode for working the present invention will be explained.
First, the geometry calculation by means of integers, and the triangle rasterize in which division calculation is not used, which are features of the information processing apparatus of the present invention, will be explained.
<Geometry Calculation by Means of an Integer>
1. Coordinate Transformation by Means of an Affine Transformation
When an affine transformation from coordinates (x,y,z)^t into coordinates (x′,y′,z′)^t is represented by the form
(x′,y′,z′)^t = A(x,y,z)^t + (t_1,t_2,t_3)^t, with A = (a_ij),
then, in the case that x′, y′ and z′ are obtained by means of numerical operation like C language, they are as follows:
x′ = a_11*x + a_12*y + a_13*z + t_1;
y′ = a_21*x + a_22*y + a_23*z + t_2;
z′ = a_31*x + a_32*y + a_33*z + t_3;
Note, however, that all the numeric value types here are assumed to have infinite range and accuracy.
In the case that the coordinate calculation is conducted by means of integers having a finite range, calculation accuracy and operation overflow become issues.
With regard to overflow, if the range of the coordinates to be handled is restricted, it does not become an issue. As for calculation accuracy, however, when a real number having a small absolute value is rounded off (for a numeric value including a decimal, the digit one place below the rightmost required digit is rounded off to shorten the numeric value; besides round-off, round-up and truncation are also available), the relative error becomes larger. Especially with regard to the components of the matrix A, there are many cases where their absolute values are lower than or equal to 1, and when such values are rounded off, the result is shifted largely from the expected one.
Accordingly, when the components of the matrix A are rounded off to integers, in order to make the relative error as small as possible, the components of the matrix A are multiplied by λ(≠0) in advance, giving a matrix A′ = λA = (a′_ij).
When the coordinate transformation is conducted by means of this matrix A′, the calculation is conducted as follows. In the case that x′, y′ and z′ are obtained by means of numerical operation like C language, they are obtained as follows:
x′ = (a′_11*x + a′_12*y + a′_13*z)/λ + t_1;
y′ = (a′_21*x + a′_22*y + a′_23*z)/λ + t_2;
z′ = (a′_31*x + a′_32*y + a′_33*z)/λ + t_3;
Here, all the numeric value types are integers.
Such calculation is integer calculation, so it is not necessary to conduct floating-point operation. However, in order to conduct the calculation at a still higher speed, integer division during execution is avoided: λ is limited to 2^n (n≧1), λ′ = log_2 λ is assumed, and an operation almost the same as the division is conducted by means of an arithmetic right shift as follows:
x′ = ((a′_11*x + a′_12*y + a′_13*z) >> λ′) + t_1;
y′ = ((a′_21*x + a′_22*y + a′_23*z) >> λ′) + t_2;
z′ = ((a′_31*x + a′_32*y + a′_33*z) >> λ′) + t_3;
Also, in the case that the arithmetic right shift is utilized as the integer division, since the round-off is in the −∞ direction, a constant λ/2 is added before conducting the arithmetic right shift as follows:
x′ = ((a′_11*x + a′_12*y + a′_13*z + λ/2) >> λ′) + t_1;
y′ = ((a′_21*x + a′_22*y + a′_23*z + λ/2) >> λ′) + t_2;
z′ = ((a′_31*x + a′_32*y + a′_33*z + λ/2) >> λ′) + t_3;
and the error is corrected.
This will be explained using a particular example; for example, a case where an affine transformation is applied to a point of coordinates (111,222,333)^t will be considered.
Assume λ = 4096 (2^12), and assume that multiplying the matrix A of the above-described equation 2 by λ and rounding off the components, together with the matrix t(t_1,t_2,t_3)^t, gives:
A′ = ( 3646 −1316 1323 ; 1580 3723 −649 ; −994 1088 3822 ), t = (856, 387, 973)^t
When the affine transformation in which this matrix A′ and the matrix t(t_1,t_2,t_3)^t are used is represented by equations of C language, it becomes as follows:
int x, y, z;
x = ((3646*111 + (-1316)*222 + 1323*333 + 4096/2) >> 12) + 856;
y = ((1580*111 + 3723*222 + (-649)*333 + 4096/2) >> 12) + 387;
z = ((-994*111 + 1088*222 + 3822*333 + 4096/2) >> 12) + 973;
This calculation result becomes (991, 579, 1316)^t, which is almost the same as the result of conducting the operation of equation 2 with real numbers.
In the actual processing of the information processing apparatus, the matrix A′ obtained by multiplying the matrix A by λ and the matrix t(t_1,t_2,t_3)^t are stored in a memory of the information processing apparatus as parameters. In conducting the coordinate transformation by means of the affine transformation, the matrix A′ corresponding to the matrix A, and the matrix t(t_1,t_2,t_3)^t, are read from the memory; based on the read matrix A′, the operation is conducted for each element; the constant λ/2 is added to the result, which is then right-shifted by λ′ digits; and the matrix t(t_1,t_2,t_3)^t is added to that result to calculate the coordinates.
2. Synthesis of Affine Transformations
It is assumed that two affine transformations f and g (transformation parameters) are as follows, respectively:
f(p) = Ap + u   (5)
g(p) = Bp + v   (6)
The synthesis g∘f of these two transformations is as follows:
g∘f(p) = BAp + Bu + v   (7)
When A′ = λA and B′ = λB are given, the following is obtained:
g∘f(p) = (B′A′p)/λ^2 + (B′u)/λ + v   (8)
If the following are assumed,
M = B′A′/λ   (9)
t = (B′u)/λ + v   (10)
the equation 8 becomes as follows:
g∘f(p) = (Mp)/λ + t   (11)
The right side of this equation 11 and the right side of the equation 4 have the same form.
Assuming the component notations M = (m_ij), A′ = (a_ij), B′ = (b_ij), u = (u_i), v = (v_i) and t = (t_i),
in the case of obtaining the components of M and t by means of numerical operation like C language, the following is obtained:
for (i = 1; i <= 3; i++) { for (j = 1; j <= 3; j++)
m_ij = (b_i1*a_1j + b_i2*a_2j + b_i3*a_3j + λ/2)/λ;
t_i = (b_i1*u_1 + b_i2*u_2 + b_i3*u_3)/λ + v_i; }
Assuming that λ is limited to 2^n (n≧1) and λ′ = log_2 λ is given, and then calculating everything with integer types, the following is obtained:
for (i = 1; i <= 3; i++) { for (j = 1; j <= 3; j++)
m_ij = (b_i1*a_1j + b_i2*a_2j + b_i3*a_3j + λ/2) >> λ′;
t_i = ((b_i1*u_1 + b_i2*u_2 + b_i3*u_3 + λ/2) >> λ′) + v_i; }
In accordance with this method, affine transformations whose parameters have been multiplied by λ in advance can be synthesized.
This will be explained using a particular example: particular transformation parameters f and g are assumed, λ = 4096 is assumed, and matrices in which the elements of λA and λB are rounded off to integers are obtained. When the matrix M and the matrix t are then calculated by the above method, values almost the same as those obtained by calculating the synthesis with real numbers result.
<Triangle Rasterize in Which Division is not Used>
In the case of obtaining the quantity of the change of the x coordinates on a straight line, it is assumed that the coordinates of a point P_1 on a two-dimensional plane are (x_1,y_1), and the coordinates of a point P_2 are (x_2,y_2). When y_1 ≠ y_2 is given, the quantity of the change of the x coordinates with respect to the y coordinates on the straight line that links P_1 with P_2 is as follows: Δx = (x_2 − x_1)/(y_2 − y_1).
Assuming Δx′ = λΔx, in the case of obtaining Δx′ by means of numerical operation like C language, it becomes as follows:
Δx′ = λ*(x_2 − x_1)/(y_2 − y_1);
And, assuming μ = λ/(y_2 − y_1),
it becomes a form of multiplication as follows:
Δx′ = μ*(x_2 − x_1);
In the case of conducting the calculation by means of integers only, if λ is sufficiently large and y_2 − y_1 is small to some extent, the relative error due to the round-off becomes small.
Accordingly, assuming that λ is a constant, if the range of the y coordinates is restricted, it is possible to obtain μ from a small array prepared in advance. In other words, μ is stored in a memory of the information processing apparatus in advance in association with y_2 − y_1. If μ is read using y_2 − y_1 as an index, and
Δx′ = μ*(x_2 − x_1);
is calculated, it is possible to obtain Δx′, which is almost the same as λΔx, by means of multiplication only.
Also, if λ is restricted to 2^n (n≧1), it is possible to calculate Δx from Δx′ by means of the arithmetic right shift.
Accordingly, the quantity of the change of the x coordinates can be obtained entirely without division.
Also, with regard to texture coordinates, division is not required for the same reason.
As a particular example, assuming λ = 65536, a line segment between a start point P_1 of coordinates (10,20) and an end point P_2 of coordinates (85,120) will be explained.
The quantity of the change of the x coordinates with respect to the y coordinates in this line segment becomes Δx = (85 − 10)/(120 − 20) = 0.75, and Δx′ = λΔx = 49152.0 is obtained.
In the case of calculating Δx′ by means of integers only, without division, the corresponding μ is read from the memory using 120 − 20 = 100 as an index, as mentioned above.
In this case, assuming μ = 65536/100 ≈ 655,
the following is obtained:
Δx′ = μ*(85 − 10) = 49125,
and it is possible to obtain a value almost the same as the Δx′ = λΔx = 49152.0 obtained by using division.
Next, the abovementioned method will be explained in a case where it is used for a mobile terminal and so forth.
In the memory section 3, a transformation parameter table 4 in which the transformation parameters (the above-mentioned matrix A′ and matrix t(t_1,t_2,t_3)^t) are stored, and a reciprocal table 5 in which μ is stored using the above-mentioned y_2 − y_1 as an index, are provided.
In the imaging of three-dimensional graphics explained below, an object is represented by a combination of small planes, and each plane is called a polygon. An object (polygon) in three-dimensional space has three coordinate values, X, Y and Z. By moving these coordinates, it is possible to change the position and direction of the object. Further, in order to finally display the object represented by means of three-dimensional coordinates on a two-dimensional screen, a transformation into a screen coordinate system is conducted.
Such a series of processing (calculation), such as coordinate transformation, perspective transformation and light source calculation, is called geometry calculation. The polygons transformed by means of this calculation are finally written into a frame buffer, and the imaging is conducted.
Usually, the processing of the geometry calculation is conducted utilizing the FPU (floating-point processing unit) of the CPU; however, the FPU is designed not only for the geometry calculation but also for general floating-point calculation. Further, among instruments represented by the mobile telephone, there are many which do not have an FPU (floating-point processing unit), for the reason that an FPU makes them expensive.
Accordingly, in the present invention, in order to make the geometry calculation possible even in an information processing apparatus, such as a mobile terminal, whose CPU does not have an FPU (floating-point processing unit), the control section 3 conducts the geometry calculation by means of integers.
Also, with regard to the rasterize processing for dividing an object into dots, in a mobile terminal which does not have a 3D graphics accelerator, the processing load increases.
Accordingly, in the present invention, in order to reduce the processing load, the control section 3 rasterizes triangles without using division.
Next, an imaging operation of the threedimensional graphics using the abovementioned calculation, which is conducted by the control section 3, will be explained.
First, information of a virtual frame buffer of an object to be imaged is set (Step 100).
Next, shape data, which is geometric coordinate data, is read from the memory section 3 into a model object (Step 101).
In the shape data read here, information of a vertex coordinate row, a polygon row and a segment row is included. Also, in the data of a segment, the transformation parameters for a basic posture, an attached vertex group, and the ID (identification information) of a parent segment are included.
In addition, the transformation parameters to be read are the matrix A′ obtained by multiplying the basic matrix A by λ (= 2^n, n≧1) and the matrix t(t_1,t_2,t_3)^t; the matrix A′ and the matrix t(t_1,t_2,t_3)^t are read from the transformation parameter table 4.
Successively, texture data corresponding to the shape data, such as data for the feel of quality and so forth, is read (Step 102).
And, a transformation parameter from a model coordinate system to a viewpoint coordinate system is set (Step 103), and a transformation parameter from the viewpoint coordinate system to a screen coordinate system is set (Step 104).
A two-dimensional background is imaged in the virtual frame buffer (Step 105).
The model object is imaged in the virtual frame buffer (Step 106).
Finally, the contents of the virtual frame buffer are displayed on an actual screen (Step 107).
And, by repeating Step 103 to Step 107, the three-dimensional graphics are imaged.
Successively, Step 106, at which the control section 3 images the model object, will be explained in further detail; at this step, the calculation that is a feature of the present invention is used.
First, the vertex coordinates of the model are transformed into the screen coordinate system from a local coordinate system (Step 200).
Here, a structure of the model will be explained. Usually, the model has a plurality of segments therein, and each segment has a plurality of vertexes. These vertexes have coordinate values in a segment coordinate system. The segment can have a plurality of child segments, and has a transformation parameter for transforming the coordinates of the vertexes included in the segment into a parent segment coordinate system. The transformation parameter that the segment of a highest rank has becomes a value for transforming the coordinates of the vertexes included in the segment into the model coordinate system. Also, it has information of a transformation parameter for a basic posture.
Accordingly, to transform the vertex coordinates of the model, which have coordinate values in the segment coordinate system, into the screen coordinate system from the local coordinate system, first a transformation into the model coordinate system from the segment coordinate system is conducted.
The transformation parameter from the segment coordinate system to the model coordinate system is calculated by the synthesis of affine transformations using the above-mentioned integer geometry calculation.
In case of the segment which has the parent segment, a transformation parameter f that the segment has, and a transformation parameter g from the segment coordinate system of the parent segment to the model coordinate system are synthesized, and a transformation parameter h=g∘f from the segment coordinate system to the model coordinate system is generated.
In this synthesis, the above-mentioned method of the synthesis of the affine transformation is used.
In addition, in case of the segment which does not have the parent segment, assuming that a transformation parameter that the segment has is f, a transformation parameter from the segment coordinate system to the model coordinate system becomes h=f.
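The parent-chain composition described above can be sketched as follows (hypothetical names; the transform is reduced to a one-dimensional scale and offset purely to keep the example short, while the patent's multiply( ) composes full 3×4 matrices by the same fixed-point pattern):

```java
// Minimal sketch of h = g o f up a segment's parent chain, in the same
// lambda = 4096 fixed-point convention as the patent's listing.
public class SegmentChain {
    static final int SHIFT = 12, HALF = 1 << (SHIFT - 1); // lambda = 4096

    int scale;            // a' = lambda * a  (fixed-point scale)
    int offset;           // t  (plain integer translation)
    SegmentChain parent;  // null for the highest-rank segment

    SegmentChain(int scale, int offset, SegmentChain parent) {
        this.scale = scale; this.offset = offset; this.parent = parent;
    }

    // Compose this segment's transform f with every ancestor's transform g,
    // innermost first, yielding h = g o f. With f(x) = (a/lambda)x + t1 and
    // g(x) = (b/lambda)x + t2, the composite scale is (a'*b')>>12 and the
    // composite offset is ((b'*t1)>>12) + t2, rounded as in the listing.
    int[] toModel() {
        int s = scale, t = offset;
        for (SegmentChain p = parent; p != null; p = p.parent) {
            t = ((p.scale * t + HALF) >> SHIFT) + p.offset;
            s = (p.scale * s + HALF) >> SHIFT;
        }
        return new int[] { s, t };
    }
}
```

A segment without a parent simply returns its own parameters, matching the h=f case above.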
Successively, a transformation from the model coordinate system to the screen coordinate system will be explained.
A transformation parameter from the model coordinate system to the screen coordinate system is also calculated by the same method.
Assuming that a transformation parameter from the model coordinate system to the viewpoint coordinate system is p, and that a transformation parameter from the viewpoint coordinate system to the screen coordinate system is q, a transformation parameter from the model coordinate system to the screen coordinate system becomes r=q∘p.
Finally, based on the transformation parameter h from the segment coordinate system to the model coordinate system and the transformation parameter r from the model coordinate system to the screen coordinate system, which were calculated as mentioned above, a transformation parameter s from the segment coordinate system to the screen coordinate system is calculated.
As in the above-mentioned synthesis, the transformation parameter s from the segment coordinate system to the screen coordinate system becomes s=r∘h. Although h must be calculated for each segment, r need be calculated only once for a single model.
Using the transformation parameter s calculated in this manner, the vertex coordinates in the segment coordinate system are transformed into coordinates in the screen coordinate system.
This coordinate transformation is conducted by means of the above-mentioned affine coordinate transformation of the integer geometry calculation.
In other words, the vertex coordinates (x,y,z)^{t }of the segment coordinate system are transformed into coordinates (x′,y′,z′)^{t }in the screen coordinate system using the transformation parameter s.
Successively, polygons on a reverse side in screen space are removed from an object to be processed (Step 201).
Polygons to be processed are sorted according to a Z value (depth) in the viewpoint coordinate system (Step 202).
Finally, the polygons are imaged in the virtual frame buffer by means of the screen coordinate system (Step 203).
This step will be explained in further detail. First, whether the triangle can appear in the imaging region is checked; in case the triangle is completely outside the imaging region, the subsequent imaging processing is skipped.
And, numeric values for scanning the triangle are calculated in advance. In this calculation, the quantities of change of the edge-line coordinates of the triangle, the texture coordinates and the like are mainly calculated. These quantities of change are calculated by the above-mentioned triangle rasterization without using division.
In the scanning of the triangle, simple operations such as addition and a bit shift are repeated, and pixel values are written in the virtual frame buffer.
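The add-and-shift scanning can be sketched as follows (hypothetical names; λ=2^16 is taken to match the SFT=16 of the later polygon-imaging listing; the setup division shown here is the very division the patent goes on to replace with the reciprocal table):

```java
// Sketch of scanning one triangle edge with addition and arithmetic shift
// only inside the loop. x is kept as x' = lambda * x, lambda = 2^16.
public class EdgeScan {
    static final int SFT = 16;

    // Return the x pixel coordinate on each scanline of the edge
    // (x1,y1)-(x2,y2), with y1 < y2; no division inside the loop.
    public static int[] scanX(int x1, int y1, int x2, int y2) {
        // dx' = lambda * (x2-x1)/(y2-y1): setup only; the patent replaces
        // even this division with a table lookup and a multiplication.
        int dxFixed = ((x2 - x1) << SFT) / (y2 - y1);
        int[] xs = new int[y2 - y1];
        int xFixed = x1 << SFT;
        for (int i = 0; i < xs.length; i++) {
            xs[i] = xFixed >> SFT;  // divide by lambda via arithmetic shift
            xFixed += dxFixed;      // one addition per scanline
        }
        return xs;
    }
}
```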
With regard to a particular calculation method, in case the quantity of change of the coordinate x along a straight line is obtained, it is assumed that the coordinates of a point P_{1} on a two-dimensional plane are (x_{1},y_{1}), and the coordinates of a point P_{2} are (x_{2},y_{2}). When y_{1}≠y_{2}, the quantity of change of the coordinate x with respect to the coordinate y on the straight line that links P_{1} with P_{2} is, as mentioned above, as follows:
Δx=(x _{2} −x _{1})/(y _{2} −y _{1});
Assuming Δx′=λΔx, in case of obtaining Δx′ by means of the numerical operation like C language, it becomes as follows:
Δx′=λ*(x _{2} −x _{1})/(y _{2} −y _{1});
And, given
μ=λ/(y _{2} −y _{1});
it becomes a form of multiplication as follows:
Δx′=μ*(x _{2} −x _{1});
In case of conducting the calculation by means of integers only, if λ is sufficiently large and y_{2}−y_{1} is small to some extent, the relative error due to round-off becomes small.
Accordingly, assuming that λ is a constant, if the range of the y coordinates is restricted, it is possible to obtain μ from a small array.
Then, the values of μ, indexed by y_{2}−y_{1}, are stored in the reciprocal table 5 of the memory section 3 in advance. The μ corresponding to y_{2}−y_{1} is read from the reciprocal table 5, and by multiplying this μ by (x_{2}−x_{1}), Δx′ is calculated.
And, to conduct the processing at a higher speed, the necessary calculation is conducted with Δx′, and the result is divided by λ when a pixel value is written in the virtual frame buffer.
Also, at the step of this division, if λ is taken to be 2^{n} (n≧1), Δx can be calculated by means of an arithmetic right shift, and the quantity of change of the x coordinates can be obtained without using division.
In addition, the same processing can be also applied to the texture coordinates.
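As a worked sketch of this table-based gradient (hypothetical names; a tiny stand-in table is built on the spot here, whereas the patent stores it in the reciprocal table 5 in advance):

```java
// Division-free gradient: instead of dx' = lambda*(x2-x1)/(y2-y1), read
// mu = lambda/(y2-y1) from a table and multiply. lambda = 65536 = 2^16,
// matching the polygon-imaging listing.
public class MuGradient {
    static final int SFT = 16;
    static final int LAMBDA = 1 << SFT;

    // Tiny stand-in for the reciprocal table: mu indexed by (y2 - y1).
    static final int[] MU = new int[129];
    static {
        for (int d = 1; d < MU.length; d++) MU[d] = LAMBDA / d;
    }

    // dx' = mu * (x2 - x1): one table read and one multiplication.
    public static int deltaXFixed(int x1, int y1, int x2, int y2) {
        return MU[y2 - y1] * (x2 - x1);
    }
}
```

The final Δx is then recovered with an arithmetic right shift by n, exactly as the description states.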
Next, as an actual particular example of the present invention, a program executed by the control section 3, described in the Java language, will be explained.
First, a program for the geometry calculation by means of an integer will be described below.
Described below is an example of the transformation parameters, namely the matrix A′ and the matrix (t_{1},t_{2},t_{3})^{t}, which are stored in the memory.
/** Geometry calculation by means of an integer  
*  
* λ = 4096 is assumed  
*/  
public class Atrans3i {  
public int m00;  // corresponding to a′_{11}  
public int m01;  // corresponding to a′_{12}  
public int m02;  // corresponding to a′_{13}  
public int m03;  // corresponding to t′_{1}  
public int m10;  // corresponding to a′_{21}  
public int m11;  // corresponding to a′_{22}  
public int m12;  // corresponding to a′_{23}  
public int m13;  // corresponding to t′_{2}  
public int m20;  // corresponding to a′_{31}  
public int m21;  // corresponding to a′_{32}  
public int m22;  // corresponding to a′_{33}  
public int m23;  // corresponding to t′_{3}  
Here, as the comments indicate, public int m00 to public int m23 correspond to the respective elements of the matrix A′ for the affine transformation and of the matrix (t_{1},t_{2},t_{3})^{t}. Also, it is assumed that λ=4096.
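As an aside, the expression (v+2048)>>12 that recurs throughout the listing divides a λ-scaled value by λ=4096 with rounding to the nearest integer rather than truncation, since 2048=λ/2 is added before the shift. A minimal illustration (hypothetical name):

```java
// (v + 2048) >> 12 divides by lambda = 4096 with round-to-nearest
// (ties rounded upward); the arithmetic shift also handles negative v.
public class RoundShift {
    public static int divByLambda(int v) {
        return (v + 2048) >> 12;
    }
}
```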
Next, a coordinate transformation will be shown below.
/** Coordinate transformation  
*  
* @param src Point of a transformation source  
* @param dst Point of a transformation result  
*/  
public void transPoint(Vec3i src, Vec3i dst)
{
    int x = ((m00 * src.x + m01 * src.y + m02 * src.z + 2048) >> 12) + m03;
    int y = ((m10 * src.x + m11 * src.y + m12 * src.z + 2048) >> 12) + m13;
    int z = ((m20 * src.x + m21 * src.y + m22 * src.z + 2048) >> 12) + m23;
    dst.x = x; dst.y = y; dst.z = z;
    return;
}
Here, @param src corresponds to the coordinates (x,y,z)^{t} shown in the above-described equation 2, and @param dst corresponds to the coordinates (x′,y′,z′)^{t} shown in the above-described equation 2.
Next, the synthesis of the affine transformation will be shown below.
/** Synthesis of affine transformation
 *
 * @param t1 Affine transformation of left side
 * @param t2 Affine transformation of right side
 */
public void multiply( Atrans3i t1, Atrans3i t2 ) {
    int a00 = (t1.m00 * t2.m00 + t1.m01 * t2.m10 + t1.m02 * t2.m20 + 2048) >> 12;
    int a01 = (t1.m00 * t2.m01 + t1.m01 * t2.m11 + t1.m02 * t2.m21 + 2048) >> 12;
    int a02 = (t1.m00 * t2.m02 + t1.m01 * t2.m12 + t1.m02 * t2.m22 + 2048) >> 12;
    int a03 = ((t1.m00 * t2.m03 + t1.m01 * t2.m13 + t1.m02 * t2.m23 + 2048) >> 12) + t1.m03;
    int a10 = (t1.m10 * t2.m00 + t1.m11 * t2.m10 + t1.m12 * t2.m20 + 2048) >> 12;
    int a11 = (t1.m10 * t2.m01 + t1.m11 * t2.m11 + t1.m12 * t2.m21 + 2048) >> 12;
    int a12 = (t1.m10 * t2.m02 + t1.m11 * t2.m12 + t1.m12 * t2.m22 + 2048) >> 12;
    int a13 = ((t1.m10 * t2.m03 + t1.m11 * t2.m13 + t1.m12 * t2.m23 + 2048) >> 12) + t1.m13;
    int a20 = (t1.m20 * t2.m00 + t1.m21 * t2.m10 + t1.m22 * t2.m20 + 2048) >> 12;
    int a21 = (t1.m20 * t2.m01 + t1.m21 * t2.m11 + t1.m22 * t2.m21 + 2048) >> 12;
    int a22 = (t1.m20 * t2.m02 + t1.m21 * t2.m12 + t1.m22 * t2.m22 + 2048) >> 12;
    int a23 = ((t1.m20 * t2.m03 + t1.m21 * t2.m13 + t1.m22 * t2.m23 + 2048) >> 12) + t1.m23;
    m00 = a00; m01 = a01; m02 = a02; m03 = a03;
    m10 = a10; m11 = a11; m12 = a12; m13 = a13;
    m20 = a20; m21 = a21; m22 = a22; m23 = a23;
    return;
}
}
Next, the polygon imaging will be shown.  
/** Polygon imaging  
*  
* λ = 65536 is assumed  
*/  
public class Polydraw {  
/** Polygon vertex information */  
static final class Vertex {  
int x;  // Pixel X coordinates  
int y;  // Pixel Y coordinates  
int u;  // Texel U coordinates  
int v;  // Texel V coordinates  
}  
// Internal constant  
private final static int SFT = 16;  
private final static int TEXHMASK = 0x7F0000;  
private final static int TEXWMASK = 0x7F;  
private final static int TEXPSHIFT = 0x09;  
// Table of μ−1  
private static final short _inverse_tbl[] = {
    0x0000, (short)0xffff, 0x7fff, 0x5554, 0x3fff, 0x3332, 0x2aa9, 0x2491,
    0x1fff, 0x1c70, 0x1998, 0x1744, 0x1554, 0x13b0, 0x1248, 0x1110,
    0x0fff, 0x0f0e, 0x0e37, 0x0d78, 0x0ccb, 0x0c2f, 0x0ba1, 0x0b20,
    0x0aa9, 0x0a3c, 0x09d7, 0x097a, 0x0923, 0x08d2, 0x0887, 0x0841,
    0x07ff, 0x07c0, 0x0786, 0x074f, 0x071b, 0x06ea, 0x06bb, 0x068f,
    0x0665, 0x063d, 0x0617, 0x05f3, 0x05d0, 0x05af, 0x058f, 0x0571,
    0x0554, 0x0538, 0x051d, 0x0504, 0x04eb, 0x04d3, 0x04bc, 0x04a6,
    0x0491, 0x047c, 0x0468, 0x0455, 0x0443, 0x0431, 0x0420, 0x040f,
    0x03ff, 0x03ef, 0x03df, 0x03d1, 0x03c2, 0x03b4, 0x03a7, 0x039a,
    0x038d, 0x0380, 0x0374, 0x0368, 0x035d, 0x0352, 0x0347, 0x033c,
    0x0332, 0x0328, 0x031e, 0x0314, 0x030b, 0x0302, 0x02f9, 0x02f0,
    0x02e7, 0x02df, 0x02d7, 0x02cf, 0x02c7, 0x02bf, 0x02b8, 0x02b0,
    0x02a9, 0x02a2, 0x029b, 0x0294, 0x028e, 0x0287, 0x0281, 0x027b,
    0x0275, 0x026f, 0x0269, 0x0263, 0x025d, 0x0258, 0x0252, 0x024d,
    0x0248, 0x0242, 0x023d, 0x0238, 0x0233, 0x022f, 0x022a, 0x0225,
    0x0221, 0x021c, 0x0218, 0x0213, 0x020f, 0x020b, 0x0207, 0x0203
};
In addition, although the above-described table corresponds to the above-mentioned reciprocal table 5 of the values of μ, the value μ−1 is stored from the viewpoint of saving memory storage capacity. The entries are arranged in order of the magnitude of y_{2}−y_{1}.
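The storage trick can be illustrated as follows (hypothetical names): with λ=65536, the μ for denominator 1 is 65536, which does not fit in a 16-bit short, whereas μ−1=65535 (0xffff) does; the reader adds 1 back, just as inverse16( ) does below:

```java
// Why the table stores mu - 1: it keeps every entry within 16 bits.
public class MuTable {
    static final int LAMBDA = 1 << 16;

    // Build one table entry for denom = y2 - y1 (1..128).
    public static short entry(int denom) {
        return (short) (LAMBDA / denom - 1);  // mu - 1, fits in a short
    }

    // Recover mu from the table, as inverse16() does: mask, then add 1.
    public static int mu(short[] tbl, int denom) {
        return (tbl[denom] & 0xffff) + 1;
    }
}
```

Computing entry(3) reproduces the listing's 0x5554, confirming the table's construction rule.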
// Virtual frame buffer information
private byte scr_image[]; // Top address of virtual frame buffer
private int scr_width;    // Lateral pixel number of frame buffer
private int scr_height;   // Vertical pixel number of frame buffer
private int scr_pitch;    // Scan line stride
private int scr_offset;   // Address offset
// Texture image information
private byte tex_image[]; // Top address of texel data
private Texture texture;  // Texture
private int tex_width;    // Width of texture
private int tex_height;   // Height of texture
// Clip information
private int clip_left;    // Left clip position
private int clip_top;     // Upper clip position
private int clip_right;   // Right clip position
private int clip_bottom;  // Lower clip position
/** Imaging of triangle */
void drawTriangle( Polydraw.Vertex v0, Polydraw.Vertex v1, Polydraw.Vertex v2 ) {
    /* Clipping (code omitted) */
    boolean inside;
    /* Sort three points (code omitted) */
    int top_x, top_y, top_u, top_v;
    int mid_x, mid_y, mid_u, mid_v;
    int bot_x, bot_y, bot_u, bot_v;
    /* Three points on a line */
    if ( top_y == bot_y )
        return;
    /* Scan start point */
    int scan_scr_y = top_y;
    int pixel = scan_scr_y * scr_pitch + scr_offset;
    int dist_scr_x, dist_scr_y;
    int tb_scr_x, tm_scr_x, mb_scr_x;
    int tb_scr_dx, tm_scr_dx, mb_scr_dx;
    int dist_tex_u, dist_tex_v;
    int tb_tex_u, tb_tex_v, tm_tex_u, tm_tex_v, mb_tex_u, mb_tex_v;
    int tb_tex_du, tb_tex_dv, tm_tex_du, tm_tex_dv, mb_tex_du, mb_tex_dv;
    /* top-bot */
    dist_scr_x = bot_x - top_x;
    dist_scr_y = inverse16( bot_y - top_y );
    tb_scr_dx = dist_scr_x * dist_scr_y; // X gradient of top-bot (how to obtain Δx′)
    tb_scr_x = top_x << SFT; // X on top-bot
    /* top-bot of texture */
    dist_tex_u = bot_u - top_u;
    dist_tex_v = bot_v - top_v;
    tb_tex_du = dist_tex_u * dist_scr_y; // (how to obtain Δx′)
    tb_tex_dv = dist_tex_v * dist_scr_y; // (how to obtain Δx′)
    tb_tex_u = top_u << SFT; // U on top-bot
    tb_tex_v = top_v << SFT; // V on top-bot
    /* top-mid */
    dist_scr_x = mid_x - top_x;
    dist_scr_y = mid_y - top_y;
    dist_tex_u = mid_u - top_u;
    dist_tex_v = mid_v - top_v;
    /* dx, dy top-bot for texture scan -> mid */
    int scan_tex_du, scan_tex_dv;
    int width = dist_scr_x - ((tb_scr_dx * dist_scr_y) >> SFT);
    if ( width != 0 ) {
        int inv_width = inverse16( width );
        scan_tex_du = (dist_tex_u - ((tb_tex_du * dist_scr_y) >> SFT)) * inv_width; // horizontal scan U gradient (how to obtain Δx′)
        scan_tex_dv = (dist_tex_v - ((tb_tex_dv * dist_scr_y) >> SFT)) * inv_width; // horizontal scan V gradient (how to obtain Δx′)
    }
    else
        return;
    /* top-mid */
    int scr_end, scr_dd;
    if ( dist_scr_y > 0 ) {
        /* There is an upper triangle */
        dist_scr_y = inverse16( dist_scr_y );
        tm_scr_dx = dist_scr_x * dist_scr_y; // X gradient of top-mid (how to obtain Δx′)
        tm_scr_x = tb_scr_x; // X on top-mid
        /* top-mid of texture */
        tm_tex_du = dist_tex_u * dist_scr_y; // U gradient of top-mid (how to obtain Δx′)
        tm_tex_dv = dist_tex_v * dist_scr_y; // V gradient of top-mid (how to obtain Δx′)
        tm_tex_u = tb_tex_u; // U on top-mid
        tm_tex_v = tb_tex_v; // V on top-mid
        if ( width > 0 ) { // top-bot is left
            if ( inside ) {
                while ( scan_scr_y < mid_y ) {
                    int p1 = (tb_scr_x >> SFT) + pixel;
                    int p2 = (tm_scr_x >> SFT) + pixel;
                    int tpx = tb_tex_u;
                    int tpy = tb_tex_v;
                    while ( p1 < p2 ) {
                        int tidx = ((tpy & TEXHMASK) >>> TEXPSHIFT) + ((tpx >>> SFT) & TEXWMASK);
                        scr_image[p1] = tex_image[tidx]; // Write one pixel
                        tpx += scan_tex_du;
                        tpy += scan_tex_dv;
                        p1++;
                    }
                    scan_scr_y++;
                    pixel += scr_pitch;
                    tb_scr_x += tb_scr_dx;
                    tm_scr_x += tm_scr_dx;
                    tb_tex_u += tb_tex_du;
                    tb_tex_v += tb_tex_dv;
                }
            }
            else { /* Code omitted */ }
        }
        else { /* top-mid is left (code omitted) */ }
    }
    /* Bottom is horizontal */
    if ( mid_y == bot_y )
        return;
    /* mid-bot */
    dist_scr_x = bot_x - mid_x;
    dist_scr_y = inverse16( bot_y - mid_y );
    mb_scr_dx = dist_scr_x * dist_scr_y; // X gradient of mid-bot
    mb_scr_x = mid_x << SFT; // X on mid-bot
    if ( width > 0 ) { // top-bot is left
        if ( inside ) {
            while ( scan_scr_y < bot_y ) {
                int p1 = (tb_scr_x >> SFT) + pixel;
                int p2 = (mb_scr_x >> SFT) + pixel;
                int tpx = tb_tex_u;
                int tpy = tb_tex_v;
                while ( p1 < p2 ) {
                    int tidx = ((tpy & TEXHMASK) >>> TEXPSHIFT) + ((tpx >>> SFT) & TEXWMASK);
                    scr_image[p1] = tex_image[tidx]; // Write one pixel
                    tpx += scan_tex_du;
                    tpy += scan_tex_dv;
                    p1++;
                }
                scan_scr_y++;
                pixel += scr_pitch;
                tb_scr_x += tb_scr_dx;
                mb_scr_x += mb_scr_dx;
                tb_tex_u += tb_tex_du;
                tb_tex_v += tb_tex_dv;
            }
        }
        else { /* Code omitted */ }
    }
    else { /* mid-bot is left (code omitted) */ }
    return;
}
/** Calculate μ
 *
 * @param num denominator (-32767..-1, 1..32767)
 * @return μ=λ/num
 */
private static int inverse16( int num ) {
    boolean posi_flg = (num >= 0);
    int denom = posi_flg ? num : -num;
    if ( denom < 128 ) {
        int val = (_inverse_tbl[denom] & 0xffff) + 1; // stored value is μ−1
        return posi_flg ? val : -val;
    }
    // Shift-subtract division: one quotient bit per step
    int x = 32768 * 128;
    int y = denom << 15;
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    { int s = x - y; x = (s >= 0) ? (s << 1) + 1 : x << 1; }
    int r = (x >> 15);
    x <<= 1;
    if ( denom <= r )
        x++;
    x &= 0xffff;
    return posi_flg ? x : -x;
}
}
The above is one example of a program for practicing the present invention.
In the present invention, since the calculation processing for the imaging of a three-dimensional image can be conducted entirely within the range of integers, three-dimensional images can be handled even when no FPU (floating-point processing unit) is provided, as in a mobile telephone or a PDA (Personal Digital (Data) Assistant).
Also, since division, which has a high processing load, is not conducted in the calculation processing, smooth imaging processing can be conducted even in an information processing apparatus with a low-throughput CPU.
Claims (32)
Priority Applications (3)
Application Number  Priority Date  Filing Date  Title 

JP2001187619  2001-06-21
PCT/JP2002/006159 WO2003001458A1 (en)  2001-06-21  2002-06-20  Information processor
Publications (2)
Publication Number  Publication Date 

US20030185460A1 US20030185460A1 (en)  2003-10-02
US7030880B2 true US7030880B2 (en)  2006-04-18
Family
ID=19026886
Family Applications (2)
Application Number  Title  Priority Date  Filing Date 

US10/296,811 Active 2023-07-28 US6970178B2 (en)  2001-06-21  2002-06-20  Information processing apparatus
US10/344,452 Active 2024-02-21 US7030880B2 (en)  2001-06-21  2002-06-20  Information processor
Family Applications Before (1)
Application Number  Title  Priority Date  Filing Date 

US10/296,811 Active 2023-07-28 US6970178B2 (en)  2001-06-21  2002-06-20  Information processing apparatus
Country Status (8)
Country  Link 

US (2)  US6970178B2 (en) 
EP (2)  EP1406214A4 (en) 
JP (2)  JPWO2003001458A1 (en) 
KR (2)  KR100924250B1 (en) 
CN (2)  CN1465036A (en) 
IL (4)  IL154451D0 (en) 
TW (2)  TWI257795B (en) 
WO (2)  WO2003001457A1 (en) 
Family Cites Families (13)
Publication number  Priority date  Publication date  Assignee  Title 

US4760548A (en) *  19860613  19880726  International Business Machines Corporation  Method and apparatus for producing a curve image 
US5028848A (en) *  19880627  19910702  HewlettPackard Company  Tile vector to raster conversion method 
US5715385A (en) *  19920710  19980203  Lsi Logic Corporation  Apparatus for 2D affine transformation of images 
US5581665A (en) *  19921027  19961203  Matsushita Electric Industrial Co., Ltd.  Threedimensional object movement and transformation processing apparatus for performing movement and transformation of an object in a threediamensional space 
WO1996003717A1 (en) *  19940722  19960208  Apple Computer, Inc.  Method and system for the placement of texture on threedimensional objects 
KR0170934B1 (en) *  19941229  19990320  배순훈  Highspeed affine transformation apparatus in the fractal encoding 
AUPP091197A0 (en) *  19971215  19980108  Liguori, Vincenzo  Direct manipulation of compressed geometry 
US6215915B1 (en) *  19980220  20010410  Cognex Corporation  Image processing methods and apparatus for separable, general affine transformation of an image 
US6389154B1 (en) *  19980715  20020514  Silicon Graphics, Inc.  Exact evaluation of subdivision surfaces generalizing box splines at arbitrary parameter values 
US6483514B1 (en) *  19990415  20021119  Pixar Animation Studios  Motion blurring implicit surfaces 
JP4244444B2 (en)  19990528  20090325  ソニー株式会社  Data processing apparatus, the dividing circuit and an image processing device 
GB2359884B (en) *  19991125  20040630  Canon Kk  Image processing method and apparatus 
JP2002008060A (en) *  20000623  20020111  Hitachi Ltd  Data processing method, recording medium and data processing device 

2002
 20020620 EP EP02743661A patent/EP1406214A4/en not_active Withdrawn
 20020620 JP JP2003507765A patent/JPWO2003001458A1/en active Granted
 20020620 IL IL15445102A patent/IL154451D0/en unknown
 20020620 CN CN 02802446 patent/CN1465036A/en not_active Application Discontinuation
 20020620 WO PCT/JP2002/006157 patent/WO2003001457A1/en active Application Filing
 20020620 US US10/296,811 patent/US6970178B2/en active Active
 20020620 IL IL15445002A patent/IL154450D0/en active IP Right Grant
 20020620 EP EP02743660A patent/EP1406213A4/en not_active Withdrawn
 20020620 KR KR1020037002479A patent/KR100924250B1/en active IP Right Grant
 20020620 JP JP2003507764A patent/JP4046233B2/en active Active
 20020620 KR KR1020037002480A patent/KR20030043935A/en not_active Application Discontinuation
 20020620 CN CN 02802444 patent/CN1465035A/en not_active Application Discontinuation
 20020620 US US10/344,452 patent/US7030880B2/en active Active
 20020620 WO PCT/JP2002/006159 patent/WO2003001458A1/en active Application Filing
 20020621 TW TW91113609A patent/TWI257795B/en active
 20020621 TW TW91113608A patent/TWI239488B/en active

2003
 20030213 IL IL15445003A patent/IL154450A/en unknown
 20030213 IL IL15445103A patent/IL154451A/en unknown
Patent Citations (5)
Publication number  Priority date  Publication date  Assignee  Title 

JPH0677842A (en)  19920828  19940318  Mitsubishi Electric Corp  Quantizer 
JPH0778269A (en)  19930630  19950320  Nec Corp  Threedimensional plotting device 
JPH0799578A (en)  19930928  19950411  Nec Corp  Picture processing device 
US5748793A (en)  19930928  19980505  Nec Corporation  Quick image processor of reduced circuit scale with high image quality and high efficiency 
JPH09305789A (en)  19960521  19971128  Hitachi Ltd  Arithmetic method and graphics display device 
US8270964B1 (en)  20040323  20120918  Iwao Fujisaki  Communication device 
US7202877B2 (en) *  20040521  20070410  Texas Instruments Incorporated  Sprite rendering 
US20050259107A1 (en) *  20040521  20051124  Thomas Olson  Sprite rendering 
US8433364B1 (en)  20050408  20130430  Iwao Fujisaki  Communication device 
US10244206B1 (en)  20050408  20190326  Iwao Fujisaki  Communication device 
US9143723B1 (en)  20050408  20150922  Iwao Fujisaki  Communication device 
US8208954B1 (en)  20050408  20120626  Iwao Fujisaki  Communication device 
US9549150B1 (en)  20050408  20170117  Iwao Fujisaki  Communication device 
US9948890B1 (en)  20050408  20180417  Iwao Fujisaki  Communication device 
US20070176914A1 (en) *  20060127  20070802  Samsung Electronics Co., Ltd.  Apparatus, method and medium displaying image according to position of user 
US20080007543A1 (en) *  20060706  20080110  Tyco Electronics Corporation  Autogain switching module for acoustic touch systems 
US20080166046A1 (en) *  20070104  20080710  Dipesh Koirala  Efficient fixedpoint realtime thresholding for signal processing 
US7936921B2 (en) *  20070104  20110503  Freescale Semiconductor, Inc.  Efficient fixedpoint realtime thresholding for signal processing 
US9396594B1 (en)  20070503  20160719  Iwao Fujisaki  Communication device 
US8825090B1 (en)  20070503  20140902  Iwao Fujisaki  Communication device 
US9185657B1 (en)  20070503  20151110  Iwao Fujisaki  Communication device 
US8825026B1 (en)  20070503  20140902  Iwao Fujisaki  Communication device 
US9092917B1 (en)  20070503  20150728  Iwao Fujisaki  Communication device 
US9596334B1 (en)  20070824  20170314  Iwao Fujisaki  Communication device 
US9232369B1 (en)  20070824  20160105  Iwao Fujisaki  Communication device 
US10148803B2 (en)  20070824  20181204  Iwao Fujisaki  Communication device 
US8676273B1 (en)  20070824  20140318  Iwao Fujisaki  Communication device 
US9082115B1 (en)  20071026  20150714  Iwao Fujisaki  Communication device 
US8639214B1 (en)  20071026  20140128  Iwao Fujisaki  Communication device 
US8676705B1 (en)  20071026  20140318  Iwao Fujisaki  Communication device 
US9094775B1 (en)  20071029  20150728  Iwao Fujisaki  Communication device 
US8472935B1 (en)  20071029  20130625  Iwao Fujisaki  Communication device 
US8755838B1 (en)  20071029  20140617  Iwao Fujisaki  Communication device 
US9139089B1 (en)  20071227  20150922  Iwao Fujisaki  Intervehicle middle point maintaining implementer 
US8543157B1 (en)  20080509  20130924  Iwao Fujisaki  Communication device which notifies its pinpoint location or geographic area in accordance with user selection 
US9060246B1 (en)  20080630  20150616  Iwao Fujisaki  Communication device 
US9241060B1 (en)  20080630  20160119  Iwao Fujisaki  Communication device 
US8340726B1 (en)  20080630  20121225  Iwao Fujisaki  Communication device 
US10175846B1 (en)  20080630  20190108  Iwao Fujisaki  Communication device 
US8452307B1 (en)  20080702  20130528  Iwao Fujisaki  Communication device 
US9049556B1 (en)  20080702  20150602  Iwao Fujisaki  Communication device 
US9326267B1 (en)  20080702  20160426  Iwao Fujisaki  Communication device 
Also Published As
Publication number  Publication date 

JP4046233B2 (en)  20080213 
TWI239488B (en)  20050911 
US20030185460A1 (en)  20031002 
WO2003001458A1 (en)  20030103 
EP1406213A1 (en)  20040407 
US6970178B2 (en)  20051129 
IL154451A (en)  20090720 
KR20030045036A (en)  20030609 
CN1465035A (en)  20031231 
JPWO2003001457A1 (en)  20041014 
US20030184543A1 (en)  20031002 
KR100924250B1 (en)  20091030 
TWI257795B (en)  20060701 
EP1406214A1 (en)  20040407 
JPWO2003001458A1 (en)  20041014 
CN1465036A (en)  20031231 
WO2003001457A1 (en)  20030103 
EP1406214A4 (en)  20090114 
IL154451D0 (en)  20030917 
EP1406213A4 (en)  20090114 
IL154450A (en)  20081126 
IL154450D0 (en)  20030917 
KR20030043935A (en)  20030602 
Similar Documents
Publication  Publication Date  Title 

Sproull et al.  A clipping divider  
Loop et al.  Resolution independent curve rendering using programmable graphics hardware  
US6577305B1 (en)  Apparatus and method for performing setup operations in a 3D graphics pipeline using unified primitive descriptors  
US5222205A (en)  Method for generating addresses to textured graphics primitives stored in rip maps  
US6437780B1 (en)  Method for determining tiles in a computer display that are covered by a graphics primitive  
US5043922A (en)  Graphics system shadow generation using a depth buffer  
US7362328B2 (en)  Interface and method of interfacing between a parametric modelling unit and a polygon based rendering system  
US6181352B1 (en)  Graphics pipeline selectively providing multiple pixels or multiple textures  
US7006110B2 (en)  Determining a coverage mask for a pixel  
US5973705A (en)  Geometry pipeline implemented on a SIMD machine  
US4674058A (en)  Method and apparatus for flexigon representation of a two dimensional figure  
EP0314335B1 (en)  A parallel surface processing system for graphics display  
Kaufman et al.  3D scanconversion algorithms for voxelbased graphics  
US6285779B1 (en)  Floatingpoint complementary depth buffer  
US6052126A (en)  Parallel processing threedimensional drawing apparatus for simultaneously mapping a plurality of texture patterns  
US5469222A (en)  Nonlinear pixel interpolator function for video and graphic processing  
EP1125253B1 (en)  Shading 3dimensional computer generated images  
US6333747B1 (en)  Image synthesizing system with texture mapping  
US6762756B2 (en)  Graphics processor, system and method for generating screen pixels in raster order utilizing a single interpolator  
US6741247B1 (en)  Shading 3dimensional computer generated images  
US20040090437A1 (en)  Curved surface image processing apparatus and curved surface image processing method  
US5946000A (en)  Memory construct using a LIFO stack and a FIFO queue  
EP0430501A2 (en)  System and method for drawing antialiased polygons  
US20030201994A1 (en)  Pixel engine  
US5278948A (en)  Parametric surface evaluation method and apparatus for a computer graphics display system 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: HI CORPORATION, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TANIOKA, YASUHISA;TSUTSUMI, JUNYA;KAWABATA, KAZUO;AND OTHERS;REEL/FRAME:014139/0284
Effective date: 20030127

STCF  Information on status: patent grant 
Free format text: PATENTED CASE 

FPAY  Fee payment 
Year of fee payment: 4 

FPAY  Fee payment 
Year of fee payment: 8 

MAFP  Maintenance fee payment 
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)
Year of fee payment: 12