GB2287387A - Texture mapping - Google Patents
- Publication number
- GB2287387A GB2287387A GB9504140A GB9504140A GB2287387A GB 2287387 A GB2287387 A GB 2287387A GB 9504140 A GB9504140 A GB 9504140A GB 9504140 A GB9504140 A GB 9504140A GB 2287387 A GB2287387 A GB 2287387A
- Authority
- GB
- United Kingdom
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
A computer '3D' graphics system comprises a processor operable by an image processing program, memory associated with said processor and a visual display unit, wherein said program generates an image in the form of a multi-faceted polygon 3 displayed in model space by the visual display unit and texture held in the form of a bit map in said memory is mapped on to each facet of the polygon by said program, and is characterised in that: said program provides for a virtual texture surface 4 disposed in model space forwardly of said polygon 3; said program 'projects' information from said texture bit map on to said texture surface 4, and said program subsequently drapes said projected information from said texture surface on to the individual facets 1, 2 of the polygon 3 such that projected information is draped coherently across contacting edges 6 of adjacent facets 1, 2 and sharp transitions of shading at said edges are removed (compare Figs 5 and 7).
Description
"Texture Mappina" This invention relates to texture mapping generally and more particularly, but not exclusively, to the draping of an image of a human face 'on to' a multi-faceted planar polygon model moveable in 3D virtual model space of a virtual reality computer system.
In modern computing the illusion of 3D images in computer generated graphics is created by displaying a perspective image of an object on a generally flat video screen. Movement of such objects, i.e. translation or rotation, depends on the sequential display of frames showing appropriate perspective images as 'viewed' from different angles or positions. Ideally each graphics frame would be akin to the photographic image used in films. However, this consumes so much computing power that overnight processing is typically required to produce even short portions of video action, such as those used in television presentations and in modern film special effects. Producing imagery in this fashion may be suitable if the images are then stored on video-tape or CD-ROM for replaying in real time.
However, in virtual reality applications it is essential that graphics are generated in real time for instantaneous viewing or projection. In other 3D graphics environments, such as CAD/CAM or televisual presentation graphics, it would also be advantageous for images to be generated in real time. Consequently, it should be appreciated that whilst the following description is principally concerned with virtual reality software applications, the invention as herein disclosed is not limited to such applications and may be utilised in other computer graphics environments.
To conserve computing power in real time graphics, whilst enabling the screen refresh time to be within operational requirements, virtual images of objects of any form can be built up from a number of discrete tessellating planar polygons. It will be readily understood that it is far easier to calculate the movement and transformation in shape of a fixed number of planar polygons than it would be to calculate the movement of each and every individual pixel.
If all the planar polygons of a given image were of the same colour it would be possible to discern only the outline or periphery of the image. If the image were subject to transformation the outline's shape would clearly change, and it would appear as though one were observing an ever-changing hole of a given colour.
To make inside edges visible it is necessary to vary the hue or colour adjacent to those edges. This is achieved by the well known technique of texturing, wherein a bitmap, or a section thereof, relating to each polygon is stored within computer memory and is referenced to determine what colour and intensity should be displayed at any given pixel location of the transformed polygon. This basic technique can be extended to apply any desired pattern or image to the 'surface' of the planar polygon.
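By way of illustration only, the short sketch below (Python/NumPy, with a hypothetical mapping matrix, bitmap array and function name; the implementation actually used is the assembler of Appendix 1) shows the kind of per-pixel lookup this implies: an inverse mapping takes a screen pixel back to a location in the texture bitmap, and the texel found there supplies the colour.

```python
import numpy as np

def shade_pixel(xs, ys, inv_map, bitmap):
    """Colour one screen pixel from the texture bitmap (illustrative sketch).

    inv_map is assumed to be a 3x3 homogeneous mapping taking screen
    coordinates (xs, ys, 1) back into texture space; bitmap is an HxWx3
    array of texel colours.  Nearest-texel sampling, no filtering.
    """
    u, v, w = inv_map @ np.array([xs, ys, 1.0])
    u, v = u / w, v / w                      # homogeneous divide
    h, wid = bitmap.shape[:2]
    iu = int(np.clip(u, 0, wid - 1))         # clamp to the bitmap
    iv = int(np.clip(v, 0, h - 1))
    return bitmap[iv, iu]
```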
To create an image of a human-like head in virtual model space, a multi-faceted virtual model or 'dummy' is generated from a plurality of tessellating planar polygons.
Each polygon is chosen to represent the median plane of, or part of, an essential facial feature, e.g. temples, forehead, eyebrows, eye sockets, nose profile, etc. The greater the resolution desired, the more facets are required.
Hitherto in virtual reality applications facial images have appeared somewhat cartoon-like, detracting from the realism of the experience. This situation can be improved by 'draping' a video image of a real face on to the multi-faceted polygon virtual model. Using conventional texture mapping techniques each polygon is treated separately and a mapping function is generated by concatenating all transformations applied to polygon vertices which transform all the individual polygons from texture space (that represented by a bitmap of the video image of the face, for example) into a perspective projection of the polygon model in observer space on to the video screen or display. The resultant transformation matrix is then 'inverted' to map screen pixels into texture space, i.e. to relate each pixel on the screen to a position in the texture bitmap for each frame of the final displayed image. In effect the video facial image is itself treated as being in tessellated form, i.e. each tessellation relates to a respective one of the planar polygons of the virtual model. Disadvantageously, this technique results in unsightly edge effects and peculiar distortions of perspective.
It is an object of the invention to provide a technique for draping texture which enables the use of a texture bitmap which can be 'spread' over a modelled 'surface' comprising planar polygons arranged edge-to-edge whilst retaining the coherence of the video image represented by the bitmap across the edges to give the appearance of a smoothly contoured model.
According to the invention a computer '3D' graphics system comprises a processor operable by an image processing program, memory associated with said processor and a visual display unit, wherein said program generates an image in the form of a multi-faceted polygon displayed in model space by the visual display unit and texture held in the form of a bit map in said memory is mapped on to each facet of the polygon by said program, and is characterised in that: said program provides for a virtual texture surface disposed in model space forwardly of said polygon; said program 'projects' information from said texture bit map on to said texture surface, and said program subsequently drapes said projected information from said texture surface on to the individual facets of the polygon such that projected information is draped coherently across contacting edges of adjacent facets.
Preferably said texture surface is planar and/or, as said polygon is caused to rotate in model space, the texture surface is 'caused' to rotate so as to continue to lie generally ahead of said polygon, thereby enabling texture projected thereon to be draped on to visible facets of the polygon.
Typically, said program drapes said projected information orthogonally from said texture surface on to the individual facets of the polygon. The said system may provide a real time graphics system in which said image is updated in real time, and the image processing program may form a part of a virtual reality games software application.
Desirably, said image is draped with a video image or cartoon caricature of a human face and said video image may be provided by a video camera associated with apparatus of the virtual reality games application.
The invention will now be described, by way of example only, with reference to
Appendix 1 of this Description, which represents a software listing typically used in the implementation of the invention, Appendix 2, which is similar to Appendix 1 but annotated with descriptions of code function, and to the drawings, in which:
Figure 1 illustrates a screen image of a humanoid body moving in model space having a head on to which a video image of a human face has been draped using a method in accordance with the invention,
Figure 2 illustrates diagrammatically a number of vector axes relating to conventional texture mapping,
Figure 3 illustrates diagrammatically the relationship between the vector axes of Figure 2 in relation to an illustrated polygon facet when using a texture draping technique in accordance with the invention,
Figure 4 illustrates diagrammatically a multi-faceted planar polygon model in virtual model space and its relationship with a texture projection plane created in virtual space,
Figure 5 illustrates a front view of a multi-faceted polygon image of a computer caricature of a human head on to which texture has been mapped according to conventional techniques,
Figure 6 illustrates a side view of the head shown in Figure 5,
Figure 7 illustrates a front view of the polygon image of Figure 5 on to which texture has been orthogonally draped in accordance with the invention, and
Figure 8 illustrates a side view of the head shown in Figure 7 and similarly draped with texture.
Referring initially to Figure 4, unlike the aforedescribed standard mapping technique, in which texture is 'rotated' into the plane of each individual polygon of a virtual model, in accordance with the invention a draped texture mapping function is calculated by treating all related polygons 1, 2 (i.e. all those making up the 'image' of, say, the front view of a face) as a coherent group 3. Within the virtual model space (using mathematical trigonometric matrix techniques) a virtual texture surface or plane 4 is 'created' so as to lie generally ahead of the group 3, typically parallel to the median plane of the group 3. It will be appreciated that this plane 4 is not visible and is said to lie in texture space. A video image of, say, a human face is stored as a bitmap within computer memory and from this a complementary planar image/representation 5 (not shown in detail) is generated on the texture plane 4.
The texture image 5 is then orthogonally 'projected' on to the polygons 1, 2 of the group 3. This requires three parameters for each of the polygons 1, 2 to relate their respective positions relative to the texture plane 4. For example, this enables a position (a,b) on the texture plane 4 to be projected to fall at (x,y) on the polygon 1. It will be known from generation of the image 5 that position (a,b) corresponds to a position (u,v) in the bitmap for that image. Once all transformations of (x,y) and the resulting bitmap locations (u,v) are concatenated in the form of a matrix, it is possible to invert that matrix to provide a mapping function matrix [L] which relates each position (x,y) to the corresponding location (u,v) on the bitmap, and from this the image 5 can be draped or wrapped around the group 3.
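The 'concatenate then invert' step can be sketched briefly as follows (Python/NumPy, purely as an illustration under stated assumptions; the function names are hypothetical and the working implementation is the MC88110 assembler of Appendix 1). Anticipating the [L] matrix and screen range S introduced in the mathematical treatment later in this description, the forward projective map from texture coordinates (u,v) to screen coordinates (xs,ys) is held as a single 3x3 homogeneous matrix and inverted so that each screen pixel can be traced back to a bitmap location.

```python
import numpy as np

def screen_map(L, S):
    """Forward map, in homogeneous form, from texture (u, v) to screen (xs, ys).

    L is the 3x3 mapping matrix of the description (Ve = L.[u, v, 1]^T);
    its middle row is the 'depth' used for the perspective divide, so
    [xs*w, ys*w, w]^T = M.[u, v, 1]^T with M built as below.
    """
    return np.array([S * L[0],   # xs*w = S.(l00.u + l01.v + l02)
                     S * L[2],   # ys*w = S.(l20.u + l21.v + l22)
                     L[1]])      # w    =     l10.u + l11.v + l12

def screen_to_texture(M, xs, ys):
    """Inverted map: trace a screen pixel back to its texture bitmap location."""
    u, v, w = np.linalg.inv(M) @ np.array([xs, ys, 1.0])
    return u / w, v / w
```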
Appendix 1 represents a software listing of assembler code for a Motorola graphics chip, the MC88110. This code is used to perform mathematically, in real time, the orthogonal projection of, say, a texture image 5 on to each of the polygons 1, 2 of the group 3, and to generate the mapping matrix [L] to be used to calculate the position on the texture bitmap from which pixels of a screen (on which group 3 is displayed) are to be coloured.
The resultant draped image remains coherent across inter-polygon boundaries such as edge 6. Advantageously, because of the perspective of a video image, such as that represented by texture image 5, it is not necessary for the perspective of the resultant virtual image in model space to be wholly reliant on the perspective of the polygon group 3. Consequently, it is possible to reduce the number of polygons required to represent any given object relative to what was hitherto required with conventional mapping techniques. The means for accomplishing this are already well known. For example, it is usual with the image of, say, a face in model space that when it is perceived to be in the far distance on the screen a single polygon facet represents the face, whereas when it is in close-up and fills the screen a multitude of facets are required to give the image depth. Now, using this invention, fewer facets are required in close-up to provide the same illusion of substance, i.e. fewer facets are required for any desired resolution of the image.
It will be appreciated that reducing the number of polygons making up the screen image as a whole will similarly reduce the computing power required to regenerate the image each frame because transformation of a reduced number of discrete planar polygons needs to be calculated. Thus, not only is a visually more realistic image displayed, but also a technical improvement is present resulting in more efficient or economic operation of computer hardware.
As shown in Figure 1, in a preferred embodiment of the invention an image of a human face 20 is superimposed, i.e. wrapped, on to a virtual head 21 shown in model space. The remainder of the body 22 is representative of the angular images obtained by conventional texture mapping techniques. In this embodiment a minimum number of polygons makes up the face region of the head 21, whereas a conventional virtual image would require one or more polygons to represent each discernible feature of the face 20.
In a further aspect of the invention it is proposed that the video image of a human face be draped on to a model head 21 in real time. This would require that the head 21 be one of a number of cursors of predetermined shape and configuration each corresponding to the shape and profile of known human head forms. Video apparatus would be used to take a 'snapshot' of a user of the virtual reality experience and image processing means would be utilised to display in the model space a closely corresponding cursor on to which the 'snapshot' would be draped.
The image processing means would not only have to find the closest resembling cursor, but would also have to massage the 'snapshot' image so that it would realistically wrap around the chosen cursor, i.e. possibly peripheral portions of the 'snapshot' would not be displayed. It could be that the cursor, being multi-faceted, would permit interchangeability of various parts thereof, such as for example different nose profiles. This would permit a closer approximation to the user 'snapshot' to provide a better 'photo-fit' of the facial structure.
Additionally, hardware requirements of the invention to facilitate the aforedescribed further aspect would include a video camera and fixtures. The ambit of the invention is taken to include those elements of hardware which, from the above description, would obviously be required by the person skilled in the art at the priority date of this Application.
To understand better the implementation of the invention the following description is provided, which relates to the mathematics of conventional texture mapping and the texture draping technique of this invention. In well known texture mapping techniques texture as stored in a bitmap is considered to be defined as a texture map on a texture plane parallel to the plane of the display screen or projection plane. A mapping function is required to translate the texture from the texture plane on to texture applied to a polygon facet disposed in model space. The texture map is 'placed' in the same plane as the polygon facet. This new position is described by a rotation matrix which will rotate the texture map to a standard position in the world co-ordinates as seen in Figure 2, where u represents the x-axis, v the z-axis and the plane normal lies along the y-axis. The polygon plane is described by three orthogonal directions and a point at which the texture map origin is to be placed. Model space in which the polygon resides is defined by the x,y,z-axes illustrated in Figure 2(i).
Texture is considered to be an intensity function (I) mapped on to a u,v plane, where u corresponds to the screen x axis and v corresponds to the screen z axis, since the projection plane is the x,z plane in model space; where I = I(u,v).
On the polygon plane, i.e. that of polygon facet 100 shown in Figure 2(iii):
P is the texture origin in x,y,z coordinates;
T is the direction of the texture u axis in model space (unit vector);
B is the direction of the v axis in model space;
N is the direction of the polygon normal in model space, and
B=N x T.
Any point on the texture map Vt is given by:
   Vt = |u|
        |0|
        |v|

The following steps rotate the texture map about its origin into the plane of the polygon and place the texture on the point P in model space:

i)  Vr = [Ri].Vt

where

   [Ri] = |r00 r01 r02|   |1  0 0|
          |r10 r11 r12| x |0 -1 0|
          |r20 r21 r22|   |0  0 1|

and

   T = |r00|,  N = |r01|,  B = |r02|
       |r10|       |r11|       |r12|
       |r20|       |r21|       |r22|

ii) Vp = Vr + P
Now the texture is set up in model space on the plane of the polygon.
The polygon may be moved around in model space and rotated about some point not on the polygon plane. The texture origin must be moved to this point of rotation, the rotation applied, then the origin moved to the model space origin to keep the transformations the same for polygon and texture.
iii) Vc = Vp - C     Move to centre of rotation
iv)  Vs = [Rs].Vc    Rotate using model rotation matrix
v)   Ve = Vs - E     Move origin to eyepoint
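These steps can be read directly as a short sketch (Python/NumPy, illustrative only; the argument names simply mirror the symbols above and are not part of the original listing):

```python
import numpy as np

def texture_point_to_eye(u, v, Ri, P, C, Rs, E):
    """Apply steps i) to v): carry a texture-map point (u, v) into eye space.

    Ri rotates the texture map into the polygon plane, P is the texture
    origin on that plane, C the centre of rotation, Rs the model rotation
    matrix and E the eyepoint, all as defined in the description.
    """
    Vt = np.array([u, 0.0, v])   # point on the texture map
    Vr = Ri @ Vt                 # i)   rotate into the polygon plane
    Vp = Vr + P                  # ii)  place on the texture origin P
    Vc = Vp - C                  # iii) move to the centre of rotation
    Vs = Rs @ Vc                 # iv)  apply the model rotation
    Ve = Vs - E                  # v)   move the origin to the eyepoint
    return Ve
```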
To project the texture it is necessary to divide the x and z values by the y value, and scale up to the range (S) of the projection plane:

vi)  xs = S.Vex / Vey
     ys = S.Vez / Vey

where Vex, Vey and Vez are the x, y and z components of Ve. This defines the screen coordinates as xs and ys.
It is then required to get u and v in terms of xs and ys. If the equations are worked through from the beginning, substituting at each step the values calculated in the step before in terms of u and v, then terms can be collected at the end and the resulting expressions solved for u and v.
from i)   Vr = [Ri].Vt
from ii)  Vp = [Ri].Vt + P
from iii) Vc = [Ri].Vt + (P - C)
from iv)  Vs = [Rs].[Ri].Vt + [Rs].(P - C)
from v)   Ve = [Rs].[Ri].Vt + [Rs].(P - C) - E

So  Ve = [Rs].|i00 i01 i02|.|u|   [Rs].|px - cx|   |ex|
              |i10 i11 i12| |0| +      |py - cy| - |ey|
              |i20 i21 i22| |v|        |pz - cz|   |ez|

       = [Rs].|i00.u + i02.v|   [Rs].|px - cx|   |ex|
              |i10.u + i12.v| +      |py - cy| - |ey|
              |i20.u + i22.v|        |pz - cz|   |ez|

       = [Rs].|i00.u + i02.v + px - cx|   |ex|
              |i10.u + i12.v + py - cy| - |ey|
              |i20.u + i22.v + pz - cz|   |ez|

so that, for the x component,

   Vex = s00(i00.u + i02.v + px - cx) + s01(i10.u + i12.v + py - cy) + s02(i20.u + i22.v + pz - cz) - ex,  etc.

Collecting terms in u and v:

   Vex = (s00.i00 + s01.i10 + s02.i20).u + (s00.i02 + s01.i12 + s02.i22).v + s00(px - cx) + s01(py - cy) + s02(pz - cz) - ex,  etc.
This can be expressed in matrix form as follows:
Ve = [L].|u|
|v|
|1|
Where:

   l00 = s00.i00 + s01.i10 + s02.i20
   l01 = s00.i02 + s01.i12 + s02.i22
   l02 = s00(px - cx) + s01(py - cy) + s02(pz - cz) - ex
   l10 = s10.i00 + s11.i10 + s12.i20
   l11 = s10.i02 + s11.i12 + s12.i22
   l12 = s10(px - cx) + s11(py - cy) + s12(pz - cz) - ey
   l20 = s20.i00 + s21.i10 + s22.i20
   l21 = s20.i02 + s21.i12 + s22.i22
   l22 = s20(px - cx) + s21(py - cy) + s22(pz - cz) - ez

From equations vi), the screen coordinates can now be expressed:

   xs = S.(l00.u + l01.v + l02) / (l10.u + l11.v + l12)
   ys = S.(l20.u + l21.v + l22) / (l10.u + l11.v + l12)
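As a cross-check, the assembly of [L] and the projection of equations vi) can be written out in a few lines (Python/NumPy, an illustrative sketch only; the function names are hypothetical):

```python
import numpy as np

def mapping_matrix(Rs, Ri, P, C, E):
    """Assemble [L] from the collected terms above (illustrative sketch).

    Columns 0 and 2 of Ri carry the texture u and v directions (the middle
    texture coordinate is always zero), so column by column
    L = [ Rs.Ri[:,0] | Rs.Ri[:,2] | Rs.(P - C) - E ].
    """
    return np.column_stack((Rs @ Ri[:, 0],
                            Rs @ Ri[:, 2],
                            Rs @ (P - C) - E))

def project(L, u, v, S):
    """Equations vi): screen coordinates of the texture point (u, v)."""
    x, y, z = L @ np.array([u, v, 1.0])
    return S * x / y, S * z / y
```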
Equations vi) can be multiplied out and the terms in u and v collected:

   xs.(l10.u + l11.v + l12) = S.(l00.u + l01.v + l02)

Hence     (xs.l10 - S.l00).u + (xs.l11 - S.l01).v + xs.l12 - S.l02 = 0
Similarly (ys.l10 - S.l20).u + (ys.l11 - S.l21).v + ys.l12 - S.l22 = 0
This can be expressed:
g00.u + g01.v + g02 = 0
g10.u + g11.v + g12= 0
So   |g00 g01| . |u| = |-g02|
     |g10 g11|   |v|   |-g12|

Solving this equation for u and v gives:

   |u| = 1 . | g11 -g01| . |-g02|
   |v|   Q   |-g10  g00|   |-g12|

Where Q = g00.g11 - g10.g01

Hence u.(g00.g11 - g10.g01) = g01.g12 - g11.g02
and   v.(g00.g11 - g10.g01) = g10.g02 - g00.g12
Substituting back into the equations the values for [G]:-
u.(D0.xs + D1.ys + D2) = U0.xs + U1.ys + U2
v.(D0.xs + D1.ys + D2) = V0.xs + V1.ys + V2
Where the values of U, V and D are as follows:

   U0 = l12.l21 - l11.l22
   U1 = l02.l11 - l01.l12
   U2 = S.(l01.l22 - l02.l21)
   V0 = l10.l22 - l12.l20
   V1 = l00.l12 - l02.l10
   V2 = S.(l20.l02 - l00.l22)
   D0 = l20.l11 - l10.l21
   D1 = l10.l01 - l00.l11
   D2 = S.(l00.l21 - l20.l01)
It is clear from the equations that U, V and D can be calculated at the start of each frame, and stay constant for the frame. The terms of the equations containing ys will change for each raster line, and the terms containing xs will change for each pixel on a raster line.
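The frame/line/pixel split described above can be sketched as follows (Python/NumPy, an illustration only with hypothetical function names); U, V and D are computed once per frame from [L], and each pixel then needs only a handful of multiply-adds and one divide:

```python
import numpy as np

def uvd_parameters(L, S):
    """Per-frame constants U, V and D from the mapping matrix [L] (sketch)."""
    U = np.array([L[1, 2] * L[2, 1] - L[1, 1] * L[2, 2],
                  L[0, 2] * L[1, 1] - L[0, 1] * L[1, 2],
                  S * (L[0, 1] * L[2, 2] - L[0, 2] * L[2, 1])])
    V = np.array([L[1, 0] * L[2, 2] - L[1, 2] * L[2, 0],
                  L[0, 0] * L[1, 2] - L[0, 2] * L[1, 0],
                  S * (L[2, 0] * L[0, 2] - L[0, 0] * L[2, 2])])
    D = np.array([L[2, 0] * L[1, 1] - L[1, 0] * L[2, 1],
                  L[1, 0] * L[0, 1] - L[0, 0] * L[1, 1],
                  S * (L[0, 0] * L[2, 1] - L[2, 0] * L[0, 1])])
    return U, V, D

def texel_coords(U, V, D, xs, ys):
    """Per-pixel step: recover the bitmap location (u, v) for pixel (xs, ys)."""
    d = D[0] * xs + D[1] * ys + D[2]
    return (U[0] * xs + U[1] * ys + U[2]) / d, (V[0] * xs + V[1] * ys + V[2]) / d
```

In a scanline renderer the ys terms would be folded in once per raster line and only the xs terms advanced per pixel, as the description notes.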
In accordance with the invention a plurality of textured polygons is treated as a cohesive group and, using a texture draping technique, a texture pattern can be draped so as to be coherent across the edges of the polygons, giving the appearance of being draped over the group from a particular direction. The generation of the mapping parameters is slightly different from that aforedescribed in relation to a conventional mapping technique. The difference can be described with reference to the L matrix of the prior texture mapping description. It will be understood that the L matrix hereinafter described relates to the draping technique of this invention, and this is described with reference to Figure 3 of the drawings.
Considering the case where the texture map is defined on a plane parallel with the xy plane of model space, and the map is required to be draped on a group of polygons defined in 3D model space, the orthogonal projection in accordance with the invention is given by:

   Vr = |u             |
        |v             |
        |z0 + a.u + b.v|
Hence Ve = [Rs].[Vr + P - C] - E where, similarly to the aforedescribed conventional texture mapping, Vr is defined with respect to the texture datum in model space, and z0, a and b define the plane of the polygon 100 shown in Figure 3 which is to be textured.
By collecting terms in u and v:

   Vex = (s00 + a.s02).u + (s01 + b.s02).v + s02.z0 + s00(px - cx) + s01(py - cy) + s02(pz - cz) - ex,  etc.
So putting:

   Ve = [L].|u|
            |v|
            |1|
Then:

   l00 = s00 + a.s02
   l01 = s01 + b.s02
   l02 = s02.z0 + s00(px - cx) + s01(py - cy) + s02(pz - cz) - ex
   l10 = s10 + a.s12
   l11 = s11 + b.s12
   l12 = s12.z0 + s10(px - cx) + s11(py - cy) + s12(pz - cz) - ey
   l20 = s20 + a.s22
   l21 = s21 + b.s22
   l22 = s22.z0 + s20(px - cx) + s21(py - cy) + s22(pz - cz) - ez
From the values of the above the mapping parameters are calculated as before.
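For illustration, assembling this draped [L] can be sketched in the same NumPy style as before (a sketch only, with hypothetical names; the working implementation remains the assembler of Appendix 1):

```python
import numpy as np

def draped_mapping_matrix(Rs, P, C, E, a, b, z0):
    """Assemble [L] for the draping case from the terms above (sketch).

    The texture u and v axes are taken along model x and y, and the plane
    of the facet is z = z0 + a.u + b.v, so columns 0 and 1 pick up a and b
    through the third column of Rs, and the last column carries the z0 and
    datum terms.
    """
    col_u = Rs[:, 0] + a * Rs[:, 2]
    col_v = Rs[:, 1] + b * Rs[:, 2]
    col_1 = z0 * Rs[:, 2] + Rs @ (P - C) - E
    return np.column_stack((col_u, col_v, col_1))
```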
Putting the origin of the texture map at x0, y0, z0 in model space, then:
u = x - x0
v = y - y0
w = z - z0
Orthogonal projection gives w = a.u + b.v. Hence for the three points A, B, C shown in Figure 3 on the polygon on to which texture is draped:

   uA = xA - x0
   vA = yA - y0
   wA = zA - z0

and similarly for B and C, giving:

   zA = z0 + a.uA + b.vA
   zB = z0 + a.uB + b.vB
   zC = z0 + a.uC + b.vC

Hence values of a, b and z0 can be calculated and substituted for each of the polygon facets to be draped.
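A short sketch of this per-facet solve (Python/NumPy, illustrative only; the vertex and function names are hypothetical):

```python
import numpy as np

def facet_plane(x0, y0, A, B, C):
    """Solve for a, b and z0 of one facet from its three vertices (sketch).

    A, B and C are (x, y, z) vertices of the facet and (x0, y0) the x, y
    position of the texture-map origin in model space.  Each vertex gives
    one equation  z = z0 + a.(x - x0) + b.(y - y0).
    """
    M = np.array([[A[0] - x0, A[1] - y0, 1.0],
                  [B[0] - x0, B[1] - y0, 1.0],
                  [C[0] - x0, C[1] - y0, 1.0]])
    rhs = np.array([A[2], B[2], C[2]])
    a, b, z0 = np.linalg.solve(M, rhs)
    return a, b, z0
```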
To illustrate the different visual effects of prior art texture mapping techniques and the texture draping technique of this invention, Figures 5 to 8 should be referred to. In Figure 5 texture has been applied to each individual facet of the polygon image, as can be seen from the clearly tessellated image shown, i.e. the edges of each polygon facet are highly visible except in the regions of the hair and beard where a dark colour masks the presence of the facet edges. Figure 6 shows a side view of the head of Figure 5, and it can be seen how the polygons of Figure 5 are changed in shape by virtue of being 'viewed', i.e. represented on the image as being viewed, from a different position relative to the head image.
Figures 7 and 8 represent views corresponding to Figures 5 and 6 respectively, in which texture has been draped on to the same multi-faceted polygon image of a human head. In this instance the image of the human face applied to the polygon image is far more lifelike and generally, by draping texture across facet edges, those edges are rendered invisible to the viewer. Thus, the invention provides a way of applying texture from a conventional bitmap, for example, in a manner which results in image quality vastly improved over the aforedescribed texture mapping techniques.
APPENDIX 1
Function: Calculate the mapping matrix for a list of textured polygons
Inputs: r28 - Pointer to source data buffer
r29 - Pointer to destination data buffer
Outputs: none
TEXT
ALIGN 8
polyWrapTexMatXY:
subu r31,r31,4
st r1,r31,0
ld r26,r30,modPnt
ld x1,r26,modcat
ld x2,r26,modcat+4
ld x3,r26,modcat+8
ld x4,r26,modcat+12
ld x5,r26,modcat+16
ld x6,r26,modcat+20
ld x7,r26,modcat+24
ld x8,r26,modcat+28
ld x9,r26,modcat+32
ld x10,r26,moddum
ld x11,r26,moddum+4
ld x12,r26,moddum+8
pmXY2: ld r2,r28,0 ; No. of vertices
bcnd eq0,r2,pmXYEnd ; if none then exit
st r2,r27,0 ; Store No. vertices
Id r4,r28,4 ; Read polygon texture number
st r4,r27,4 ; Store polygon texture number
Id r3,r28,8 ; Read polygon range
st r3,r27,8 ; Store polygon colour
Id r5,r30,texStr
mak r3,r4,8<6> ; * 64 (range 0-255)
addu r3,r3,r5 ; add texture structure base address
ld x13,r28,12 ; texture a
ld x14,r28,16 ; texture b
ld x15,r28,20 ; texture z0
bb1 17,r4,pmXY
bb1 18,r4,pmXZ
bb1 19,r4,pmYZ
pmXY: ld x25,r3,texPos ; texture Pos X
ld x26,r3,texPos+4 ; texture Pos Y
fmul.sss x16,x25,x13 ; a * Tx
fmul.sss x17,x26,x14 ; b * Ty
fadd.sss x15,x15,x16 ; y0 + (a * Tx)
fadd.sss x15,x15,x17 ; yO + (a * Tx) + (b * Ty)
fmul.sss x16,x13,x3 ; a * mO2 fadd.sss x16,x16,x1 ; mOO + (m02 * a) = 100
fmul.sss x17,x1,x25 ;m00*Tx
fmul.sss x18,x2,x26 ;m01*Ty
fadd.sss x17,x17,x18 ;(m00*Tx) + (m01 * Ty)
fmul.sss x18,x15,x3 ;z0 * m02
fadd.sss x17,x17,x18 ;(z0 * m02) + (m00 * Tx)+(m01 * Ty)
fmul.sss x17,x17,x10 ;Mx + (m00 * Tx) + (m01 * Ty) + (m02 * zO) = 101
fsub.sss x17,x0,x17 ; negate 101
fmul.sss x18,x14,x3 ;b * m02
fadd.sss x18,x18,x2 ;m01 + (m02 * b) = l02
fmul.sss x19,x13,x6 ; a * m12
fadd.sss x19,x19,x4 ;m10 + (m12 * a) = 110
fmul.sss x20,x4,x25 ;m10 * Tx
fmul.sss x21 ,x5,x26 ; ml 1 * Ty
fadd.sss x20,x20,x21 ;(m10 * Tx) + (m11 * Ty)
fmul.sss x21,x15,x6 ; z0 * m12
fadd.sss x20,x20,x21 ;(z0 * m12) + (m10 * Tx) + (m11 * Ty)
fadd.sss x20,x20,x11 ;Mx + (M12 * z0) = l11
fsub.sss x20,x0,x20 ;-(Mx + (m12 * z0)) = l11
fmul.sss x21,x14,x6 ; b * ml2 fadd.sss x21,x21,x5 ;m11 + (m12 * b) = l12
fmul.sss x22,x1 3,x9 ; a * m22
fadd.sss x22,x22,x7 ; m20 + (m22 * a) = 120
fmul.sss x23,x7,x25 ; m20 * Tx
fmul.sss x24,x8,x26 ;m21 * Ty
fadd.sss x23,x23,x24 ;(m10 * Tx) + (m11 * Ty)
fmul.sss x24,x15,x9 ;z0 * m22
fadd.sss x23,x23,x24 ;(z0 * m12) + (m10 * Tx)+(m11 * Ty)
fadd.sss x23,x23,x12 ;Mx + (m22 * z0)= l21
fsub.sss x23,x0,x23 ; -(Mx + (m22 * zO)) = 121
fmul.sss x24,x14,x9 ; b * m22
fadd.sss x24,x24,x8 ; m21 + (m22 * b) = 122
br pmMap
pmXZ: ld x25,r3,texPos ; texture Pos X
ld x26,r3,texPos+8 ; texture Pos Z
fmul.sss x16,x25,x13 ; a * Tx
fmul.sss x17,x26,x14 ; b * Ty
fadd.sss x15,x15,x16 ; yO + (a * Tx)
fadd.sss x15,x15,x17 ; y0 + (a * Tx) + (b * Ty)
fmul.sss x16,x13,x2 ; a * m02
fadd.sss x16,x16,x1 ; m00 + (m02 * a) = l00
fmul.sss x17,x1,x25 ; m00 * Tx
fmul.sss x18,x3,x26 ; m01 * Ty
fadd.sss x17,x17,x18 ; (m00 * Tx) + (m01 * Ty)
fmul.sss x18,x15,x2 ; z0 * m02
fadd.sss x17,x17,x18 ; (z0 * m02) + (m00 * Tx) + (m01 * Ty)
fadd.sss x17,x17,x10 ; Mx + (m00 * Tx) + (m01 * Ty) + (m02 * z0) = l01
fsub.sss x17,x0,x17 ; negate l01
fmul.sss x18,x14,x2 ;b * m02
fadd.sss x18,x18,x3 ;m01 + (m02 * b) = l02
fmul.sss xl9,x13,x5 ; a * m12
fadd.sss x19,x19,x4 ;m10 + (m12 * a) = 110
fmul.sss x20,x4,x25 ; m10 * Tx
fmul.sss x21,x6,x26 ;m11 * Ty
fadd.sss x20,x20,x21 ;(m10 * Tx) + (m11 * Ty)
fmul.sss x21,x15,x5 ; z0 * m12
fadd.sss x20,x20,x21 ; (z0 * m12) + (m10 * Tx) + (m11 * Ty)
fadd.sss x20,x20,x11 ; Mx + (m12 * z0) = l11
fsub.sss x20,x0,x20 ; -(Mx + (m12 * z0)) = l11
fmul.sss x21,x14,x5 ;b * m12
fadd.sss x21,x21,x6 ;m11 + (m12 * b) = l12
fmul.sss x22,x13,x8 ; a * m22
fadd.sss x22,x22,x7 ;m20 + (m22 * a) = 120
fmul.sss x23,x7,x25 ;m20 * Tx
fmul.sss x24,x9,x26 ; m21 * Ty
fadd.sss x23,x23,x24 ;(m10*Tx) + (m11 * Ty)
fmul.sss x24,x15,x8 ;z0 * m22
fadd.sss x23,x23,x24 ;(z0 * m12) + (m10 * Tx) + (m11 * Ty)
fadd.sss x23,x23,x12 ;Mx + (m22 * z0) = l21
fsub.sss x23,xO,x23 ; -(Mx + (m22 * zO)) = 121
fmul.sss x24,x14,x8 ;b * m22
fadd.sss x24,x24,x9 ;m21 + (m22 * b) = l22
br pmMap
pmYZ: ld x25,r3,texPos+4 ; texture Pos Y
ld x26,r3,texPos+8 ; texture Pos Z
fmul.sss x16,x25,x13 ; a * Ty
fmul.sss x17,x26,x14 ; b * Tz
fadd.sss x15,x15,x16 ; x0 + (a * Ty)
fadd.sss x15,x15,x17 ; x0 + (a * Ty) + (b * Tz)
fmul.sss x16,x13,x1 ;a * m00
fadd.sss x16,x16,x2 ;m01 + (m00 * a) = 100
fmul.sss x17,x2,x25 ;m01 * Ty
fmul.sss xl 8,x3,x26 ; mO2 * Tz
fadd.sss x17,x17,x18 ;(m01 * Ty) + (m02 * Tz)
fmul.sss x18,x15,x1 ;x0 * m00
fadd.sss x17,x17,x18 ;(x0 * m00) + (m01 * Ty) + (m02 * Tz)
fadd.sss x17,x17,x10 ;Mx + (m01 * Ty) + (m02 * Tz) +
(m00 * x0) = l01
fsub.sss x17,x0,x17 ;negate l01
fmul.sss x18,x14,x1 ;b * m00
fadd.sss x18,x18,x3 ;m02 + (m00 * b) = l02
fmul.sss x19,x13,x4 ; a * m10
fadd.sss x19,x19,x5 ; m11 + (m10 * a) = l10
fmul.sss x20,x5,x25 ;m11 * Ty
fmul.sss x21,x6,x26 ; m12 * Tz
fadd.sss x20,x20,x21 ; (m11 * Ty) + (m12 * Tz)
fmul.sss x21,x15,x4 ;x0 * m10
fadd.sss x20,x20,x21 ;(x0 * m10) + (m11 * Tx) + (m12 * Ty)
fadd.sss x20,x20,x11 ;Mx + (m10 * x0) = l11
fsub.sss x20,x0,x20 ;-(Mx + (m10 * x0)) = l11
fmul.sss x21,x14,x4 ;b * m10
fadd.sss x21,x21,x6 ;m12 + (m10 * b) = l12
fmul.sss x22,x13,x7 ; a * m20
fadd.sss x22,x22,x8 ;m21 + (m20 * a) = l20
fmul.sss x23,x8,x25 ; m21 * Ty
fmul.sss x24,x9,x26 ; m22 * Tz
fadd.sss x23,x23,x24 ; (m21 * Ty) + (m22 * Tz)
fmul.sss x24,x15,x7 ; x0 * m20
fadd.sss x23,x23,x24 ; (x0 * m20) + (m21 * Ty) + (m22 * Tz)
fadd.sss x23,x23,x12 ;Mx + (m20 * x0) = l21
fsub.sss x23,x0,x23 ;-(Mx + (m20 * x0)) = l21
fmul.sss x24,x14,x7 ;b * m20
fadd.sss x24,x24,x9 ; m22 + (m20 * b) = l22
pmMap: ld r4,r30,scrPnt
ld x25,r4,scrdist ; SCREEN distance value
ld x29,r3,texScale
ld x30,r3,texScale+4
fmul.sss x26,x24,x17 ; 122 * 101
fmul.sss x31,x18,x23 ; 102 * 121
fsub.sss x26,x26,x31 ;(l22 * l01) - (l02 * l21)
fmul.sss x26,x26,x25 ;((l22 * l01) - (l02 * l21))*
SCREEN = m00
fmul.sss x27,x16,x23 ;l00 * l21
fmul.sss x31,x22,x17 ;120 * l01
fsub.sss x27,x27,x31 ;(l00 * l21) - (l20 * l01)
fmul.sss x27,x27,x25 ;((l00 * l21) - (l20 * l01)) *
SCREEN = m01
fmul.sss x28,x16,x24 ;l00 * l22
fmul.sss x31,x22,x18 ;l20 * l02
fsub.sss x28,x28,x31 ;(l00 * l22) - (l20 * l02)
fmul.sss x28,x28,x25 ;((l00 * l22) - (l20 * l02)) *
SCREEN = m02
fmul.sss x26,x26,x29
fmul.sss x27,x27,x30
st x26,r27,12
st x27,r27,16
st x28,r27,20
fmul.sss x26,x18,x20 ; l02 * l11
fmul.sss x31,x21,x17 ; l12 * l01
fsub.sss x26,x26,x31 ;(l02 * l11) - (l12 * l01) = m10
fmul.sss x27,x19,x17 ; 110 * 101
fmul.sss x31,x16,x20 ;l00 * l11
fsub.sss x27,x27,x31 ;(l10 * l01) - (l00 * l11) = m11
fmul.sss x28,x19,x18 ;l10 * l02
fmul.sss x31,x16,x21 ;l00 * l12
fsub.sss x28,x28,x31 ;(l10 * l02) - (l00 * l12) = m12
fmul.sss x26,x26,x29
fmul.sss x27,x27,x30
st x26,r27,24
st x27,r27,28
st x28,r27,32
fmul.sss x26,x21,x23 ;l12 * l21
fmul.sss x31,x24,x20 ;l22 * l11
fsub.sss x26,x26,x31 ;(l12 * l21) - (l22 * l11) = m20
fmul.sss x27,x22,x20 ;l20 * l11
fmul.sss x31,x19,x23 ;l10 * l21
fsub.sss x27,x27,x31 ;(l20 * l11) - (l10 * l21) = m21
fmul.sss x28,x22,x21 ;l20 * l12
fmul.sss x31,x19,x24 ;l10 * l22
fsub.sss x28,x28,x31 ; (l20 * l12) - (l10 * l22) = m22
fmul.sss x26,x26,x29
fmul.sss x27,x27,x30
st x26,r27,36
st x27,r27,40
st x28,r27,44
addu r28,r28,24 ; Adjust source pointer to verts
addu r27,r27,48 ; Adjust destination pointer to verts
pmXY1: ld x13,r28,0
ld x14,r28,4
st x13,r27,0
st x14,r27,4
addu r28,r28,8
addu r27,r27,8
subu r2,r2,1
or r0,r0,r0
bcnd ne0,r2,pmXY1
br pmXY2
pmXYEnd: st r0,r27,0
ld r1,r31,0
addu r31,r31,4
jmp r1
APPENDIX 2
TextureWrap:
subu r31,r31,4 ;Adjust stack pointer
st r1,r31,0 ;Save return address
ld r26,r30,modPnt ; Get pointer to model data
; Read model transformation matrix m[3][3]
ld x1,r26,modcat ; m00
ld x2,r26,modcat+4 ; m01
ld x3,r26,modcat+8 ; m02
ld x4,r26,modcat+12 ; m10
ld x5,r26,modcat+16 ; m11
ld x6,r26,modcat+20 ; m12
ld x7,r26,modcat+24 ; m20
ld x8,r26,modcat+28 ; m21
ld x9,r26,modcat+32 ; m22
; Read model datum point mX,mY,mZ
ld x10,r26,moddum ; mX
ld x11,r26,moddum+4 ; mY
ld x12,r26,moddum+8 ; mZ
; Process polygon data header
Id r2,r28,0 ; Read number of polygon vertices
bcnd eqO,r2,twEnd ; If zero then not a valid polygon so exit
st r2,r27,0 ; Save number of polygon vertices
Id r4,r28,4 ; Read polygon texture data index
st r4,r27,4 ;Save polygon texture data index
ld r3,r28,8 ;Read additional polygon data
st r3,r27,8 ;Save additional polygon data ; Calculate pointer to polygon texture data
Id r5,r30,texStr ; Get base address of texture data table
mak r3,r4,8<6> ; Calculate index into texture data table
addu r3,r3,r5 ; Add texture data table base address ; Read texture data
Id x13,r28,12 ; Read texture a
Id xl4,r28,16 ; Read texture b
ld x15,r28,20 ; Read texture z0
; Test plane of projection
bb1 17,r4,twXY ; Branch on plane of projection being XY
bb1 18,r4,twXZ ; Branch on plane of projection being XZ
bb1 19,r4,twYZ ; Branch on plane of projection being YZ
; Calculate l[3][3] matrix for projection in plane XY
twXY: ld x25,r3,texPos ; Read texture position tX
ld x26,r3,texPos+4 ; Read texture position tY
fmul.sss x16,x25,x13 ; a * tX
fmul.sss x17,x26,x14 ; b * tY
fadd.sss x15,x15,x16 ;z0 + (a * tX)
fadd.sss x15,x15,x17 ;z0 + (a * tX) + (b * tY) = P
fmul.sss x16,x13,x3 ;m02 * a
fadd.sss x16,x16,x1 ;m00 + (m02 * a) = 100
fmul.sss x17,x1,x25 ;m00 * tX
fmul.sss x18,x2,x26 ;m01 * tY
fadd.sss x17,x17,x18 ; (m00 * tX) + (m01 * tY)
fmul.sss x18,x15,x3 ; m02 * p
fadd.sss x17,x17,x18 ;(m02 * p) + (m00 * tX) + (m01 * tY)
fadd.sss x17,x17,x10 ;mX + (m02 * p) + (m00 * tX) +
(m01 * tY) = l01
fsub. sss x17,x0,x17 ;l01-=l01
fmul.sss x18,x14,x3 ;m02 * b
fadd.sss x18,x18,x2 ;m01 + (m02 * b) = l02
fmul.sss x19,x13,x6 ;m12 * a
fadd.sss x19,x19,x4 ;m10 + (m12 * a) = 110
fmul.sss x20,x4,x25 ;m10 * tX
fmul.sss x21,x5,x26 ;m11 * tY
fadd.sss x20,x20,x21 ;(m10 * tX) + (m11 * tY)
fmul.sss x21,x15,x6 ;;m12 * p
fadd.sss x20,x20,x21 ;(m12 * p) + (m10 * tX) + (m11 * tY)
fadd.sss x20,x20,x11 ;mY + (m12 * p) + (m10 * tX) +
;(m11 *tY) = l11
fsub.sss x20,x0,x20 ;l11 -=l11
fmul.sss x21,x14,x6 ;m12 * b
fadd.sss x21,x21,x5 ;m11 + (m12 * b) = l12
fmul.sss x22,x13,x9 ;m22 * a
fadd.sss x22,x22,x7 ;m20 + (m22 * a) = 120
fmul.sss x23,x7,x25 ;m20 * tX
fmul.sss x24,x8,x26 ;m21 * tY
fadd.sss x23,x23,x24 ; (m20 * tX) + (m21 * tY)
fmul.sss x24,x15,x9 ;m22 * p
fadd.sss x23,x23,x24 ;(m22 * p) + (m20 * tX) + (m21 * tY)
fadd.sss x23,x23,x12 ;mZ + (m22 * p) + (m20 * tX) +
(m21 * tY) = l21
fsub.sss x23,x0,x23 ;l21 -= l21
fmul.sss x24,x14,x9 ;m22 * b
fadd.sss x24,x24,x8 ;m21 + (m22 * b) = 122
br twMap
; Calculate l[3][3] matrix for projection in plane XZ
twXZ: ld x25,r3,texPos ; Read texture position x
ld x26,r3,texPos+8 ; Read texture position z
fmul.sss x16,x25,x13 ; a * tX
fmul.sss x17,x26,x14 ; b * tY
fadd.sss x15,x15,x16 ; z0 + (a * tX)
fadd.sss x15,x15,x17 ; z0 + (a * tX) + (b * tY) = p
fmul.sss x16,x13,x2 ; m01 * a
fadd.sss x16,x16,x1 ; m00 + (m01 * a) = l00
fmul.sss x17,x1,x25 ; m00 * tX
fmul.sss x18,x3,x26 ; m02 * tY
fadd.sss x17,x17,x18 ; (m00 * tX) + (m02 * tY)
fmul.sss x18,x15,x2 ;m01 * p
fadd.sss x17,x17,x18 ;(m01 *p) + (m00 * tX) + (m02 * tY)
fadd.sss x17,x17,x10 ;mX + (m01 * p) + (m0 * tX) +
(m02 * tY) = l01
fsub.sss x17,x0,x17 ;l11 -= l11
fmul.sss x18,x14,x2 ;m01 * b
fadd.sss x18,x18,x3 ;m02 + (m01 * b) = l02
fmul.sss x19,x13,x5 ;m11 * a
fadd.sss x19,x19,x4 ;m10 + (m11 * a) = 110
fmul.sss x20,x4,x25 ;m10 * tX
fmul.sss x21,x6,x26 ;;m12 * tY
fadd.sss x20,x20,x21 ;(m10 * tX) + (m12 * tY)
fmul.sss x21,x15,x5 ;m11 * p
fadd.sss x20,x20,x21 ;(m11 * p) + (m10 * tX) + (m12 * tY)
fadd.sss x20,x20,x11 ;mY = (m11 * p) + (m10 *tX) +
(m12 * tY) = l11
fsub.sss x20,x0,x20 ;l11 -= l11
fmul.sss x21,x14,x5 ;m11 * b
fadd.sss x21,x21,x6 ;m12 + (m11 * b) = l12
fmul.sss x22,x13,x8 ;m21 * a
fadd.sss x22,x22,x7 ;m20 + (m21 * a) = 120
fmul.sss x23,x7,x25 ;m20 * tX
fmul.sss x24,x9,x26 ;m22 * tY
fadd.sss x23,x23,x24 ;(m20 * tX) + (m22 * tY)
fmul.sss x24,x15,x8 ;m21 * p
fadd.sss x23,x23,x24 ;(m21 * p) + (m20 * tX) + (m22* tY)
fadd.sss x23,x23,x12 ;mZ + (m21 * p) + (m20 * tX) +
(m22 * tY) = l21
fsub.sss x23,x0,x23 ;l21 -= l21
fmul.sss x24,x14,x8 ; m21 * b
fadd.sss x24,x24,x9 ; m22 + (m21 * b) = 122
br twMap
; Calculate l[3][3] matrix for projection in plane YZ
twYZ: ld x25,r3,texPos+4 ; Read texture position y
ld x26,r3,texPos+8 ; Read texture position z
fmul.sss x16,x25,x13 ; a * tX
fmul.sss x17,x26,x14 ; b * tY
fadd.sss x15,x15,x16 ; z0 + (a * tX)
fadd.sss x15,x15,x17 ;z0 + (a * tX) + (b * tY) = p
fmul.sss x16,x13,x1 ;m00 *a
fadd.sss x16,x16,x2 ;m01 + (m00 * a) = 100
fmul.sss x17,x2,x25 ; m01 * tX
fmul.sss x18,x3,x26 ; m02 * tY
fadd.sss x17,x17,x18 ;(m01 * tX) + (m02 * tY)
fmul.sss x18,x15,x1 ;m00 * p
fadd.sss x17,x17,x18 ;(m00 * p) + (m01 * tX) + (m02 * tY)
fadd.sss x17,x17,x10 ;mX + (m00 * p) + (m01 * tX) +
(m02 * tY) = l01
fsub.sss x17,x0,x17 ;l01 -= l01
fmul.sss x18,x14,x1 ; m00 * b
fadd.sss x18,x18,x3 ;m02 + (m00 * b) = l02
fmul.sss x19,x13,x4 ;m10 *a
fadd.sss x19,x19,x5 ; m11 + (m10 * a) = l10
fmul.sss x20,x5,x25 ;m11 * tX
fmul.sss x21,x6,x26 ;m12 * tY
fadd.sss x20,x20,x21 ;(m11 * tX) + (m12 * tY)
fmul.sss x21,x15,x4 ;;m10 * p
fadd.sss x20,x20,x21 ;(m10 * p) + (m11 * tX) + (m12 * tY)
fadd.sss x20,x20,x11 ;mY + (m10 * p) + (m11 * tX) +
(m12 * tY) = l11
fsub.sss x20,x0,x20 ;l11 -= l11
fmul.sss x21,x14,x4 ;m10 * b
fadd.sss x21,x21,x6 ;m12 + (m10 * b) = l12
fmul.sss x22,x13,x7 ;m20 * a
fadd.sss x22,x22,x8 ;m21 + (m20 * a) = 120
fmul.sss x23,x8,x25 ;m21 * tX
fmul.sss x24,x9,x26 ;m22 * tY
fadd.sss x23,x23,x24 ;(m21 * tX) + (m22 * tY)
fmul.sss x24,x15,x7 ;m20 * p
fadd.sss x23,x23,x24 ; (m20 * p) + (m21 * tX) + (m22 * tY)
fadd.sss x23,x23,x12 ;mZ + (m20 * p) + (m21 * tX) +
(m22 * tY) = l21
fsub.sss x23,x0,x23 ;l21 -= l21
fmul.sss x24,x14,x7 ;m20 * b
fadd.sss x24,x24,x9 ; m22 + (m20 * b) = l22
twMap: ld r4,r30,scrPnt ; Get address of screen data
Id x25,r4,scrdist ; Read screen distance d
Id x29,r3,texScale ; Read texture scale u
Id x30,r3,texScale+4 ; Read texture scale v ; Calculate texture mapping matrix t [3] [3]
fmul.sss x26,x24,x17 ;l22 * l01
fmul.sss x31,x18,x23 ;l02 * l21
fsub.sss x26,x26,x31 ; (122 * 101 ) - (102 * 121)
fmul.sss x26,x26,x25 ; ((122 * 101) - (102 * 121)) * d = tOO
fmul.sss x27,x16,x23 ;l00 * l21
fmul.sss x31,x22,x17 ;l20 * l01
fsub.sss x27,x27,x31 ;(l00 * l21) - (l20 * l01)
fmul.sss x27,x27,x25 ;(l00 * l21) - (l20 * l01) * d = t01
fmul.sss x28,x16,x24 ;l00 * l22
fmul.sss x31,x22,x18 ;l20 * l02
fsub.sss x28,x28,x31 ;(l00 * l22) - (l20 * l02)
fmul.sss x28,x28,x25 ;((l00 * l22) - (l20 * l02)) * d = t02
fmul.sss x26,x26,x29 ;t00 = t00 * scaleu
fmul.sss x27,x27,x30 ;t01 = t01 * scalev
st x26,r27,12 ;Store mapping matrix t00
st x27,r27,16 ;Store mapping matrix t01
st x28,r27,20 ;Store mapping matrix t02
fmul.sss x26,x18,x20 ; l02 * l11
fmul.sss x31,x21,x17 ; l12 * l01
fsub.sss x26,x26,x31 ;(l02 * l11) - (l12 * l01) = t10
fmul.sss x27,x19,x17 ;l10 * l01
fmul.sss x31,x16,x20 ;l00 * l11
fsub.sss x27,x27,x31 ;(l10 * l01) - (l00 * l11) = t11
fmul.sss x28,x19,x18 ; 110 * 102
fmul.sss x31,x16,x21 ; 100 * 112
fsub.sss x28,x28,x31 ; (110 * 102) - (100 * 112) = t12
fmul.sss x26,x26,x29 ;t10 = t10 * scaleu
fmul.sss x27,x27,x30 ;t20 = t20 * scalev
st x26,r27,24 ; Store mapping matrix t10
st x27,r27,28 ;Store mapping matrix t11
st x28,r27,32 ;Store mapping matrix t12
fmul.sss x26,x21,x23 ; l12 * l21
fmul.sss x31,x24,x20 ; l22 * l11
fsub.sss x26,x26,x31 ; (l12 * l21) - (l22 * l11) = t20
fmul.sss x27,x22,x20 ;l20 * l11
fmul.sss x31,x19,x23 ;l10 * l21
fsub.sss x27,x27,x31 ;(l20 * l11) - (l10 * l21) = t21
fmul.sss x28,x22,x21 ; 120 * 112
fmul.sss x31,x19,x24 ;l10 * l22
fsub.sss x28,x28,x31 ;(l20 * l12) - (l10 * l22) = t22
fmul.sss x26,x26,x29 ;t20 = t20 * scaleu
fmul.sss x27,x27,x30 ;t21 = t21 * scalev
st x26,r27,36 ; Store mapping matrix t20
st x27,r27,40 ; Store mapping matrix t21
st x28,r27,44 ; Store mapping matrix t22
twEnd: ld r1,r31,0 ; Restore return address
addu r31,r31,4 ; Restore stack pointer
jmp r1 ; return to calling function
Claims (11)
- CLAIMS: 1. A computer '3D' graphics system comprising a processor operable by an image processing program, memory associated with said processor and a visual display unit being a head mounted display or video screen disposed forwardly of a nominal viewer position, wherein said program generates a '2D' perspective image of a multi-faceted polygon which is displayed in model space by the display unit, texture to be applied to facets of the polygon is held in the form of a bitmap in said memory and said program being adapted to alter the displayed perspective image to represent rotation and/or translation of the polygon in model space; said program providing a virtual texture surface spaced from the image between the display unit and the viewer position on to which information from the bitmap is 'projected' mathematically and said program being adapted to drape said projected information from said texture surface simultaneously on to all of the visible polygon facets of the displayed image thereby draping projected information coherently across contacting edges of adjacent visible facets of the polygon.
- 2. A computer '3D' graphics system comprising a processor operable by an image processing program, memory associated with said processor and a visual video display unit being a head mounted display or video screen disposed forwardly of a nominal viewer position, wherein said program generates a first '2D' perspective image of a multi-faceted polygon as 'viewed' from a first direction which is displayed in model space by the display unit, texture to be applied to all facets of the polygon whether visible or not is held in the form of a bitmap in said memory and said program alternatively is adapted to generate and cause to be displayed a second '2D' perspective image of the polygon as viewed from a second direction different from said first direction thereby representing rotation and/or translation of the polygon in model space; said program providing a virtual texture surface spaced from the image between the display unit and the viewer position on to which information from the bitmap is 'projected' mathematically and said program being adapted to drape said projected information from said texture surface simultaneously on to all of the visible polygon facets of the displayed image being said first perspective image or said second perspective image thereby at any given instant or frame of the video unit draping projected information coherently across contacting edges of adjacent visible facets of the polygon.
- 3. A computer system in accordance with claim 1 or claim 2, in which said texture surface is planar.
- 4. A computer system in accordance with any one of the preceding claims, in which as said polygon is caused to rotate in model space the texture surface is 'caused' to rotate so as to continue to lie generally ahead of said polygon thereby enabling texture projected thereon to be draped on to visible facets of the polygon.
- 5. A computer system in accordance with any one of the preceding claims, in which said program drapes said projected information orthogonally from said texture surface simultaneously on to all of the displayed individual facets of the polygon.
- 6. A computer system in accordance with any one of the preceding claims, in which said system provides a real time graphics system in which said image is updated in real time.
- 7. A computer system in accordance with claim 6, in which said image processing program forms a part of a virtual reality games software application.
- 8. A computer system in accordance with any one of the preceding claims, in which said image is draped with a video image or cartoon caricature of a human face.
- 9. A computer system in accordance with claim 8 when dependent on claim 7, in which the said video image of the human face is provided by a video camera associated with apparatus of the virtual reality games application.
- 10. A computer system in accordance with any one of the preceding claims, in which said program comprises a routine substantially identical to that disclosed in Appendix 1 of the Description.
- 11. A computer system in accordance with any one of the preceding claims, in which said program comprises a routine or sub-routine substantially identical to that disclosed in Appendix 2 of the Description.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9403924A GB9403924D0 (en) | 1994-03-01 | 1994-03-01 | Texture mapping |
Publications (2)
Publication Number | Publication Date |
---|---|
GB9504140D0 GB9504140D0 (en) | 1995-04-19 |
GB2287387A true GB2287387A (en) | 1995-09-13 |
Family
ID=10751096
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9403924A Pending GB9403924D0 (en) | 1994-03-01 | 1994-03-01 | Texture mapping |
GB9504140A Withdrawn GB2287387A (en) | 1994-03-01 | 1995-03-01 | Texture mapping |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9403924A Pending GB9403924D0 (en) | 1994-03-01 | 1994-03-01 | Texture mapping |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU1818895A (en) |
GB (2) | GB9403924D0 (en) |
WO (1) | WO1995024021A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2313278A (en) * | 1996-05-14 | 1997-11-19 | Philip Field | Mapping images onto three-dimensional surfaces |
WO1998058351A1 (en) * | 1997-06-17 | 1998-12-23 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
GB2340007A (en) * | 1998-07-20 | 2000-02-09 | Damien James Lee | Three Dimensional Image Processing |
WO2001063560A1 (en) * | 2000-02-22 | 2001-08-30 | Digimask Limited | 3d game avatar using physical characteristics |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111459266A (en) * | 2020-03-02 | 2020-07-28 | 重庆爱奇艺智能科技有限公司 | Method and device for operating 2D application in virtual reality 3D scene |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2256109A (en) * | 1991-04-12 | 1992-11-25 | Sony Corp | Transforming a two-dimensional image video signal on to a three-dimensional surface |
GB2263837A (en) * | 1992-01-28 | 1993-08-04 | Sony Corp | Mapping a 2-d image onto a 3-d surface such as a polyhedron. |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2612260B2 (en) * | 1986-09-24 | 1997-05-21 | ダイキン工業株式会社 | Texture mapping equipment |
US4945495A (en) * | 1987-10-21 | 1990-07-31 | Daikin Industries, Ltd. | Image memory write control apparatus and texture mapping apparatus |
-
1994
- 1994-03-01 GB GB9403924A patent/GB9403924D0/en active Pending
-
1995
- 1995-03-01 GB GB9504140A patent/GB2287387A/en not_active Withdrawn
- 1995-03-01 WO PCT/GB1995/000439 patent/WO1995024021A1/en active Application Filing
- 1995-03-01 AU AU18188/95A patent/AU1818895A/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2256109A (en) * | 1991-04-12 | 1992-11-25 | Sony Corp | Transforming a two-dimensional image video signal on to a three-dimensional surface |
GB2263837A (en) * | 1992-01-28 | 1993-08-04 | Sony Corp | Mapping a 2-d image onto a 3-d surface such as a polyhedron. |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2313278A (en) * | 1996-05-14 | 1997-11-19 | Philip Field | Mapping images onto three-dimensional surfaces |
WO1998058351A1 (en) * | 1997-06-17 | 1998-12-23 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
GB2341070A (en) * | 1997-06-17 | 2000-03-01 | British Telecomm | Generating an image of a three-dimensional object |
GB2341070B (en) * | 1997-06-17 | 2002-02-27 | British Telecomm | Generating an image of a three-dimensional object |
US6549200B1 (en) * | 1997-06-17 | 2003-04-15 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
GB2340007A (en) * | 1998-07-20 | 2000-02-09 | Damien James Lee | Three Dimensional Image Processing |
WO2001063560A1 (en) * | 2000-02-22 | 2001-08-30 | Digimask Limited | 3d game avatar using physical characteristics |
Also Published As
Publication number | Publication date |
---|---|
GB9403924D0 (en) | 1994-04-20 |
GB9504140D0 (en) | 1995-04-19 |
WO1995024021A1 (en) | 1995-09-08 |
AU1818895A (en) | 1995-09-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |