IE980591A1 - Three dimensional image processing - Google Patents
Three dimensional image processing

Info
- Publication number
- IE980591A1
- Authority
- IE
- Ireland
- Prior art keywords
- vertices
- object model
- face
- representations
- initial
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
Abstract
For three dimensional image processing, an object (26) is defined as a representation (26) having interconnected vertices (21) and separate and independently addressable face representations (22). Interconnected vertices shared between adjoining faces define the object model. Each face representation (22) includes a vertex reference, texture co-ordinates (U, V), and image references. Transformation of the object involves transformation only of the vertices (21), the references of the face representations (22) being tied to transformed vertices to allow rendering to take place. <Figure 2>
Description
Three Dimensional Image Processing
The invention relates to three dimensional image processing to provide virtual reality two dimensional displays.
There are numerous applications for such image processing, such as visualisation of architectural designs or game-playing. Two dimensional simulation of three dimensional objects allows much greater scope in such applications. For example, the user can much more clearly visualise a building plan by “navigating” through the building and seeing the views somewhat as they would appear in real life. This is achieved by both the three dimensional aspect and also full texture colouring.
However, a major problem with image processing generally, and particularly three dimensional image processing, has been the fact that a large processing capacity is required to achieve a reasonable response time. One of the major demands on processor capacity is transformation of objects to simulate different viewing positions. This operation conventionally involves a technique such as that described in EP 676724 (Toshiba). In this system, a linear transform is performed between each vertex of a polygon and the corresponding vertex of texture data in a three-dimensional space to obtain identical sizes and co-ordinate values between the polygon and the texture data. The linear transform is performed using complex algorithms, and co-ordination of both the vertex data and the texture data is therefore processor-intensive.
It is therefore an object of the invention to provide an image processing method which involves simpler operations for transforming three-dimensional objects so that less processing capacity is required. Achieving this object would allow the use of conventional microcomputers in a broader range of three dimensional image processing applications. It would also allow shorter response times.
According to the invention, there is provided an image processing method carried out by a digital data processor connected to a memory and an image database, the method comprising the steps of:

generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices;

separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and

mapping the object model to a new position by transforming the vertices and subsequently rendering the object model faces using the face representation vertex references and texture co-ordinates.
In one embodiment, the initial object model is stored with respect to an initial Cartesian system, and the origin of the initial Cartesian system is transformed to the origin of the current display scene, and the vertices are subsequently transformed according to a current scene view position.
Preferably, the vertex references are maintained during the transformation.
In one embodiment, the face representations include image database references, and rendering is performed using retrieved images referenced by the face representations.
Preferably, images associated with the object model are retrieved from the image database at the stage of transforming the vertices.
In one embodiment, the method comprises the further step, before rendering, of performing perspective transformation with respect to the vertices referenced in the face representations.
In another embodiment, the method comprises the further step of modifying the object shape by changing the object model vertices without accessing the face representations.
In one embodiment, the method comprises the further step of sub-dividing an original object by storing an object model for each of a plurality of parts of the original object, and subsequently modifying the shapes of each object model.
The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only, with reference to the accompanying drawings in which:

Fig. 1 is a flow diagram illustrating an image processing method of the invention; and
Fig. 2 is a set of diagrams illustrating the manner in which an object is transformed.
Referring to the drawings, an image processing method 1 comprising steps 2 to 10 inclusive is illustrated. The method 1 is for transforming a three dimensional object with textured faces so that it can be viewed from a different perspective on a two-dimensional pixel array.
In steps 2 to 4, an object is defined. Referring to Fig. 2 also, the definition is indicated by the numeral 20 and a simple object, namely a cube, is indicated by the numeral 26. Taking the example of the cube 26, in step 2 the processor defines an
object model as a set of interconnected vertices 21. The cube 26 has eight vertices referenced 0 to 7. Each vertex has an x, y, and z value and a set of values for a unitary cube is given in Fig. 2. These values are with respect to an original Cartesian system 25, shown in Fig. 2. The vertices are interconnected to form the object model in a manner whereby adjoining faces have shared vertices. It will be appreciated that little processing time is required to define the object model, even if the object is much more complex than the sample object illustrated.
In step 3, the processor stores in memory a face representation 22 for each face of the object 26. The face representation 22 is individually addressable in memory as a separate entity from the object model 21. Each face representation comprises references to the vertices of the object model, as indicated by the arrow A in Fig. 2. These references are simple and there is no need to store the vertices. In addition, each face representation includes a pair of texture co-ordinates U and V for each vertex reference. Textures are used to enhance the appearance of an object when displayed, a texture being a two-dimensional image which is mapped to a face during subsequent rendering after transformation. A face 27 having a human face image is illustrated in Fig. 2. The face representation includes a reference to each of the vertices as they are located in the Cartesian system 25. The texture co-ordinates indicate the positions of the associated images on the faces. Each texture co-ordinate ranges from 0 to 1, this being the scale from one vertex to another in a straight line. The extreme texture co-ordinate values are indicated within parentheses at each vertex in the face 27.
Finally, each face representation 22 includes a reference to an image in an image database 29 as shown in Fig. 2. These references are used to retrieve the image which is to be rendered onto the face after transformation of the object.
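The data layout described in steps 2 and 3 can be sketched as follows. The structure names, the image database key, and the particular vertex co-ordinate assignment are illustrative, not taken from the patent; the point is that the face representation holds only indices, (U, V) pairs, and an image reference.

```python
# A minimal sketch of the described data layout. The object model is a
# single shared vertex list; each face representation stores only vertex
# indices, per-vertex (U, V) texture co-ordinates, and an image reference.

# Unit cube: eight shared vertices, referenced 0 to 7 (one possible
# assignment; the patent's Fig. 2 gives its own values).
vertices = [
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 1.0), (0.0, 1.0, 1.0),
]

# One quadrangular face: indices into the shared vertex list, texture
# co-ordinates in the 0..1 range, and a reference into the image database.
front_face = {
    "vertex_refs": [0, 1, 2, 3],
    "uv": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
    "image_ref": "face_texture_01",   # hypothetical database key
}

# Adjoining faces share vertices: with the assignment above, the top face
# would reference {2, 3, 6, 7}, so it shares an edge with the front face.
shared = set(front_face["vertex_refs"]) & {2, 3, 6, 7}
```

Because a face stores indices rather than copies of the vertices, each shared vertex exists exactly once in the object model, which is what makes the later vertex-only transformation possible.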
As indicated by the decision step 4, the object definition process proceeds with each face in turn. Each face may be represented as a triangle, or as a quadrangle,
depending on the nature of the object and the manner in which images are applied to faces. For example, if a full image is applied to one of the faces of the cube 26, then it is simpler to define the faces as being quadrangles.
In step 5, the processor makes a duplicate copy in memory of the Cartesian system 25 and then transforms this system to the scene which is to be displayed. This transformation involves determining the location of the origin of the transformed system as being the desired location of a user or a camera viewing the scene. It involves initially a translation to the new origin, and subsequently a rotation so that the viewpoint is orientated in the z direction.
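Step 5 can be sketched as the construction of a single view matrix: translate the origin to the camera position, then rotate so the view direction lies along z. The camera parameters and the single-axis (yaw) rotation are simplifying assumptions; the patent does not specify how the rotation is decomposed.

```python
import math

# Hedged sketch of step 5: translate the duplicated Cartesian system's
# origin to the camera position, then rotate so the viewpoint is
# orientated in the z direction (yaw-only rotation for simplicity).

def view_matrix(cam_x, cam_y, cam_z, yaw):
    """4x4 matrix: translate by -camera, then rotate by -yaw about y."""
    c, s = math.cos(-yaw), math.sin(-yaw)
    rotate = [
        [c,   0.0, -s,  0.0],
        [0.0, 1.0, 0.0, 0.0],
        [s,   0.0, c,   0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]
    translate = [
        [1.0, 0.0, 0.0, -cam_x],
        [0.0, 1.0, 0.0, -cam_y],
        [0.0, 0.0, 1.0, -cam_z],
        [0.0, 0.0, 0.0, 1.0],
    ]
    # Compose: translation first, then rotation.
    return [[sum(rotate[i][k] * translate[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]
```

Building this matrix once per frame means every vertex in the scene can then be transformed by a single multiplication, which is the efficiency point made below.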
Once the new Cartesian system has been established in step 5, transformation of the object can begin. A simple example of transformation of an object arises in simulation of a person walking down a corridor and walking around corners and into rooms. As this scene progresses, in each successive displayed frame the objects which are displayed are transformed because they are viewed from a different position. Both the Cartesian systems and the vertices are transformed for every frame in which the viewpoint has changed.
The algorithms used in step 6 for vertex transformation are processed using matrix multiplication. In this case, there is multiplication of a 4 x 4 matrix with a 4 x 1 matrix.
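That multiplication can be sketched directly, using homogeneous co-ordinates so that a 4 x 4 matrix can encode rotation and translation together. The translation example is illustrative only.

```python
# Step 6 as described: each vertex is transformed by multiplying a 4 x 4
# matrix with a 4 x 1 column vector (homogeneous co-ordinates, w = 1).
# Only the vertices are processed; face pixels and texture data are
# untouched.

def transform_vertex(m, v):
    """Multiply 4x4 matrix m by vertex v = (x, y, z), with w taken as 1."""
    x, y, z = v
    col = (x, y, z, 1.0)
    return tuple(sum(m[i][j] * col[j] for j in range(4)) for i in range(4))

# Illustrative example: translate by (2, 0, 0).
translate = [
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
moved = transform_vertex(translate, (1.0, 1.0, 1.0))
```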
It will be appreciated that this transformation is a relatively fast operation because it does not involve processing of pixels within the faces or lines adjoining the vertices. It may be performed on a conventional microcomputer with a satisfactory response time. Another very important aspect is that an object model comprising interconnected vertices is transformed. This is much more efficient than transformation of faces having individual vertices which are not interconnected as an object model.
Subsequently, the vertex references of the face representations 22 are mapped to the transformed vertices of the object model 21. In many instances, no mapping is required because the face representation 22 includes only the vertex references and these do not change.
An important aspect of the invention is that steps 6 and 7 complete transformation of the object without the need for transformation of the faces or the texture data. This has been achieved primarily because the steps 2 to 4 define the object by associating texture co-ordinates with the face representations 22, rather than as an integral part of the object definition or with the object model 21.
In step 8, the processor uses the image references from each face representation 22 in turn to retrieve an image. In step 9 there is perspective transformation, for display of a three-dimensional image on a two-dimensional screen with pixel co-ordinates. This again involves processing with matrix multiplication.
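A hedged sketch of the perspective step: the patent does not give the projection formula, so the pinhole model, focal length, and screen-centre offsets below are assumptions chosen only to illustrate the mapping from camera space to pixel co-ordinates.

```python
# Step 9, sketched as a simple pinhole projection: a camera-space vertex
# is divided by its depth and scaled to pixel co-ordinates. The focal
# length and screen-centre values are illustrative assumptions.

def project(v, focal=256.0, cx=160.0, cy=120.0):
    """Project camera-space vertex (x, y, z) to screen (px, py)."""
    x, y, z = v
    # Dividing by depth shrinks distant geometry toward the screen centre.
    px = cx + focal * x / z
    py = cy + focal * y / z
    return (px, py)

screen = project((1.0, 0.5, 2.0))  # a vertex two units in front of the camera
```

Only the vertices referenced in the face representations need this projection; pixel-level work is deferred to rendering.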
Finally, in step 10 the transformed object is rendered using the images which have been retrieved. A particularly fast response is achieved during rendering because the image bitmap file of the image database 29 is referenced to the object model. The bitmap file was retrieved into memory in step 5, when processing of the object model began. These bitmaps are then processed in real time on a frame-by-frame basis with a fast response.
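A much-simplified sketch of how the stored (U, V) co-ordinates address the retrieved bitmap during rendering. A real renderer interpolates per pixel across the face; here a tiny bitmap stands in for a database image, and only the corner texels are sampled to show the 0..1 convention.

```python
# Rendering sketch: the face representation's (U, V) co-ordinates, each in
# the 0..1 range, select positions within the retrieved bitmap.

def sample(bitmap, u, v):
    """Sample a bitmap (list of rows) at texture co-ordinates u, v in 0..1."""
    h, w = len(bitmap), len(bitmap[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return bitmap[y][x]

bitmap = [[1, 2],
          [3, 4]]                  # tiny 2x2 stand-in for a texture image
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
texels = [sample(bitmap, u, v) for (u, v) in corners]
```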
Another advantage of the invention is that an object shape may be modified by changing the vertices without accessing the face representations. This is very simple. According to the object complexity, an object may be sub-divided by generating an object model for each of a number of object parts and subsequently modifying the shapes of the object parts.
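The vertex-only modification described above can be sketched directly; the structures are illustrative. The face representation is never read or written during the reshape, yet it resolves to the new geometry through its unchanged indices.

```python
# Shape modification without accessing face representations: only the
# shared vertex list is rewritten; the face's index references are stable.

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
face = {"vertex_refs": [0, 1, 2, 3]}   # untouched throughout

# Stretch the object to twice its height by editing the vertex list alone.
vertices = [(x, 2.0 * y, z) for (x, y, z) in vertices]

# The face still resolves correctly through its unchanged references.
resolved = [vertices[i] for i in face["vertex_refs"]]
```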
The invention is not limited to the embodiments described, but may be varied in construction and detail.
Claims (10)
1. An image processing method carried out by a digital data processor connected to a memory and an image database, the method comprising the steps of:generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices; separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and mapping the object model to a new position by transforming the vertices and subsequently rendering the object faces using the face representation vertex references and texture co-ordinates.
2. A method as claimed in claim 1, wherein the initial object model is stored with respect to an initial Cartesian system, and the origin of the initial Cartesian system is transformed to the origin of the current display scene, and the vertices are subsequently transformed according to a current scene view position.
3. A method as claimed in claim 1 or 2, wherein the vertex references are maintained during the transformation.
4. A method as claimed in claim 3, wherein the face representations include image database references, and rendering is performed using retrieved images referenced by the face representations.
5. A method as claimed in claim 4, wherein images associated with the object model are retrieved from the image database at the stage of transforming the vertices.
6. A method as claimed in any preceding claim, comprising the further step, before rendering, of performing perspective transformation with respect to the vertices referenced in the face representations.
7. A method as claimed in any preceding claim, comprising the further step of modifying the object shape by changing the object model vertices without accessing the face representations.

8. A method as claimed in claim 7, comprising the further step of sub-dividing an original object by storing an object model for each of a plurality of parts of the original object, and subsequently modifying the shapes of each object model.

9. An image processing method substantially as hereinbefore described with reference to the accompanying drawings.
10. An image processing system comprising a digital data processor connected to a memory and an image database, the processor comprising means for: generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices; separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and mapping the object model to a new position by transforming the vertices and subsequently rendering the object faces using the face representation vertex references and texture co-ordinates.

11. A system as claimed in claim 10, wherein the processor comprises means for storing the initial object model with respect to an initial Cartesian system, for transforming the origin of the initial Cartesian system to the origin of the current display scene, and for subsequently transforming the vertices according to a current scene view position.

12. A system as claimed in claim 10 or 11, wherein the vertex references are maintained during the transformation.

13. A system as claimed in claim 12, wherein the face representations include image database references, and the processor comprises means for performing rendering using retrieved images referenced by the face representations.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IE980591A IE980591A1 (en) | 1998-07-20 | 1998-07-20 | Three dimensional image processing |
GB9816092A GB2340007A (en) | 1998-07-20 | 1998-07-23 | Three Dimensional Image Processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IE980591A IE980591A1 (en) | 1998-07-20 | 1998-07-20 | Three dimensional image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
IE980591A1 true IE980591A1 (en) | 2000-02-09 |
Family
ID=11041853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
IE980591A IE980591A1 (en) | 1998-07-20 | 1998-07-20 | Three dimensional image processing |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2340007A (en) |
IE (1) | IE980591A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005051504A1 (en) * | 2003-11-28 | 2005-06-09 | Mario Castellani | Electronic game for computer or slot machine |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2267202B (en) * | 1992-05-08 | 1996-05-22 | Apple Computer | Multiple buffer processing architecture for integrated display of video and graphics with independent color depth |
GB2270243B (en) * | 1992-08-26 | 1996-02-28 | Namco Ltd | Image synthesizing system |
GB9403924D0 (en) * | 1994-03-01 | 1994-04-20 | Virtuality Entertainment Ltd | Texture mapping |
- 1998
- 1998-07-20 IE IE980591A patent/IE980591A1/en not_active IP Right Cessation
- 1998-07-23 GB GB9816092A patent/GB2340007A/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
GB9816092D0 (en) | 1998-09-23 |
GB2340007A (en) | 2000-02-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MM9A | Patent lapsed through non-payment of renewal fee |