IES80694B2 - Three dimensional image processing - Google Patents

Three dimensional image processing

Info

Publication number
IES80694B2
IES80694B2 (related application IES980590A)
Authority
IE
Ireland
Prior art keywords
vertices
object model
face
representations
faces
Prior art date
Application number
Inventor
Damien James Lee
Original Assignee
Damien James Lee
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Damien James Lee filed Critical Damien James Lee
Priority to IES980590 priority Critical patent/IES980590A2/en
Publication of IES80694B2 publication Critical patent/IES80694B2/en
Publication of IES980590A2 publication Critical patent/IES980590A2/en

Landscapes

  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

For three dimensional image processing, an object (26) is defined as a representation having interconnected vertices (21) and separate and independently addressable face representations (22). Interconnected vertices shared between adjoining faces define the object model. Each face representation (22) includes vertex references, texture co-ordinates (U, V), and image references. Transformation of the object involves transformation only of the vertices (21), the references of the face representations (22) being tied to the transformed vertices to allow rendering to take place.

Description

“Three Dimensional Image Processing” The invention relates to three dimensional image processing to provide virtual reality two dimensional displays.
There are numerous applications for such image processing, such as visualisation of architectural designs or game-playing. Two dimensional simulation of three dimensional objects allows much greater scope in such applications. For example, the user can much more clearly visualise a building plan by “navigating” through the building and seeing the views somewhat as they would appear in real life. This is achieved by both the three dimensional aspect and also full texture colouring.
However, a major problem with image processing generally, and particularly three dimensional image processing, has been the fact that a large processing capacity is required to achieve a reasonable response time. One of the major demands on the processor capacity is transformation of objects to simulate different viewing positions. This operation conventionally involves a technique such as described in EP 676724 (Toshiba). In this system, a linear transform is performed between each vertex of a polygon and the corresponding vertex of texture data in a three-dimensional space to obtain identical sizes and co-ordinate values between the polygon and the texture data. The linear transform is performed using complex algorithms, and coordination of both the vertex data and the texture data is therefore processor-intensive.
It is therefore an object of the invention to provide an image processing method which involves simpler operations for transforming three-dimensional objects so that less processing capacity is required. If such an object can be achieved it would allow use of conventional microcomputers in a broader range of three dimensional image processing applications. It would also allow shorter response times.
According to the invention, there is provided an image processing method carried out by a digital data processor connected to a memory and an image database, the method comprising the steps of: generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices; separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and mapping the object model to a new position by transforming the vertices and subsequently rendering the object model faces using the face representation vertex references and texture co-ordinates.
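The separation the method claims can be sketched as two data structures: a shared vertex list, and per-face records stored apart from it that hold only vertex indices, texture co-ordinates, and an image reference. The class and field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class FaceRep:
    """Individually addressable face representation (illustrative layout)."""
    vertex_ids: list   # indices into the shared vertex list (at least three)
    uv: list           # one (U, V) texture co-ordinate pair per vertex reference
    image_ref: str     # reference into the image database

@dataclass
class ObjectModel:
    """Object model: interconnected vertices shared between adjoining faces."""
    vertices: list                              # [x, y, z] positions
    faces: list = field(default_factory=list)   # separately stored FaceRep records

def transform_model(model, f):
    """Map the model to a new position by transforming only the vertices.
    The face records need no update: they store indices, not positions."""
    model.vertices = [f(v) for v in model.vertices]
```

Because a face record never holds co-ordinates of its own, moving every vertex leaves every face representation valid unchanged, which is the efficiency the method relies on.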
In one embodiment, the initial object model is stored with respect to an initial Cartesian system, and the origin of the initial Cartesian system is transformed to the origin of the current display scene, and the vertices are subsequently transformed according to a current scene view position.
Preferably, the vertex references are maintained during the transformation.
In one embodiment, the face representations include image database references, and rendering is performed using retrieved images referenced by the face representations.
Preferably, images associated with the object model are retrieved from the image database at the stage of transforming the vertices.

In one embodiment, the method comprises the further step, before rendering, of performing perspective transformation with respect to the vertices referenced in the face representations.
In another embodiment, the method comprises the further step of modifying the object shape by changing the object model vertices without accessing the face representations.
In one embodiment, the method comprises the further step of sub-dividing an original object by storing an object model for each of a plurality of parts of the original object, and subsequently modifying the shapes of each object model.
The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings, in which:

Fig. 1 is a flow diagram illustrating an image processing method of the invention; and

Fig. 2 is a set of diagrams illustrating the manner in which an object is transformed.
Referring to the drawings, an image processing method 1 comprising steps 2 to 10 inclusive is illustrated. The method 1 is for transforming a three dimensional object with textured faces so that it can be viewed from a different perspective on a two-dimensional pixel array.
In steps 2 to 4, an object is defined. Referring to Fig. 2 also, the definition is indicated by the numeral 20 and a simple object, namely a cube, is indicated by the numeral 26. Taking the example of the cube 26, in step 2 the processor defines an object model as a set of interconnected vertices 21. The cube 26 has eight vertices referenced 0 to 7. Each vertex has an x, y, and z value and a set of values for a unitary cube is given in Fig. 2. These values are with respect to an original Cartesian system 25, shown in Fig. 2. The vertices are interconnected to form the object model in a manner whereby adjoining faces have shared vertices. It will be appreciated that little processing time is required to define the object model, even if the object is much more complex than the sample object illustrated.
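The eight vertices of a unitary cube can be written out directly. The 0 to 7 numbering below is one plausible ordering; the patent leaves the exact assignment to Fig. 2.

```python
# Eight [x, y, z] vertices of a unitary cube in the original Cartesian
# system 25; the 0-7 ordering is illustrative.
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # vertices 0-3 (z = 0 face)
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # vertices 4-7 (z = 1 face)
]
```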
In step 3, the processor stores in memory a face representation 22 for each face of the object 26. The face representation 22 is individually addressable in memory as a separate entity from the object model 21. Each face representation comprises references to the vertices of the object model, as indicated by the arrow A in Fig. 2. These references are simple and there is no need to store the vertices. In addition, each face representation includes a pair of texture co-ordinates U and V for each vertex reference. Textures are used to enhance the appearance of an object when displayed, a texture being a two-dimensional image which is mapped to a face during subsequent rendering after transformation. A face 27 having a human face image is illustrated in Fig. 2. The face representation includes a reference to each of the vertices as they are located in the Cartesian system 25. The texture co-ordinates indicate the positions of the associated images on the faces. Each texture co-ordinate ranges from 0 to 1, this being the scale from one vertex to another in a straight line. The extreme texture co-ordinate values are indicated within parentheses at each vertex in the face 27.
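For one quadrangle face of the cube, such a representation might look as follows. The dictionary layout and the image key are assumptions for illustration; the extreme (U, V) values of 0 and 1 at the corners follow the description above.

```python
# One quadrangle face representation: indices into the shared vertex list,
# a (U, V) pair per reference, and a key into the image database.
# Field names and the image key are illustrative, not from the patent.
front_face = {
    "vertex_ids": [0, 1, 2, 3],                 # references to, not copies of, vertices
    "uv": [(0, 0), (1, 0), (1, 1), (0, 1)],     # extreme texture co-ordinates
    "image_ref": "human_face_bitmap",           # hypothetical database key
}
```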
Finally, each face representation 22 includes a reference to an image in an image database 29 as shown in Fig. 2. These references are used to retrieve the image which is to be rendered onto the face after transformation of the object.
As indicated by the decision step 4, the object definition process proceeds with each face in turn. Each face may be represented as a triangle, or as a quadrangle, depending on the nature of the object and the manner in which images are applied to faces. For example, if a full image is applied to one of the faces of the cube 26, then it is simpler to define faces as being quadrangles.
In step 5, the processor makes a duplicate copy in memory of the Cartesian system 25 and then transforms this system to the scene which is to be displayed. This transformation involves determining the location of the origin of the transformed system as being the desired location of a user or a camera viewing the scene. It involves initially a translation to the new origin, and subsequently a rotation so that the viewpoint is orientated in the z direction.
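The step-5 operation (translate the duplicated system so the viewing position becomes the origin, then rotate so the viewpoint looks along z) can be sketched as a product of two 4 x 4 matrices. Restricting the rotation to a single axis and the angle convention used are simplifying assumptions for illustration.

```python
import math

def matmul4(a, b):
    """Product of two 4 x 4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def view_matrix(eye, yaw):
    """Translate the scene so `eye` becomes the origin, then rotate about
    the y axis so the view direction is orientated along +z (a sketch)."""
    # Translation to the new origin (the user or camera position).
    t = [[1, 0, 0, -eye[0]],
         [0, 1, 0, -eye[1]],
         [0, 0, 1, -eye[2]],
         [0, 0, 0, 1]]
    # Rotation about y by the viewing angle.
    c, s = math.cos(yaw), math.sin(yaw)
    r = [[c, 0, -s, 0],
         [0, 1, 0, 0],
         [s, 0, c, 0],
         [0, 0, 0, 1]]
    return matmul4(r, t)
```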
Once the new Cartesian system has been established in step 5, transformation of the object can begin. A simple example of transformation of an object arises in simulation of a person walking down a corridor and walking around corners and into rooms. As this scene progresses, in each successive displayed frame the objects which are displayed are transformed because they are viewed from a different position. Both the Cartesian systems and the vertices are transformed for every frame in which the viewpoint has changed.
The algorithms used in step 6 for vertex transformation are processed using matrix multiplication. In this case, there is multiplication of a 4 x 4 matrix with a 4 x 1 matrix.
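That per-vertex operation, a 4 x 4 matrix multiplied by a 4 x 1 column, is small enough to write out. Holding each vertex as (x, y, z, 1) in homogeneous co-ordinates is standard practice and assumed here.

```python
def transform_vertex(m, vertex):
    """Multiply a 4 x 4 transformation matrix by a vertex held as a 4 x 1
    column (x, y, z, 1) in homogeneous co-ordinates, as in step 6."""
    x, y, z = vertex
    col = (x, y, z, 1)
    out = [sum(m[row][k] * col[k] for k in range(4)) for row in range(4)]
    return tuple(out[:3])   # w stays 1 for an affine transform
```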
It will be appreciated that this transformation is a relatively fast operation because it does not involve processing of pixels within the faces or lines adjoining the vertices. It may be performed on a conventional microcomputer with a satisfactory response time. Another very important aspect is that an object model comprising interconnected vertices is transformed. This is much more efficient than transformation of faces having individual vertices which are not interconnected as an object model.

Subsequently, the vertex references of the face representations 22 are mapped to the transformed vertices of the object model 21. In many instances, no mapping is required because the face representation 22 includes only the vertex references and these do not change.
An important aspect of the invention is that steps 6 and 7 complete transformation of the object without the need for transformation of the faces or the texture data. This has been achieved primarily because the steps 2 to 4 define the object by associating texture co-ordinates with the face representations 22, rather than as an integral part of the object definition or with the object model 21.
In step 8, the processor uses the image references from each face representation 22 in turn to retrieve an image. In step 9 there is perspective transformation, for display of a three-dimensional image on a two-dimensional screen with pixel co-ordinates.
This again involves processing with matrix multiplication.
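The step-9 perspective transformation maps each transformed vertex to two-dimensional pixel co-ordinates by dividing by depth. The focal length and screen-centre values below are illustrative defaults, not taken from the patent.

```python
def perspective(vertex, focal=256.0, centre_x=160.0, centre_y=120.0):
    """Project a view-space vertex (x, y, z), with z > 0, to pixel
    co-ordinates; parameter values are illustrative only."""
    x, y, z = vertex
    return (centre_x + focal * x / z,   # screen x
            centre_y - focal * y / z)   # screen y (pixel rows grow downward)
```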
Finally, in step 10 the transformed object is rendered using the images which have been retrieved. A particularly fast response is achieved during rendering because the image bitmap file of the image database 29 is referenced to the object model. The bitmap file was retrieved into memory in step 5, when processing of the object model began. These bitmaps are then processed in real time on a frame-by-frame basis with a fast response.
Another advantage of the invention is that an object shape may be modified by changing the vertices without accessing the face representations. This is very simple. According to the object complexity, an object may be sub-divided by generating an object model for each of a number of object parts and subsequently modifying the shapes of the object parts.

The invention is not limited to the embodiments described, but may be varied in construction and detail.

Claims (5)

Claims
1. An image processing method carried out by a digital data processor connected to a memory and an image database, the method comprising the steps of: generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices; separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and mapping the object model to a new position by transforming the vertices and subsequently rendering the object faces using the face representation vertex references and texture co-ordinates.
2. A method as claimed in claim 1, wherein the initial object model is stored with respect to an initial Cartesian system, and the origin of the initial Cartesian system is transformed to the origin of the current display scene, and the vertices are subsequently transformed according to a current scene view position, and wherein the vertex references are maintained during the transformation, and wherein the face representations include image database references, and rendering is performed using retrieved images referenced by the face representations, and wherein images associated with the object model are retrieved from the image database at the stage of transforming the vertices.
3. A method as claimed in any preceding claim, comprising the further step, before rendering, of performing perspective transformation with respect to the vertices referenced in the face representations, and the method comprising the further step of modifying the object shape by changing the object model vertices without accessing the face representations.
4. An image processing method substantially as hereinbefore described with reference to the accompanying drawings.
5. An image processing system comprising a digital data processor connected to a memory and an image database, the processor comprising means for: generating an initial object model as a stored set of interconnected vertices with respect to a Cartesian system in which adjoining faces have shared vertices; separately storing individually addressable representations of object faces, each face representation being referenced to at least three object model vertices and texture co-ordinates; and mapping the object model to a new position by transforming the vertices and subsequently rendering the object faces using the face representation vertex references and texture co-ordinates.
IES980590 1998-07-20 1998-07-20 Three dimensional image processing IES980590A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IES980590 IES980590A2 (en) 1998-07-20 1998-07-20 Three dimensional image processing

Publications (2)

Publication Number Publication Date
IES80694B2 1998-12-02
IES980590A2 IES980590A2 (en) 1998-12-02

Family

ID=11041852

Country Status (1)

Country Link
IE (1) IES980590A2 (en)



Legal Events

Date Code Title Description
MM4A Patent lapsed