GB2395411A - Game system and image generation method - Google Patents
- Publication number
- GB2395411A GB0403555A
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- value
- game
- data
- information storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/49—Saving the game status; Pausing or ending the game
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/10—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
- A63F2300/1018—Calibration; Key and button assignment
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/66—Methods for processing data by generating or executing the game program for rendering three dimensional images
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Generation (AREA)
Abstract
A game system which generates a game image for a domestic game comprises: means which sets adjustment data for adjusting display properties of a monitor based on operational data inputted by a player through a game controller; save means which saves the set adjustment data in a saved information storage device for storing personal data of the player; and means which performs transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device. Also disclosed is the creation of a focused image, e.g. an image corrected by a video filter realized through index colour texture mapping.
Description
GB 2395411 A (continuation)
(74) Agent and/or Address for Service: Page Hargrave, Southgate, Whitefriars, Lewins Mead, BRISTOL, BS1 2NT, United Kingdom
Game System and Image Generation Method

Technical Field
The present invention relates to a game system, program and image generation method.
Background Art
There is known an image generating system which can generate an image as viewed within a virtual three-dimensional (or object) space from a given viewpoint. Such a system is very popular since one can experience a so-called virtual reality through it. Considering an image generating system for playing a racing game, for example, a player can enjoy a three-dimensional racing game by manipulating a racing car (or object) so that it runs in the object space and competes against racing cars which are manipulated by other players or by the computer.
In such a game system, it is desirable that a transformation known as gamma correction is performed on an image to correct the nonlinear characteristics of a monitor (or display section).
There are two known techniques for realizing such gamma correction.
A first technique provides a gamma-correction lookup table (LUT) 804 on a main memory 802, as shown in Fig. 1A. The CPU 800 (or software running on the CPU) reads the color information (RGB) of the respective pixels in an original image out of a frame buffer 808 in VRAM 806. The gamma-correction LUT is then referred to based on the read color information to obtain the gamma-corrected color information. Next, the gamma-corrected color information so obtained is written back to the corresponding pixel in the frame buffer. This procedure is repeated for all the pixels in the original image.
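For illustration, a minimal C sketch of this first technique might look as follows; the buffer layout and the names (frame_buffer, gamma_lut, WIDTH, HEIGHT) are assumptions for the example, not taken from the patent.

```c
/* Sketch of the first technique (Fig. 1A): the CPU reads each pixel of the
 * original image from the frame buffer, looks it up in a gamma-correction
 * LUT, and writes the result back. */
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

typedef struct { uint8_t r, g, b; } Pixel;

void gamma_correct_in_software(Pixel frame_buffer[HEIGHT][WIDTH],
                               const uint8_t gamma_lut[256])
{
    /* Every pixel passes through the CPU: read, table lookup, write back.
     * This per-pixel loop is what makes the technique too slow to finish
     * a whole screen within one frame. */
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
            Pixel *p = &frame_buffer[y][x];
            p->r = gamma_lut[p->r];
            p->g = gamma_lut[p->g];
            p->b = gamma_lut[p->b];
        }
    }
}
```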
On the other hand, a second technique provides a gamma-correction circuit 814 located downstream of a drawing processor 812, as shown in Fig. 1B, which is operated under control of the CPU 810 and is designed to realize the gamma correction in hardware. The gamma-correction circuit 814 performs the gamma correction on the color information generated by the drawing processor 812, and the gamma-corrected color information is then outputted to a monitor 816.
However, the first technique of Fig. 1A requires software running on the CPU 800 to perform all the procedures of reading the color information out of the frame buffer 808, referring to the gamma-correction LUT 804, reading the color information out of the gamma-correction LUT 804 and writing the read color information back to the frame buffer 808.
Thus, the process cannot be executed at high speed, and it is difficult to complete the gamma correction for all the pixels in a display screen within one frame. Moreover, the processing load on the CPU 800 becomes very heavy, which adversely affects any other processing.
On the other hand, the second technique of Fig. 1B can realize the gamma correction at higher speed since the gamma-correction circuit 814 used is dedicated hardware.
Therefore, the gamma correction for all the pixels in a display screen can easily be accomplished within one frame. Furthermore, since the processing load on the CPU 810 is reduced, other processing will not be adversely affected by the gamma correction.
However, the second technique of Fig. 1B separately requires the gamma-correction circuit 814, which is dedicated hardware. Thus, the game system is increased in scale, leading to an increase in manufacturing cost.
In domestic game systems in particular, severe cost reduction is required to make the products popular. Thus, most domestic game systems have not included such a hardware gamma-correction circuit as shown in Fig. 1B. To realize gamma correction, domestic game systems cannot but take the first technique shown in Fig. 1A. Since it is difficult in the first technique to complete the gamma correction for the entire screen within one frame as described, however, it adversely affects the other processing. As a result, domestic game systems have had no choice but to abandon the adoption of gamma correction itself.
The game systems of the prior art have a further problem described below.
An image generated by a prior art game system was not focused depending on the distance from the viewpoint, unlike an image viewed through human eyes. Thus, the image was represented such that all the objects in it were in focus.
However, such an image is unnatural: a human will never see such a view in daily life.
To provide more realistic images, it is therefore desirable that they are generated with objects that are focused depending on the distance between the viewpoint and the objects, the direction of the line of sight, and so on. However, if defocused images are generated by computing the distance between each individual object and the viewpoint in the game space to provide the necessary degree of defocusing for every object, the processing load is hugely increased.
In a game system which is required to generate images corresponding to a viewpoint varying in real time using limited hardware resources, it is important that such focused images, looking as if they were real views, are generated with reduced processing load.
Disclosure of the Invention
In view of the aforementioned problems, an objective of the present invention is to provide a game system, program and image generation method which can implement video filtering such as gamma correction with reduced processing load.
Another objective of the present invention is to provide a game system, program and image generation method which can generate more realistic images with reduced processing load. More particularly, it is to provide a game system, program and image generation method which can generate an image focused like a real view with reduced processing load.
To this end, in a first aspect the present invention provides a game system which generates a game image for a domestic game, comprising: means which sets adjustment data for adjusting display properties of a monitor based on operational data inputted by a player through a game controller; save means which saves the set adjustment data in a saved information storage device for storing personal data of the player; and means which performs transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
In a second aspect, the present invention provides a computer-usable program embodied on an information storage medium or in a carrier wave for generating a game image for a domestic game, the program comprising a processing routine for a computer to realize: means which sets adjustment data for adjusting the display properties of a monitor based on operational data inputted by a player through a game controller; save means which saves the set adjustment data in a saved information storage device for storing personal data of the player; and means which performs transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
In a third aspect, the present invention provides a method of generating a game image for a domestic game, comprising the steps of: setting adjustment data for adjusting display properties of a monitor based on operational data inputted by a player through a game controller; saving the set adjustment data in a saved information storage device for storing personal data of the player; and performing transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
Preferably, data of a control point of a free-form curve representing the transformation properties of the image information is saved in the saved information storage device as the adjustment data by the save means.
Brief Description of the Drawings

Figs. 1A and 1B illustrate first and second techniques for implementing gamma correction.
Fig. 2 is a block diagram of a game system according to this embodiment.
Fig. 3 illustrates the index color texture-mapping process.
Fig. 4 illustrates a technique of effectively using the index color texture-mapping LUT to transform an original image.
Figs. 5A and 5B exemplify game images generated according to this embodiment.
Fig. 6 illustrates a technique of dividing the original image into a plurality of blocks and texture-mapping the image of each block onto a corresponding polygon having a size equal to that of the block, using the LUT.
Figs. 7A and 7B exemplify the transformation property of the gamma correction and a gamma-correction lookup table.
Figs. 8A and 8B exemplify the transformation property of the negative/positive inversion and a negative/positive inversion lookup table.
Figs. 9A, 9B and 9C exemplify the transformation properties of posterization, solarization and binarization.
Figs. 10A and 10B exemplify monotone (sepia) filtering LUTR and LUTG.
Fig. 11 exemplifies a monotone (sepia) filtering LUTB.
Fig. 12 illustrates a technique of obtaining a transformed image for the original image by masking other color components when a particular color component is set as an index number.
Fig. 13 illustrates a technique of blending color information obtained from LUTR, LUTG and LUTB to provide a transformed image for the original image.
Fig. 14 illustrates a technique of preparing an alpha (α) plane by performing the texture mapping using the LUT.
Fig. 15 illustrates a technique of setting a Z value as an index number in the LUT.
Fig. 16 illustrates a technique of setting an alpha value depending on the Z value and using the set alpha value to blend the original image with a defocused image.
Figs. 17A and 17B exemplify an original image and its defocused image.
Fig. 18 illustrates a technique of setting an alpha value depending on the Z value.
Figs. 19A, 19B and 19C illustrate a technique of updating the alpha value of a pixel deeper than a virtual object by drawing the virtual object.
Fig. 20 illustrates a technique of blending an original image with its defocused image by transforming the Z value into a Z2 value and also transforming the Z2 value into an alpha value.
Fig. 21 illustrates a problem occurring when the Z2 value is formed by the high-order bits of the Z value including its most significant bit.
Fig. 22 illustrates a technique of forming the Z2 value from bits I to J lower than the most significant bit of the Z value while clamping the Z2 value at a given value.
Fig. 23 illustrates a technique of transforming the Z value into a Z2 value using an LUT.
Fig. 24 shows an example of LUT1 for transforming bits 15-8 of the Z value.
Fig. 25 shows an example of LUT2 for transforming bits 23-16 of the Z value.
Figs. 26A and 26B show an example of LUT3 for transforming a Z2 value into an alpha value, and also exemplify the characteristic curve of that transformation.
Fig. 27 illustrates a clamping process.
Fig. 28 illustrates a bilinear filtering type texture-mapping process.
Fig. 29 illustrates a technique of effectively using the bilinear filtering mode to generate a defocused image.
Fig. 30 also illustrates a technique of effectively using the bilinear filtering mode to generate a defocused image.
Figs. 31A and 31B illustrate the principle of generating a defocused image through the bilinear filtering type interpolation.
Figs. 32A and 32B also illustrate the principle of generating a defocused image through the bilinear filtering type interpolation.
Fig. 33 illustrates a problem in the prior art relating to the adjustment of brightness in a monitor.
Fig. 34 illustrates a technique of saving the data of the adjusted brightness of the monitor in a saved information storage device.
Figs. 35 to 39 are flowcharts illustrating the details of the process according to this embodiment.
Fig. 40 shows a structure of hardware by which this embodiment can be realized.
Figs. 41A, 41B and 41C show various system forms to which this embodiment can be applied.
Best Mode for Carrying Out the Invention

Preferred embodiments of the present invention will now be described in connection with the drawings.
1. Configuration

Fig. 2 shows a block diagram of a game system (or image generating system) according to this embodiment. In this figure, this embodiment may comprise at least a processing section 100. Each of the other blocks may take any suitable form.
The processing section 100 performs processing for control of the entire system, commands to the respective blocks in the system, game processing, image processing, sound processing and so on. The function thereof may be realized through any suitable hardware means such as various processors (CPU, DSP and so on) or an ASIC (gate array or the like), or by a given program (or game program).
An operating section 160 is used to input operational data from the player, and the function thereof may be realized through any suitable hardware means such as a lever, a button, a housing or the like.
A storage section 170 provides a working area for the processing section 100, the communication section 196 and others. The function thereof may be realized by any suitable hardware means such as RAM or the like.
An information storage medium (which may be a computer-usable storage medium) 180 is designed to store information including programs, data and others. The function thereof may be realized through any suitable hardware means such as an optical memory disk (CD or DVD), a magneto-optical disk (MO), a magnetic disk, a hard disk, a magnetic tape, a memory (ROM) or the like. The processing section 100 performs the processing of the present invention (or this embodiment) based on the information stored in this information storage medium 180. In other words, the information storage medium 180 stores various pieces of information (programs or data) for causing a computer to realize the means of the present invention (or this embodiment), which are particularly represented by the blocks included in the processing section 100.
Part or the whole of the information stored in the information storage medium 180 will be transferred to the storage section 170 when the system is initially powered on. The information stored in the information storage medium 180 may contain at least one of: program code for performing the processing of the present invention, image data, sound data, shape data of objects to be displayed, table data, list data, information for instructing the processing of the present invention, information for performing processing according to these instructions, and so on.
A display section 190 is to output an image generated according to this embodiment, and the function thereof can be realized by any suitable hardware means such as a CRT, an LCD or an HMD (head-mounted display).
A sound output section 192 is to output a sound generated according to this embodiment, and the function thereof can be realized by any suitable hardware means such as a loudspeaker.
A saved information storage device (or portable information storage device) 194 stores the player's personal data (or data to be saved) and may take any suitable form such as a memory card, a portable game machine and so on.
A communication section 196 performs various controls for communication between the game system and any external device (e.g., a host machine or another game system). The function thereof may be realized through any suitable hardware means such as various types of processors or communication ASICs, or according to any suitable program.
The program or data for executing the means of the present invention (or this embodiment) may be delivered from an information storage medium included in a host machine (or server) to the information storage medium 180 through a network and the communication section 196. The use of such an information storage medium in the host device (or server) falls within the scope of the invention.
The processing section 100 further comprises a game processing section 110, an image generation section 130 and a sound generation section 150.
The game processing section 110 performs processing such as coin (or charge) reception, setting of various modes, game proceeding, setting of screen selection, determination of the position and rotation angle (about the X-, Y- or Z-axis) of an object (or one or more primitive surfaces), movement of the object (motion processing), determination of the viewpoint (or virtual camera position) and visual-line angle (or rotational virtual camera angle), arrangement of the object within the object space, hit checking, computation of the game results (or scores), processing for causing a plurality of players to play in a common game space, and various other game computations including game-over processing, based on operational data from the operating section 160 and according to the personal data and game program from the portable information storage device 194.
The image generation section 130 performs image processing according to the commands or the like from the game processing section 110. For example, the image generation section 130 may generate an image within an object space as viewed from a virtual camera (or viewpoint), which image is in turn outputted to the display section 190. The sound generation section 150 performs sound processing according to the commands and the like from the game processing section 110 to generate BGM, sound effects, voices or the like, which are in turn outputted to the sound output section 192.
All the functions of the game processing section 110 and the image and sound generation sections 130, 150 may be realized by any suitable hardware means, or according to the program. Alternatively, these functions may be realized by both hardware means and the program. The game processing section 110 further comprises a movement/action computation section 112, an adjustment information setting section 114 and a saving section 116.
The movement/action computation section 112 calculates the information of movement for objects such as motorcars and so on (positional and rotation angle data) and the information of action for the objects (positional and rotation angle data relating to the parts of the objects). For example, the movement/action computation section 112 may cause the objects to move and act based on the operational data inputted by the player through the operating section 160 and according to the game program.
More particularly, the movement/action computation section 112 may determine the position and rotational angle of the object, for example, for each frame (1/60 seconds).
For example, it is now assumed that the position of the object for the (k-1)th frame is PMk-1, the velocity is VMk-1, the acceleration is AMk-1, and the time for one frame is Δt. The position PMk and velocity VMk of the object for the k-th frame can then be determined by the following formulas (1) and (2):

PMk = PMk-1 + VMk-1 × Δt   (1)
VMk = VMk-1 + AMk-1 × Δt   (2)

The adjustment information setting section 114 is to set (or prepare) adjustment data used for adjusting a display property of the display section (or monitor) 190, such as brightness, color density, color tone, sharpness or the like, based on the operational data inputted by the player through the operating section (or game controller) 160.
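As an illustration, a minimal C sketch of this per-frame update, implementing formulas (1) and (2); the Vec3/Object types and field names are assumptions for the example.

```c
/* Frame-by-frame movement update per formulas (1) and (2).
 * dt is the time for one frame, e.g. 1.0f / 60.0f. */
typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 position;      /* PM */
    Vec3 velocity;      /* VM */
    Vec3 acceleration;  /* AM */
} Object;

void update_movement(Object *obj, float dt)
{
    /* PMk = PMk-1 + VMk-1 * dt   (1) -- uses the previous velocity */
    obj->position.x += obj->velocity.x * dt;
    obj->position.y += obj->velocity.y * dt;
    obj->position.z += obj->velocity.z * dt;

    /* VMk = VMk-1 + AMk-1 * dt   (2) */
    obj->velocity.x += obj->acceleration.x * dt;
    obj->velocity.y += obj->acceleration.y * dt;
    obj->velocity.z += obj->acceleration.z * dt;
}
```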
The saving section 116 is to save the adjustment data set by the adjustment information setting section 114 (i.e., the data used for adjusting the brightness, color density, color tone, sharpness or the like) in the saved information storage device 194.
In this embodiment, the transformation relative to the image information of the original image is performed based on the adjustment data which is obtained by adjusting the display property or loaded from the saved information storage device 194. In this case, such a transformation is realized by the function of an index-number setting section 134 or a drawing section 140 (or texture-mapping section 142), all of which will be described later.
The image generation section 130 comprises a geometry-processing section 132, an index-number setting section 134 and a drawing section 140.
The geometry-processing section 132 performs geometry processing (or three-dimensional computation) such as coordinate transformation, clipping, perspective transformation, light-source calculation and so on. After being subjected to the geometry processing (or perspective transformation), the object data (such as shape data including the vertex coordinates of the object, vertex texture coordinates, brightness data and the like) is saved in a main storage region (or main memory) 172 in the storage section 170.
The index-number setting section 134 is to set the image information of the original image (e.g., perspective-transformed image information) as an index number in an LUT (lookup table) storage section 178 for index color texture mapping. The image information of the original image may take any of various forms, such as color information (RGB, YUV or the like), an alpha (α) value (or any information stored in association with each of the pixels other than the color information), a depth value (Z value) and so on.
In this embodiment, the alpha value for each pixel in the original image is set at a value corresponding to the depth value for that pixel by performing the index color texture mapping relative to the virtual object, using the lookup table in which the depth value for each pixel in the original image has been set as an index number. Thus, a so-called depth of field can be represented according to this embodiment.
The relationship between a depth value (index number) and an alpha value for each pixel in the LUT is preferably set such that a pixel which is located farther from the focus of the virtual camera has a larger alpha value (or, in a broad sense, a higher blending rate for the defocused image). In addition, the depth of field or defocusing effect may be variably controlled by changing the relationship between the depth value and the alpha value for each pixel in the LUT.
The drawing section 140 is to draw the geometry-processed object (or model) in a drawing region 174 (which is a region in a frame buffer, an additional buffer or the like for storing the image information by pixel units). The drawing section 140 comprises a texture-mapping section 142, an alpha (α) blending section 144, a hidden-surface removal section 146 and a masking section 148.
The texture-mapping section 142 performs a process of mapping a texture stored in a texture storage section 176 onto an object (including a process of specifying a texture to be mapped onto an object, a process of transferring a texture, and other processes). In such a case, the texture-mapping section 142 can perform the texture mapping using the index color texture-mapping LUT (lookup table) stored in the LUT storage section 178.
In this embodiment, the texture-mapping section 142 performs the texture mapping relative to the virtual object (such as a polygon having a size equal to that of the display screen, a polygon having a size equal to that of a block of the divided screen, and the like) using the LUT in which image information of an original image is set as an index number. Thus, a process of transforming a depth value (Z value) into N bits, a process of transforming a depth value into an alpha value, and other image transformation processing (e.g., video filtering such as gamma correction, negative/positive inversion, posterization, solarization, binarization, monotone filtering and sepia filtering) can be realized with reduced processing load.
In addition, this embodiment implements the generation of a defocused image (or most-defocused image) blended with the original image through alpha blending (such as narrow-sensed alpha blending, additive alpha blending, subtractive alpha blending, or translucency processing) by effectively utilizing texture mapping with a texel interpolation method (bilinear filtering).
More particularly, the texture-mapping section 142 maps the original image, set as a texture, onto the virtual object (which is an object having a shape equal to that of the defocused region) through the texel interpolation method while, for example, shifting the texture coordinates by a value smaller than one pixel (texel) (or shifting them from the texture coordinates obtained based on the drawn position of the original image). Thus, the defocused image to be blended with the original image can be generated through a simplified procedure in which the texture coordinates are only shifted.
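The following C sketch emulates this bilinear defocus trick in software for a single 8-bit channel: sampling the original image with its texture coordinates shifted by half a texel makes the bilinear interpolation average four neighbouring texels. On the actual hardware the texture unit performs this interpolation itself; the buffer names and the edge handling here are assumptions.

```c
/* Software emulation of the half-texel-shift defocus: with a +0.5 texel
 * shift in both u and v, bilinear filtering gives equal 1/4 weights to the
 * four surrounding texels, i.e. a 2x2 box blur. */
#include <stdint.h>

void defocus_half_texel(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* Clamp at the right/bottom edge. */
            int x1 = (x + 1 < w) ? x + 1 : x;
            int y1 = (y + 1 < h) ? y + 1 : y;
            /* Average of the four texels, with rounding. */
            dst[y * w + x] = (uint8_t)((src[y  * w + x] + src[y  * w + x1] +
                                        src[y1 * w + x] + src[y1 * w + x1] + 2) / 4);
        }
    }
}
```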
The alpha-blending section 144 is to blend (combine) an original image with its defocused image based on an alpha value (α value) which has been set for each of the pixels in the drawing region 174 (a frame buffer, an additional buffer or the like). For example, when narrow-sensed alpha blending is performed, the original image is blended with the defocused image as represented by the following formulas:

RQ = (1 - α) × R1 + α × R2   (3)
GQ = (1 - α) × G1 + α × G2   (4)
BQ = (1 - α) × B1 + α × B2   (5)

where R1, G1 and B1 are respectively the R, G and B color (or brightness) components of the original image already drawn in the drawing region 174; R2, G2 and B2 are respectively the R, G and B components of the defocused image; and RQ, GQ and BQ are respectively the R, G and B components generated through the alpha blending.
The alpha-blending section 144 also performs processing (such as additive alpha blending or ordinary alpha blending) for blending: transformed color information (R, G and B) obtained by setting the K-th color component (e.g., the R component) of the original image as an index number in the LUT; transformed color information obtained by setting the L-th color component (e.g., the G component) as an index number in the LUT; and transformed color information obtained by setting the M-th color component (e.g., the B component) as an index number in the LUT.
The hidden-surface removal section 146 performs hidden-surface removal according to the Z-buffer algorithm using a Z buffer (e.g., depth buffer or Z plane) 179 in which the Z value (depth value) is stored. In this embodiment, the Z value written in this Z buffer 179 is transformed into an alpha value, based on which the original image is blended with the defocused image. The masking section 148 performs masking such that, when the color information is to be transformed by setting one color component (e.g., the R component) of the original image as an index number in the LUT, the other color components (e.g., the G and B components) of the transformed color information will not be drawn in the drawing region (a frame buffer or an additional buffer). When the color information is to be transformed by setting the G component as an index number, the masking is carried out relative to the R and B components. When the color information is to be transformed by setting the B component as an index number, the masking is carried out relative to the R and G components.
The game system of this embodiment may be dedicated to a single-player mode in which only a single player can play the game, or may have a multiplayer mode in which a plurality of players can play the game.
If a plurality of players play the game, only a single terminal may be used to generate the game images and sounds provided to all the players. Alternatively, a plurality of terminals interconnected through a network (transmission line or communication line) may be used in the present invention.
2. Features of this embodiment

2.1 Use of index color texture mapping

As described, the first technique shown in Fig. 1A cannot practically realize the gamma correction (or video filtering), since it places an excessive processing load on the CPU. Furthermore, the second technique shown in Fig. 1B cannot realize the gamma correction in a domestic game system, since such a system does not have a gamma-correction circuit of dedicated hardware.
The inventor focused on the presence of the lookup table (LUT) used in the index color texture mapping.
More particularly, in the index color texture-mapping process, an index number rather than the actual color information (RGB) is stored in association with each texel of the texture, as shown at A1 in Fig. 3, to save the capacity of the texture storage section. The color information specified by the index number is stored in the index color texture-mapping LUT (or color palette), as shown at A2 in Fig. 3. When it is desired to perform the texture mapping relative to an object, the LUT is referred to based on the index number for each texel of the texture, the corresponding color information is read out of the LUT, and that color information is drawn into the frame buffer.
Texture mapping in such an index color mode reduces the number of usable colors (e.g., to 256 colors), compared with texture mapping in the conventional modes using no LUT. However, the index color texture mapping does not require the storage of the actual color information (e.g., 16-bit color information) in the texture storage section. This enables the capacity of the texture storage section to be greatly saved.
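A minimal C sketch of the index color lookup itself (Fig. 3), assuming an 8-bit index texture and a 256-entry palette; the types and names are illustrative.

```c
/* Index color texture mapping: the texture stores only an index per texel;
 * the actual RGB color is fetched from the 256-entry LUT (color palette). */
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Color;

/* Resolve one texel of an index color texture through the LUT. */
Color sample_index_texture(const uint8_t *texture, int tex_width,
                           int u, int v, const Color lut[256])
{
    uint8_t index = texture[v * tex_width + u];
    return lut[index];
}
```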
This embodiment is characterized by using such an index color texturemapping process in a form different from the conventional forms.
As shown at B1 in Fig. 4, the image information (e.g., color information) of an original image drawn in a frame buffer (which is, in a broad sense, a drawing region) is first set as an index number in a gamma-correction lookup table (LUT). In other words, this image information is treated as an index number. As shown at B2, the LUT in which the image information of the original image is set as an index number is then used to perform the index color texture mapping relative to a virtual object (e.g., a polygon having a size equal to that of the display screen), thereby transforming the image information of the original image. As shown at B3, the transformed image information is then drawn back to the frame buffer (or drawing region). In such a manner, this embodiment successfully obtains such a gamma-corrected image as shown in Fig. 5B from such an original image as shown in Fig. 5A. In other words, the image of Fig. 5B has a stronger contrast than the image of Fig. 5A.
For example, the first technique of Fig. 1A cannot increase its processing speed, and also increases the processing load on the CPU, since it must execute all the processes of reading the image information of the original image, referring to the gamma-correction LUT and writing the color information back to the frame buffer in software running on the CPU.
On the contrary, this embodiment can realize the gamma correction by effectively using the index color texture mapping, which is executed at increased speed by the dedicated hardware drawing processor (or drawing section). According to this embodiment, therefore, the gamma correction can be executed at a speed higher than that of the first technique shown in Fig. 1A, and it becomes easier for the gamma correction for the entire display screen to be completed within one frame (e.g., 1/60 seconds or 1/30 seconds).
Since the index color texture mapping can be executed by the drawing processor operating independently of the main processor (CPU), the increase in the processing load on the main processor (CPU) can be minimized. As a result, the gamma correction will not adversely affect any other processing.
The conventional game systems could not greatly increase the capacity of the drawing processor. It was thus difficult for the drawing of the original image into the frame buffer as well as the drawing of the polygon having a size equal to that of the display screen to be completed within one frame.
However, game systems have since become capable of using a drawing processor having a very high fill rate (the number of texels renderable per second), since the capacity of the drawing processor has been highly improved in comparison with the capacities of the other circuit blocks. Therefore, the drawing of the original image into the frame buffer as well as the drawing of the polygon having a size equal to that of the display screen can easily be completed within one frame. Thus, the gamma correction can freely be realized by effectively using the index color texture mapping.
In addition, the second technique of Fig. 1B leads to an increase in the game system manufacturing cost, since it separately requires a gamma-correction circuit which is dedicated hardware. Domestic game systems, not originally having such a gamma-correction circuit, could not realize the second technique shown in Fig. 1B and had no choice but to take the technique of Fig. 1A.
On the contrary, this embodiment realizes the gamma correction by effectively using the index color texture mapping, which is executed by hardware originally possessed by the drawing processor. According to this embodiment, therefore, such a gamma-correction circuit as shown in Fig. 1B need not be newly provided, so the cost of the game system is minimized. Even in a domestic game system not originally having any gamma-correction circuit, the gamma correction can be realized through hardware at increased speed.
Although Fig. 4 has been described as realizing the gamma correction (or video filtering) by performing the texture mapping on a polygon having a size equal to that of the display screen, the texture mapping may be executed on a polygon having a size equal to that of a block obtained by dividing the display screen into blocks.
In other words, as shown at C1 in Fig. 6, the original image (or display screen) on the frame buffer is divided into blocks, and the image of each block is texture-mapped onto a polygon having a size equal to that of the block, using the LUT, as shown at C2. The resulting image having a size equal to that of the corresponding block is then drawn back to the frame buffer (or drawing region).
Alternatively, there may be generated a polygon that includes all or part of a perspective-transformed object image (one transformed into the screen coordinate system) and has its size variable depending on that of the perspective-transformed object. The texture mapping is then executed relative to such a polygon.
In such a manner, if a texture-mapped polygon is temporarily drawn in an additional buffer, for example, the area of VRAM occupied by the additional buffer can be reduced.
When the texture mapping is performed on a polygon having a size equal to that of the display screen as shown in Fig. 4, an additional buffer having a size equal to that of the display screen must be allocated on VRAM for temporarily drawing that polygon. This may adversely affect other processing.
If the texture mapping is performed on a polygon having a size equal to that of a block of the divided screen as shown in Fig. 6, only an additional buffer having a size equal to that of the block need be provided on VRAM. Therefore, the area occupied by the additional buffer can be reduced. As a result, the limited hardware resources can effectively be utilized.
2.2 Various types of video filtering (LUT)

Fig. 7A shows the transformation property of the gamma correction.
In this figure, a Bezier curve (which is, in a broad sense, a free-form curve) representing the transformation property of the gamma correction is specified by four control points CP0, CP1, CP2 and CP3. In such a case, the Y-coordinate of CP0 is set at Y0 = 0 while the Y-coordinate of CP3 is set at Y3 = 255. The transformation property of the gamma correction can be adjusted by variably changing Y1 and Y2, which are the Y-coordinates of CP1 and CP2, respectively.
The relational expression between the input and output values X, Y in the gamma correction may be represented, for example, by the following formulas:

Y = Y20 + (X/255) × (Y21 - Y20)   (3)

where
Y20 = Y10 + (X/255) × (Y11 - Y10)
Y21 = Y11 + (X/255) × (Y12 - Y11)
Y10 = Y0 + (X/255) × (Y1 - Y0)
Y11 = Y1 + (X/255) × (Y2 - Y1)
Y12 = Y2 + (X/255) × (Y3 - Y2)

When an index number is set as the input value X in the above formula (3), and the outputs ROUT, GOUT and BOUT for the respective color components are set as the output value Y, such a gamma-correction LUT as shown in Fig. 7B is provided. This LUT is then transferred to VRAM and used to perform the index color texture mapping as described in connection with Fig. 4. Thus, an image can be obtained in which the video filtering of the gamma correction has been applied to the original image.
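A C sketch of building such a 256-entry gamma-correction LUT directly from the nested interpolations of formula (3); the function name and the rounding/clamping details are assumptions.

```c
/* Build a gamma-correction LUT from the cubic Bezier of formula (3), with
 * Y0 = 0 and Y3 = 255 fixed and Y1, Y2 adjustable.  The Bezier is evaluated
 * by repeated linear interpolation (De Casteljau), exactly as the nested
 * terms in the formula. */
#include <stdint.h>

void build_gamma_lut(uint8_t lut[256], float y1, float y2)
{
    const float y0 = 0.0f, y3 = 255.0f;
    for (int x = 0; x < 256; x++) {
        float t   = (float)x / 255.0f;        /* X/255 */
        float y10 = y0 + t * (y1 - y0);
        float y11 = y1 + t * (y2 - y1);
        float y12 = y2 + t * (y3 - y2);
        float y20 = y10 + t * (y11 - y10);
        float y21 = y11 + t * (y12 - y11);
        float y   = y20 + t * (y21 - y20);    /* formula (3) */
        if (y < 0.0f)   y = 0.0f;             /* clamp to the 8-bit range */
        if (y > 255.0f) y = 255.0f;
        lut[x] = (uint8_t)(y + 0.5f);
    }
}
```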
According to this embodiment, various types of video filtering other than the gamma correction may be applied to the original image on the frame buffer.
Fig. 8A shows the transformation property of a negative/positive inversion video filter. The relational expression between the input and output values X, Y in the negative/positive inversion may be represented by the following formula:

Y = 255 - X   (4)

When an index number is set as the input value X in the above formula, and the outputs ROUT, GOUT and BOUT for the respective color components are set as the output value Y, such a negative/positive inversion LUT as shown in Fig. 8B is provided. The resulting LUT is then used to perform the index color texture mapping as described in connection with Fig. 4 to obtain the original image to which the negative/positive inversion video filtering has been applied.
Fig. 9A shows the transformation property of a posterization video filter for representing a multi-gradation image with a restricted number of gradations. The relational expression between the input and output values X, Y in the posterization may be represented by the following formula:

Y = {INT(X/VAL)} × VAL   (5)

where INT(R) is a function which rounds R down to an integer, and VAL is any value.
Fig. 9B shows the transformation property of a solarization video filter, which provides such an image effect that the inclination of the curve between the input and output values is inverted at a point. Moreover, Fig. 9C shows the transformation property of a binarizing video filter for realizing a high-contrast effect in an image.
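For illustration, minimal C builders for a posterization LUT per formula (5) and a binarization LUT; the step and threshold values shown are assumptions.

```c
/* LUT builders for two of the video filters. */
#include <stdint.h>

/* Posterization per formula (5): Y = INT(X / VAL) * VAL, e.g. step = 32. */
void build_posterize_lut(uint8_t lut[256], int step)
{
    for (int x = 0; x < 256; x++)
        lut[x] = (uint8_t)((x / step) * step);   /* integer division rounds down */
}

/* Binarization: everything below the threshold goes to 0, the rest to 255
 * (e.g. threshold = 128). */
void build_binarize_lut(uint8_t lut[256], int threshold)
{
    for (int x = 0; x < 256; x++)
        lut[x] = (x < threshold) ? 0 : 255;
}
```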
This embodiment may further realize monotone video filtering or sepia video filtering.
When the color components prior to the monotone filtering are respectively RIN, GIN and BIN, and the color components after the monotone filtering are respectively ROUT, GOUT and BOUT, the transformation of the monotone filtering may be represented by the following formulas:

ROUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN   (6)
GOUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN   (7)
BOUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN   (8)
(ROUTR,GOUTR,BOUTR)
=(o.299xRW'o299XRW,o299XR) (9 15 (ROUTG,GOUTG,BOUTG)
=(0.587XG, 0.587 EGG, 0.587 EGG) (10)
(ROUTB,GOUTB,BOUTB)
=(0.114 X BIN'o.ll4 x BIN'o.ll4 x BIN) (1]) 20 Based on the above formulas (9), (10) and (i i), such monotone filtering lookup tables LUTR, LUTG and LUTB as shown respectively in Figs. 1 OA, I OB and I I are provided. These lookup tables LUTR,LUTG and LUTB are then used to perform the index color texture mapping for obtaining the original image to which the monotone filtering is applied.
For the sepia filtering, the transformation may be represented by the following formulas:

ROUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN + 6   (12)
GOUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN - 3   (13)
BOUT = 0.299 × RIN + 0.587 × GIN + 0.114 × BIN - 3   (14)

However, clamping shall be carried out such that:

0 ≤ (ROUT, GOUT, BOUT) ≤ 255

With the sepia filtering, the following formulas may be defined:
(ROUTR, GOUTR, BOUTR) = (0.299 × RIN + 2, 0.299 × RIN - 1, 0.299 × RIN - 1)   (15)
(ROUTG, GOUTG, BOUTG) = (0.587 × GIN + 2, 0.587 × GIN - 1, 0.587 × GIN - 1)   (16)
(ROUTB, GOUTB, BOUTB) = (0.114 × BIN + 2, 0.114 × BIN - 1, 0.114 × BIN - 1)   (17)

Based on the above formulas (15), (16) and (17), sepia filtering lookup tables LUTR, LUTG and LUTB are provided. These lookup tables LUTR, LUTG and LUTB are then used to perform the index color texture mapping to obtain the original image to which the sepia filtering has been applied.
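A C sketch of the resulting per-pixel sepia transformation; the direct arithmetic below stands in for the three LUTR/LUTG/LUTB lookups of formulas (15) to (17), whose per-channel contributions are added and clamped. The names are illustrative.

```c
/* Sepia per pixel: each input channel contributes an RGB triple (the LUTR,
 * LUTG and LUTB entries of formulas (15)-(17)); the triples are summed and
 * clamped, reproducing formulas (12)-(14). */
typedef struct { int r, g, b; } RGBi;

static int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

RGBi sepia_pixel(int r_in, int g_in, int b_in)
{
    float lum = 0.299f * r_in + 0.587f * g_in + 0.114f * b_in;
    RGBi out;
    out.r = clamp255((int)(lum + 6.0f));   /* +2 from each of the three LUTs, formula (12) */
    out.g = clamp255((int)(lum - 3.0f));   /* -1 from each of the three LUTs, formula (13) */
    out.b = clamp255((int)(lum - 3.0f));   /* -1 from each of the three LUTs, formula (14) */
    return out;
}
```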
2.3 Masking

The gamma correction requires an LUT that outputs one value (ROUT, GOUT or BOUT) for one input value (RIN, GIN or BIN).
However, the index color texture-mapping LUT as shown in Fig. 3 is not designed for gamma correction and thus outputs a plurality of values (e.g., ROUT, GOUT and BOUT) for one input value (index number). There is thus a problem in that this mismatch of the LUT must be overcome.
If the image information of the original image (e.g., an R, G or B value, a Z value or an alpha value) is to be set as an index number in the LUT, this embodiment performs a masking process in which only the necessary image information among the transformed image information is drawn in the drawing region (a frame buffer or an additional buffer), such that the other image information will not be drawn. If the image information of the original image is color information, and one color component of the original image is set as an index number in the LUT, the masking process is carried out such that the other transformed color components will not be drawn in the drawing region.
More particularly, when an LUT in which the R plane value of the original image is set as an index number is used to perform the texture mapping, as shown at D1 in Fig. 12, three plane values, R (ROUT), G (GOUT) and B (BOUT), will be outputted. In such a case, as shown at D2, only the outputted R plane values are drawn, the other G and B plane values being kept out of the drawing region through the masking process.
When the texture mapping is carried out with the G plane value of the original image set as an index number, as shown at D3 in Fig. 12, only the outputted G plane value is drawn in the drawing region, the other R and B plane values being excluded through the masking process, as shown at D4.
When the texture mapping is carried out with the B plane value of the original image set as an index number, as shown at D5 in Fig. 12, only the outputted B plane value is drawn in the drawing region, the other R and G plane values being excluded through the masking process.
In such a manner, the index color texture-mapping LUT, although not originally designed for gamma correction, can be used to execute the transformation of the original image with reduced processing load.
2.4 Blending

The masking technique described in connection with Fig. 12 is also useful for various types of video filtering other than the gamma correction, such as the negative/positive inversion, posterization, solarization and binarization described in connection with Figs. 8A to 9C.
On the other hand, it is desirable to take the following technique if the monotone or sepia video filtering is to be realized. This technique is to blend the color information (R, G, B) obtained by setting the R component of the original image as an index number in the LUT, the color information (R, G, B) obtained by setting the G component as an index number in the LUT, and the color information (R, G, B) obtained by setting the B component as an index number in the LUT.
More particularly, as shown at E1 in Fig. 13, three plane values, R (ROUTR), G (GOUTR) and B (BOUTR), as shown at E2, are obtained by setting the R (RIN) plane value of the original image as an index number and using such an LUTR as shown in Fig. 10A for the texture mapping. In this case, the relational expression between RIN and (ROUTR, GOUTR, BOUTR) may be represented by the above formula (9) or (15).
When the G (GIN) plane value of the original image is set as an index number and the LUTG of Fig. 10B is used to perform the texture mapping, as shown at E3 in Fig. 13, three plane values, R (ROUTG), G (GOUTG) and B (BOUTG), as shown at E4, are obtained. In such a case, the relational expression between GIN and (ROUTG, GOUTG, BOUTG) may be represented by the above formula (10) or (16).
When the B (BIN) plane value of the original image is set as an index number and the LUTB of Fig. 11 is used to perform the texture mapping, as shown at E5 in Fig. 13, three plane values, R (ROUTB), G (GOUTB) and B (BOUTB), as shown at E6, are obtained. In such a case, the relational expression between BIN and (ROUTB, GOUTB, BOUTB) may be represented by the above formula (11) or (17).
Furthermore, as shown at E7 in Fig. 13, the color information of R (ROUTR), G (GOUTR) and B (BOUTR) shown at E2, the color information of R (ROUTG), G (GOUTG) and B (BOUTG) shown at E4, and the color information of R (ROUTB), G (GOUTB) and B (BOUTB) shown at E6 are blended (or added) together.
In such a manner, the monotone or sepia video filtering can be realized as shown by the above transformation formulas (6), (7) and (8) or (12), (13) and (14).
2.5 Application to Z values and alpha values

The use of the color information R, G and B outputted based on the index color texture-mapping LUT has been described above.
However, the alpha value (that is, A-value information set in association with the pixels other than the color information) outputted based on the index color texture-mapping LUT may also be used.
For example, as shown in Fig. 14, the R (or G or B) plane value may be set as an index number in the LUT, which is in turn used to perform the texture mapping for generating an alpha (αOUT) plane. The resulting alpha plane may be used to perform the masking process or the like.
More particularly, an LUT is used in which the alpha value (αOUT) is set to zero when the R value is 0 to 127, and to 255 when the R value is 128 to 255. The masking process will not be performed for a pixel having an alpha value smaller than 255, and will be carried out for a pixel having an alpha value equal to 255. Thus, the masking process will be performed only for pixels having R values equal to or larger than 128. As a result, the masking process can be carried out depending on the magnitude of the R value of each pixel.
The generated alpha plane values may also be used as alpha blending coefficients (transparency, translucency or opacity).
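A minimal C sketch of such a threshold LUT; the function name is an assumption.

```c
/* LUT that turns the R channel into a mask alpha plane (Fig. 14): alpha is
 * 0 for R in 0..127 and 255 for R in 128..255, so the masking applies only
 * to pixels with R >= 128. */
#include <stdint.h>

void build_mask_alpha_lut(uint8_t alpha_lut[256])
{
    for (int r = 0; r < 256; r++)
        alpha_lut[r] = (r < 128) ? 0 : 255;
}
```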
The image information set as an index number in the LUT is not limited to color information, and may be anything that is in the drawing region (VRAM) and can be set as an index number in the LUT.
For example, as shown in Fig. 15, the Z value (or depth value) may be set as an index number in the LUT.
In such a case, the alpha plane value obtained by performing the index color texture mapping with the Z value set as an index number may be used as an alpha blending coefficient, for example. Thus, an alpha value can be set depending on the Z value, so that the depth of field or the like can be represented by using a defocused image.
More particularly, such an LUT as shown in Fig. 15 is used to perform the texture mapping such that the alpha values αA, αB, αC and αD for pixels A, B, C and D in an original image are set at values corresponding to the Z values ZA, ZB, ZC and ZD for the respective pixels A, B, C and D, as shown at F1 in Fig. 16. Thus, for example, such an alpha plane as shown at F2 in Fig. 16 may be generated. More particularly, the alpha value is set larger as the corresponding pixel is located farther from the focus (gazing point) of the virtual camera 10 (or as the pixel has a larger difference between its Z value and the Z value of the focus). Thus, the rate of blending the original image with its defocused image increases as the pixel is spaced farther apart from the focus of the virtual camera 10.
Next, as shown at F3 in Fig. 16, alpha blending of the original image and its defocused image is carried out based on the generated alpha plane (the alpha value set for each pixel). Fig. 17A shows an original image while Fig. 17B shows its defocused image.
By performing alpha blending of the original image (Fig. 17A) and its defocused image (Fig. 17B) based on the alpha values set depending on the Z values (or depth values) in such a manner, an image can be generated in which the degree of defocusing increases as the pixels are spaced farther apart from the focus of the virtual camera (that is, the focused point). This enables a so-called depth of field to be represented. Thus, unlike a conventional game image in which all objects in a screen are in focus, this embodiment can generate a more realistic and natural image, focused depending on the distance from the viewpoint like a real view. As a result, the player's feeling of virtual reality can be highly improved.
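For illustration, a C sketch of this depth-of-field pass: an LUT maps an 8-bit depth value to an alpha value that grows with distance from the focus, and that alpha blends the original with its defocused image per formulas (3) to (5). The focus_z parameter and the linear scaling are assumptions; the patent leaves the exact Z-to-alpha relationship adjustable.

```c
/* Depth of field via a Z-to-alpha LUT plus alpha blending. */
#include <stdint.h>
#include <stdlib.h>

void build_z_to_alpha_lut(uint8_t alpha_lut[256], int focus_z)
{
    for (int z = 0; z < 256; z++) {
        int d = abs(z - focus_z) * 4;   /* farther from focus -> larger alpha */
        alpha_lut[z] = (uint8_t)(d > 255 ? 255 : d);
    }
}

/* Blend one channel of the original image (c1) with the defocused image
 * (c2): out = (1 - alpha) * c1 + alpha * c2, per formulas (3)-(5). */
uint8_t blend_channel(uint8_t c1, uint8_t c2, uint8_t alpha)
{
    return (uint8_t)(((255 - alpha) * c1 + alpha * c2) / 255);
}
```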
Fig. 18 exemplifies the setting of an alpha value depending on the Z value. In this figure, the alpha value is normalized to have a magnitude equal to or less than 1.0.
In Fig. 18, the area is partitioned into regions AR0 to AR4 and AR1' to AR4' depending on the Z values Z1 to Z4 and Z1' to Z4' (threshold values). Alpha values α0 to α4 and α1' to α4' are set for these regions AR0 to AR4 and AR1' to AR4'.
For example, a pixel located in the region AR1 between Z1 and Z2 may be set at α1; a pixel located in the region AR2 between Z2 and Z3 may be set at α2; a pixel located in the region AR1' between Z1' and Z2' may be set at α1'; and a pixel located in the region AR2' between Z2' and Z3' may be set at α2'.
The alpha values for the respective regions may be represented by the following relational expressions:

α0 < α1 < α2 < α3 < α4   (18)
α0 < α1' < α2' < α3' < α4'   (19)

As will be apparent from these formulas, the alpha value increases as the pixel is located farther from the focus (or gazing point) of the virtual camera 10. In other words, the alpha value is set so that the rate of blending between the original image and its defocused image increases as the pixel has a larger difference between its Z value and the Z value of the focus of the virtual camera 10.
By so setting the alpha values, a more defocused image can be generated as the pixel is located farther from the focus of the virtual camera. This enables a so-called depth of field to be represented.
Moreover, this embodiment is advantageous in that its processing load is greatly reduced, since the Z value for each pixel can be transformed into the alpha value through only a single texture-mapping pass using the LUT.
One alpha-value setting technique not using the LUT might be such a technique as shown in Figs. 19A, 19B and 19C.
As shown in Fig. 19A, the alpha value of each pixel deeper than a virtual object OB1 (or polygon) having its Z value set at Z1 is updated by drawing the object in the frame buffer. In other words, the alpha value of the pixel deeper than the object OB1 is updated by effectively using the hidden-surface removal based on the Z value.
Next, as shown in Fig. 19B, the alpha value of each pixel deeper than a virtual object OB2 having its Z value set at Z2 is updated by drawing it in the frame buffer. Similarly, as shown in Fig. 19C, the alpha value of each pixel deeper than a virtual object OB3 having its Z value set at Z3 is updated by drawing it in the frame buffer.
In such a manner, the alpha value of a pixel in the area AR1 can be set as α1, the alpha value of a pixel in the area AR2 can be set as α2, and the alpha value of a pixel in the area AR3 can be set as α3. In other words, the alpha value for each pixel can be set at a value corresponding to its Z value.
However, this technique requires that the virtual-object drawing process be repeated a number of times corresponding to the number of threshold steps for the Z value. For example, in Fig. 18, the drawing process would have to be repeated eight times. This is disadvantageous in that the drawing load is increased. If, on the contrary, the number of threshold steps is reduced to relieve the drawing load, the boundaries between the threshold Z values will be visible as stripes on the display screen, leading to a reduction of image quality.
According to the technique of this embodiment, in which the Z value is transformed into the alpha value using the LUT, the Z values of all the pixels can be transformed into alpha values at a time. If the index number (entry) of the LUT is 8 bits wide, alpha values partitioned by 256 threshold Z-value steps can be obtained, which prevents the boundaries between the threshold Z values from appearing as stripes on the display screen. Therefore, a high-quality image can be generated with a reduced processing load.

2.6 Formation of 8-bit Z value

To improve the accuracy of the hidden-surface removal in the game system, the number of bits in the Z value is made very large, such as 24 bits or 32 bits.
On the other hand, the number of bits in the index number (entry) of the index color texture-mapping LUT is smaller than that of the Z value, such as 8 bits.
Where the Z value is to be transformed into the alpha value using an index color texture-mapping LUT such as that shown in Fig. 15, therefore, a preprocess is required in which the Z value is transformed into a Z value having the same number of bits as the LUT index number. If the LUT index number is 8 bits wide, the Z value must be transformed into a Z2 value (or second depth value) of 8 bits.
In this case, it would seem that, to obtain a consistent Z2 value, the upper eight bits including the most significant bit of the Z value must be used to form the Z2 value. In Fig. 20, the eight bits ranging from bit 23 (the most significant bit) to bit 16 are selected and used to form the Z2 value.
It has been found, however, that if the upper bits ranging from bit 23 to bit 16 of the Z value are used to form the Z2 value, which is in turn transformed into the alpha value using the LUT, the number of partitions for the alpha values will be reduced.
As shown in Fig. 21, for example, assume that the Z value is 4 bits wide and that the upper two bits of the Z value are selected and used to form the Z2 value. In this case, if the Z2 values are transformed into alpha values, the alpha values will be partitioned into only four steps by threshold values over the whole range of Z = 0 to 15.
However, an object in front of the screen SC (the perspective-transformed screen), such as OB1 in Fig. 21, will usually be near-clipped. If an object located in the range of Z = 8 to 15 in Fig. 21 is near-clipped, for example, it becomes rare for the most significant bit of the Z value to become one (1). In any event, an object such as OB2 located near the screen SC does not require that its degree of defocusing be accurately controlled, since it will necessarily be rendered as the most defocused image. Therefore, the two partitioning steps in the portion shown by G1 in Fig. 21 are unnecessary.
To avoid such a situation, this embodiment transforms the Z value (depth value) of each pixel in the original image into a Z2 value (second depth value) formed by bits I to J (e.g., bits 19 to 12) lower than the most significant bit of the Z value. This Z2 value is set as an index number into the index color texture-mapping LUT, and texture mapping is then performed to determine the alpha value of each pixel. Based on the alpha value determined for each pixel, alpha blending is performed between the original image and its defocused image.
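For illustration, here is a direct bit-manipulation sketch of this reduction in C, assuming a 24-bit Z value with bits 19 to 12 selected as I to J; the clamping of higher bits anticipates the behavior described two paragraphs below, and the function name is illustrative:

```c
#include <stdint.h>

/* Reduce a 24-bit depth value to an 8-bit second depth value Z2
   by taking bits 19..12 (I to J), a window below the MSB.
   If any bit above bit 19 is set, the pixel lies far outside the
   range of interest, so Z2 is clamped to the maximum value. */
uint8_t z_to_z2(uint32_t z24)
{
    if (z24 & 0xF00000u)                    /* bits 23..20 set: clamp */
        return 0xFFu;
    return (uint8_t)((z24 >> 12) & 0xFFu);  /* bits 19..12 */
}
```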
In such a manner, as shown in Fig. 22, the degree of defocusing can be accurately controlled by using alpha values that are partitioned with multi-step thresholds (four steps in Fig. 22) only for objects (e.g., OB3 and OB4) that are located near the focus of the virtual camera 10 (or gazing point). Thus, the quality of the generated image can be improved.
In this embodiment, the Z2 value may be clamped at a given value depending on the values of the bits other than bits I to J of the Z value. More particularly, as shown in Fig. 22, the Z2 value may be clamped at the maximum value (which is, in a broad sense, a given value) when an upper bit of the Z value becomes one (1), for example. Thus, the Z2 value is set to the maximum for any object whose degree of defocusing need not be controlled accurately, such as OB1 or OB2. A consistent image can therefore be generated even though the Z value has been transformed into the Z2 value formed by its bits I to J.

2.7 Transformation of Z value using LUT

This embodiment realizes the process of transforming the Z value described in connection with Fig. 20 into the Z2 value (transformation into 8-bit form) by using the index color texture-mapping LUT. In other words, the Z value is transformed into the Z2 value by setting the Z value as an index number into an LUT and using that LUT to perform index color texture-mapping on a virtual object.
It is now assumed that a 24-bit Z value is transformed into a Z2 value using LUTs, as shown in Fig. 23. In this case, as shown at H1 in Fig. 23, bits 15 to 8 (or M to N) of the Z value are set as index numbers into a first lookup table LUT1, which is in turn used to perform index color texture-mapping for transforming the Z value into a Z3 value (or third depth value). Next, as shown at H2 in Fig. 23, bits 23 to 16 (or K to L) of the Z value are set as index numbers into a second lookup table LUT2, which is in turn used to perform index color texture-mapping for transforming the Z value into a Z4 value (or fourth depth value).
Finally, as shown at H4 in Fig. 23, these Z3 and Z4 values are used to determine the Z2 value, which is in turn transformed into an alpha value by using a third lookup table LUT3, as shown at H5.
More particularly, the Z3 value obtained by the transformation of LUT1 is drawn into a drawing region (a frame buffer or an additional buffer). Thereafter, the Z4 value obtained by the transformation of LUT2 is drawn into the same drawing region. At this time, the Z2 value is determined while the lower four bits (or effective bits) of the Z3 value are masked so that these bits will not be overwritten by the Z4 value.
The technique of Fig. 23 enables any 8 bits (which are, in a broad sense, bits I to J) of the Z value to be fetched.
When 8 bits of the Z value are to be fetched and set as index numbers into the LUT, it may be that only 8 bits within a predetermined range, such as bits 23 to 16, 15 to 8 or 7 to 0, can be fetched.
On the other hand, which 8 bits of the Z value should be fetched depends on the range of near-clipping and the focus position of the virtual camera (or gazing point), as described in connection with Figs. 20 and 21.
If only 8 bits of the Z value within a predetermined range, such as bits 23 to 16, 15 to 8 or 7 to 0, can be fetched, an appropriate alpha value for most accurately controlling the degree of defocusing near the focus of the virtual camera cannot be provided.
For example, assume that when bits 19 to 12 of the Z value are fetched as the Z2 value, the effective number of partitions of the alpha value (or the number of threshold values of the Z value) is 256. In such a case, if only bits 23 to 16 or bits 15 to 8 of the Z value can be fetched, the effective number of partitions of the alpha value will be only 16, leading to a reduction in image quality.
By contrast, if the technique of Fig. 23 is used, any bits I to J of the Z value can be fetched as the Z2 value, even though only 8 bits within a predetermined range can be fetched directly, as described above. Thus, the threshold values of the Z value for partitioning the alpha value can be set optimally depending on the range of near-clipping and the focus position of the virtual camera, and an image of improved quality can be generated.
Figs. 24, 25 and 26A show examples of LUT1, LUT2 and LUT3. Fig. 26B shows the transformation-property curve of LUT3 for transforming the Z2 value into an alpha value.
As shown in Fig. 24, LUT1 is used to shift bits 15 to 8 (or M to N) of the Z value, inputted as index numbers, rightward by four bits. For example, 0x10 and 0x20 (hexadecimal notation) are transformed into 0x01 and 0x02, respectively.
As shown in Fig. 25, LUT2 is used to shift bits 23 to 16 (or K to L) of the Z value, inputted as index numbers, leftward by four bits. For example, 0x01 and 0x02 are transformed into 0x10 and 0x20, respectively.
If the inputted value is larger than 0x0F, the output of LUT2 is clamped at 0xF0, as shown at Q1 in Fig. 25.
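The following C sketch builds LUT1 and LUT2 as just described and combines their outputs with the masking step of Fig. 23 to form the Z2 value, then looks up the alpha value. It is a CPU-side emulation of the texture-mapping passes, with illustrative names only:

```c
#include <stdint.h>

static uint8_t lut1[256];   /* right-shift table for bits 15..8          */
static uint8_t lut2[256];   /* left-shift table for bits 23..16, clamped */
static uint8_t lut3[256];   /* Z2 -> alpha transformation property       */

void build_z_luts(void)
{
    for (int i = 0; i < 256; i++) {
        lut1[i] = (uint8_t)(i >> 4);               /* e.g. 0x10 -> 0x01 */
        lut2[i] = (i > 0x0F) ? 0xF0                /* clamp at Q1       */
                             : (uint8_t)(i << 4);  /* e.g. 0x01 -> 0x10 */
    }
}

/* Combine the two partial transforms into the 8-bit Z2 value.
   The low nibble comes from LUT1 (the Z3 value); drawing the LUT2
   result (the Z4 value) afterwards must not overwrite it, hence the
   mask. The result is bits 19..12 of the original Z value, clamped. */
uint8_t z24_to_alpha(uint32_t z24)
{
    uint8_t z3 = lut1[(z24 >> 8)  & 0xFF];   /* bits 15..8  */
    uint8_t z4 = lut2[(z24 >> 16) & 0xFF];   /* bits 23..16 */
    uint8_t z2 = (uint8_t)((z4 & 0xF0) | (z3 & 0x0F));
    return lut3[z2];
}
```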
An example in which the Z value is clamped is shown in Fig. 27. Referring to Fig. 27, when bit 20 (a bit other than bits I to J) becomes one (1), the output of LUT2 is clamped at 0xF0. Thus, the Z2 value becomes 0xF1.
For example, if bits 19 to 12 of the Z value were fetched directly without clamping the output of LUT2, the Z2 value would be 0x11 even though bit 20 is one (1). This raises a problem in that the depth of field would be set wrongly.
The clamping of the LUT2 output can prevent such a problem and properly set the depth of field.
Moreover, the clamping of the LUT2 output does not produce an unnatural image, since the degree of defocusing of the near-clipped object OB1 or of the object OB2 located near the screen SC is merely set to the maximum, as will be apparent from Fig. 22.
2.8 Generation of defocused image

This embodiment effectively uses bilinear filtering type (or texel interpolation type) texture mapping to generate the defocused image (Fig. 17B) to be blended with the original image (Fig. 17A).
A positional deviation may be produced between a pixel and a texel in the texture mapping. In such a case, point sampling type texture mapping sets the color CP (which is, in a broad sense, image information) at a pixel P (or sampling point) to the color CA of the texel TA nearest the point P, as shown in Fig. 28.
On the other hand, the bilinear filtering type texture mapping provides the color CP of the point P by interpolating the colors CA, CB, CC and CD of the texels TA, TB, TC and TD surrounding the point P.

More particularly, the coordinate ratio β : 1 − β (0 ≤ β ≤ 1) in the X-axis direction and the coordinate ratio γ : 1 − γ (0 ≤ γ ≤ 1) in the Y-axis direction are determined based on the coordinates of TA to TD and P.

In this case, the color CP of the point P (or the output color of the bilinear filtering type texture mapping) may be represented by the following formula:

CP = (1 − β) × (1 − γ) × CA + β × (1 − γ) × CB + (1 − β) × γ × CC + β × γ × CD (20)

This embodiment exploits the fact that the bilinear filtering type texture mapping automatically interpolates the colors in this way, thereby generating a defocused image.
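A minimal C transcription of formula (20) for a single color channel (the helper name is illustrative, not from the patent):

```c
/* Bilinear interpolation of formula (20): blend the four texel
   colors CA..CD by the fractional coordinates beta and gamma. */
float bilinear(float ca, float cb, float cc, float cd,
               float beta, float gamma)
{
    return (1.0f - beta) * (1.0f - gamma) * ca
         +         beta  * (1.0f - gamma) * cb
         + (1.0f - beta) *         gamma  * cc
         +         beta  *         gamma  * cd;
}
```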
More particularly, as shown at R1 in Fig. 29, an original image drawn in a frame buffer may be set as a texture. When this texture (or original image) is mapped onto a virtual object through the bilinear filtering type texture mapping, the texture coordinates given to the vertexes of the virtual object are shifted (dislocated or moved) rightward and downward, for example by (0.5, 0.5). In such a manner, a defocused image in which the colors of the pixels of the original image spread into the surrounding pixels can automatically be generated through the bilinear filtering type interpolation.
If it is desired to defocus the entire image, the shape of the virtual object onto which the texture (or original image) is mapped is set equal to the shape of the screen (or defocused region). For example, if the vertex coordinates of the screen are (X, Y) = (0, 0), (640, 0), (640, 480), (0, 480), the vertex coordinates of the virtual object are also (X, Y) = (0, 0), (640, 0), (640, 480), (0, 480).
In this case, if the texture coordinates (U, V) given to the vertexes VX1, VX2, VX3 and VX4 of the virtual object are respectively set at (0, 0), (640, 0), (640, 480), (0, 480), the positions of the pixels on the screen will be identical to the positions of the texels in the texture. Therefore, the image will not be defocused.
On the contrary, if the texture coordinates (U, V) given to the vertexes VX1, VX2, VX3 and VX4 of the virtual object are respectively set at (0.5, 0.5), (640.5, 0.5), (640.5, 480.5), (0.5, 480.5), the positions of the pixels on the screen will not be identical to the positions of the texels in the texture. Thus, color interpolation is executed, providing a defocused image through the bilinear filtering type interpolation.
If it is desired to defocus only part of the screen, the shape of the virtual object may be set equal to that of the defocused region.
As shown at R3 in Fig. 30, this embodiment generates a first defocused image by setting the original image as a texture and shifting the texture coordinates by 0.5 in the rightward and downward direction (first shift direction) when performing the bilinear filtering type texture mapping. Next, as shown at R4 in Fig. 30, the first defocused image is set as another texture and shifted by 0.5 in the leftward and upward direction (second shift direction), and the bilinear filtering type texture mapping is executed again to generate a second defocused image. The aforementioned procedure (shifting in the rightward-downward direction and then in the leftward-upward direction) may also be repeated several times. Thus, a more natural and more strongly defocused image can be generated.
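A CPU-side sketch of the two passes, under the assumption that a 0.5-texel shift with bilinear filtering amounts to averaging each 2×2 block of source pixels (buffer layout and names are illustrative):

```c
#include <stdint.h>

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* One bilinear pass with a 0.5-texel shift, emulated for a single
   8-bit channel: each output pixel is the average of a 2x2 block of
   source pixels. (dx, dy) = (0, 0) gives the rightward-downward pass
   R3; (dx, dy) = (-1, -1) gives the leftward-upward pass R4. */
void half_texel_pass(const uint8_t *src, uint8_t *dst,
                     int w, int h, int dx, int dy)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0;
            for (int j = 0; j < 2; j++)
                for (int i = 0; i < 2; i++)
                    sum += src[clampi(y + dy + j, 0, h - 1) * w +
                               clampi(x + dx + i, 0, w - 1)];
            dst[y * w + x] = (uint8_t)(sum / 4);
        }
    }
}
```

Running the (0, 0) pass and then the (-1, -1) pass on its result applies the 3×3 plane filter of Fig. 32A (weights 1/16, 2/16 and 4/16, as derived below); repeating the pair yields the filter of Fig. 32B.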
The principle of generating the defocused image through the bilinear filtering type interpolation will be described below.
For example, assume, as shown in Fig. 31A, that the bilinear filtering type texture mapping is carried out after the texture coordinates are shifted by 0.5 texels in the rightward and downward direction. In this case, β = γ = 1/2 in the above formula (20).
Therefore, if the colors of the texels T44, T45, T54 and T55 are respectively C44, C45, C54 and C55, the color CP44 of a pixel P44 may be represented by the following formula:

CP44 = (C44 + C45 + C54 + C55) / 4 (21)
As will be apparent from the foregoing, the color C44 of the texel T44 (which corresponds to the original color of the pixel P44 in the original image before transformation) spreads by 1/4 each into the surrounding pixels P33, P34, P43 and P44 through the transformation shown in Fig. 31A.
Thereafter, as shown in Fig. 31B, the image obtained in Fig. 31A is set as a texture, the coordinates of which are then shifted by 0.5 in the leftward and upward direction for performing the bilinear filtering type texture mapping. In this case, the pixels P33, P34, P43 and P44 of Fig. 31A correspond to the texels T33, T34, T43 and T44 of Fig. 31B. The color C44, which spread into P33, P34, P43 and P44 (T33, T34, T43 and T44) by 1/4 each in Fig. 31A, is then attenuated by a further factor of 1/4 and spreads into the four surrounding pixels. Eventually, the original color C44 of T44 spreads into the surrounding area by 1/16 = 1/4 × 1/4.
Thus, through the transformations of Figs. 31A and 31B, the color C44 (which corresponds to the original color of the pixel P44 in the original image drawn in the frame buffer) spreads into the pixels P33, P34 and P35 by 1/16, 2/16 and 1/16 respectively, into the pixels P43, P44 and P45 by 2/16, 4/16 and 2/16 respectively, and into the pixels P53, P54 and P55 by 1/16, 2/16 and 1/16 respectively.
As a result, a plane filter such as that shown in Fig. 32A is applied to the original image through the transformations of Figs. 31A and 31B. Such a plane filter can uniformly spread the color of each pixel of the original image into the surrounding area, and can thereby generate an ideal defocused image of the original image.
If the set of transformations of Figs. 31A and 31B is repeated twice, a plane filter such as that shown in Fig. 32B is applied to the original image. Such a plane filter can generate an even more ideal defocused image than that of Fig. 32A.
2.9 Adjustment of monitor brightness

In an RPG or horror game in which the player controls a character on the screen to explore a dungeon, the brightness of the game image is usually set dark for the purpose of making the player feel the dark atmosphere of the dungeon.
In such a case, if the brightness of the monitor on which the game image is displayed is itself biased toward darkness, the contrast of the game image becomes lower. This raises the problem that details of the shapes or patterns in the dungeon cannot be seen clearly, or that a game image different from the one intended by the game developer is displayed.
To overcome such a problem, one technique may be considered in which a brightness adjustment button 22 on a monitor 20 is controlled directly by the player to adjust the brightness of the entire screen, as shown in Fig. 33.
However, such a technique is not convenient for the player since he or she must manually control the adjustment button 22 to adjust the brightness of the screen.
If the brightness of the monitor 20 has been adjusted for that game, the player must re-adjust the brightness to view any other image source (TV broadcast, video or another game) after the game has been terminated.
Thus, this embodiment is designed so that the player can use a game controller 30 to set adjustment data for adjusting the brightness of the monitor 20 (which is, in a broad sense, a display property), as shown in Fig. 34.
In Fig. 34, for example, the adjustment data may be set to increase the brightness of the entire screen when the player operates a cross key 32 on the game controller 30 to indicate the leftward direction, and to decrease the brightness of the entire screen when the player operates the cross key 32 to indicate the rightward direction.
The set adjustment data is then saved in a saved information storage device 40 for storing the personal data (or saving data) of the player.
This embodiment performs the transformation on the image information of the original image based on the adjustment data obtained by the adjustment of the brightness (or display property) or loaded from the saved information storage device 40.
More particularly, the gamma-correction transformation property (Fig. 7A) is specified based on the adjustment data. This specified transformation property is then used to prepare the gamma-correction LUT (Fig. 7B), which is in turn used to perform the transformation on the original image through a technique such as that described in connection with Fig. 4.
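As a hedged illustration, the following C sketch fills a 256-entry gamma-correction LUT from a cubic Bezier transformation property. Placing the control-point X coordinates at 0, 1/3, 2/3 and 1 is an assumption made here so that x(t) = t; only the Y coordinates vary with the adjustment data, as described below:

```c
#include <stdint.h>

/* Fill a 256-entry gamma LUT from a cubic Bezier curve whose
   control-point Y coordinates (y0..y3, each in 0..1) come from the
   brightness adjustment data. With the X coordinates fixed at
   0, 1/3, 2/3 and 1, the Bernstein sum gives x(t) = t exactly,
   so the curve can be sampled directly per LUT index. */
void build_gamma_lut(uint8_t lut[256],
                     float y0, float y1, float y2, float y3)
{
    for (int i = 0; i < 256; i++) {
        float t = i / 255.0f;
        float u = 1.0f - t;
        float y = u*u*u * y0 + 3.0f*u*u*t * y1
                + 3.0f*u*t*t * y2 + t*t*t * y3;
        if (y < 0.0f) y = 0.0f;     /* clamp to the displayable range */
        if (y > 1.0f) y = 1.0f;
        lut[i] = (uint8_t)(y * 255.0f + 0.5f);
    }
}
```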
In such a manner, the player can adjust the brightness of the monitor 20 using the game controller 30, without operating the brightness adjustment button 22.
The adjustment data stored in the saved information storage device 40 is effective only for the game in which the player carried out the adjustment. Therefore, if the player views any other image source after the game has terminated, the brightness does not need to be restored. If the same game is played again, the brightness of the monitor 20 is adjusted based on the adjustment data loaded from the saved information storage device 40, so re-adjustment of the brightness is not required. The convenience for the player can thus be greatly improved.
Since the capacity of the saved information storage device 40 is limited, it is desirable that the adjustment data to be saved be as small as possible.

When the adjustment data specifying, for example, the transformation property of the gamma correction is to be saved, it is desirable that the control-point data of a Bezier curve (which is, in a broad sense, a free-form curve) representing the transformation property of the gamma correction be saved in the saved information storage device 40. For example, the Y coordinates of the control points CP0, CP1, CP2 and CP3 of Fig. 7A, or only the Y coordinates of the control points CP1 and CP2, may be saved. In such a manner, the storage capacity needed to save the adjustment data can be kept small, with the remaining capacity available for other uses.
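A hypothetical layout of such minimal save data (not taken from the patent):

```c
#include <stdint.h>

/* Hypothetical save-data layout: only the control-point Y coordinates
   are stored (4 bytes, or 2 if CP0 and CP3 are fixed), instead of the
   full 256-byte gamma-correction LUT of Fig. 7B. */
typedef struct {
    uint8_t cp_y[4];   /* Y of CP0..CP3, each in 0..255 */
} BrightnessAdjustData;
```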
When the remaining storage capacity of the saved information storage device 40 is relatively large, the entire contents of the gamma-correction LUT shown in Fig. 7B may instead be saved in the saved information storage device 40.
3. Process of this embodiment

The details of the process of this embodiment will be described in connection with the flowcharts shown in Figs. 35 to 39.
Fig. 35 is a flowchart illustrating a process of this embodiment that uses the technique of Fig. 12.
First of all, a transformation LUT such as that shown in Fig. 7B is transferred to VRAM (step S1).
Next, the R plane value of the original image on the frame buffer is set as an index number into the LUT, as described at D1 in Fig. 12. This LUT is then used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in an additional buffer (step S2). At this time, the other G and B values are masked, as described at D2 in Fig. 12.
Next, as described at D3 in Fig. 12, the G plane value of the original image on the frame buffer is set as an index number into the LUT, which is in turn used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in an additional buffer (step S3). At this time, the other R and B values are masked, as described at D4 in Fig. 12.
Next, as described at D5 in Fig. 12, the B plane value of the original image on the frame buffer is set as an index number into the LUT, which is in turn used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in an additional buffer (step S4). At this time, the other R and G values are masked, as described at D6 in Fig. 12.
Finally, the images drawn in the additional buffers are drawn in the frame buffer through the texture mapping or the like (step S5).
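A CPU emulation of steps S2 to S4 in C, collapsing the three masked texture-mapping passes into per-plane table lookups (the type and function names are illustrative):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;

/* Emulation of steps S2-S4 of Fig. 35: the same transformation LUT
   is applied to each color plane in turn; writing only one member
   per pass corresponds to masking the other two planes. */
void transform_planes(Pixel *img, int n, const uint8_t lut[256])
{
    for (int i = 0; i < n; i++) {
        img[i].r = lut[img[i].r];   /* pass 1 (D1): G and B masked (D2) */
        img[i].g = lut[img[i].g];   /* pass 2 (D3): R and B masked (D4) */
        img[i].b = lut[img[i].b];   /* pass 3 (D5): R and G masked (D6) */
    }
}
```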
Fig. 36 is a flowchart illustrating a process of this embodiment that uses the technique of Fig. 13.
First of all, LUTR, LUTG and LUTB, such as those shown in Figs. 10A, 10B and 11, are prepared and transferred to VRAM in advance (step S10).
Next, as described at E1 in Fig. 13, the R plane value of the original image on the frame buffer is set as an index number into the LUTR. This LUTR is then used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in a first additional buffer (step S11).
Next, as described at E3 in Fig. 13, the G plane value of the original image on the frame buffer is set as an index number into the LUTG, which is in turn used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in a second additional buffer (step S12).
Next, as described at E5 in Fig. 13, the B plane value of the original image on the frame buffer is set as an index number into the LUTB, which is in turn used to apply texture mapping to a polygon having a size equal to that of the display screen. This polygon is then drawn in a third additional buffer (step S13).
Next, the image drawn in the first additional buffer is drawn in the frame buffer (step S14). The image drawn in the second additional buffer is then additively drawn in the frame buffer (additive alpha blending) (step S15). Finally, the image drawn in the third additional buffer is additively drawn in the frame buffer (step S16).
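A corresponding CPU sketch of steps S11 to S16, under the assumption that each lookup-table entry holds a full RGB contribution, so that the additive blending of the three buffers reduces to a saturating per-pixel sum; this assumption covers both simple per-channel tables and cross-channel color filters (names are illustrative):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;

static uint8_t sat_add(uint8_t a, uint8_t b)
{
    unsigned s = (unsigned)a + b;
    return (uint8_t)(s > 255u ? 255u : s);
}

/* Each plane of the original image indexes its own table (steps
   S11-S13); the three results are combined by additive blending
   with saturation (steps S14-S16). */
void transform_and_combine(Pixel *img, int n,
                           const Pixel lutr[256],
                           const Pixel lutg[256],
                           const Pixel lutb[256])
{
    for (int i = 0; i < n; i++) {
        Pixel a = lutr[img[i].r];
        Pixel b = lutg[img[i].g];
        Pixel c = lutb[img[i].b];
        img[i].r = sat_add(sat_add(a.r, b.r), c.r);
        img[i].g = sat_add(sat_add(a.g, b.g), c.g);
        img[i].b = sat_add(sat_add(a.b, b.b), c.b);
    }
}
```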
Figs. 37 and 38 show a flowchart of a process that transforms the Z value of the original image into an alpha value, which is in turn used to blend the original image with its defocused image (see Fig. 16).
First of all, the original image (or perspective-transformed image) is drawn in the frame buffer (step S21). At this time, the Z value of each pixel is written into the Z buffer.
Next, LUT1 (Fig. 24) for transforming bits 15 to 8 of the Z value in the Z buffer, LUT2 (Fig. 25) for transforming bits 23 to 16, and LUT3 (Fig. 26A) for transforming the 8-bit-form Z value into an alpha value (A value) are transferred to VRAM (step S22).
Bits 15 to 8 of the Z value are set as index numbers into LUT1, which is in turn used to perform texture mapping on a virtual polygon. This polygon is then drawn in an additional buffer (step S23).
Bits 23 to 16 of the Z value are set as index numbers into LUT2, which is in turn used to perform texture mapping on a virtual polygon. This polygon is then drawn in the additional buffer (step S24). At this time, the four lower bits (or data-effective bits) of the 8-bit-form Z value are masked to avoid being overwritten.
Next, the 8-bit-form Z2 value obtained at step S24 is set as an index number into LUT3, which is in turn used to perform texture mapping on a virtual polygon. This polygon is drawn in the frame buffer (alpha plane) (step S25).
Next, the original image drawn in the work buffer at step S21 is mapped onto a virtual polygon through the bilinear filtering type interpolation while the texture coordinates U and V are shifted by (0.5, 0.5). This virtual polygon is drawn in an additional buffer (step S26).
Next, the image drawn in the additional buffer at step S26 is mapped onto a virtual polygon through the bilinear filtering type interpolation while the texture coordinates U and V are shifted by (-0.5, -0.5). This polygon is drawn in the frame buffer (step S27). At this time, alpha blending is carried out using the alpha values drawn in the frame buffer at step S25, to blend the original image with its defocused image.
In such a manner, a so-called depth of field can be represented.
Fig. 39 is a flowchart illustrating the brightness-adjustment process described in connection with Fig. 34.
First of all, it is judged whether brightness adjustment data exists in a memory card (or saved information storage device), and whether that brightness adjustment data is to be loaded (step S30). If it is judged that the adjustment data should be loaded, it is loaded from the memory card (step S31). If it is judged that the adjustment data should not be loaded, it is judged whether the player has selected the option screen for brightness adjustment (the display screen of Fig. 34) (step S32). If not, the brightness adjustment data is set to a previously provided initial value (step S33). On the other hand, if the player has selected the option screen, the brightness adjustment data is set (prepared) based on the operational data from the player, as described in connection with Fig. 34 (step S34). The set brightness adjustment data is then saved in the memory card (step S35).
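The decision flow of steps S30 to S35 can be sketched as follows; the boolean inputs abstract the memory card and option screen, and all names are illustrative, not from the patent:

```c
#include <stdint.h>
#include <stdbool.h>

/* Resolve the brightness adjustment data per Fig. 39. Returns the
   value to use for the per-frame transformation; *do_save signals
   that the caller should write the value back to the memory card. */
uint8_t resolve_brightness_adjust(bool card_has_data, bool load_requested,
                                  bool option_screen_selected,
                                  uint8_t card_value, uint8_t player_value,
                                  uint8_t initial_value, bool *do_save)
{
    *do_save = false;
    if (card_has_data && load_requested)   /* steps S30-S31 */
        return card_value;
    if (option_screen_selected) {          /* steps S32, S34-S35 */
        *do_save = true;
        return player_value;
    }
    return initial_value;                  /* step S33 */
}
```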
Next, the game image is dynamically transformed for each frame based on the resulting brightness adjustment data (the initial adjustment data, the set adjustment data or the loaded adjustment data), through a technique such as that described in connection with Fig. 4 (step S36).

4. Hardware configuration

A hardware configuration capable of implementing this embodiment is shown in Fig. 40.
A main processor 900 operates to execute processing such as game processing, image processing, sound processing and other types of processing according to a program stored in a CD (information storage medium) 982, a program transferred through a communication interface 990 or a program stored in a ROM (information storage medium) 950.
A coprocessor 902 assists the processing of the main processor 900 and has a product-sum operator and divider capable of high-speed parallel calculation, so that matrix (or vector) calculations are executed at high speed. If a physical simulation for causing an object to move or act (motion) requires matrix calculations or the like, the program running on the main processor 900 instructs (or asks) the coprocessor 902 to perform that processing.
A geometry processor 904 performs geometry processing such as coordinate transformation, perspective transformation, light source calculation, curve formation and the like, and has a product-sum operator and divider capable of high-speed parallel calculation, so that matrix (or vector) calculations are executed at high speed. For example, for coordinate transformation, perspective transformation or light source calculation, the program running on the main processor 900 instructs the geometry processor 904 to perform that processing.
A data expanding processor 906 performs a decoding process for expanding compressed image and sound data, or a process for accelerating the decoding process of the main processor 900. Thus, an MPEG-compressed animation may be displayed on the opening screen, intermission screen, ending screen or other game screens. The image and sound data to be decoded may be stored in storage devices including the ROM 950 and CD 982, or may be transferred from outside through the communication interface 990.
A drawing processor 910 draws or renders an object constructed of primitive surfaces such as polygons or curved surfaces at high speed. When the object is drawn, the main processor 900 uses a DMA controller 970 to deliver the object data to the drawing processor 910 and also to transfer a texture to a texture storage section 924, if necessary. The drawing processor 910 then draws the object into a frame buffer 922 at high speed while performing hidden-surface removal by the use of a Z-buffer or the like, based on the object data and texture. The drawing processor 910 can also perform alpha blending (or translucency processing), depth cueing, mip-mapping, fogging, bilinear filtering, trilinear filtering, anti-aliasing, shading and so on. When the image for one frame has been written into the frame buffer 922, that image is displayed on a display 912.
A sound processor 930 includes a multi-channel ADPCM sound source or the like to generate high-quality game sounds such as BGM, sound effects and voices. The generated game sounds are outputted from a speaker 932.
Operational data from a game controller 942, and saved data and personal data from a memory card 944, are transferred externally through a serial interface 940.
A ROM 950 stores a system program and the like. In an arcade game system, the ROM 950 functions as an information storage medium in which various programs are stored. The ROM 950 may be replaced by a suitable hard disk.
A RAM 960 is used as a working area for the various processors.
The DMA controller 970 controls DMA transfers between the processors and the memories (RAM, VRAM, ROM and so on).
A CD controller 980 drives the CD (information storage medium) 982, in which programs, image data and sound data are stored, and enables these programs and data to be accessed.
The communication interface 990 performs data transfer between the image generating system and any external instrument through a network. In such a case, the network connected to the communication interface 990 may be a communication line (analog telephone line or ISDN) or a high-speed serial bus. Using a communication line enables data transfer through the Internet. Using a high-speed serial bus enables data transfer between this game system and other game systems.
All the means of the present invention may be realized (executed) only through hardware, or only through a program stored in an information storage medium or distributed through the communication interface. Alternatively, they may be executed both through hardware and through a program.
If all the means of the present invention are executed both through hardware and through a program, the information storage medium stores a program for realizing the respective means of the present invention through the hardware. More particularly, the aforementioned program instructs the respective processors 902, 904, 906, 910 and 930, which are hardware, and also delivers data to them, if necessary. Each of the processors 902, 904, 906, 910 and 930 then executes the corresponding means of the present invention based on the instructions and the delivered data.
Fig. 41A shows an arcade game system to which this embodiment is applied. Players enjoy the game by controlling levers 1102 and buttons 1104 while viewing game images displayed on a display 1100. A system board (circuit board) 1106 included in the game system has various processors and memories mounted on it. Information (a program or data) for realizing all the means of the present invention is stored in a memory 1108, an information storage medium on the system board 1106. Such information will be referred to as "stored information" hereinafter.
Fig. 41B shows a domestic game system to which this embodiment is applied. A player enjoys the game by manipulating game controllers 1202 and 1204 while viewing a game picture displayed on a display 1200. In this case, the aforementioned stored information is stored in a DVD 1206 and memory cards 1208 and 1209, which are information storage media detachable from the game system body.
Fig. 41C shows an example in which this embodiment is applied to a game system that includes a host machine 1300 and terminals 1304-1 to 1304-n (game devices or portable telephones) connected to the host machine 1300 through a network 1302 (a small-scale network such as a LAN, or a global network such as the Internet). In this case, the above stored information is stored in an information storage medium 1306, such as a magnetic disk device, magnetic tape device or semiconductor memory, that can be controlled by the host machine 1300, for example. If the terminals 1304-1 to 1304-n are capable of generating game images and game sounds in a stand-alone manner, the host machine 1300 delivers the game program and other data for generating game images and game sounds to the terminals 1304-1 to 1304-n. On the other hand, if the game images and sounds cannot be generated by the terminals in a stand-alone manner, the host machine 1300 generates the game images and sounds, which are then transmitted to the terminals 1304-1 to 1304-n. In the arrangement of Fig. 41C, the means of the present invention may be decentralized between the host machine (or server) and the terminals. The above pieces of information for realizing the respective means of the present invention may be distributed among and stored in the information storage media of the host machine (or server) and the terminals.
Each of the terminals connected to the network may be either of home or arcade type.
When arcade game systems are connected to the network, it is desirable that each arcade game system include a saved information storage device (memory card or portable game machine) that can exchange information not only between arcade game systems but also between arcade game systems and domestic game systems.

The present invention is not limited to the above-described embodiment, and various modifications can be made within the scope of the claims.
The image information set as the index number of the index color texture-mapping lookup table is particularly desirably the information described in connection with this embodiment, but the present invention is not limited to such information.
The image transformations (or video filtering) realized by the present invention are not limited to those described in connection with Figs. 7A to 11.
Although it is particularly desirable that the technique for transforming the image information, in the form in which the adjustment information is saved in the saved information storage device, be that described in connection with Fig. 4, the present invention is not limited to such a technique. The image information may instead be transformed through the techniques described in connection with Figs. 1A and 1B, for example.
Although it is particularly desirable that the transformation of the depth value into the second depth value be realized through the technique of Fig. 23 using the index color texture-mapping lookup table, such a transformation may be realized through any other technique.
The transformation property of the lookup table used to transform the depth value is not limited to the transformation properties shown in Figs. 24, 25, 26A and 26B, but may take any of various other forms.
Although it is particularly desirable that the defocused image to be blended with the original image be generated through the technique described in connection with Figs. 29 and 30, the present invention is not limited to such a technique. For example, the defocused image may be generated by blending the original image with a shifted copy of itself, or by blending the original image of the present frame with the original image of the previous frame.
The invention of generating the defocused image through texel interpolation is not limited to the technique described in connection with Figs. 29 to 32B. For example, an area smaller than the screen may be set in which the original image is defocused, rather than defocusing the entire screen.
Although this embodiment has been described with the depth value increasing as the pixel is located nearer to the viewpoint, the depth value may instead be increased as the pixel is located farther from the viewpoint.
The present invention may similarly be applied to any of various other games such as fighting games, shooting games, robot combat games, sports games, competitive games, role-playing games, music playing games, dancing games and so on.
Furthermore, the present invention can be applied to various image generating systems such as arcade game systems, domestic game systems, large-scale multi-player attraction systems, simulators, multimedia terminals, image generating systems, game image generating system boards and so on.
Claims (6)
1. A game system which generates a game image for a domestic game, comprising: means which sets an adjustment data for adjusting display properties of a monitor based on operational data inputted by a player through a game controller; save means which saves the set adjustment data in a saved information storage device for storing personal data of the player; and means which performs transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
2. The game system as defined in claim 1, wherein data of a control point of a free-form curve representing transformation properties of the image information is saved in the saved information storage device as the adjustment data by the save means.
3. A computer-usable program embodied on an information storage medium or in a carrier wave for generating a game image for a domestic game, the program comprising a processing routine for a computer to realize: means which sets an adjustment data for adjusting the display properties of a monitor based on operational data inputted by a player through a game controller; save means which saves the set adjustment data in a saved information storage device for storing personal data of the player; and means which performs transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
4. The program as defined in claim 3, wherein data of a control point of a free-form curve representing transformation properties of the image information is saved in the saved information storage device as the adjustment data by the save means.
5. A method of generating a game image for a domestic game, comprising the steps of: setting an adjustment data for adjusting display properties of a monitor based on operational data inputted by a player through a game controller; saving the set adjustment data in a saved information storage device for storing personal data of the player; and performing transformation processing on image information of an original image based on the adjustment data obtained by adjusting the display properties or loaded from the saved information storage device.
6. The method as defined in claim 5, wherein data of a control point of a free-form curve representing transformation properties of the image information is saved in the saved information storage device as the adjustment data.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000020464 | 2000-01-28 | ||
JP2000213988 | 2000-07-14 | ||
JP2000213725 | 2000-07-14 | ||
GB0123285A GB2363045B (en) | 2000-01-28 | 2001-01-23 | Game system and image creating method |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0403555D0 GB0403555D0 (en) | 2004-03-24 |
GB2395411A true GB2395411A (en) | 2004-05-19 |
GB2395411B GB2395411B (en) | 2004-07-14 |
Family
ID=32180581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0403555A Expired - Fee Related GB2395411B (en) | 2000-01-28 | 2001-01-23 | Game system and image generation method |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2395411B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105094920B * | 2015-08-14 | 2018-07-03 | 网易(杭州)网络有限公司 | A game rendering method and device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07204350A (en) * | 1994-01-12 | 1995-08-08 | Funai Electric Co Ltd | Tv-game system |
- 2001-01-23: GB GB0403555A patent/GB2395411B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07204350A (en) * | 1994-01-12 | 1995-08-08 | Funai Electric Co Ltd | Tv-game system |
Also Published As
Publication number | Publication date |
---|---|
GB2395411B (en) | 2004-07-14 |
GB0403555D0 (en) | 2004-03-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7116334B2 (en) | Game system and image creating method | |
US6927777B2 (en) | Image generating system and program | |
JP2006318389A (en) | Program, information storage medium, and image generation system | |
US7566273B2 (en) | Game system and image generating method | |
JP4804120B2 (en) | Program, information storage medium, and image generation system | |
JP3249955B2 (en) | Image generation system and information storage medium | |
JP2006318388A (en) | Program, information storage medium, and image forming system | |
JP2003323630A (en) | Image generation system, image generation program and information storage medium | |
JP2002063596A (en) | Game system, program and information storage medium | |
JP4223244B2 (en) | Image generation system, program, and information storage medium | |
JP3280355B2 (en) | Image generation system and information storage medium | |
JP3990258B2 (en) | Image generation system, program, and information storage medium | |
JP2004334661A (en) | Image generating system, program, and information storage medium | |
JP2006011539A (en) | Program, information storage medium, and image generating system | |
US7796132B1 (en) | Image generation system and program | |
JP2006252426A (en) | Program, information storage medium, and image generation system | |
JP4656616B2 (en) | GAME SYSTEM, PROGRAM, AND INFORMATION STORAGE MEDIUM | |
JP4159082B2 (en) | Image generation system, program, and information storage medium | |
JP4656617B2 (en) | GAME SYSTEM, PROGRAM, AND INFORMATION STORAGE MEDIUM | |
JP4443083B2 (en) | Image generation system and information storage medium | |
JP3467259B2 (en) | GAME SYSTEM, PROGRAM, AND INFORMATION STORAGE MEDIUM | |
JP4913898B2 (en) | GAME SYSTEM, PROGRAM, AND INFORMATION STORAGE MEDIUM | |
JP4632855B2 (en) | Program, information storage medium, and image generation system | |
JP4056035B2 (en) | Image generation system, program, and information storage medium | |
GB2395411A (en) | Game system and image generation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20150123 |