MXPA95004904A - Method for producing image data, image data processing device and recording medium - Google Patents
- Publication number
- MXPA95004904A
- Authority
- MX
- Mexico
Abstract
The present invention relates to a method of processing image data in which three-dimensional image data is converted by perspective-view transformation into two-dimensional image data, which is then transferred in a given transmission standard to draw an image on a two-dimensional display device. The method comprises providing a data format for the image data in which the format includes the information to be transformed into perspective view, the remainder being arranged in a manner identical to that of the transmission standard determined for the two-dimensional image data.
Description
"METHOD FOR PRODUCING IMAGE DATA, IMAGE DATA PROCESSING DEVICE AND RECORDING MEDIUM"
INVENTORS: MASAYOSHI TANAKA, MASAAKI OKA, TEIJI YUTAKA, KAORU HAGIWARA and HIDETOSHI ICHIOKA, Japanese citizens domiciled at c/o Sony Corporation, 7-35, Kitashinagawa 6-Chome, Shinagawa-ku, Tokyo, Japan, assign all their rights to SONY CORPORATION, a company duly organized and existing under the laws of Japan, domiciled at 7-35, Kitashinagawa 6-Chome, Shinagawa-ku, Tokyo, Japan, for the invention described below.
This application claims priority under the International Convention based on Japanese Patent Application Number P06-300020 filed on December 2, 1994.
BACKGROUND OF THE INVENTION
The present invention relates generally to image data processing and, more particularly, to improvements in methods and apparatus for producing improved image data by data processing, and to a recording medium carrying such image data. The common practice in the prior art is that the images produced on a television receiver, a monitor or a CRT display device by a home video game machine, a microcomputer or a graphics computer are essentially two-dimensional. Such images are usually animated by moving and varying a two-dimensional character or object on a flat two-dimensional background. However, these two-dimensional images are limited in both the modeling of the background and the movement of the character objects, and thus fail to render more realistic images, particularly in a video game. As an improvement, different methods have been proposed for rendering highly realistic three-dimensional images, and some of them are described below. One of several predetermined movements of a character object viewed from several directions can be selected and displayed according to a visual variation, such as a change in the point of view of the image. Also, a simulated three-dimensional image can be created by overlapping a plurality of two-dimensional graphics, one over the other, in the depth direction. A texture mapping method can also be provided in which the surface of a polygon is filled with a texture map (of material or pattern) to generate an image pattern. In another method, a variation of colors is produced by changing the color data of the image with the use of a color look-up table. In a typical example of a prior art home video game machine, manipulation information is input from an input device such as an entry pad or a joystick, and is passed through an interface along a main bus by the action of a CPU consisting mainly of a microprocessor.
During the input of the manipulation data, three-dimensional data stored in a main memory is transmitted by the action of a video processor to a source video memory for temporary storage. The aforementioned CPU also functions to transfer to the video processor a specific sequence for reading a series of image data segments from the source video memory so as to overlap them, one above the other, on the screen. According to the reading sequence of the image data segments, the video processor reads the image data segments from the source video memory and displays them in their overlapped locations. While the image data segments are being read and displayed, the audio components of the manipulation information are fed to an audio processor which, in turn, fetches the corresponding audio data from an audio memory for synchronization with the image data. For example, the source video memory can retain a background of a checkerboard pattern and a group of rectangular image segments, or moving objects, representing cross sections of a cylindrical object against the background. Areas other than the cross sections of the cylindrical object in the moving objects are drawn in transparency. A synchronization generator mounted on the video processor generates a read address signal in response to a synchronization signal of the image data. The read address signal of the synchronization generator is transmitted via the main bus to a read address table determined by the CPU. The synchronization generator also reads the image segments from the source video memory in response to a signal from the read address table.
The retrieved video data segments are then fed to an overlap processor where they are overlapped, one above the other, in the sequence predetermined by a priority table passed through the main bus from the CPU. Since the background comes first and is then followed by the rectangular moving objects, the group of moving objects is placed in superposition, one above the other, on the background. Then, the areas other than the cross sections of the cylindrical object in the aforesaid moving objects that overlap one another on the background are rendered transparent by an appropriate transparency processor. As a result, the two-dimensional image data of the cylindrical object can be reproduced as the three-dimensional VDO data of the original image. However, to produce a new file of a certain format, it is necessary to convert the original data to its desired form and then prepare a format of the desired form of the original data for the new file. A processing method for converting the original data into a certain format includes the processing of the geometric data of an original object to produce three-dimensional graphic data capable of being displayed on a two-dimensional screen from a specific format, as applicable to the home video game machine. This method includes an image data processing sequence (which will be referred to below as three-dimensional graphics processing) in which the three-dimensional graphic data of an original geometric object is produced for display on a two-dimensional screen, and which allows the original geometric data of the object, supplied to a terminal, to be processed by a coordinate transformation device to produce a data packet of a certain format, which is then transmitted to a playback device for drawing.
The original geometric data is composed of a group of polygons (which are the unit configurations of graphics), including triangles, quadrilaterals and other configurations, that are handled by a drawing device and expressed as a three-dimensional model on a display device. The data of each polygon includes the type of polygon (triangle, quadrilateral or the like), an attribute of the polygon (transparent or semitransparent), a color of the polygon, three-dimensional coordinates representing its vertices, three-dimensional vectors representing the normal through a vertex, and two-dimensional coordinates representing a storage location of the texture data. There are known file formats that contain two or more of these geometric data. The packet data of the format produced by the processing action of the coordinate transformation device carries the information to draw a polygon on the screen of a display device, including the type of polygon (triangle, quadrilateral or the like), an attribute of the polygon (transparent or semitransparent), two-dimensional coordinates representing vertices, a color of a vertex, and two-dimensional coordinates representing a storage location of texture data. Figure 57 shows a typical format of an ordinary file that contains a series of packet data. For example, CODE is a code that represents a type (polygon, line, moving object or the like) of content, U and V represent the X and Y coordinate values, respectively, in the texture source space, R, G and B are the values of the three primary colors of a polygon, and X and Y are the X and Y coordinate values respectively indicating the vertices of the polygon. The content and length of the data packet vary depending on the configuration and size of the polygon. In order to convert an existing file of the above described format into a file of a new and improved format, the following steps of a process would have to be carried out in the coordinate transformation device: 1.
The size of the desired data packet for each configuration of the polygons is calculated and reserved in a certain area of an applicable memory. 2. The following procedure is repeated for each polygon: (1) The type and attribute of a polygon are combined to form one word, which is written in region 0 of the packet data. (2) The color of each vertex is determined from the normal of the vertex and the color of the polygon, and is written in region 0 and in regions 3 and 6 of the packet data. (3) The two-dimensional coordinates are calculated from the three-dimensional coordinates of each vertex and are written in regions 1, 4 and 7 of the packet data. (4) The two-dimensional coordinates of the texture are then written in regions 2, 5 and 8 of the packet data.
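The region-by-region packing procedure above can be sketched as follows. This is a hypothetical illustration only: the 8-bit field widths and the byte order within each 32-bit word are assumptions for the sake of a concrete example, not values taken from the patent's figures.

```python
# Hypothetical packing of one triangle packet following the region layout
# described above: region 0 holds the combined type/attribute code plus the
# first vertex's color, regions 3 and 6 hold the other vertex colors,
# regions 1/4/7 the screen X/Y pairs, and regions 2/5/8 the texture U/V pairs.

def pack_triangle(code, attr, colors, screen_xy, tex_uv):
    """Return the nine 32-bit words of one triangle packet."""
    packet = [0] * 9
    for i in range(3):
        r, g, b = colors[i]
        color_word = (b << 16) | (g << 8) | r
        if i == 0:
            # Region 0: type/attribute byte combined with vertex 0's color.
            packet[0] = ((code | attr) << 24) | color_word
        else:
            packet[3 * i] = color_word                           # regions 3, 6
        x, y = screen_xy[i]
        packet[3 * i + 1] = ((y & 0xFFFF) << 16) | (x & 0xFFFF)  # regions 1, 4, 7
        u, v = tex_uv[i]
        packet[3 * i + 2] = ((v & 0xFF) << 8) | (u & 0xFF)       # regions 2, 5, 8
    return packet

words = pack_triangle(0x20, 0x02,
                      colors=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
                      screen_xy=[(10, 20), (110, 20), (60, 120)],
                      tex_uv=[(0, 0), (63, 0), (32, 63)])
```

The point of the sketch is that the packet's content and length depend on the polygon's configuration, which is why the conversion must first compute and reserve the packet size per polygon.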
As indicated, at least these three steps are required to produce a new file (packet data file) from the image data of an original file (object configuration data file): 1. An area of memory is reserved for producing the new file. 2. The data of the original file is prepared in a format and stored. 3. The data calculated from the original file data is stored in the new format. The aforementioned steps 1 to 3 are costly in time and labor. Accordingly, there has been a long-standing need for improved image data processing wherein a file containing the original image data to be transformed is easily converted into a new format, for an image data processing apparatus to process this image data, and for a recording medium to carry this improved image data. The present invention fills these needs.
SUMMARY OF THE INVENTION
In short, and in general terms, the present invention provides a new and improved method and apparatus for producing image data, wherein a file containing the original image data to be transformed is easily converted into a new format, an image data processing apparatus for processing this image data, and a recording medium for carrying this enhanced image data. By way of example, and not necessarily by way of limitation, a method for producing image data according to the present invention is provided to convert three-dimensional image data by perspective-view transformation into two-dimensional image data, and to transfer the two-dimensional image data in a given transmission standard for drawing an image on a two-dimensional display screen. In particular, in this method the structure of the three-dimensional image data, excluding the information to be transformed into perspective view, is arranged in a manner identical to that of the transmission standard determined for the two-dimensional image data. The three-dimensional image data may include information about shading on an object to be drawn on the two-dimensional display screen. The structures of the three-dimensional image data and the two-dimensional image data may be identical to one another in a minimum unit of one or more words. The three-dimensional image data may include all the content of the two-dimensional data.
Similarly, an image data processing apparatus in accordance with the present invention includes a coordinate transformation means for converting three-dimensional image data by perspective-view transformation into two-dimensional image data, and a drawing means for transferring the two-dimensional image data in a given transmission standard to draw an image on a two-dimensional display screen, wherein the structure of the three-dimensional image data, excluding the information to be transformed into perspective view, is arranged identically to that of the given transmission standard of the two-dimensional image data. The coordinate transformation means operates so that the information to be transformed into perspective view is discriminated from the other data of the three-dimensional image data, whose structure is identical to that of the determined transmission standard of the two-dimensional image data, subjected to the perspective-view transformation, and combined with that other data for the production of the two-dimensional image data. As indicated above, this three-dimensional image data may include information about shading on an object to be drawn on the two-dimensional display screen. A recording medium, in accordance with the present invention, is provided to retain the image data created by the above described method and apparatus of the invention. Since the structure of the three-dimensional image data, excluding the information to be transformed into perspective view, is identical to that of the given transmission standard of the two-dimensional image data, the two-dimensional image data of the given transmission standard can be obtained simply by processing the information that is to be transformed into perspective view.
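The arrangement described above can be sketched as follows: because the three-dimensional packet already matches the two-dimensional transmission standard word for word except for the perspective information, producing the drawable packet reduces to overwriting the coordinate slots in place. The slot positions (1, 4 and 7, as in the nine-word packet of the prior-art example) and the projection with an assumed screen distance H are illustrative assumptions, not the patent's actual layout.

```python
# Minimal sketch of the central idea: only the slots holding untransformed
# vertex data are rewritten; every other word of the 3D packet is reused
# unchanged, so no new file layout has to be built.

H = 512  # assumed distance from viewpoint to screen

def perspective(vertex):
    """Project one (x, y, z) vertex to a packed screen X/Y word."""
    x, y, z = vertex
    sx = x * H // z
    sy = y * H // z
    return ((sy & 0xFFFF) << 16) | (sx & 0xFFFF)

def to_2d_packet(packet_3d, vertices):
    """Overwrite only the coordinate slots; other words pass through as-is."""
    packet = list(packet_3d)
    for slot, vertex in zip((1, 4, 7), vertices):
        packet[slot] = perspective(vertex)
    return packet

packet_3d = [0x220000FF, 0, 0, 0x0000FF00, 0, 0x3F00, 0x00FF0000, 0, 0x203F]
packet_2d = to_2d_packet(packet_3d,
                         [(100, 50, 512), (-100, 50, 1024), (0, -80, 512)])
```

The colors, attributes and texture coordinates (all the non-perspective words) survive the conversion untouched, which is exactly why the identical arrangement eliminates the reformatting steps of the prior art.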
Furthermore, since the three-dimensional image data includes the information about shading on an object to be drawn on the two-dimensional display screen, no additional calculation is necessary to generate the shading data of the object during the production of the two-dimensional image data. Therefore, the present invention fills a long-standing need for improved image data processing wherein a file containing the original image data to be transformed is easily converted into a new format, for an image data processing apparatus for processing this image data, and for a recording medium for carrying this improved image data. These and other objects and advantages of the invention will become apparent from the following more detailed description when taken in conjunction with the accompanying drawings of the illustrative embodiments.
DESCRIPTION OF THE DRAWINGS
Figure 1 is a functional diagram of the overall system arrangement of an image data processing apparatus in accordance with the present invention; Figure 2 is a diagram illustrating the representation on a display device; Figure 3 is a diagram showing the graduation of the representation on a display device; Figure 4 is a diagram illustrating the clipping function for drawing; Figure 5 is a diagram illustrating a texture page;
Figure 6 is a diagram showing the structure of a CLUT; Figure 7 is a diagram illustrating the fundamental basis for drawing a moving object; Figure 8 is a diagram illustrating double frame buffering; Figure 9 is a diagram showing the format of a TOD file; Figure 10 is a diagram showing the format of a FRAME in the TOD format; Figure 11 is a diagram showing the format of a PACKET of a FRAME; Figure 12 is a diagram illustrating the structure of the "packet data" of the attribute type; Figure 13 is a diagram illustrating the structure of the "packet data" for switching on the light-source calculation; Figure 14 is a diagram illustrating the structure of a "flag" of the coordinate (RST) type; Figure 15 is a diagram illustrating the structure of the "packet data" of the coordinate (RST) type; Figure 16 is a diagram illustrating the structure of the "packet data" of the TMD data ID type;
Figure 17 is a diagram illustrating the structure of the "packet data" of the host object ID type; Figure 18 is a diagram illustrating the structure of the "packet data" of the matrix type; Figure 19 is a diagram illustrating the structure of a "flag" of the light-source type; Figure 20 is a diagram illustrating the structure of the "packet data" of the light-source type; Figure 21 is a diagram illustrating the structure of a "flag" of the camera type; Figure 22 is a diagram illustrating the allocation of the other bits when the "camera type" is 0; Figure 23 is a diagram illustrating the allocation of the other bits when the "camera type" is 1; Figure 24 is a diagram illustrating a first structure of the "packet data" of the camera type; Figure 25 is a diagram illustrating a second structure of the "packet data" of the camera type; Figure 26 is a diagram showing the TMD format;
Figure 27 is a diagram showing the structure of a HEADER of the TMD format; Figure 28 is a diagram showing the structure of an OBJTABLE of the TMD format; Figure 29 is a diagram showing the structure of a PRIMITIVE of the TMD format; Figure 30 is a diagram showing the structure of the "mode" of a PRIMITIVE; Figure 31 is a diagram showing the structure of the "flag" of a PRIMITIVE; Figure 32 is a diagram showing the structure of a VERTEX of the TMD format; Figure 33 is a diagram showing the structure of a NORMAL of the TMD format; Figure 34 is a diagram showing the format of a fixed-decimal-point fraction; Figure 35 is a diagram showing the structure of the TBS parameter in the "packet data" of a PRIMITIVE; Figure 36 is a diagram showing the structure of the CBA parameter in the "packet data" of a PRIMITIVE; Figure 37 is a diagram of the "mode" bit allocation with the application of the triangle polygon and the light-source calculation, showing a modification of the "packet data" of a PRIMITIVE; Figure 38 is a diagram illustrating the structure of the "packet data" of a PRIMITIVE with the application of the triangle polygon and the light-source calculation; Figure 39 is a diagram illustrating the structure of the "packet data" of a PRIMITIVE with the application of the triangle polygon, but without the light-source calculation; Figure 40 is a diagram of the "mode" bit allocation with the application of the quadrilateral polygon and the light-source calculation, showing a modification of the "packet data" of a PRIMITIVE; Figure 41 is a diagram illustrating the structure of the "packet data" of a PRIMITIVE with the application of the quadrilateral polygon and the light-source calculation; Figure 42 is a diagram illustrating the structure of the "packet data" of a PRIMITIVE with the application of the quadrilateral polygon, but without the light-source calculation; Figure 43 is a diagram of the "mode" bit allocation of a line drawing, showing a modification
of the "packet data" of a PRIMITIVE;
Figure 44 is a diagram of the structure of the "packet data" of a line drawing, showing a modification of the "packet data" of a PRIMITIVE; Figure 45 is a diagram of the "mode" bit allocation of a three-dimensional moving-object drawing, showing a modification of the "packet data" of a PRIMITIVE; Figure 46 is a diagram of the structure of the "packet data" of a three-dimensional moving-object drawing, showing a modification of the "packet data" of a PRIMITIVE; Figure 47 is a flow chart illustrating the sequence of steps for the perspective-view transformation of the TMD format data of the present invention; Figure 48 is a flow chart illustrating the sequence of actions carried out in a common three-dimensional graphics coordinate transformation device; Figure 49 is a flow chart illustrating the sequence of actions of the coordinate transformation device when shading of an object is not carried out in real time; Figure 50 is a diagram of another embodiment of the TMD format;
Figure 51 is a diagram showing the structure of the "polygon data" in the TMD format for the embodiment of Figure 50; Figure 52 is a diagram showing the structure of the "packet data" for a further embodiment; Figure 53 is a functional diagram illustrating the system arrangement of a prior art image production apparatus (or home video game machine); Figure 54 is a schematic combined block diagram illustrating an image production method as carried out by the prior art image production apparatus; Figure 55 is a functional diagram showing the arrangement of an image data processing system of the prior art; Figure 56 is a diagram showing the structure of a conventional file for the configuration data of an object; and Figure 57 is a diagram showing the structure of a conventional file for "packet data".
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring now to the drawings, like reference numbers represent like or corresponding parts throughout the drawing figures. A typical example of a prior art home video game machine is illustrated in Figure 53. As shown in Figure 53, the manipulation information introduced from an input device 391, such as an entry pad or a joystick, is passed through an interface 393 and along a main bus 399 by the action of a CPU 391 consisting mainly of a microprocessor. As indicated above, during the input of the manipulation data, three-dimensional data stored in the main memory 392 is transmitted by the action of a video processor 396 to a source video memory 395 for temporary storage. The CPU 391 also functions to transfer to the video processor 396 a specific sequence for reading a series of segments of the image data from the source video memory 395 so as to overlap them, one on the other, on the screen. In accordance with the reading sequence of the image data segments, the video processor 396 reads the segments of the image data from the source video memory 395 and displays them in their overlapped placement. While the segments of the image data are being read and displayed, the audio components of the manipulation information are fed to an audio processor 397 which, in turn, fetches the corresponding audio data from an audio memory 398 for synchronization with the image data. In Figure 54 there is shown a procedure for providing three-dimensional data based on a two-dimensional data format in the home video game machine illustrated in Figure 53. Figure 54 illustrates the display of a cylindrical object against the background of a checkerboard pattern in a three-dimensional image.
The source video memory 395 of Figure 54 retains a background 200 of a checkerboard pattern and a group of rectangular image segments, or moving objects, 201, 202, 203 and 204 which represent cross sections of the cylindrical object against the background 200. Areas other than the cross sections of the cylindrical object in the moving objects 201, 202, 203 and 204 are drawn in transparency. A synchronization generator 400 mounted on the video processor 396 is used to generate a read address signal in response to a synchronization signal of the image data. The read address signal of the generator 400 is transmitted through the main bus 399 to a read address table 401 which is determined by the CPU 391 shown in Figure 53. The synchronization generator 400 also reads the image segments from the source video memory 395 in response to a signal from the read address table 401. The retrieved video data segments are then fed to an overlap processor 403 where they are overlapped, one over the other, in the sequence determined by a priority table 402 passed through the main bus 399 from the CPU 391. Since the background 200 comes first and is then followed by the rectangular moving objects 201, 202, 203 and 204 in that order, the group of moving objects is placed one on top of the other, on the background 200. Then, the areas other than the cross sections of the cylindrical object in the moving objects 201, 202, 203 and 204 that are overlapped, one above the other, on the background are rendered transparent by a transparency processor 404. As a result, the two-dimensional image data of the cylindrical object can be reproduced as the three-dimensional VDO data of the original image, as shown in Figure 54. However, as indicated above, for the production of a new file of a certain format it is necessary to convert the original data to its desired form and then prepare a format of the desired form of the original data for the new file.
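The read-and-overlap sequence of this prior-art pipeline can be miniaturized as follows. The array sizes, the transparency key of 0 and the pixel values are invented for illustration; the point is only the priority-ordered compositing with a transparent value letting the lower layers show through, as performed by the overlap processor 403 and transparency processor 404.

```python
# Toy reconstruction of the prior-art pipeline: rectangular segments are read
# out of a source video memory and overlapped in the order given by a priority
# table, with a designated transparent value keeping lower layers visible.

TRANSPARENT = 0

def composite(background, segments, priority):
    """Overlay segments on the background, background-most priority first."""
    frame = [row[:] for row in background]
    for index in priority:
        top, left, pixels = segments[index]
        for dy, row in enumerate(pixels):
            for dx, pixel in enumerate(row):
                if pixel != TRANSPARENT:      # transparent areas pass through
                    frame[top + dy][left + dx] = pixel
    return frame

# A checkerboard background and two strips with transparent ends, standing in
# for the cross sections of the cylindrical object.
checkerboard = [[1 if (x + y) % 2 else 2 for x in range(6)] for y in range(4)]
segments = {0: (1, 1, [[0, 7, 7, 0]]), 1: (2, 1, [[7, 7, 7, 7]])}
frame = composite(checkerboard, segments, priority=[0, 1])
```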
A processing method for converting the original data into a specific format includes the processing of the geometric data of an original object to produce three-dimensional graphic data capable of being displayed on a two-dimensional screen from a specific format, as applicable to a home video game machine. As illustrated in Figure 55, this method includes an image data processing sequence
(which will be referred to below as three-dimensional graphics processing) wherein the three-dimensional graphic data of an original geometric object is produced to be displayed on a two-dimensional screen. The system shown in Figure 55 allows the original geometric data of the object, supplied to a terminal 500, to be processed by a coordinate transformation device 501 to produce a data packet of a given format which is then transmitted to a reproducing device 502 for drawing. The original geometric data is composed of a group of polygons (which are the unit configurations of graphics, including triangles, quadrilaterals and other configurations, and which are handled by a drawing device) and expressed as a three-dimensional model on a display device. The data of each polygon includes the type of polygon (triangle, quadrilateral or the like), an attribute of the polygon (transparent or semitransparent), a color of the polygon, the three-dimensional coordinates representing its vertices, the three-dimensional vectors representing the normal through a vertex, and two-dimensional coordinates representing a storage location of the texture data. Figure 56 illustrates a known file format that contains two or more of these geometric data. The packet data of the format produced by the processing action of the coordinate transformation device 501 carries the information to draw a polygon on the screen of a display device, including the type of polygon (triangle, quadrilateral or the like), an attribute of the polygon (transparent or semitransparent), two-dimensional coordinates representing vertices, a color of a vertex, and two-dimensional coordinates representing a storage location of texture data. Figure 57 shows a typical format of an ordinary file that contains a series of packet data.
As shown in Figure 57, CODE is a code that represents a type (polygon, line, moving object or the like) of content, U and V represent the X and Y coordinate values, respectively, in the texture source space, R, G and B are the values of the three primary colors of a polygon, and X and Y are the X and Y coordinate values respectively indicating the vertices of the polygon. The content and length of the packet data vary depending on the configuration and size of the polygon. In order to convert an existing file of the format shown in Figure 56 into a file of a new and improved format as shown in Figure 57, the following steps of a process have to be performed in the coordinate transformation device 501: 1. The size of the desired data packet for each configuration of the polygons is calculated and reserved in a certain area of an applicable memory. 2. The following procedure is repeated for each polygon: (1) The type and attribute of the polygon are combined to form one word, which is written in region 0 of the packet data. (2) The color of each vertex is determined from the normal of the vertex and the color of the polygon, and is written in region 0 and in regions 3 and 6 of the packet data. (3) The two-dimensional coordinates are calculated from the three-dimensional coordinates of each vertex and written in regions 1, 4 and 7 of the packet data. (4) The two-dimensional coordinates of the texture are then written in regions 2, 5 and 8 of the packet data. As indicated, at least these three steps are required to produce a new file (packet data file) from the image data of an original file (object configuration data file): 1. An area of memory is reserved to produce the new file. 2. The data of the original file is prepared in a format and stored. 3. The data calculated from the original file data is stored in the new format.
The aforementioned steps 1 to 3 are costly in time and labor, and therefore there is a need for improved data processing efficiency. Before describing the main embodiment of the present invention, in the form of a method for producing image data, an image processing system of another embodiment of the present invention will be explained, which generates three-dimensional graphic data from the image data produced by the image data processing method of the present invention, in order to aid the understanding of the primary embodiment. Referring now to the drawings, Figure 1 shows an arrangement of the image processing system installed in a home video game machine. The image processing system is essentially designed to be used in a home video game machine, a microcomputer or a graphics computer. The image processing system of the embodiment of Figure 1 allows an operator to play a game by controlling the related data (e.g., game programs) retrieved from a recording medium such as an optical disc (e.g., a CD-ROM), which is also designed by means of the present invention to store the data in a specified format.
More specifically, the image processing system of the embodiment shown in Figure 1 comprises a main controller module 50 composed of a central processing unit (CPU) 51 and its peripheral devices (including a peripheral device controller 52), a graphics module 60 composed essentially of a graphics processing unit (GPU) 62 for drawing an image in a frame buffer 63, a sound module 70 composed of a sound processing unit (SPU) 71 and other devices for emitting music or a sound effect, an optical disk controller module 80 for controlling an optical disk drive 81 (CD-ROM) acting as an auxiliary memory means and for decoding the reproduced data, a communication controller module 90 for controlling the input of control signals from a controller 92 and the input and output of game setting information to a sub-memory (or memory card) 93, and a main bus B connected from the main controller module 50 to the communication controller module 90. The main controller module 50 comprises the CPU 51, the peripheral device controller 52 for controlling interrupt actions, time sequences, memory actions and transmission of a direct memory access (DMA) signal, a main memory 53 composed, e.g., of 2 megabytes of RAM, and a ROM 54 of, for example, 512 kilobytes, in which are stored programs, including an operating system, for operating the main memory 53, the graphics module 60 and the sound module 70. The CPU 51 may be a 32-bit reduced instruction set computer (RISC) for carrying out the operating system stored in the ROM 54 in order to control the entire system. The CPU 51 also includes an instruction cache and a memory for controlling real storage. The graphics module 60 comprises a GTE 61 consisting of a coordinate-calculating coprocessor for carrying out a coordinate transformation process, the GPU 62 for drawing an image in response to command signals from the CPU 51, the frame buffer 63 having, e.g., one megabyte for storing the graphic data provided by the GPU 62, and an image decoder 64 (referred to below as "MDEC") for decoding image data compressed and encoded by an orthogonal transformation process such as a discrete cosine transformation. The GTE 61 may have a parallel processor for performing a plurality of arithmetic operations in parallel, and acts as a coprocessor for the CPU 51 to perform at high speed the coordinate transformation and light source calculations, and the vector and matrix operations, in fixed-point notation. More specifically, the GTE 61 is able to carry out the calculation of polygon coordinates at typically 1.5 million polygons per second for flat shading, where each triangular polygon is drawn in a single color. This allows the image processing system to reduce the load on the CPU 51 to a minimum and thus carry out the coordinate calculations at higher speed. The GPU 62 responds to a polygon drawing command from the CPU 51 to draw a polygon or graphic into the frame buffer 63. The GPU 62 can draw up to 360,000 polygons per second and has a two-dimensional address space, independent of the CPU 51, onto which the frame buffer 63 is mapped.
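The fixed-point arithmetic performed by a coordinate coprocessor such as the GTE 61 can be illustrated by the following sketch, which applies a rotation matrix held in 4.12 fixed-point format to a vector. This is a simplified model for illustration, not the actual GTE operation.

```c
#include <stdint.h>

/* Rotation matrix entries in 4.12 fixed point: 4096 represents 1.0,
 * so each accumulated product is shifted back down by 12 bits. */
typedef struct { int16_t m[3][3]; } Mat33;
typedef struct { int32_t x, y, z; } Vec3;

static Vec3 rotate_fixed(const Mat33 *r, Vec3 v)
{
    Vec3 out;
    int64_t ax = (int64_t)r->m[0][0] * v.x + (int64_t)r->m[0][1] * v.y
               + (int64_t)r->m[0][2] * v.z;
    int64_t ay = (int64_t)r->m[1][0] * v.x + (int64_t)r->m[1][1] * v.y
               + (int64_t)r->m[1][2] * v.z;
    int64_t az = (int64_t)r->m[2][0] * v.x + (int64_t)r->m[2][1] * v.y
               + (int64_t)r->m[2][2] * v.z;
    out.x = (int32_t)(ax >> 12);   /* remove the 4.12 scaling */
    out.y = (int32_t)(ay >> 12);
    out.z = (int32_t)(az >> 12);
    return out;
}
```

Fixed-point arithmetic of this kind avoids floating-point hardware while keeping enough precision for per-vertex transforms.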
The frame buffer 63 comprises a so-called dual-port RAM which can simultaneously perform retrieval of drawing data from the GPU 62, or transfer of data from the main memory 53, and release of data for display. The frame buffer 63 may have a size of one megabyte, constituting a pixel matrix of 1,024 horizontal by 512 vertical entries in 16-bit format. Any desired area within the size of the frame buffer 63 can be supplied to a video output means 65 such as a display means. In addition to the area supplied as a video output, the frame buffer 63 includes a color look-up table (referred to below as "CLUT") area for storing a CLUT, which is used as a reference during drawing of graphics or polygons by the action of the GPU 62, and a texture area for storing texture data to be coordinate-transformed and mapped onto the graphics or polygons drawn by the GPU 62. Both the CLUT and texture areas can be varied dynamically depending on a change of the display area. The frame buffer 63 can thus carry out drawing access to the area on display and high-speed DMA transfer to and from the main memory 53. The GPU 62 can also carry out, in addition to flat shading, Gouraud shading, in which the color of a polygon is determined by interpolation of the vertex colors, and texture mapping, in which a texture selected from the texture area is attached to a polygon. For Gouraud shading or texture mapping, the GTE 61 can perform the coordinate calculation at a rate of up to 500,000 polygons per second. The MDEC 64 responds to a command signal from the CPU 51 to decode still or moving image data retrieved from a CD-ROM disk and stored in the main memory 53, and to store it again in the main memory 53.
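Gouraud shading, as described above, determines interior colors by interpolating the vertex colors. Its kernel can be sketched as a single-channel linear blend, with the interpolation parameter t scaled to the range 0..256; this is purely illustrative.

```c
/* Linear interpolation between two 8-bit color channel values.
 * t = 0 yields a, t = 256 yields b; intermediate values blend. */
static unsigned char lerp_u8(unsigned char a, unsigned char b, unsigned t)
{
    return (unsigned char)((a * (256 - t) + b * t) >> 8);
}
```

In a full rasterizer this blend runs per channel along each polygon edge and then along each scanline.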
More particularly, the MDEC 64 performs an inverse discrete cosine transformation (referred to as an inverse DCT) operation at high speed to expand data compressed under the color still picture compression standard (known as JPEG) or the moving picture coding standard for storage media (known as MPEG, though in this embodiment for intra-frame compression only).
The produced image data is transferred through the GPU 62 to the frame buffer 63 and can therefore be used as a background for an image drawn by the GPU 62. The sound module 70 comprises the sound processing unit (SPU) 71, which responds to a command from the CPU 51 to generate music or a sound effect, a sound buffer 72 having, by way of example and not limitation, 512 kilobytes for storing audio data of voice or music sound and sound source data retrieved from the CD-ROM, and a loudspeaker 73 acting as a sound output means for emitting music or a sound effect generated by the SPU 71. The SPU 71 has an adaptive differential pulse code modulation (ADPCM) signal decoding function for reproducing audio data which has been converted from 16-bit audio data into a 4-bit ADPCM format, a playback function for playing the sound source data stored in the sound buffer 72 to output music or an effect sound, and a modulation function for modulating the audio data stored in the sound buffer 72 for reproduction. More specifically, the SPU 71 has an ADPCM sound source with 24 voices, in which parameters such as looping and time coefficients are automatically modified, activated by a signal from the CPU 51. The SPU 71 controls its address space, onto which the sound buffer 72 is mapped, and can carry out reproduction of audio data by direct transmission of the ADPCM data, with key-on/key-off or modulation information, from the CPU 51 to the sound buffer 72. Correspondingly, the sound module 70 is used as a sampling sound source to generate an effect or music sound corresponding to the audio data stored in the sound buffer 72 upon receipt of a command signal from the CPU 51. The optical disk controller module 80 comprises the disk drive 81 for retrieving a program or data from an optical disk of the CD-ROM, a decoder 82 for decoding a stored program or data encoded with error correction codes (ECC), and a buffer 83 of, for example, 32 kilobytes for storing data retrieved from an optical disk.
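The ADPCM expansion performed by the SPU can be illustrated schematically: each stored 4-bit sample is sign-extended, scaled by a per-block shift, and combined with a prediction from previous output. The filter below is a deliberately simplified stand-in, not the SPU's actual predictor table.

```c
#include <stdint.h>

/* One 4-bit ADPCM expansion step (illustrative). The nibble is
 * sign-extended to a 4-bit signed value, scaled by a per-block
 * shift, and added to a crude first-order prediction. */
static int16_t adpcm_step(uint8_t nibble, int shift, int32_t *prev)
{
    int32_t s = (int32_t)(int8_t)(nibble << 4) >> 4;  /* sign-extend */
    int32_t sample = (s * 4096) >> shift;             /* block scaling */
    sample += *prev >> 1;                             /* prediction */
    if (sample > 32767) sample = 32767;               /* 16-bit clamp */
    if (sample < -32768) sample = -32768;
    *prev = sample;
    return (int16_t)sample;
}
```

Storing 4-bit differences instead of 16-bit samples is what gives ADPCM its roughly 4:1 compression over plain PCM.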
The optical disk controller module 80, composed of the disk drive 81, the decoder 82 and other components for reading data from a disk, also supports other disk formats, including CD-DA and CD-ROM XA. The decoder 82 also serves as a member of the sound module 70. The audio data retrieved by the disk drive 81 from the disk is not limited to the ADPCM format (for storage on CD-ROM XA disks) but may be in a common PCM mode produced by analog-to-digital conversion. The ADPCM data may be recorded in a 4-bit differential form calculated from 16-bit digital data; it is first subjected to error correction and decoding in the decoder 82, transmitted to the SPU 71 where it is converted from digital to analog, and supplied to the loudspeaker 73 for playback. The PCM data may be recorded in the form of a 16-bit digital signal and is decoded by the decoder 82 to drive the loudspeaker 73. An audio output of the decoder 82 is first sent to the SPU 71, where it is mixed with an output of the SPU, and is released through a reverberation unit for audio playback. The communication controller module 90 comprises a communication controller device 91 for controlling communications along the main bus B with the CPU 51, the controller 92 for the input of commands by the operator, and the memory card 93 for storing game setting data. The controller 92 is an interface for transmitting operator instructions to the application software and may carry 16 command keys for the input of the instructions. The states of the keys, as predetermined by the communication controller device 91, are fed to the communication controller device 91 in synchronous mode at a rate of 60 times per second, and the communication controller device 91 then transmits the key states to the CPU 51. The controller 92 has two connectors placed thereon for connecting a number of controllers one after another through a multi-tap port.
Correspondingly, upon receipt of an operator command, the CPU 51 begins to carry out a corresponding process action determined by the game program. When the initial setting of a game to be played is requested, the CPU 51 transfers the related data to the communication controller device 91, which, in turn, stores the data in the memory card 93. The memory card 93 is separate from the main bus B and can be freely installed or removed while the main bus B is energized. This allows game setting data to be stored on two or more of the memory cards 93. The system of this embodiment of the present invention is also provided with a 16-bit parallel input/output (I/O) port 101 and an asynchronous serial input/output (I/O) port 102. The system can be connected at the parallel I/O port 101 with any other peripheral device, and at the serial I/O port 102 with another video game machine for communications. Between the main memory 53, the GPU 62, the MDEC 64 and the decoder 82, it is required to transfer at high speed an immense amount of image data when reading a program, displaying a text or drawing a graphic. The image processing system of this embodiment is therefore adapted to allow direct data transfer, or DMA transfer, between the main memory 53, the GPU 62, the MDEC 64 and the decoder 82 without using the CPU 51; instead, the transfer is carried out under the control of the peripheral device controller 52. As a result, the load on the CPU 51 during data transfer is considerably reduced, thus ensuring high-speed data transfer operations.
The video game machine of the present invention allows the CPU 51 to execute the operating system stored in the ROM 54 upon power-up. As the operating system is executed, the actions of the graphics module 60 and the sound module 70 are correctly controlled by the CPU 51. Furthermore, when the operating system is invoked, the CPU 51 initializes the entire system by checking each action and then drives the optical disk controller module 80 to execute a desired game program stored on an optical disk. During execution of the game program, the CPU 51 operates the graphics module 60 and the sound module 70 in response to control inputs by the operator, to control the display of images and the reproduction of music or effect sounds. The representation of image data on the display device by the image data processing apparatus of the present invention will now be explained. The GPU 62 displays the area of a desired graphics model produced in the frame buffer 63 on the video output means 65 or display device, e.g., a CRT. This area is referred to below as the display area. The relationship between the display area and the display screen is illustrated in Figure 2. The GPU 62 is designed to support the ten display modes shown below:
Mode 0: 256 (H) x 240 (V), non-interlaced
Mode 1: 320 (H) x 240 (V), non-interlaced
Mode 2: 512 (H) x 240 (V), non-interlaced
Mode 3: 640 (H) x 240 (V), non-interlaced
Mode 4: 256 (H) x 480 (V), interlaced
Mode 5: 320 (H) x 480 (V), interlaced
Mode 6: 512 (H) x 480 (V), interlaced
Mode 7: 640 (H) x 480 (V), interlaced
Mode 8: 384 (H) x 240 (V), non-interlaced
Mode 9: 384 (H) x 480 (V), interlaced
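The ten display modes can be represented as a simple lookup table; the sketch below encodes the resolutions and interlace settings listed above.

```c
/* Display modes 0-9 of the GPU 62, transcribed from the table above. */
typedef struct { int w, h, interlaced; } DispMode;

static const DispMode disp_modes[10] = {
    {256, 240, 0}, {320, 240, 0}, {512, 240, 0}, {640, 240, 0},
    {256, 480, 1}, {320, 480, 1}, {512, 480, 1}, {640, 480, 1},
    {384, 240, 0}, {384, 480, 1},
};
```

Note that the interlaced modes 4 to 7 and 9 pair with the non-interlaced modes of the same width but double the vertical resolution.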
The size, or number of pixels, of the display screen is variable, and the display start and end locations (expressed as (DTX, DTY) and (DBX, DBY), respectively, in a coordinate plane) can be determined separately in the horizontal direction and the vertical direction, as shown in Figure 3. The relationship between the scale of applicable coordinate values and the display mode is shown below. It will be noted that DTX and DBX are multiples of 4. Therefore, the minimum screen size consists of 4 pixels horizontally by 2 pixels vertically (in non-interlaced mode) or 4 pixels (in interlaced mode).
* Scale of values applicable along the X axis:
Modes 0 and 4: DTX 0 to 276, DBX 4 to 280
Modes 1 and 5: DTX 0 to 348, DBX 4 to 352
Modes 2 and 6: DTX 0 to 556, DBX 4 to 560
Modes 3 and 7: DTX 0 to 700, DBX 4 to 704
Modes 8 and 9: DTX 0 to 396, DBX 4 to 400
* Scale of values applicable along the Y axis:
Modes 0 to 3 and 8: DTY 0 to 241, DBY 4 to 243
Modes 4 to 7 and 9: DTY 0 to 480, DBY 4 to 484
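The horizontal constraints in the table above (DTX and DBX must be multiples of 4, within mode-dependent limits) can be validated as in this sketch, shown here for modes 0 and 4 only; the other modes would substitute their own limits.

```c
/* Validate a horizontal display window for modes 0 and 4:
 * DTX in 0..276, DBX in 4..280, both multiples of 4, DBX > DTX. */
static int valid_window_x(int dtx, int dbx)
{
    if (dtx % 4 || dbx % 4) return 0;       /* alignment */
    if (dtx < 0 || dtx > 276) return 0;     /* start range */
    if (dbx < 4 || dbx > 280) return 0;     /* end range */
    return dbx > dtx;                       /* non-empty window */
}
```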
In addition, the GPU 62 supports two display color modes, the 16-bit direct mode (32,768 colors) and the 24-bit direct mode (full color). The 16-bit direct mode (referred to below as the 16-bit mode) offers 32,768 colors. Although limited in the number of displayable colors compared with the 24-bit direct mode (referred to below as the 24-bit mode), the 16-bit mode allows the color calculations of the GPU 62 to be carried out in 24 bits, and also has a function that simulates a near full-color (24-bit color) display. The 24-bit mode offers 16,777,216 colors (full color) and provides a bit-mapped display of image data transferred into the frame buffer 63, but no drawing action by the GPU 62 is available in this mode. While the bit length of a pixel is then 24 bits, the coordinate and location values in the frame buffer 63 still have to be determined on the basis of the 16-bit format. For example, 24-bit data of 640x480 is treated as 960x480 in the frame buffer 63. Also, DBX must be a multiple of 8. Correspondingly, the minimum display size in the 24-bit mode is 8 pixels horizontally by 2 pixels vertically. The drawing functions of the GPU 62 will now be described. The drawing functions include: drawing of moving objects (e.g., a polygon), to generate moving objects varying from 1x1 to 256x256 points in a 4-bit CLUT mode (4-bit format with 16 colors per moving object), an 8-bit CLUT mode (8-bit format with 256 colors per moving object) or a 16-bit CLUT mode (16-bit format with 32,768 colors per moving object); polygon drawing, to draw a polygon (triangle, quadrilateral and the like) of which each vertex is defined by coordinate values, and then carry out flat shading to fill the polygon with a single color, Gouraud shading to provide a gradation in the polygon by assigning a different color to each vertex, or texture mapping to apply (a texture pattern of) two-dimensional image data to the surface of the polygon; line drawing, where gradation is applicable; and image data transfer, to transfer image data from the CPU 51 to the frame buffer 63, from the frame buffer 63 to the CPU 51, and from the frame buffer 63 to itself. Other functions may be added, such as semi-transparent rendering, in which pixels are averaged (also known as alpha blending because the pixel data are combined at a desired, or alpha, ratio), dithering to smooth color boundaries with the use of noise, clipping to eliminate portions outside the drawing area, or offsetting, in which the drawing origin is moved depending on the drawing area. The coordinate system in which a graphic is drawn is based on an 11-bit format, assigning each value of X and Y to a scale of -1024 to +1023. As shown in Figure 4, the size of the frame buffer 63 is 1024x512, so the coordinate space covers an area of double that size. The origin of a drawing can be determined arbitrarily within the frame buffer 63 by controlling the offset values of the coordinates. Owing to the clipping function, drawing is confined to the inside of the frame buffer 63 regardless of the pattern configuration. Since a moving object supported by the GPU 62 represents 256x256 points at maximum, its horizontal and vertical lengths can be determined freely within that scale.
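The 16-bit mode's 32,768 colors correspond to 5 bits per primary. Packing a pixel and the 50-percent semi-transparency averaging described above can be sketched as below; the bit layout, with B in the high bits, is an assumption for illustration.

```c
#include <stdint.h>

/* Pack 5-bit R, G, B channels into one 16-bit pixel (layout assumed). */
static uint16_t pack555(unsigned r, unsigned g, unsigned b)
{
    return (uint16_t)(((b & 31) << 10) | ((g & 31) << 5) | (r & 31));
}

/* 50% semi-transparency: average the two pixels channel by channel. */
static uint16_t blend50(uint16_t p, uint16_t q)
{
    unsigned r = (((p      ) & 31) + ((q      ) & 31)) / 2;
    unsigned g = (((p >>  5) & 31) + ((q >>  5) & 31)) / 2;
    unsigned b = (((p >> 10) & 31) + ((q >> 10) & 31)) / 2;
    return pack555(r, g, b);
}
```

The other blending rates listed later (100, 50 and 25 percent additive) would replace the averaging with scaled additions and a clamp.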
The image data (or moving object pattern) to be attached to a moving object is assigned to a non-display area of the frame buffer 63, as shown in Figure 5. Therefore, the moving object pattern is transmitted to the frame buffer 63 before a drawing command is issued. A number of moving object patterns may be retained, in page units of 256x256 pixels, as long as memory areas of the frame buffer 63 are available. The 256x256-pixel size is called a texture page. The location of each texture page is determined by assigning a page number to the parameter of a drawing command called TSB, which specifies the point (address) of the texture page. The moving object pattern is classified into three color modes: 4-bit CLUT mode, 8-bit CLUT mode and 16-bit CLUT mode. The 4-bit and 8-bit CLUT modes employ a CLUT. The CLUT is shown in Figure 6, where 16 to 256 sets of R, G and B values of the three primary colors, which create the visible colors to be displayed, are aligned in the frame buffer 63. The sets of R, G and B values are numbered in sequence from the left end in the frame buffer 63, and the color of a pixel in the moving object pattern is identified by that number. The CLUT can be selected for each moving object, and moving objects may each be associated with their respective CLUT. In Figure 6, each entry represents a single pixel of the 16-bit mode, and each CLUT is equal to 1x16 pixels (in 4-bit mode) or 1x256 pixels (in 8-bit mode) of image data. The storage location of the CLUT in the frame buffer 63 is determined by assigning the coordinate values of the left end of the CLUT to be used to the parameter of a drawing command called CBA, which specifies the point (address) of the CLUT. The drawing of a moving object is shown schematically in Figure 7, where U and V of the drawing command are parameters to specify the location on a texture page in the horizontal and vertical directions, respectively.
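Resolving a texel color through the CLUT can be sketched for the 4-bit mode: two texels occupy each byte, and the 4-bit index selects one of 16 16-bit color entries. The nibble order within the byte is an assumption for illustration.

```c
#include <stdint.h>

/* Look up the 16-bit color of texel i in a 4-bit CLUT-mode pattern.
 * Even texels are assumed to occupy the low nibble of their byte. */
static uint16_t clut4_lookup(const uint8_t *texels, const uint16_t clut[16],
                             int i)
{
    uint8_t byte = texels[i >> 1];
    uint8_t idx = (i & 1) ? (byte >> 4) : (byte & 0x0F);
    return clut[idx];
}
```

The 8-bit mode works the same way with one index per byte and a 256-entry table.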
Also, X and Y are parameters to specify the location of a drawing area. The GPU 62 employs a motion display technique known as double buffering in which, as shown in Figure 8, two quadrilateral patterns are prepared in the frame buffer 63: one is displayed while a graphic is being drawn into the other. When the drawing is completed, the two patterns are exchanged. This avoids displaying the rewriting action. The exchange in the frame buffer 63 can be carried out during the vertical blanking interval. Also, since the origin of the coordinates of a graphic to be drawn can be determined arbitrarily on the GPU 62, a plurality of buffers can be designated by moving the origin. The data format produced by the image data processing method of the present invention in the designated image processing apparatus will now be described. The graphics handled by the image processing apparatus of the embodiment of the invention described first are classified into two types, three-dimensional graphics and fundamental two-dimensional graphics. Three-dimensional graphics are implemented using modeling data (referred to below as TMD data), which represents an attribute such as the facet configuration of a realistic model or object to be drawn, and animation data (referred to below as TOD data), which includes the location data of the object. The two-dimensional data includes the image data (referred to below as TIM data) used as a moving object pattern or texture source, the BG map data
(referred to below as BGD data) for mapping a background, the cell data (referred to below as CEL data), and the information data (referred to below as ANM data) for animating a moving object. The format of the TOD animation data (referred to as the TOD format) is designed to assign the data of three-dimensional objects on a time basis. More specifically, each frame of the three-dimensional animation (composed of a series of frames) is expressed with the data essential to generate, vary and terminate the three-dimensional objects, and the frame data are aligned along a time base. A file in the TOD format (referred to as a TOD file) consists of a file header and a series of frame data, as shown in Figure 9. The header shown in Figure 9 comprises two words (64 bits) placed at the front end of the TOD file, which carry four different types of information: (a) "file ID" (8 bits), indicating that the file is an animation file; (b) "version" (8 bits), indicating a version of the animation; (c) "resolution" (16 bits), representing the time interval (in ticks of 1/60 second) during which one frame is displayed; and (d) "number of frames" (32 bits), indicating the number of frames in the file. The header is followed by a number of frames aligned in time sequence. As shown in Figure 10, each FRAME comprises a "frame header" and a series of PACKETs. The "frame header", placed at the front end of the FRAME of Figure 10, contains two words carrying the following information: (a) "frame size" (16 bits), indicating the size of all the data in the frame (including the frame header), expressed in words (4 bytes); (b) "number of packets" (16 bits), representing the number of packets in the frame; and (c) "frame number" (32 bits), indicating the frame number. The "frame header" is followed by a number of PACKETs. As shown in Figure 11, a PACKET comprises a one-word "packet header" and "packet data".
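The TOD file and frame headers described above can be expressed as structures; the field widths follow the text, while the packing and byte order are illustrative assumptions.

```c
#include <stdint.h>

/* TOD file header: two words at the front of the file. */
typedef struct {
    uint8_t  file_id;      /* identifies the file as an animation file */
    uint8_t  version;      /* animation version */
    uint16_t resolution;   /* frame display interval, in 1/60-second ticks */
    uint32_t nframes;      /* number of frames in the file */
} TodHeader;

/* Frame header: two words at the front of each FRAME. */
typedef struct {
    uint16_t frame_size;   /* size of the frame, in 4-byte words */
    uint16_t npackets;     /* number of packets in the frame */
    uint32_t frame_number;
} TodFrameHeader;

/* Total playing time of the animation, in 1/60-second ticks. */
static uint32_t tod_duration(const TodHeader *h)
{
    return (uint32_t)h->resolution * h->nframes;
}
```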
The type of PACKET is not always the same, and the "packet data" varies not only between PACKETs of different types but even between PACKETs of the same type. The "packet header" contains the following information: (a) "object ID" (16 bits), indicating the reference object; (b) "packet type" (4 bits), representing the type of packet, which determines the content of the "packet data"; (c) "flag" (4 bits), which depends on the "packet type"; and (d) "packet length" (8 bits), indicating the length of the packet (including the "packet header"), expressed in words (4 bytes). The "packet data" also contains other information, including the TMD data ID (ID of the modeling data) and the SRST values, which will be described subsequently. The packet is identified by the "packet type" stored in the packet header. The "packet type" is represented by a set of numbers assigned to the details of the data, as shown below:
0000 Attribute
0001 Coordinate (RST)
0010 TMD data ID
0011 Guest object ID
0100 MATRIX value
0101 TMD data content
0110 Light source
0111 Camera
1000 Object control
1001-1101 User definition
1110 System reservation
1111 Specific command
These will be explained subsequently in greater detail. The "Attribute" is expressed by 0000 of the "packet type", indicating that the "packet data" contains information for setting the attribute. In this case, the "flag" is not used. The "packet data" consists of two words, as shown in Figure 12. The first word is a mask comprising value-change flag bits and no-change flag bits. The value-change flag bits are expressed by 0 and the no-change flag bits by 1. In the second word, the bits indicated by the value-change flag bits are loaded with the new data and the remaining bits are set to 0.
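Unpacking the one-word packet header can be sketched as follows; the placement of the four fields within the 32-bit word is an assumption, since the text gives only their widths.

```c
#include <stdint.h>

/* Fields of the one-word TOD packet header described in the text. */
typedef struct {
    uint16_t object_id;    /* reference object */
    uint8_t  packet_type;  /* e.g. 0000 = attribute, 0111 = camera */
    uint8_t  flag;         /* meaning depends on packet_type */
    uint8_t  length_words; /* packet length, including this header */
} PacketHeader;

static PacketHeader decode_packet_header(uint32_t w)
{
    PacketHeader h;
    h.object_id    = (uint16_t)(w & 0xFFFF);
    h.packet_type  = (uint8_t)((w >> 16) & 0xF);
    h.flag         = (uint8_t)((w >> 20) & 0xF);
    h.length_words = (uint8_t)((w >> 24) & 0xFF);
    return h;
}
```

With the length expressed in words, a reader can skip unrecognized packet types without parsing their bodies.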
The defaults assigned to the no-change bits thus differ: 1 in the first word and 0 in the second word. The bits of the second word of the "packet data" have the following details:
Bits 0 to 2 Material attenuation: 00 (material attenuation 0), 01 (material attenuation 1), 02 (material attenuation 0), 03 (material attenuation 1)
Bit 3 Lighting mode 1: 0 (no fog), 1 (with fog)
Bit 4 Lighting mode 2: 0 (with material), 1 (without material)
Bit 5 Lighting mode 3: 0 (user lighting mode), 1 (user default lighting mode)
Bit 6 Light source: 0 (no light source calculation), 1 (light source calculation on)
Bit 7 Z overflow: 0 (with Z overflow clipping), 1 (without Z overflow clipping)
Bit 8 Back clipping: 0 (yes), 1 (no)
Bits 9 to 27 System reservation (initialized to 0)
Bits 28 to 29 Semi-transparency rate: 00 (50 percent), 01 (added at 100 percent), 10 (added at 50 percent), 11 (added at 25 percent)
Bit 30 Semi-transparency: 0 (on), 1 (off)
Bit 31 Display: 0 (yes), 1 (no)
When the light source calculation is set on, the bits in the "packet data" are as shown in Figure 13. As is evident, bit 6 in the first word is 0, indicating that a change is requested for the light source information, while the other, no-change bits remain at 1. In the second word, bit 6 is 1, indicating that the light source calculation is on, and the other, unchanged bits remain at the default value of 0. "Coordinate (RST)" is expressed by 0001 of the "packet type", and the "packet data" contains data for setting the coordinate values. In this case, the "flag" is as illustrated in Figure 14. As shown, the "matrix type" represents the type of RST matrix; for example, 0 indicates an absolute matrix and 1 a matrix differential with respect to the previous frame. Also, "rotation" denotes a rotation flag (R); 0 is no and 1 is yes. "Scaling" is a scaling flag (S); 0 is no and 1 is yes.
Similarly, "translation" is a parallel translation flag (T); 0 is no and 1 is yes. The assignment in the "packet data" varies depending on the bit pattern of "rotation", "scaling" and "translation" in the "flag", as illustrated in Figure 15. As shown, Rx, Ry and Rz represent the components of a rotation along the X, Y and Z axes, respectively. Likewise, Sx, Sy and Sz are the components of a scaling along the X, Y and Z axes, respectively, and Tx, Ty and Tz represent the components of the parallel translation along the X, Y and Z axes, respectively. "TMD data ID" is represented by 0010 of the "packet type", and the "packet data" holds the modeling data (TMD data) ID of a reference object, as shown in Figure 16. The TMD data ID consists of two bytes. The "guest object ID" is expressed by 0011 of the
"packet type", and the "packet data" stores the guest object ID of a reference object, as shown in Figure 17. The guest object ID comprises 2 bytes and, in this case, the "flag" is not used. The "MATRIX value" is represented by 0100 of the "packet type", and the "packet data" holds data for setting the coordinate elements. In this case, the "flag" is not used. Figure 18 illustrates the assignment in the "packet data". When the "packet type" is 0101, the "packet data" carries the TMD data, as will be explained subsequently in greater detail. The "light source" is expressed by 0110 of the
"packet type", and the "packet data" holds data for setting the light source. In this case, the "object ID" represents a light source ID, different from a common "object ID". Also, the "flag" holds specific information, as shown in Figure 19. In Figure 19, the "data type" indicates the type of data, an absolute value when it is 0 and a difference from the previous frame when it is 1. The "direction" is a direction flag; 0 is no and 1 is yes. Similarly, "color" represents a color flag; 0 is no and 1 is yes. The assignment in the "packet data" varies depending on the bit pattern of "direction" and "color" in the "flag", as shown in Figure 20. When the "packet type" is 0111 for
"camera", the "packet data" holds data for setting the viewpoint. The "object ID" is then a camera ID, not a common "object ID". The "flag" is also specified as shown in Figure 21. If the "camera type" bit shown in Figure 21 is 0, the remaining bits are as shown in Figure 22; if it is 1, the remaining bits are as shown in Figure 23. More specifically, the "data type" indicates a type of data; 0 is an absolute value and 1 is a difference from the previous frame, as is evident in Figure 22. The "position and reference" in Figure 22 is a flag for the location of the viewpoint and reference; 0 is no and 1 is yes. Similarly, "z-angle", as shown in Figure 22, is a flag for the angle of the reference location from the horizontal; 0 is no and 1 is yes. In Figure 23, the "data type" also indicates a type of data; 0 is an absolute value and 1 is a difference from the previous frame. The "rotation", as shown in Figure 23, is a rotation flag (R); 0 is no and 1 is yes. Similarly, the "translation", as shown in Figure 23, is a parallel translation flag (T); 0 is no and 1 is yes. The assignment in the "packet data" therefore varies depending on the content of the "flag", as shown in Figures 24 and 25. The "object control" is expressed by 1000 of the "packet type" and is designated to control an object; in this case, the "packet data" carries no information. Finally, when the "packet type" is 1111, for specific control, the animation data is controlled. The modeling data format (referred to below as the TMD format) will now be explained. In common three-dimensional graphics, an object is expressed by a set of polygons, and the data representing the object is called the modeling data. The vertices of each polygon are represented by coordinate values in three-dimensional space.
The coordinate transformation device described in the prior art converts the vertex locations of a polygon, by perspective view transformation, into two-dimensional coordinate values, which are then delivered to a drawing device. The data is transmitted to the drawing device in the form of packets, and commonly one packet contains the data for one polygon. The packets vary in structure and size depending on the type of polygon. In the format described above in accordance with the present invention, the structure of the polygon data for a geometric graphic is, except for a portion of the data, made identical to the structure of a packet, thereby allowing the coordinate transformation device to carry out its processing at higher speed. There are several applicable three-dimensional coordinate systems, including an object coordinate system for representing the configuration and size of a three-dimensional object, a world coordinate system for indicating the location of a three-dimensional object in space, and a screen coordinate system for showing a three-dimensional object projected onto a screen. For simplicity, the description will be given using the object and screen coordinate systems for three-dimensional objects. The format (TMD) of the present invention for the configuration and geometry data of an object of the modeling data is intended for use in a three-dimensional extended graphics library of the image processing apparatus of the previous embodiment, installed in a home video game machine, a microcomputer or a graphics computer. The data of the TMD format can be downloaded directly to a memory as arguments of the functions provided in the extended graphics library.
The information to be carried in a file of the TMD format (referred to below as a TMD file) is kept in an RSD file as text data of a higher level of abstraction while a three-dimensional modeling tool or artist tool is being used, and is converted to the TMD format by a specific command (the "RSDlink" command) during the production of a program. The data in the TMD file is a set of primitives representing the polygons and lines of an object. A single TMD file can hold a plurality of objects to be drawn. The coordinate values in the TMD file are expressed in the space managed by the extended graphics library of the image data processing apparatus of the present invention, where rightward is the positive direction along the x axis, downward is the positive direction along the y axis, and backward is the positive direction along the z axis. The coordinate values of an object are expressed by 16-bit integers, each in the range -32767 to +32767. In the format used at the design step (referred to as the RSD format), the vertex values are floating-point numbers, and therefore files converted from RSD to TMD have to be matched in scale by expansion or compression. For this purpose, scale adjustment references are prepared and stored in the object structure, to be explained later. When the vertex values in the TMD format are multiplied by these scale references, they return to the original scale of the design step. This helps determine an optimal scale for mapping the values into the world coordinate system.
The TMD format according to the present invention will now be explained in greater detail. As shown in Figure 26, the TMD format comprises four blocks that contain, for the three-dimensional objects in a TMD file, the object table (OBJ TABLE), the primitive data (PRIMITIVE), the vertex data (VERTEX), and the normal data (NORMAL). A header (HEADER) of the TMD format in Figure 26 holds three words (12 bytes) that carry the data on the structure of the format, as illustrated in Figure 27.
As shown in Figure 27, ID is 32 bits of data (one word) representing the version of the TMD file. FLAGS is also 32 bits of data (one word), representing the type of the TMD format structure. The least significant bit (LSB) is the FIXP bit, described below; the other bits are all reserved and set to 0. The FIXP bit indicates whether the pointers in the object structures are real addresses, and this will also be explained subsequently in greater detail. When the FIXP bit is 1, a pointer is a real address. If it is 0, it is an offset from the top. NOBJ is an integer representing the number of objects.
The OBJ TABLE of Figure 26 contains a table consisting of a group of object structures, together with pointers indicating the storage locations of the objects, as shown in Figure 28. Each object structure is expressed by:

object structure {
    u_long *vert_top;
    u_long n_vert;
    u_long *normal_top;
    u_long n_normal;
    u_long *primitive_top;
    u_long n_primitive;
    long scale;
}

where
vert_top: top address of the VERTEX block
n_vert: number of VERTEX entries
normal_top: top address of the NORMAL block
n_normal: number of NORMAL entries
primitive_top: top address of the PRIMITIVE block
n_primitive: number of polygons
scale: scaling factor.
The pointers (vert_top, normal_top, and primitive_top) in the object structure vary in meaning depending on the FIXP bit in the HEADER. When FIXP is 1, each pointer is a real address. When FIXP is 0, each pointer is a relative address, with the top of the OBJ TABLE assigned to address 0. The scaling factor is of type "long", with a sign, and 2 raised to its value gives the scale. For example, when the scaling factor of the object structure is 0, the scale is 1/1; when it is 2, the scale is 4; and when it is -1, the scale is 1/2. The PRIMITIVE block of Figure 26 contains a series of primitive packets of the object, as shown in Figure 29. Each individual packet carries a single primitive. The primitives defined by the TMD format are used with the perspective transformation functions in the extended graphics library and are converted into drawing primitives. The packet shown in Figure 29 is variable in length, and its size and structure change depending on the type of primitive. The "mode" field in the packet of Figure 29 comprises 8 bits indicating the type and attributes of the primitive; its bit assignment is shown in Figure 30. The 3 bits of CODE in Figure 30 represent a code indicating the type of content: 001 is a polygon (triangle, quadrilateral, etc.), 010 is a line, and 011 is a rectangle of a moving object. The OPTION field holds optional bits and varies depending on the value of the code
(which is listed with the components of the packet data, to be explained subsequently). The "flag" field in the packet of Figure 29 is 8 bits of data representing optional rendering information; its bit assignment is shown in Figure 31. GOR in Figure 31 is applicable when light source calculation is required, no texture is used, and the primitive is a polygon. When GOR is 1, it indicates a gradation polygon; when it is 0, a single-color polygon. When FCE is 1, the polygon is double-sided; when it is 0, it is single-sided (applicable when CODE represents a polygon). Likewise, when LGT is 1, light source calculation is not carried out; when it is 0, it is carried out. "ilen" in Figure 29 is 8 bits of data representing the word length of the packet data. Similarly, "olen" is 8 bits of data indicating the word length of the drawing primitive generated during processing. The "packet data" is composed of various parameters, such as vertices and normals, determined by the type of primitive. The structure of the "packet data" will be explained subsequently in greater detail. The VERTEX block shown in Figure 26 is a train of data structures representing vertices. The format of each structure is illustrated in Figure 32. In Figure
32, VX, VY, and VZ are the x, y, and z coordinate values (16-bit integers) of a vertex, respectively. The NORMAL block in Figure 26 is a train of data structures representing normals. The format of each structure is shown in Figure 33, where NX, NY, and NZ are the x, y, and z components (16-bit fixed-point fractions) of a normal, respectively. More specifically, NX, NY, and NZ are expressed as signed 16-bit fixed-point fractions, where 4096 represents 1.0. Their bit assignment is shown in Figure 34, where the sign occupies 1 bit, the integer part 3 bits, and the fraction 12 bits. The structures of the packet data, which depend on the type of primitive, will be explained below. The parameters in the packet data are classified as Vertex(n), Normal(n), Un, Vn, Rn, Gn, Bn, TBS, and CBA. Vertex(n) is a 16-bit index value indicating the location of a VERTEX entry. It gives the number of an element counted from the top of the VERTEX block shown in Figure 26, within the object that includes the polygon. Normal(n), like Vertex(n), is a 16-bit index value pointing to the location of a NORMAL entry. Un and Vn are the x and y coordinate values, respectively, in the texture source space of each vertex. Rn, Gn, and Bn are the R, G, and B values, respectively, representing the color of the polygon, expressed as unsigned 8-bit integers. If light source calculation is not carried out, default luminance values must be provided in advance. The TBS parameter carries information about the texture and the pattern of the moving object; its format is shown in Figure 35. TPAGE in Figure 35 represents the number (0 to 31) of the texture page. Likewise, ABR is the semitransparency rate (blending rate) and is effective only when ABE is 1. When ABR is 00, the rate is 50 percent background + 50 percent polygon; when it is 01, 100 percent background + 100 percent polygon; when it is 10, 100 percent background + 50 percent polygon; and when it is 11, 100 percent background - 100 percent polygon. TPF in Figure 35 represents the color mode.
00 in TPF indicates the 4-bit mode, 01 the 8-bit mode, and 10 the 16-bit mode. The CBA parameter indicates the storage location of the CLUT in the frame buffer 63, as shown in Figure 36. CLX in Figure 36 is the upper 6 bits of the 10-bit X coordinate value of the CLUT in the frame buffer 63, and CLY is the 9 bits of the Y coordinate value of the CLUT in the frame buffer 63. The structure of the packet data itself will be explained below. The explanation first refers to a polygon of triangular configuration with light source calculation. Figure 37 shows the bit assignment of the mode value in the PRIMITIVE block. As shown, IIP represents the shading mode; 0 selects the flat shading mode and 1 selects the Gouraud shading mode. Likewise, TME is used to enable texture mapping; 0 means off and 1 means on. TGE controls luminance calculation during texture mapping; 0 means on and 1 means off (the texture is applied directly). These parameters are applicable to any polygon configuration. The packet data structure is as shown in Figure 38. More specifically, Figure 38A shows the flat shading mode in a single color with texture mapping off. Figure 38B is the Gouraud shading mode in a single color with texture mapping off. Figure 38C is the flat shading mode with gradation, with texture mapping off. Figure 38D is the Gouraud shading mode with gradation, with texture mapping off. Figure 38E is the flat shading mode with texture mapping on. Figure 38F is the Gouraud shading mode with texture mapping on. In each case, "mode" and "flag" express the state of a single-sided polygon with semitransparency off. An example of the structure of the packet data will be explained below, referring to a polygon of triangular configuration without the use of light source calculation. 
The bit assignment of the mode value in the PRIMITIVE block is identical to that in Figure 37. The structure of the packet data is as shown in Figure 39. In more detail, Figure 39A shows the flat shading mode with texture mapping off. Figure 39B is the Gouraud shading mode with gradation, with texture mapping off. Figure 39C is the flat shading mode with texture mapping on. Figure 39D is the Gouraud shading mode with gradation, with texture mapping on. Another example of the packet data structure will be explained below, referring to a polygon of quadrilateral configuration with light source calculation. The bit assignment of the mode value in the
PRIMITIVE block is shown in Figure 40, where the bits are assigned in the same manner as in Figure 37. The structure of the packet data is as shown in Figure 41. In particular, Figure 41A shows the flat shading mode with texture mapping off. Figure 41B is the Gouraud shading mode with texture mapping off. Figure 41C is the flat shading mode with gradation, with texture mapping off. Figure 41D is the Gouraud shading mode with gradation, with texture mapping off. Figure 41E is the flat shading mode with texture mapping on. Figure 41F is the Gouraud shading mode with texture mapping on.
A further example of the packet data structure will be explained below, referring to a polygon of quadrilateral configuration without the use of light source calculation. The bit assignment of the mode value in the
PRIMITIVE block is shown in Figure 40, where the bits are assigned in the same manner as in Figure 37. The structure of the packet data is as shown in Figure 42. More specifically, Figure 42A shows the flat shading mode with texture mapping off. Figure 42B is the Gouraud shading mode with texture mapping off. Figure 42C is the flat shading mode with texture mapping on. Figure 42D is the Gouraud shading mode (with gradation) with texture mapping on. The structure of the packet data will be explained below with reference to lines. The bit assignment of the mode value in the
PRIMITIVE block is shown in Figure 43. IIP in Figure 43 indicates whether gradation is on or off; when it is 0, gradation is off (a single color), and when it is 1, gradation is on. Likewise, ABE indicates whether semitransparency processing is on or off; 0 means off and 1 means on. The packet data structure in this example is as shown in Figure 44. Figure 44A shows gradation off, and Figure 44B shows gradation on. The packet data structure will be explained below with reference to a three-dimensional moving object. A three-dimensional moving object has three-dimensional coordinate values, and its drawn content is similar to that of a common moving object. The bit assignment of the mode value in the PRIMITIVE block is shown in Figure 45. SIZ in Figure 45 is the size of the moving object; 00 represents a free size (determined by the values of W and H), 01 is a size of 1x1, 10 is 8x8, and 11 is 16x16. Likewise, ABE indicates semitransparency processing; 0 means off and 1 means on. The specific packet data structure is as shown in Figure 46. As will be apparent, Figure 46A shows the moving object size being free, Figure 46B represents a moving object size of 1x1, Figure 46C is 8x8, and Figure 46D is 16x16.
In the TMD file format shown in Figure 26, the region carrying the modeling data that represents the configuration of an object is identical, in part, to the data structure of the prior art packet. This allows the GTE 61 (coordinate transformation device) to complete processing of that region by simply copying the data word by word. For example, the three regions 1, 2, and 3 shown in Figure 38F may correspond to the data of the prior art packet. A sequence of GTE 61 actions upon receipt of TMD format data will be described below with reference to Figure 47. As shown in Figure 47, the data of a reference polygon is captured in Step S10 and classified by type in Step S11. A word (32 bits) is extracted in Step S12, and it is examined in Step S13 whether or not it is common with the packet data. When the word is common, it is copied into the packet data in Step S17. If it is not common, the procedure advances to Step S14, where processing is carried out according to the VERTEX and NORMAL data. This is followed by producing packet data in Step S15. Then, it is examined in Step S16 whether the coordinate transformation of the polygon has been completed or not. If it has not been completed, the procedure returns to Step S12; if the answer is yes, it is terminated. Figure 48 shows a general sequence of steps for the three-dimensional coordinate transformation. The data of the configuration of an object (modeling data) is read in Step S1 and the coordinates are transformed in Step S2. This is followed by light source calculation in Step S3. It is then examined in Step S4 whether the processing of all the polygons has been completed or not. If it has not been completed, the procedure goes back to Step S2. If it is judged in Step S4 that it has been completed, packet data is output in Step S5. To vary the image in real time, Steps S2 and S3 have to be repeated at high speed. 
When the shading does not need to be computed in real time, Step S3 is moved out of the loop, giving the flow chart shown in Figure 49. When the judgment in Step S4 of Figure 49 is no, the procedure returns to Step S3. In this case, the three regions 0, 3, and 6 of Figure 38F, for example, are determined only once, and the coordinate transformation step is less loaded. Another form of the format for the object shaping data (modeling data), in accordance with the present invention, will be explained below; it simplifies the coordinate transformation process when shading is not carried out in real time. Figure 50 shows the file format of this other embodiment of the present invention. Data represented by TYPE, indicating the type and attributes of the polygons, and data represented by NPACKET, indicating the number of polygons, are provided at the head of the file. These two items are followed by a set of "polygon data" blocks equal in number to the polygons. The "polygon data" is shown in Figure 51 and contains packet data and the three-dimensional coordinate values of the vertices of the polygon. The packet data of Figure 51 is composed as shown in Figure 52, which is similar to that of Figure 57. It also varies in structure and length depending on the type of polygon. In this embodiment, since the shading of the object is not calculated in real time, the following parameters can be written before starting the coordinate transformation: CODE, (B0, G0, R0), (V0, U0), (B1, G1, R1), (V1, U1), (B2, G2, R2), (V2, U2). When the aforementioned parameters have been determined, only the location of each vertex, represented by (Y0, X0), (Y1, X1), (Y2, X2), has to be calculated in the coordinate transformation, whereby the coordinate transformation procedure is simplified. Likewise, it is unnecessary to provide a storage area for the packet data in the memory. 
In accordance with this embodiment of the present invention, the memory is filled only with the data that varies in the coordinate transformation process, thereby contributing to an economy of both time and labor. As indicated above, the structure of the three-dimensional image data according to the present invention, excluding the information that is to be transformed into perspective view, is arranged to be identical to that of the given transmission standard of the two-dimensional image data. Therefore, when the information to be transformed into perspective view in the three-dimensional data has been processed, corresponding two-dimensional image data of the given transmission standard is obtained. More specifically, an original file containing the data to be transformed can easily be converted into the new format. In addition, the three-dimensional image data, in accordance with the present invention, is capable of carrying the data for shading an object to be drawn on a two-dimensional display device, thus eliminating an extra calculation to generate the shading data during the reproduction of the two-dimensional image data. In this way, the present invention fills a long-existing need for improved image data processing, in which a file containing the original image data to be transformed is easily converted into a new format, together with an image data processing apparatus and a recording medium for this image data. It will be apparent from the foregoing that, although specific forms of the invention have been illustrated and described, various modifications can be made without departing from the spirit and scope of the invention. Accordingly, the invention is not intended to be limited except by the appended claims.
Claims (34)
1. An image data processing method for producing three-dimensional image data that is converted by perspective view transformation into two-dimensional image data and transferred in a transmission standard to draw an image on a two-dimensional display device, the method comprising: providing a data format for the image data, the format including the information that has been transformed into a perspective view and being arranged in a manner identical to that of the transmission standard determined for the two-dimensional image data.
2. A method according to claim 1, wherein the data format includes a data for an individual polygon.
3. A method according to claim 1, wherein the data format includes a texture data.
4. A method according to claim 1, wherein the data format is to provide a command that includes a combination of individual polygon coordinate data, and texture data for the polygon.
5. A method according to any of claims 2 or 4, wherein the polygon data is transformed into a coordinate data.
6. An image data processing method according to claim 1, wherein the image data includes information about shading on an object to be drawn on the screen of the two-dimensional display device.
7. An image data processing apparatus comprising: a coordinate transformation means for converting three-dimensional image data by perspective view transformation into two-dimensional image data; and a drawing means for providing the two-dimensional image data in a given transmission standard to draw a corresponding image on a screen of the two-dimensional display device, wherein a data format structure of the three-dimensional image data, excluding the information that is to be transformed into a perspective view, is arranged in a manner identical to that of a transmission standard selected for the two-dimensional image data; the coordinate transformation means discriminating the information that is to be transformed into a perspective view from the other data of the three-dimensional image data; and a combining means for combining the transformed data with the other three-dimensional data to provide a command to draw a two-dimensional image in the transmission standard selected for the production of the two-dimensional image data.
8. An apparatus according to claim 7, wherein the data format includes the data for an individual polygon.
9. The apparatus according to claim 7, wherein the data format includes a texture data.
10. The apparatus according to claim 7, wherein the data format is for providing a command that includes a combination of the individual polygon coordinate data and the texture data for the polygon.
11. The apparatus according to any of claims 8 or 10, wherein the polygon data is a transformed coordinate data.
12. An image data processing apparatus according to claim 7, wherein the three-dimensional image data includes information about the shading of an object to be drawn on a screen of the two-dimensional display device.
13. A recording medium comprising: a storage element for carrying the recorded three-dimensional image data that is converted by perspective view transformation into a two-dimensional image data and transferred in a given transmission standard to draw an image in a two-dimensional display device, the registered data has a data format to include the information that has been transformed into perspective view and that is placed in an identical manner to that of the transmission standard determined for a two-dimensional image data.
14. A recording medium according to claim 13, wherein the three-dimensional image data includes information about shading on an object to be drawn on the screen of the two-dimensional display device.
15. A recording medium according to claim 14, wherein the data format includes a data for an individual polygon.
16. A recording medium according to claim 14, wherein the data format includes a texture data.
17. A recording medium according to claim 14, wherein the data format is to provide a command that includes a combination of the individual polygon coordinate data and the texture data for the polygon.
18. A recording medium according to any of claims 15 or 17, wherein the polygon data is a transformed coordinate data.
19. An image data processing apparatus comprising: (a) a separation means for separating the first data representing a three-dimensional image and a second data for the image data of a polygon; (b) a conversion means for converting the first data into a two-dimensional image data; (c) an instruction generating means for combining the converted two-dimensional image data with the second data to generate the instruction data for each polygon.
20. An apparatus according to claim 19, wherein the conversion means converts the first data into two-dimensional image data by perspective transformation.
21. An apparatus as claimed in any of claims 19 or 20, wherein the conversion means is a graphic transformation engine.
22. An apparatus according to claim 19 further comprising: a drawing means for drawing a graphic image in a graphics memory in response to the instruction data; and a means for providing the graphic image read from the graphic memory to a display device.
23. An apparatus according to claim 22, wherein the drawing means is a graphic processing unit.
24. An apparatus according to claim 19, wherein the second data represents a mapped texture in the polygon.
25. An apparatus according to claim 24, wherein the second data includes an address where the data of the texture graph is stored.
26. An apparatus according to claim 19, wherein the first data includes the vertex coordinates of the polygon.
27. An apparatus according to claim 26, wherein the first data includes an indicator as to where the coordinates of the polygon are stored.
28. A method for processing a three-dimensional image comprising the steps of: (a) separating a first data representing a three-dimensional image and a second data for the image data of a polygon; (b) converting the first data into a two-dimensional image data; (c) combining the converted two-dimensional image data with the second data to generate the instruction data for each polygon.
29. A method according to claim 28, wherein the instruction data provides a command.
30. A method according to claim 28, wherein the conversion step converts the first data into the two-dimensional image data by perspective transformation.
31. A method according to any of claims 28, 29 or 30, further comprising the steps of: drawing a graphic image in a graphics memory in response to the instruction data; and providing the graphic image read from the graphics memory to a display device.
32. A recording medium according to any of claims 13 to 17, wherein the storage element is a CD-ROM.
33. An image data processing apparatus comprising: a graphic transformation engine for converting three-dimensional image data by perspective view transformation into two-dimensional image data; and a graphics processing unit for providing the two-dimensional image data in a given transmission standard to draw a corresponding image on a two-dimensional display device screen, wherein a structure of the data format of the three-dimensional image data, excluding the information which is to be transformed into a perspective view, is arranged identically to that of a transmission standard selected for the two-dimensional image data; the graphic transformation engine discriminating the information that is to be transformed into a perspective view from the other data of the three-dimensional image data; and a combining means for combining the transformed data with the other three-dimensional data in order to provide a command for drawing a two-dimensional image in the transmission standard selected for the production of the two-dimensional image data.
34. An image data processing apparatus comprising: (a) a data separator for separating first data representing a three-dimensional image and second data for the image data of a polygon; (b) a data transformation unit for converting the first data into two-dimensional image data; and (c) an instruction generator for combining the converted two-dimensional image data with the second data to generate instruction data for each polygon. SUMMARY OF THE INVENTION A method and apparatus include a geometry transfer engine (GTE) 61, which acts as a coordinate transformation means for converting the three-dimensional image data of the TMD format into two-dimensional image data by perspective view transformation, and a graphics processing unit (GPU) 62, which acts as a drawing means for transferring the two-dimensional image data in a given transmission standard to draw an image on a screen of the two-dimensional display device. A structure of the three-dimensional image data, excluding the information to be transformed into perspective view, is arranged in a manner identical to that of a given transmission standard of the two-dimensional image data. Correspondingly, in the GTE 61, the information that is to be transformed into perspective view is discriminated from the other data of the three-dimensional image data, whose structure is identical to that of the given transmission standard of the two-dimensional image data; it is subjected to the perspective view transformation and then combined with the other data, whose structure is identical to that of the transmission standard determined for the production of the two-dimensional image data. An original format file including the data to be transformed is easily converted into a file that has the new format. In testimony of which, I have signed the above description and novelty of the invention as a proxy of SONY CORPORATION, in Mexico City, Federal District, today, November 24, 1995. p.p. SONY CORPORATION.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP30002094A JP3578498B2 (en) | 1994-12-02 | 1994-12-02 | Image information processing device |
JPP06-300020 | 1994-12-02 |
Publications (2)
Publication Number | Publication Date |
---|---|
MX9504904A MX9504904A (en) | 1998-07-31 |
MXPA95004904A true MXPA95004904A (en) | 1998-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU702762B2 (en) | Method of producing image data, image data processing apparatus, and recording medium | |
CA2164269C (en) | Image processing in which the image is divided into image areas with specific color lookup tables for enhanced color resolution | |
JP3725524B2 (en) | Method for generating computer display image and computer processing system and graphics processor for generating image data | |
EP0715278B1 (en) | Method of producing image data and associated recording medium | |
US5933148A (en) | Method and apparatus for mapping texture | |
KR100609614B1 (en) | Method and apparatus for transmitting picture data, processing pictures and recording medium therefor | |
WO2000025269A1 (en) | Recording medium, image processing device, and image processing method | |
JP3462566B2 (en) | Image generation device | |
JP3548642B2 (en) | Image information generating apparatus and method, image information processing apparatus and method, and recording medium | |
MXPA95004904A (en) | Method for producing image data, image data processing device and regis medium | |
JP3547236B2 (en) | Image information generating apparatus and method, and image information processing apparatus and method | |
JP3698747B2 (en) | Image data generation method and image processing system | |
JPH08161511A (en) | Image generating device | |
JPH08161465A (en) | Method for generating image data file, recording medium and method for preparing image | |
JPH10334249A (en) | Image information generating method, image information generator and recording medium |