MXPA97003402A - Method and apparatus for data processing - Google Patents

Method and apparatus for data processing

Info

Publication number
MXPA97003402A
MXPA97003402A MXPA/A/1997/003402A MX9703402A
Authority
MX
Mexico
Prior art keywords
data
unit
figures
image
coordinates
Prior art date
Application number
MXPA/A/1997/003402A
Other languages
Spanish (es)
Other versions
MX9703402A (en)
Inventor
Oka Masaaki
Original Assignee
Sony Computer Entertainment:Kk
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP11630096A external-priority patent/JP3537259B2/en
Application filed by Sony Computer Entertainment:Kk filed Critical Sony Computer Entertainment:Kk
Publication of MXPA97003402A publication Critical patent/MXPA97003402A/en
Publication of MX9703402A publication Critical patent/MX9703402A/en

Links

Abstract

The present invention relates to a data processing system in which, to improve processing speed, a main CPU transmits the coordinates of a central point in a three-dimensional space over a main bus to a programmable preprocessor (PPP). The PPP generates figure data for drawing a plurality of unit figures (polygons) around the central point transmitted from the CPU and sends the figure data to a GPU. The GPU performs drawing processing according to the figure data supplied from the PPP, to draw an image defined by the combination of the unit figures in a graphics memory.

Description

METHOD AND APPARATUS FOR DATA PROCESSING This application claims priority under the International Convention based on Japanese Patent Application No. P08-116300, filed on May 10, 1996.
BACKGROUND OF THE INVENTION Field of the Invention This invention relates, in general, to a method and apparatus for processing data in which figure data used to draw a plurality of unit figures is generated from the data of a single figure in order to improve the processing speed of the device. More particularly, the invention relates to a method and apparatus for data processing that can be used to advantage in a graphics computer, such as video equipment using a computer, a special effects device (effector) or a video game machine, by means of which data processing can be improved.
Description of the Related Art It is common practice in the prior art to employ a video game machine in which a main central processing unit (CPU) contains a geometry transfer engine (GTE), which is a mathematical processor for executing geometry processing such as coordinate transformation, perspective transformation, and clipping or light source calculations. The main CPU defines a three-dimensional model as a combination of basic unit figures (polygons), such as triangles or quadrangles, to generate data for rendering a three-dimensional image. For example, when a three-dimensional object is displayed, the main CPU resolves the object into a plurality of unit figures and causes the GTE to perform geometry processing to generate the figure data for drawing each unit figure. The main CPU is connected to a main bus and causes the figure data generated by the GTE to be transferred over the main bus to a graphics processing unit (GPU). Upon receiving the figure data from the main CPU, the GPU performs the processing of writing image data into the pixels of a graphics memory based on the color data and the Z values (information representing the depth, that is, the distance from the viewpoint) of the vertices of the unit figures contained in the figure data, taking into account the color and the Z values of all the pixels that make up the unit figure. In this way the unit figure is drawn in the graphics memory. In addition, the CPU performs control to read the image data written in the graphics memory and to send the image data thus read, as video signals, through a display controller such as a CRT controller (CRTC), for display on a display device such as a television receiver, a cathode ray tube (CRT) or a liquid crystal monitor. This allows the background of a video game, characters or the like to be displayed. Meanwhile, the data of the unit figures whose geometry has been processed by the GTE under the control of the CPU are the coordinates, in the three-dimensional space, of the vertices of the unit figure. Therefore, the volume of data does not depend on the size of the unit figure. On the other hand, the speed with which the unit figure is drawn into the pixels of the graphics memory depends on the size of the unit figure, that is, on the number of pixels that make up the unit figure. Therefore, if a large number of pixels make up the unit figure, drawing it will take some time, whereas, if the unit figure is made up of a smaller number of pixels, drawing it will not take long. Thus, in the continuous processing of a plurality of small unit figures it frequently happens that the figure data is not transferred in time from the main CPU over the main bus, even though the GPU has finished drawing and is ready to perform the next processing. In other words, when a large number of small unit figures are transferred from the main CPU to the GPU over the main bus, the transfer rate is limited by the main bus, with the result that it becomes difficult to improve the overall processing speed of the device. Accordingly, there is a need to improve data processing for a higher processing speed. The present invention clearly meets these needs.
SUMMARY OF THE INVENTION In summary, and in general terms, the present invention provides improvements in methods and apparatus for data processing by means of which the data processing speed can be substantially improved. More particularly, by way of example and not necessarily as limitation, the present invention provides an image information processing system which includes: drawing means for effecting the drawing of an image according to figure data, the drawing means being configured to draw a unit figure and to draw an image defined by a combination of these unit figures; output means for issuing a drawing command; and figure data generating means configured to generate figure data for drawing a plurality of unit figures at random in response to the drawing command transmitted from the output means on a bus, and to send the figure data thus generated to the drawing means. In another aspect, the present invention provides a method for processing image information, including the steps of: supplying a drawing command on a bus, generating a plurality of figure data at random in response to the received drawing command, and drawing unit figures according to the figure data. With the aforementioned image information processing method and apparatus, the drawing command transmitted by the output means is received over a pre-established bus, and from the data of a single figure the figure data for drawing a plurality of unit figures is generated and sent to the drawing means; in this way an overall increase in the processing speed of the entire system is achieved.
Therefore, the present invention satisfies a long-felt need to improve data processing for a higher processing speed. These and other objects and advantages of the invention will be apparent from the following more detailed description when considered together with the accompanying drawings of the illustrative embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a block diagram illustrating a conventional video game machine; Figure 2 is a plan view showing the structure of a video game machine which is an embodiment of the present invention; Figure 3 is a front view of the video game machine of Figure 2; Figure 4 is a side view of the video game machine of Figure 2; Figure 5 is a plan view showing a CD-ROM; Figure 6 is a block diagram of the system for a game machine embodying the invention; Figure 7 is a flowchart illustrating the processing of the programmable preprocessor (PPP) shown in Figure 6; Figures 8A and 8B illustrate unit figures generated by random numbers; Figure 9 shows unit figures generated by dividing a unit figure in a two-dimensional space; Figure 10 shows unit figures generated by dividing a unit figure in a three-dimensional space; and Figure 11 illustrates screen clipping.
DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring now to the drawings, the same reference numerals designate the same or corresponding parts in all the figures. Figure 1 shows a common example of a video game machine of the prior art, wherein a main central processing unit (CPU) 111 contains a geometry transfer engine (GTE) 117, which is a mathematical processor for executing geometry processing such as coordinate transformation, perspective transformation, and clipping or light source calculations. The main CPU 111 defines a three-dimensional model as a combination of basic unit figures (polygons), such as triangles or quadrangles, to generate data for rendering a three-dimensional image. When a three-dimensional object is displayed, for example, the main CPU 111 resolves the object into a plurality of unit figures and causes the GTE 117 to perform geometry processing to generate the figure data for drawing each unit figure. The main CPU 111 is connected to a main bus 101 and causes the figure data generated by the GTE 117 to be transferred over the main bus 101 to a graphics processing unit (GPU) 115. When it receives the figure data from the main CPU 111, the GPU 115 performs the processing of writing image data into the pixels of a graphics memory 118 based on the color data and the Z values (information representing the depth, that is, the distance from the viewpoint) of the vertices of the unit figures contained in the figure data, taking into account the color and the Z values of all the pixels that make up the unit figure. In this way, the unit figure is drawn in the graphics memory. In addition, the CPU 111 performs control to read the image data written in the graphics memory 118 and to send this image data, as video signals, through a display controller such as a CRT controller (CRTC), for display on a display device such as a television receiver, a cathode ray tube (CRT) or a liquid crystal monitor. This allows the background of a video game, characters or the like to be displayed. Meanwhile, the data of the unit figures whose geometry was processed by the GTE 117 under the control of the CPU 111 are the coordinates, in the three-dimensional space, of the vertices of the unit figure. Therefore, the volume of data does not depend on the size of the unit figure. On the other hand, the speed with which the unit figure is drawn into the pixels of the graphics memory 118 depends on the size of the unit figure, that is, on the number of pixels that make up the unit figure. Therefore, if a large number of pixels make up the unit figure, drawing it will take some time, whereas, if the unit figure is constituted by a smaller number of pixels, drawing it will not take so long. Therefore, in the continuous processing of a plurality of small unit figures, it frequently happens that the figure data is not transferred in time from the main CPU 111 over the main bus 101, even though the GPU 115 has finished the drawing processing and is ready to perform the next processing.
In other words, when a large number of small unit figures are transferred from the main CPU 111 to the GPU 115 over the main bus 101, the transfer rate is limited by the main bus 101, with the consequence that it becomes more difficult to improve the overall processing speed of the device. Now, with reference to Figures 2-4 of the drawings, a video game machine according to one embodiment of the invention is constituted by a machine main body part 2, an actuator unit 17 having a connection terminal part 26 of substantially tetragonal form connected to the machine main body part 2, and a recording device 38 similarly connected to the machine main body part 2. The machine main body part 2 is of substantially tetragonal shape and is provided, in a middle portion thereof, with a disk loading unit 3 for loading therein a recording medium for the game having the game program or data recorded on it. In the present embodiment, a CD-ROM (compact disk read-only memory) 51, as shown in the example of Figure 5, is removably set on the disk loading unit 3. The recording medium for a game, however, is not limited to a disk. On the left side of the disk loading unit 3 are installed a reset switch 4, which is actuated when the game is reinitialized, and a power source switch 5, which is actuated when the power source is turned on or off. On the right side of the disk loading unit 3 is installed a disk actuating switch 6, which is actuated to open or close the disk loading unit 3. On the front side of the machine main body part 2 are installed connection units 7A, 7B to which the actuator unit 17 and the recording device 38 can be connected. Although the connection units 7A, 7B are installed so as to connect two sets of actuator units 17 and recording devices 38, it is possible to provide as many connection units as are necessary to connect more than two sets of actuator units 17 and recording devices 38. The connection units 7A, 7B are formed in two rows, as shown in Figures 3 and 4. The upper row includes a recording insertion unit 8 connected to the recording device 38, while the lower row includes a connection terminal insertion unit 12 connected to the connection terminal part 26 of the actuator unit 17. The insertion opening of the recording insertion unit 8 is rectangular in shape, elongated in the transverse direction. The lower corners of the insertion opening are much more rounded than its upper corners to prevent the recording device 38 from being inserted upside down. The recording insertion unit 8 is also provided with a shutter 9 to protect the connection terminals (not shown) that ensure the internal electrical connection. The shutter 9 is installed in such a way that it is constantly urged outward under the force of a spring having the shape of a helical torsion spring. Therefore, when the recording device 38 is inserted, the shutter 9 is pushed open toward the rear by the leading insertion side of the recording device 38. When the recording device 38 is removed, the shutter 9 is restored under the spring force and automatically remains in the closed state to protect the internal connection terminals from dust, dirt and the external environment. Now, again in relation to Figures 3 and 4, the connection terminal insertion unit 12 has an insertion opening of rectangular shape, elongated in the transverse direction.
The lower corners of the insertion opening are much more rounded than its upper corners to prevent the connection terminal part 26 of the actuator unit 17 from being inserted upside down. In addition, the profile of the insertion opening is different from that of the insertion opening of the recording insertion unit 8, to prevent the recording device from being inserted therein. In this manner, the insertion openings for the recording device 38 and the actuator unit 17 differ from each other in size and profile to avoid insertion errors. As shown in Figure 2, the actuator unit 17 is configured so that it can be held between the palms of the hands with the five fingers of both hands free to move. The actuator unit 17 is constituted by first and second rounded, interconnected operating parts 18, 19 formed symmetrically in the left and right direction, first and second square-shaped support parts 20, 21 projecting from the first and second rounded operating parts 18, 19, a selection switch 22 and a start switch 23 arranged in the middle between the first and second rounded operating parts 18, 19, third and fourth operating parts 24, 25 protruding from the front sides of the first and second operating parts 18, 19, and a connection terminal part 26 electrically connected to the machine main body part 2 by a cable 27. However, the cable 27 can be omitted if the overall structure is configured in that way. The connection terminal part 26 is installed on the distal end of the cable 27 and is adapted to be electrically connected to the machine main body part 2. The connection terminal part 26 has a holding part 26A whose side surfaces are roughened, for example by knurling, for a slip-proof effect. The holding part of the connection terminal part 26 is formed as a so-called telescopic part and has its size, that is, its width W and length L, equal to that of a fastener 38A of the recording device 38, as explained below. The recording device 38 contains a non-volatile memory, such as a flash memory. The recording device 38 has the fastener 38A (Figure 4) configured in the same manner as the fastener of the connection terminal part 26, so that the recording device can be easily installed in or removed from the machine main body part 2. The recording device 38 is configured so that, when the game is temporarily terminated, the prevailing state of the game is stored in the recording device. In this way, when the game is restarted, the data is read from the recording device 38 so that the game can be restarted from the state corresponding to the stored state, that is, from the point at which the game was suspended.
When a game is played on the video game machine described above, the user connects the actuator unit 17 to the machine main body part 2 and, if necessary, also connects the recording device 38 to the machine main body part 2. In addition, the user actuates the disk actuating switch 6 to set the CD-ROM 51, as the recording medium for the game, in the disk loading unit 3, and actuates the power source switch 5 to turn on the power source of the machine main body part 2. Since the machine main body part 2 now reproduces the image and the dialogue for the game, the user operates the actuator unit 17 to play the game. The total electrical system shown in Figure 6 shares many of the basic components and functions of the system shown in Figure 1, and these are represented by the same reference numerals. That is, the present machine main body part 2 is configured basically in the same way as the video game machine of Figure 1, except that a programmable preprocessor (PPP) 120 is newly provided between the main bus 101 and the GPU 115 to facilitate the practice of the present invention. The machine main body part 2 has two types of buses, namely a main bus 101 and a sub-bus 102, for exchanging data between the respective sub-system blocks. The main bus 101 and the sub-bus 102 are interconnected by a bus controller 116. To the main bus 101 are connected, in addition to the bus controller 116, a main CPU 111 (the output means) consisting of, for example, a microprocessor, a main memory 112 consisting of, for example, a random access memory (RAM), a main direct memory access controller (main DMAC) 113, an MPEG (Moving Picture Experts Group) decoder (MDEC) 114, the GPU (the drawing means) 115 and the PPP (programmable preprocessor) 120 as the figure data generating means. To the sub-bus 102 are connected, in addition to the bus controller 116 and the GPU 115, a sub-CPU 121 configured in the same manner as the main CPU 111, a sub-memory 122 configured in the same manner as the main memory 112, a sub-DMAC 123, a read-only memory (ROM) 124 having an operating system or the like stored thereon, a sound processing unit (SPU) 125, an asynchronous transfer mode (ATM) communication unit 126, a subsidiary storage device 127 and an input device interface (I/F) 128. The main bus 101 is designed for high-speed data communication, while the sub-bus 102 is designed for low-speed data communication. That is, the sub-bus 102 is used for data that can be exchanged at a low speed, in order to guarantee high-speed operation on the main bus 101. The main bus 101 can be disconnected from the sub-bus 102, or the sub-bus 102 may be connected to the main bus 101, under the control of the bus controller 116. If the main bus 101 and the sub-bus 102 are disconnected from each other, only the devices connected to the main bus 101 can be accessed from the main bus 101, and only the devices connected to the sub-bus 102 can be accessed from the sub-bus 102. However, if the sub-bus 102 is connected to the main bus 101, any of the devices can be accessed from either the main bus 101 or the sub-bus 102. Meanwhile, in the initial state, such as immediately after the power source of the system is turned on, the bus controller 116 is in the open state (that is, the main bus 101 remains connected to the sub-bus 102).
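For illustration only, the bus-access rule just described can be sketched as follows; the enum and function names are assumptions introduced for this example and do not appear in the patent.

```cpp
#include <iostream>

enum class Bus { Main, Sub };

// A device is reachable from a given bus if it sits on that bus, or if the
// bus controller is open so the main bus and the sub-bus are connected.
bool canAccess(Bus from, Bus deviceBus, bool controllerOpen) {
    return from == deviceBus || controllerOpen;
}

int main() {
    // Immediately after power-on the controller is open, so the main CPU
    // (on the main bus) can reach the ROM on the sub-bus to boot.
    std::cout << canAccess(Bus::Main, Bus::Sub, true) << "\n";   // 1
    std::cout << canAccess(Bus::Main, Bus::Sub, false) << "\n";  // 0
    return 0;
}
```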
The main CPU 111 is designed to perform various processing operations according to a program stored in the main memory 112. When the system is started, the main CPU 111 reads a boot program via the bus controller 116 from the ROM 124 connected to the sub-bus 102 and executes it. This causes the main CPU 111 to load the application program (here the boot program) and the necessary data into the main memory 112 or the sub-memory 122 from the subsidiary storage device 127. The main CPU 111 then executes the program loaded into the main memory 112. The main CPU 111 contains the GTE 117, as explained above. This GTE 117 has a parallel operating subsystem for executing a plurality of processing operations in parallel, and executes geometry processing, such as coordinate transformation, light source calculations, matrix operations or vector operations, at a high speed in response to requests from the main CPU 111. The GTE 117 thus performs the processing corresponding to the requests from the main CPU 111 (geometry processing) to generate the figure data of the unit figures and supplies the figure data to the main CPU 111. Upon receiving the figure data from the GTE 117, the main CPU 111 generates a packet containing the figure data and transfers the packet over the main bus 101 to the GPU 115 or the PPP 120. Meanwhile, the main CPU 111 contains a cache memory 119 and accesses this cache memory 119 instead of the main memory 112 to speed up processing. The main DMAC 113 controls DMA transfers to the devices connected to the main bus 101. If the bus controller 116 is in the open state, the main DMAC 113 also controls the devices connected to the sub-bus 102. The MDEC 114 is an I/O device capable of operating in parallel with the main CPU 111 and is configured to function as an image expansion engine; that is, the MDEC 114 is configured to decode encoded and compressed moving image data. The GPU 115 is configured to function as a drawing processor. That is, the GPU 115 is configured to perform drawing processing upon receiving a packet transmitted from the main CPU 111, the main DMAC 113 or the PPP 120, and to write the image data corresponding to the unit figure into the graphics memory 118 based on the color data and the Z values of the vertices of the unit figures arranged as figure data in the packet. The GPU 115 is also configured to read the image data written in the graphics memory 118 and to output the image data thus read as video signals. The GPU 115 is further configured to receive packets from the devices connected to the sub-bus 102, if necessary, and to perform drawing according to the figure data arranged in those packets.
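As one reading of what writing image data "taking into account the color and the Z values of all the pixels that make up the unit figure" may amount to, the sketch below shows a conventional depth-tested pixel write. The buffer layout and the comparison rule (keep the nearer pixel) are assumptions; the patent does not specify them.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

struct FrameBuffer {
    int width, height;
    std::vector<std::uint32_t> color;  // packed RGBA per pixel
    std::vector<float> depth;          // Z value per pixel

    FrameBuffer(int w, int h)
        : width(w), height(h), color(std::size_t(w) * h, 0),
          depth(std::size_t(w) * h, std::numeric_limits<float>::max()) {}

    // Write one pixel of a unit figure: replace the stored color and Z value
    // only if the new pixel is nearer than what is already there.
    void writePixel(int x, int y, std::uint32_t rgba, float z) {
        if (x < 0 || y < 0 || x >= width || y >= height) return;
        const std::size_t i = std::size_t(y) * width + x;
        if (z < depth[i]) { depth[i] = z; color[i] = rgba; }
    }
};
```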
The graphics memory 118 is constituted by, for example, a DRAM for temporarily storing the image data supplied from the GPU 115. The graphics memory has the property that it can be accessed within a page at a high speed, but access across page boundaries is extremely slow. Moreover, the graphics memory 118 has an area sufficient to store the image data for two frames, so that the image data can be read from one of the areas while image data is being written to the other area. The PPP 120 receives a packet transmitted from the main CPU 111 or the main DMAC 113 and generates figure data for drawing a plurality of unit figures from the single figure data arranged in the packet, packs this figure data and sends the resulting packet to the GPU 115. The sub-CPU 121 reads and executes the program stored in the sub-memory 122 to perform various processing operations. In the same manner as the main memory 112, the sub-memory 122 is designed to store necessary programs or data. The sub-DMAC 123 is designed to control DMA transfers for the devices connected to the sub-bus 102. The sub-DMAC 123 is designed to acquire bus rights only when the bus controller 116 is in the closed state, that is, when the main bus 101 is disconnected from the sub-bus 102. The ROM 124 stores the boot program and the operating system, as already explained, and stores both the program for the main CPU 111 and the program for the sub-CPU 121. The ROM 124 has a slow access speed and is therefore connected to the sub-bus 102. The SPU 125 is configured to receive a packet transmitted from the sub-CPU 121 or the sub-DMAC 123 and to read dialogue data from a sound memory 129 according to the sound command arranged in the packet. The SPU 125 is configured to output the dialogue data thus read to a speaker (not shown). The ATM communication unit 126 is configured to control ATM communication over a public network (not shown). This enables the user of the video game machine to play a video game with the user of another video game machine, either directly or after exchanging data with the user of the other video game machine through a pre-established central station. The subsidiary storage device 127 is configured to reproduce the information (programs and data) stored on the CD-ROM 51 (Figures 2 and 5) by means of, for example, a disk drive. The subsidiary storage device 127 is also configured to write or read data for the recording device 38 (Figure 2). The input device I/F 128 is an interface for accepting input from the outside, such as a signal corresponding to the operation of the actuator unit 17 as a control pad (Figure 2) or an image or dialogue reproduced by another device, and is configured to output a signal corresponding to the input from the outside onto the sub-bus 102. The sound memory 129 holds the dialogue data in memory. In the machine main body part 2 described above, when the system is turned on, the boot program is read from the ROM 124 into the main CPU 111 and executed to read the program and data from the CD-ROM 51 set in the subsidiary storage device 127 and load them into the main memory 112 and the sub-memory 122. The program loaded into the main memory 112 and the sub-memory 122 is executed in the main CPU 111 or the sub-CPU 121 to reproduce the image and the dialogue for the game.
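The two-frame arrangement of the graphics memory 118 described above can be sketched as a simple double buffer; the class and member names are illustrative assumptions, not the actual memory layout of the device.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

struct GraphicsMemory {
    std::array<std::vector<std::uint32_t>, 2> frame;  // two frame-sized areas
    int drawIndex = 0;                                // area currently being drawn into

    explicit GraphicsMemory(std::size_t pixelsPerFrame) {
        frame[0].resize(pixelsPerFrame);
        frame[1].resize(pixelsPerFrame);
    }
    std::vector<std::uint32_t>& drawArea() { return frame[drawIndex]; }
    const std::vector<std::uint32_t>& displayArea() const { return frame[1 - drawIndex]; }

    // Called once per frame: the freshly drawn area becomes the displayed one,
    // and drawing continues in the other area.
    void swap() { drawIndex = 1 - drawIndex; }
};
```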
In the main CPU 111, the unit figure data for drawing a unit figure constituting a three-dimensional image is generated according to the data stored in the main memory 112. This unit figure data is packed and sent over the main bus 101 to the GPU 115 or the PPP 120. When it receives a packet from the main CPU 111, the PPP 120 first unpacks the packet, in step S1 of the flowchart of Figure 7, to take out the figure data corresponding to the unit figure arranged in it. In step S2, the PPP 120 generates the figure data corresponding to a plurality of unit figures from the single figure data and performs the coordinate transformation or the perspective transformation on the generated figure data if necessary. The PPP 120 packs the plurality of figure data thus obtained and, in step S3, sends the packet to the GPU 115. The GPU 115 receives the packet from the main CPU 111 or the PPP 120 and performs drawing processing according to the figure data arranged in the packet to write the image data into the graphics memory 118. The GPU 115 reads the image data previously written into the graphics memory 118 and outputs that image data as video signals. This displays an image of the game. On the other hand, the sub-CPU 121 generates a sound command instructing the generation of the dialogue according to the data stored in the sub-memory 122. This sound command is packed and sent over the sub-bus 102 to the SPU 125. The SPU 125 reads and outputs the dialogue data from the sound memory 129 according to the sound command from the sub-CPU 121. This produces the background music (BGM) or the other dialogue for the game. Referring to Figures 8A, 8B to 11, the processing by the PPP 120 is explained further. The main CPU 111 is configured to transmit a packet on the main bus 101 to the PPP 120. The packet includes the coordinates of a pre-established central point O (x0, y0, z0) in a three-dimensional space as figure data and a command instructing the drawing of a plurality of triangles around the central point O. Upon receiving the packet, the PPP 120 generates a point O' at a position a random distance away from the central point O in a random direction, as shown in Figure 8A. This point O' is taken as the center of a unit figure, here a triangle. That is, the PPP 120 generates three random numbers ox, oy and oz, and the point O' is the point represented by the coordinates (x0 + ox, y0 + oy, z0 + oz). In addition, the PPP 120 generates three points A, B and C as the vertices of the unit figure, at points each separated by a random distance in a random direction from the point O', as shown in Figure 8A. That is, the PPP 120 generates nine random numbers rx0, rx1, rx2, ry0, ry1, ry2, rz0, rz1, rz2, and takes the points represented by the coordinates (x0 + ox + rx0, y0 + oy + ry0, z0 + oz + rz0), (x0 + ox + rx1, y0 + oy + ry1, z0 + oz + rz1) and (x0 + ox + rx2, y0 + oy + ry2, z0 + oz + rz2) as the points A, B and C, respectively. The PPP 120 repeats the above processing to generate the figure data for drawing a plurality of unit figures around the central point O, as shown in Figure 8B.
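A minimal sketch of this scatter step follows, assuming uniform random offsets within configurable ranges; the patent only states that the distances and directions are random, not their distribution, and all type and function names here are illustrative.

```cpp
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Uniform random value in [-range, range].
static float randomOffset(float range) {
    return (std::rand() / static_cast<float>(RAND_MAX) - 0.5f) * 2.0f * range;
}

// Scatter `count` triangles around the central point O, as in Figures 8A/8B:
// ox, oy, oz displace the sub-center O' from O; the further offsets displace
// the three vertices A, B, C from O'.
std::vector<Triangle> scatterTriangles(Vec3 o, int count, float spread, float size) {
    std::vector<Triangle> figures;
    for (int n = 0; n < count; ++n) {
        const Vec3 oPrime { o.x + randomOffset(spread),
                            o.y + randomOffset(spread),
                            o.z + randomOffset(spread) };
        Triangle t;
        t.a = { oPrime.x + randomOffset(size), oPrime.y + randomOffset(size), oPrime.z + randomOffset(size) };
        t.b = { oPrime.x + randomOffset(size), oPrime.y + randomOffset(size), oPrime.z + randomOffset(size) };
        t.c = { oPrime.x + randomOffset(size), oPrime.y + randomOffset(size), oPrime.z + randomOffset(size) };
        figures.push_back(t);
    }
    return figures;
}
```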
If the coordinates in the three-dimensional space of the three vertices A, B, C of a unit figure corresponding to the generated figure data are (X0, Y0, Z0) to (X2, Y2, Z2), the PPP 120 performs the coordinate transformation according to, for example, the following equations:

SXk = R11 Xk + R12 Yk + R13 Zk + TRX
SYk = R21 Xk + R22 Yk + R23 Zk + TRY
SZk = R31 Xk + R32 Yk + R33 Zk + TRZ   ... (1)

where k = 0, 1, 2, Rij denotes the element in row i and column j of a preset rotation matrix R, and (TRX, TRY, TRZ) denotes the translation vector. After finding the coordinates (SXk, SYk, SZk) in the three-dimensional space after the coordinate transformation, the PPP 120 performs the perspective transformation on the coordinates (SXk, SYk, SZk) according to the following equations:

SSXk = SXk (h / SZk)
SSYk = SYk (h / SZk)   ... (2)

to obtain the coordinates (SSXk, SSYk) in a two-dimensional space. The PPP 120 performs the coordinate transformation and the perspective transformation on all of the plurality of unit figures, here triangles, drawn around the central point O, according to equations (1) and (2). The PPP 120 packs the resulting coordinates (SSXk, SSYk) in the two-dimensional space and sends the packet to the GPU 115. In this case, the GPU 115 performs the drawing according to the coordinates (SSXk, SSYk) in the two-dimensional space provided by the PPP 120. Heretofore, the processing described above for the PPP 120 was done in the CPU 111, so that figure data corresponding to a large number of small unit figures, as shown in Figure 8B, had to be transferred over the main bus 101 to the GPU 115, with the result that the processing speed of the entire system was limited by the main bus 101. In a preferred embodiment of the present invention, since only the figure data corresponding to a single central point O is transmitted from the CPU 111 over the main bus 101, the figure data can be transmitted immediately (that is, the required bandwidth can be reduced). Furthermore, since the drawing performed by the GPU 115 is of small unit figures, the processing can be performed quickly, giving rise to an increased processing speed for the entire system. In addition, the PPP 120 performs part of the processing carried out so far by the CPU 111, thereby relieving the load that would otherwise be imposed on the CPU 111 and improving the processing speed of the entire system. Since it is not necessary for the main memory 112 to store all the figure data corresponding to the large number of unit figures shown in Figure 8B, the storage capacity of the main memory 112 can also be reduced compared with the case in which all the figure data is stored. The figure drawing technique described above is particularly effective for representing something like the explosion of a unit figure.
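The sketch below transcribes equations (1) and (2) directly; here h is read as the distance to the projection plane, which the text does not state explicitly, so that reading is an assumption, and the struct names are illustrative.

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

struct Transform {
    float R[3][3];        // rotation matrix elements R11..R33
    float TRX, TRY, TRZ;  // translation vector
};

// Equation (1): (Xk, Yk, Zk) -> (SXk, SYk, SZk)
Vec3 transformCoords(const Transform& t, Vec3 p) {
    return { t.R[0][0] * p.x + t.R[0][1] * p.y + t.R[0][2] * p.z + t.TRX,
             t.R[1][0] * p.x + t.R[1][1] * p.y + t.R[1][2] * p.z + t.TRY,
             t.R[2][0] * p.x + t.R[2][1] * p.y + t.R[2][2] * p.z + t.TRZ };
}

// Equation (2): (SXk, SYk, SZk) -> (SSXk, SSYk)
Vec2 perspectiveTransform(Vec3 s, float h) {
    return { s.x * (h / s.z), s.y * (h / s.z) };
}
```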
In the previous explanation, unit figures are scattered around a point serving as a center. However, it is also possible to draw scattered unit figures around a pre-established reference figure, such as a line segment, a triangle or a quadrangle. Also, in the above explanation, a plurality of triangles are drawn around a central point; however, the unit figures drawn in this way are not limited to triangles, but may also be points, straight lines (line segments) or quadrangles. If the image data corresponding to a unit figure in a two-dimensional space, here a quadrangle ABCD as shown in Figure 9, is written by the GPU 115 to a DRAM constituting the graphics memory 118, and the writing is done across a page boundary of the DRAM, access to the DRAM takes time, thus hindering faster processing. Therefore, if, when drawing the quadrangle ABCD in the two-dimensional space as shown in Figure 9, the areas defined by six horizontal line segments, numbered 1 to 6, extending parallel to the transverse direction of the screen (horizontal lines 1 to 6), and seven vertical line segments, numbered 1 to 7, extending parallel to the vertical direction of the screen (vertical lines 1 to 7), each correspond to one page of the DRAM, the figure corresponding to the image data written on each page is desirably processed as a unit in view of processing speed. The main CPU 111 is configured to transmit a packet that includes the figure data of a unit figure in the three-dimensional space on the main bus 101 to the PPP 120. If the PPP 120 receives such a packet and the writing of the image data resulting from transforming the unit figure into the two-dimensional space would cross a page boundary of the DRAM, as shown in Figure 9, the PPP 120 transforms the vertices in the three-dimensional space of the unit figure arranged as figure data in the packet by coordinate transformation and perspective transformation according to equations (1) and (2). The PPP 120 is then configured to generate the figure data corresponding to the plurality of unit figures obtained by dividing the quadrangle ABCD in the two-dimensional space (two-dimensional plane), as shown in Figure 9, using an area corresponding to one page of the DRAM as a unit.
If the coordinates of the four vertices A to D of the quadrangle ABCD in the two-dimensional space, obtained after the perspective transformation, are denoted (SSX0, SSY0) to (SSX3, SSY3), the horizontal lines 1 to 6 are determined by the equations y = a + bm, where m = 0, 1, 2, 3, 4, 5, and the vertical lines 1 to 7 are determined by the equations x = c + dn, where n = 0, 1, 2, 3, 4, 5, 6. The PPP 120 solves the simultaneous equations (3) and (4):

(SSX0 - SSX1)(y - SSY0) = (SSY0 - SSY1)(x - SSX0), with y = a + bm   ... (3)
(SSX0 - SSX1)(y - SSY0) = (SSY0 - SSY1)(x - SSX0), with x = c + dn   ... (4)

for the different values of m or n to find the points of intersection of the line segment AB with the horizontal lines 1 to 6 or the vertical lines 1 to 7. The PPP 120 also formulates similar simultaneous equations for the line segments BC, CD and DA to find their points of intersection with the horizontal lines 1 to 6 or the vertical lines 1 to 7. In the present case, the PPP 120 solves 4 x (6 + 7) = 52 sets of simultaneous equations. After finding the points of intersection of the line segments AB, BC, CD and DA with the horizontal lines 1 to 6 or the vertical lines 1 to 7, for example the points P and T in Figure 9, the PPP 120 divides the quadrangle ABCD, using one page of the DRAM as a unit, based on the intersection points thus found and the points of intersection of the horizontal lines 1 to 6 with the vertical lines 1 to 7. This generates the figure data corresponding to a large number of small unit figures. That is, the PPP 120 divides the quadrangle ABCD into a plurality of unit figures each of a size that fits within one page of the DRAM, such as the hexagon APQRST shown in Figure 9, and packs the figure data corresponding to these unit figures for transmission to the GPU 115. In this case, the GPU 115 performs the drawing processing on each unit figure, contained within one DRAM page, obtained from the PPP 120. Heretofore, since the processing described above for the PPP 120 was performed by the CPU 111, the figure data corresponding to a large number of small unit figures, as shown in Figure 9, had to be transmitted over the main bus 101 to the GPU 115 if the speed of access to the graphics memory 118 was to be increased. As a result, the processing speed of the entire system was limited by the main bus 101. With the present invention, however, since the unit figure data transmitted from the CPU 111 over the main bus 101 is that corresponding to the single large quadrangle ABCD, the unit figure data can be transmitted immediately. Further, since the drawing processing performed by the GPU 115 is on small unit figures, and the writing of the image data corresponding to a single unit figure never extends beyond a DRAM page boundary, the processing can be accelerated, thus increasing the processing speed of the entire system. Since the PPP 120 shares with the CPU 111 the processing hitherto performed by the main CPU 111, the load imposed on the CPU 111 can be relieved as a consequence. Next, if, when a unit figure is displayed on a screen, the unit figure protrudes from the screen, the protruding part tends to disturb the display, and it is necessary to clip this protruding part.
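A sketch of the intersection step of equations (3) and (4) for a single edge is given below; the tolerance and the within-segment checks are added for robustness and are not part of the patent text, and the function and parameter names are illustrative.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Intersections of the edge P0-P1 with the horizontal lines y = a + b*m
// (equation (3)) and the vertical lines x = c + d*n (equation (4)).
std::vector<Vec2> edgePageIntersections(Vec2 p0, Vec2 p1,
                                        float a, float b, int mCount,
                                        float c, float d, int nCount) {
    std::vector<Vec2> pts;
    const float dx = p0.x - p1.x;
    const float dy = p0.y - p1.y;
    for (int m = 0; m < mCount; ++m) {            // horizontal page-boundary lines
        const float y = a + b * m;
        if (std::fabs(dy) < 1e-6f) continue;      // edge parallel to the line
        const float x = p0.x + dx * (y - p0.y) / dy;
        if ((y - p0.y) * (y - p1.y) <= 0.0f) pts.push_back({x, y});  // within the edge
    }
    for (int n = 0; n < nCount; ++n) {            // vertical page-boundary lines
        const float x = c + d * n;
        if (std::fabs(dx) < 1e-6f) continue;
        const float y = p0.y + dy * (x - p0.x) / dx;
        if ((x - p0.x) * (x - p1.x) <= 0.0f) pts.push_back({x, y});
    }
    return pts;
}
// Applying this to the edges AB, BC, CD and DA yields points such as P and T
// in Figure 9, from which the quadrangle is split page by page.
```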
When screen clipping is necessary, the main CPU 111 is configured to transmit a packet, in which the figure data in the three-dimensional space of the unit figure is included, over the main bus 101 to the PPP 120. On receiving this packet, the PPP 120 divides the unit figure corresponding to the figure data arranged in the packet into several unit figures in the three-dimensional space. This generates figure data corresponding to a plurality of unit figures. Specifically, upon receiving the coordinates (x0, y0, z0) to (x3, y3, z3) of the vertices A, B, C and D in the three-dimensional space shown in the example of Figure 10 as figure data, the PPP 120 internally divides the figure 8 x 8 to generate 64 small unit figures, such as the quadrangle APQR, according to the following equations:

Xij = ((8 - i)(8 - j) x0 + (8 - i) j x1 + i (8 - j) x3 + i j x2) / 64
Yij = ((8 - i)(8 - j) y0 + (8 - i) j y1 + i (8 - j) y3 + i j y2) / 64
Zij = ((8 - i)(8 - j) z0 + (8 - i) j z1 + i (8 - j) z3 + i j z2) / 64

where i and j are integers from 0 to 8. The PPP 120 calculates the coordinates (Xij, Yij, Zij) of the vertices of these 64 unit figures. The PPP 120 then transforms the vertices (Xij, Yij, Zij) in the three-dimensional space, found in this way, by coordinate transformation and perspective transformation according to equations (1) and (2) to generate a plurality of small unit figures in the two-dimensional space.
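The 8 x 8 internal division above is a bilinear interpolation of the four vertices; the sketch below computes the 9 x 9 grid of points (Xij, Yij, Zij) exactly as in the equations. Type and function names are illustrative.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// grid[i][j] holds (Xij, Yij, Zij) for i, j = 0..8, with A = (x0,y0,z0),
// B = (x1,y1,z1), C = (x2,y2,z2), D = (x3,y3,z3).
std::array<std::array<Vec3, 9>, 9> divide8x8(Vec3 A, Vec3 B, Vec3 C, Vec3 D) {
    std::array<std::array<Vec3, 9>, 9> grid;
    for (int i = 0; i <= 8; ++i) {
        for (int j = 0; j <= 8; ++j) {
            const float wA = float((8 - i) * (8 - j));  // weight of x0
            const float wB = float((8 - i) * j);        // weight of x1
            const float wD = float(i * (8 - j));        // weight of x3
            const float wC = float(i * j);              // weight of x2 (weights sum to 64)
            grid[i][j] = { (wA * A.x + wB * B.x + wD * D.x + wC * C.x) / 64.0f,
                           (wA * A.y + wB * B.y + wD * D.y + wC * C.y) / 64.0f,
                           (wA * A.z + wB * B.z + wD * D.z + wC * C.z) / 64.0f };
        }
    }
    return grid;
}
// Each of the 64 small quadrangles has corners grid[i][j], grid[i][j+1],
// grid[i+1][j+1], grid[i+1][j] for i, j = 0..7.
```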
These unit figures are the 64 figures obtained by the 8 x 8 internal division of the quadrangle ABCD after the coordinate transformation and the perspective transformation, in this order. The PPP 120 then packs the coordinates (SSXij, SSYij) as figure data and transmits the resulting packet to the GPU 115. In this case, the GPU 115 performs drawing only on the figure data of those of the 64 unit figures generated by the PPP 120 that contain parts displayed within the screen frame, shown shaded in Figure 11. This avoids wasteful drawing of parts of the image that are not going to be displayed on the screen. Heretofore, since the processing performed by the PPP 120 was done by the CPU 111, the figure data corresponding to the large number of small unit figures shown in Figure 10 had to be transmitted over the main bus 101 to the GPU 115. This limited the processing speed of the entire device to that of the main bus 101. However, since the unit figure data transmitted from the CPU 111 over the main bus 101 corresponds only to the single large quadrangle ABCD, the unit figure data can be transmitted immediately. Since the drawing performed by the GPU 115 is on small unit figures, the processing can be accelerated, thus increasing the processing speed of the entire system. Since the PPP 120 again shares the processing usually performed by the main CPU 111, the load imposed on the CPU 111 can be relieved as a consequence. Although the present invention has been explained in connection with its application to a video game machine, the present invention can also be applied to an effector for giving special effects to an image, or to a device for computer graphics processing, such as CAD. Meanwhile, the method for generating a plurality of unit figures is not limited to the methods described above.
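A sketch of the selection step, keeping only the subdivided figures that overlap the screen frame, is shown below. The bounding-box overlap test is an assumption chosen for simplicity; the patent does not specify how the overlap with the screen frame is determined.

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };
struct Quad { Vec2 v[4]; };             // one small quadrangle (SSXij, SSYij)
struct Rect { float x0, y0, x1, y1; };  // the screen frame

static bool overlapsScreen(const Quad& q, const Rect& screen) {
    float minX = q.v[0].x, maxX = q.v[0].x, minY = q.v[0].y, maxY = q.v[0].y;
    for (int k = 1; k < 4; ++k) {
        minX = std::min(minX, q.v[k].x); maxX = std::max(maxX, q.v[k].x);
        minY = std::min(minY, q.v[k].y); maxY = std::max(maxY, q.v[k].y);
    }
    return maxX >= screen.x0 && minX <= screen.x1 &&
           maxY >= screen.y0 && minY <= screen.y1;
}

// Keep only the subdivided figures whose bounding box overlaps the screen;
// only these are sent on for drawing.
std::vector<Quad> selectVisible(const std::vector<Quad>& all, const Rect& screen) {
    std::vector<Quad> visible;
    for (const Quad& q : all)
        if (overlapsScreen(q, screen)) visible.push_back(q);
    return visible;
}
```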
In the above-described embodiment, the packet is transferred from the main CPU 111 to the PPP 120. However, if the unit figures to be drawn do not need to be processed by the PPP 120, as when the unit figures to be drawn are small in size and few in number, the main CPU 111 transmits the packet corresponding to the unit figure directly to the GPU 115 without passing the packet through the PPP 120. In this case, the images are drawn in the conventional manner. Therefore, the present invention satisfies an existing need to improve data processing for a higher processing speed. It will be evident from the foregoing that, although particular forms of the invention have been illustrated and described, various modifications can be made without departing from the spirit and scope of the invention. Accordingly, it is intended that the invention not be limited, except by the appended claims.

Claims (27)

1. An image information processing system consisting of: drawing means for drawing an image in response to figure data, the drawing means being configured to draw a unit figure and to draw an image defined by a combination of unit figures; output means for issuing a drawing command; figure data generating means configured to generate figure data for drawing a plurality of unit figures at random in response to the drawing command transmitted from the output means; and a bus for supplying the figure data from the generating means to the drawing means.
2. The image information processing system according to claim 1, wherein the output means transmits the coordinates of a pre-established reference figure as the drawing command on a pre-established bus, and the generating means generates the figure data for drawing a plurality of unit figures around the pre-established reference figure and sends the figure data to the drawing means.
3. The image information processing system according to claim 1 or 2, wherein the drawing means is connected to the output means by the pre-established bus.
4. A method for processing image information consisting of the steps of: supplying a drawing command on a bus; generating a plurality of figure data at random in response to the supplied drawing command; and drawing unit figures according to the figure data.
5. The image information processing method according to claim 4, wherein the step of generating the figure data includes the step of generating the unit figures.
6. The image information processing method according to claim 4 or claim 5, wherein the drawing command includes at least coordinate data, and wherein the step of generating a plurality of figure data includes the step of generating a plurality of figure data around these coordinate data.
7. An image processing system consisting of: an image memory; means for transforming the coordinates of a unit figure in a three-dimensional space; means for dividing the unit figure whose coordinates were transformed into a plurality of unit figures that comply with a page boundary of the image memory; and means for generating figure data according to the divided unit figures.
8. The image processing system according to claim 7, wherein the image memory is a DRAM.
9. An image processing method consisting of the steps of: transforming the coordinates of a unit figure in a three-dimensional space; dividing the unit figure whose coordinates were transformed into a plurality of unit figures complying with a page boundary of an image memory in which an image is to be drawn; and generating figure data according to the divided unit figures.
10. An image information processing system consisting of: drawing means for drawing an image according to received figure data; and figure data generating means configured to generate figure data for drawing a plurality of unit figures in response to the data for a single unit figure.
11. The image information processing system according to claim 10, wherein the plurality of unit figures surround the single unit figure.
12. The image information processing system according to claim 10 or claim 11, wherein the plurality of unit figures are random figures.
13. The image information processing system according to claim 10, 11 or 12, wherein the data for a single unit figure includes the coordinates of a pre-established reference figure transmitted on a bus, and the generating means generates the figure data for drawing the plurality of unit figures around the reference figure and sends the figure data to the drawing means.
14. An image information processing method comprising the steps of: generating data for a plurality of unit figures in response to data provided for a single unit figure; and drawing the unit figures according to the figure data.
15. The image information processing method according to claim 14, wherein the plurality of figures are random.
16. The image information processing method according to claim 14 or claim 15, wherein the data includes at least coordinate data, and wherein the step of generating the plurality of figure data includes the step of generating a plurality of figure data around the coordinate data.
17. An image processing system consisting of: means for transforming the coordinates of a unit figure in space; means for dividing the unit figure whose coordinates were transformed into a plurality of unit figures; and means for generating figure data according to the divided unit figures.
18. The image processing system according to claim 17, further including an image memory.
19. The image processing system according to claim 18, wherein the image memory is a DRAM.
20. An image processing method consisting of the steps of: transforming the coordinates of a unit figure in a multidimensional space; dividing the unit figure whose coordinates were transformed into a plurality of unit figures; and generating figure data according to the divided unit figures.
21. The image processing method according to claim 20, wherein the plurality of figures are random.
22. An image information processing system consisting of: output means for issuing a drawing command; and figure data generating means configured to generate figure data for drawing a plurality of unit figures at random in response to the drawing command transmitted from the output means.
23. The image information processing system according to claim 22, wherein the output means transmits the coordinates of a pre-established reference figure as the drawing command, and wherein the generating means generates the figure data for drawing the plurality of unit figures around the pre-established reference figure.
24. The image information processing system according to any of claims 22 or 23, wherein the drawing command includes at least coordinate data and wherein the generating means generates the plurality of figure data around the coordinate data.
25. An apparatus for image processing consisting of: an image memory; first means for transforming the coordinates of a unit figure in a multidimensional space; second means for dividing the unit figure whose coordinates were transformed into a plurality of unit figures complying with a page boundary of the image memory; and third means for generating figure data according to the divided unit figures.
26. The apparatus for image processing according to claim 25, wherein the image memory is a DRAM.
27. A method for image processing consisting of the steps of: transforming the coordinates of a unit figure in a multidimensional space; dividing the unit figure whose coordinates were transformed into a plurality of unit figures complying with a page boundary of an image memory in which an image is to be drawn; and generating figure data according to the divided unit figures.
MX9703402A 1996-05-10 1997-05-09 Data processing method and apparatus. MX9703402A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP11630096A JP3537259B2 (en) 1996-05-10 1996-05-10 Data processing device and data processing method
JP8-116300 1996-05-10

Publications (2)

Publication Number Publication Date
MXPA97003402A true MXPA97003402A (en) 1998-04-01
MX9703402A MX9703402A (en) 1998-04-30

Family

ID=14683611

Family Applications (1)

Application Number Title Priority Date Filing Date
MX9703402A MX9703402A (en) 1996-05-10 1997-05-09 Data processing method and apparatus.

Country Status (9)

Country Link
US (1) US6246418B1 (en)
EP (1) EP0806743B1 (en)
JP (1) JP3537259B2 (en)
KR (1) KR100482391B1 (en)
CN (1) CN1103480C (en)
CA (1) CA2204227C (en)
DE (1) DE69730645T2 (en)
MX (1) MX9703402A (en)
TW (1) TW336303B (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100601606B1 (en) * 1999-05-13 2006-07-14 삼성전자주식회사 Data processing apparatus and method usable software/hardware compounded method
JP3564440B2 (en) * 2001-08-08 2004-09-08 コナミ株式会社 Moving image generation program, moving image generation method and apparatus
JP2003263650A (en) * 2002-03-12 2003-09-19 Sony Corp Image processor and image processing method
US7681112B1 (en) 2003-05-30 2010-03-16 Adobe Systems Incorporated Embedded reuse meta information
US7847800B2 (en) * 2004-04-16 2010-12-07 Apple Inc. System for emulating graphics operations
US7231632B2 (en) * 2004-04-16 2007-06-12 Apple Computer, Inc. System for reducing the number of programs necessary to render an image
US8704837B2 (en) * 2004-04-16 2014-04-22 Apple Inc. High-level program interface for graphics operations
US7636489B2 (en) * 2004-04-16 2009-12-22 Apple Inc. Blur computation algorithm
US7248265B2 (en) * 2004-04-16 2007-07-24 Apple Inc. System and method for processing graphics operations with graphics processing unit
US8134561B2 (en) 2004-04-16 2012-03-13 Apple Inc. System for optimizing graphics operations
US8130237B2 (en) * 2004-06-24 2012-03-06 Apple Inc. Resolution independent user interface design
US7397964B2 (en) * 2004-06-24 2008-07-08 Apple Inc. Gaussian blur approximation suitable for GPU
US8068103B2 (en) 2004-06-24 2011-11-29 Apple Inc. User-interface design
US7490295B2 (en) * 2004-06-25 2009-02-10 Apple Inc. Layer for accessing user interface elements
US7652678B2 (en) * 2004-06-25 2010-01-26 Apple Inc. Partial display updates in a windowing system using a programmable graphics processing unit
US20050285866A1 (en) * 2004-06-25 2005-12-29 Apple Computer, Inc. Display-wide visual effects for a windowing system using a programmable graphics processing unit
US8453065B2 (en) 2004-06-25 2013-05-28 Apple Inc. Preview and installation of user interface elements in a display environment
US8302020B2 (en) 2004-06-25 2012-10-30 Apple Inc. Widget authoring and editing environment
US7761800B2 (en) * 2004-06-25 2010-07-20 Apple Inc. Unified interest layer for user interface
US8239749B2 (en) * 2004-06-25 2012-08-07 Apple Inc. Procedurally expressing graphic objects for web pages
US8566732B2 (en) 2004-06-25 2013-10-22 Apple Inc. Synchronization of widgets and dashboards
US7546543B2 (en) 2004-06-25 2009-06-09 Apple Inc. Widget authoring and editing environment
US7737971B2 (en) 2004-07-01 2010-06-15 Panasonic Corporation Image drawing device, vertex selecting method, vertex selecting program, and integrated circuit
NO20045586L (en) * 2004-12-21 2006-06-22 Sinvent As Device and method for determining cutting lines
US7227551B2 (en) * 2004-12-23 2007-06-05 Apple Inc. Manipulating text and graphic appearance
US8140975B2 (en) * 2005-01-07 2012-03-20 Apple Inc. Slide show navigation
US8543931B2 (en) 2005-06-07 2013-09-24 Apple Inc. Preview including theme based installation of user interface elements in a display environment
US7752556B2 (en) 2005-10-27 2010-07-06 Apple Inc. Workflow widgets
US7743336B2 (en) 2005-10-27 2010-06-22 Apple Inc. Widget security
US9104294B2 (en) 2005-10-27 2015-08-11 Apple Inc. Linked widgets
US7954064B2 (en) 2005-10-27 2011-05-31 Apple Inc. Multiple dashboards
US8543824B2 (en) 2005-10-27 2013-09-24 Apple Inc. Safe distribution and use of content
US7707514B2 (en) 2005-11-18 2010-04-27 Apple Inc. Management of user interface elements in a display environment
US8155682B2 (en) * 2006-05-05 2012-04-10 Research In Motion Limited Handheld electronic device including automatic mobile phone number management, and associated method
US20070279429A1 (en) * 2006-06-02 2007-12-06 Leonhard Ganzer System and method for rendering graphics
US8869027B2 (en) 2006-08-04 2014-10-21 Apple Inc. Management and generation of dashboards
US20080168367A1 (en) * 2007-01-07 2008-07-10 Chaudhri Imran A Dashboards, Widgets and Devices
US8954871B2 (en) 2007-07-18 2015-02-10 Apple Inc. User-centric widgets and dashboards
US8667415B2 (en) 2007-08-06 2014-03-04 Apple Inc. Web widgets
US8156467B2 (en) * 2007-08-27 2012-04-10 Adobe Systems Incorporated Reusing components in a running application
US8176466B2 (en) 2007-10-01 2012-05-08 Adobe Systems Incorporated System and method for generating an application fragment
US9619304B2 (en) 2008-02-05 2017-04-11 Adobe Systems Incorporated Automatic connections between application components
US8656293B1 (en) 2008-07-29 2014-02-18 Adobe Systems Incorporated Configuring mobile devices
CN103308942B (en) * 2012-03-12 2015-12-02 中国石油天然气股份有限公司 A kind of method and system of visual geological data
US9715758B2 (en) 2013-07-16 2017-07-25 Samsung Electronics Co., Ltd. Image processing apparatus and method using virtual point light (VPL) information
US20180012327A1 (en) * 2016-07-05 2018-01-11 Ubitus Inc. Overlaying multi-source media in vram

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4811245A (en) * 1985-12-19 1989-03-07 General Electric Company Method of edge smoothing for a computer image generation system
US5060169A (en) * 1987-07-01 1991-10-22 Ampex Corporation Video simulation of an airbrush spray pattern
US4825391A (en) * 1987-07-20 1989-04-25 General Electric Company Depth buffer priority processing for real time computer image generating systems
JPH01111276A (en) * 1987-10-23 1989-04-27 Nippon Sheet Glass Co Ltd Automatic plotter for dot pattern
US5367615A (en) * 1989-07-10 1994-11-22 General Electric Company Spatial augmentation of vertices and continuous level of detail transition for smoothly varying terrain polygon density
JPH0596812A (en) * 1991-10-07 1993-04-20 Brother Ind Ltd Print processor
JPH05219355A (en) * 1992-02-03 1993-08-27 Ricoh Co Ltd Picture processor
JP3223639B2 (en) * 1993-04-15 2001-10-29 ソニー株式会社 Image memory read address generation method
JPH0778267A (en) * 1993-07-09 1995-03-20 Silicon Graphics Inc Method for display of shadow and computer-controlled display system
WO1996013006A1 (en) * 1994-10-20 1996-05-02 Mark Alan Zimmer Digital mark-making method
WO1997002546A1 (en) * 1995-07-03 1997-01-23 Tsuneo Ikedo Computer graphics circuit

Similar Documents

Publication Publication Date Title
MXPA97003402A (en) Method and apparatus for data processing
EP0806743B1 (en) Data processing method and apparatus
JP3620857B2 (en) Image processing apparatus and image processing method
EP1024458B1 (en) Image drawing device, image drawing method, and providing medium
EP0820036B1 (en) Image forming apparatus
JPH09305793A (en) Recording medium, recorder, recording method, device and method for processing information
EP1312047B1 (en) Apparatus and method for rendering antialiased image
JP3495189B2 (en) Drawing apparatus and drawing method
US20020060687A1 (en) Texture rendering method, entertainment apparatus and storage medium
US6867766B1 (en) Image generating apparatus, image generating method, entertainment system, and recording medium
JP3795580B2 (en) Drawing apparatus and drawing method
US6489967B1 (en) Image formation apparatus and image formation method
JP3758753B2 (en) Data transfer apparatus and data transfer method
JP2004139625A (en) Data processor and data processing method
JP4074531B2 (en) Drawing apparatus, drawing method, and providing medium
MXPA97007957A (en) Image processing apparatus and ima processing method
JPH11161816A (en) Data storage device, method and device for control data storage and image generating device