MXPA97007536A - Apparatus for generating addresses, apparatus for displaying images, method for generating addresses and method for displaying images - Google Patents

Apparatus for generating addresses, apparatus for displaying images, method for generating addresses and method for displaying images

Info

Publication number
MXPA97007536A
MXPA97007536A MXPA/A/1997/007536A MX9707536A
Authority
MX
Mexico
Prior art keywords
signals
images
buffers
image
addresses
Prior art date
Application number
MXPA/A/1997/007536A
Other languages
Spanish (es)
Other versions
MX9707536A (en)
Inventor
Ohba Akio
Original Assignee
Sony Computer Entertainment KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP8020333A (JPH09212146A)
Application filed by Sony Computer Entertainment KK
Publication of MX9707536A
Publication of MXPA97007536A


Abstract

Image data extracted from a VRAM 18 are sent via line buffers 75a to 75d to a selection synthesis unit 63. The line buffer 75d captures image data supplied from the outside and sends the captured image data to the VRAM 18. The VRAM 18 can write the image data from the outside, supplied via the line buffer 75d, and read them out, based on addresses from a controller, in the same way as other image data. On the other hand, cache memories 74a and 74b can read out image data under the control of a controller 71 to display multiple images in a mosaic-like pattern on a display screen.

Description

APPARATUS FOR GENERATING ADDRESSES, APPARATUS FOR DISPLAYING IMAGES, METHOD FOR GENERATING ADDRESSES AND METHOD FOR DISPLAYING IMAGES

Technical Field

This invention relates to an apparatus for generating addresses, an apparatus for displaying images, a method for generating addresses and a method for displaying images, used in a graphics computer, a special effects device or a video game machine, which are imaging equipment employing a computer.

Background Art

In an image display apparatus having an image memory, such as a personal computer or a television game machine, the data written in the image memory are read out according to synchronization signals of, for example, the NTSC (National Television System Committee) system. Such an image display apparatus includes a cathode ray tube controller (CRTC) 302, for generating previously established addresses based on synchronization signals generated by a synchronization signal generating circuit 301, a VRAM 303, for reading out the image data of a frame based on the addresses designated by the CRTC 302, and a D/A converter 305, for converting the frame data supplied via a line buffer 304 into analog data, as shown, for example, in Figure 1. The CRTC 302 includes a horizontal synchronization counter 311, for counting the horizontal synchronization signals, a horizontal resolution reducing circuit 312, for decreasing the horizontal resolution to a previously established value, if necessary, a horizontal slicer circuit 313, for slicing the horizontal tracking lines, and an adder circuit 314, for summing the data from the horizontal resolution reducing circuit 312 and the horizontal slicer circuit 313.
In addition, the CRTC 302 includes a vertical synchronization counter 316, for counting the vertical synchronization signals, a vertical resolution reducing circuit 317, for lowering the vertical resolution to a previously set value, if necessary, a vertical slicer circuit 318, for slicing the vertical tracking lines, an adder circuit 319, for summing the data from the vertical resolution reducing circuit 317 and the vertical slicer circuit 318, and an address generating circuit 320, for generating addresses based on the horizontal and vertical synchronization signals supplied thereto. In the above-described image display apparatus, the synchronization signal generating circuit 301 generates both horizontal and vertical synchronization signals, which are sent to the CRTC 302. In the CRTC 302, the horizontal synchronization counter 311 counts the horizontal synchronization signals supplied from the synchronization signal generating circuit 301. The horizontal resolution reducing circuit 312 reduces the number of horizontal synchronization signals, if necessary, to decrease the horizontal resolution of the image data read out from the VRAM 303. When a previously established count is reached by the horizontal synchronization counter 311, the horizontal slicer circuit 313 generates horizontal slicing data, for slicing the horizontal tracking line at a previously established position, and transmits this horizontal slicing data to the adder circuit 314. The adder circuit 314 superimposes the horizontal slicing data on the supplied horizontal synchronization signals and transmits the superimposed data to the address generating circuit 320. On the other hand, the vertical synchronization counter 316 counts the vertical synchronization signals from the synchronization signal generating circuit 301.
The vertical resolution reducing circuit 317 reduces the number of vertical synchronization signals, if necessary, to decrease the vertical resolution of the image data read out from the VRAM 303. When a previously established count is reached by the vertical synchronization counter 316, the vertical slicer circuit 318 generates vertical slicing data, for slicing the vertical tracking line at a previously established position, and transmits this vertical slicing data to the adder circuit 319. The adder circuit 319 superimposes the vertical slicing data on the supplied vertical synchronization signals and transmits the superimposed data to the address generating circuit 320. This address generating circuit 320 generates the addresses associated with the superimposed data supplied thereto and transmits the resulting addresses to the VRAM 303. The VRAM 303 sends the image data associated with the supplied addresses via the line buffer 304 to the D/A converter 305. The D/A converter 305 converts the supplied image data into analog data to produce video signals. Thus, the image data written in the VRAM 303 are displayed directly, via the CRTC 302, on a display screen. However, if frame data including multiple images are written in the VRAM 303, it has not been possible, with the CRTC 302 employed in the image display apparatus described above, to slice the multiple images so as to display the sliced images at desired locations on a single screen. Also, it has not been possible with the CRTC 302 to capture the data of multiple images supplied from the outside so as to display the captured image data on the screen.
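The counter-and-slicer scheme described above can be sketched in software as follows. This is an illustrative model only: the names (`slice_x`, `h_step`, and so on) and the linear row-major addressing are assumptions for the sketch, not details taken from the patent.

```python
# Illustrative model of a conventional CRTC: H/V counters sweep a
# window of the frame, the slicer circuits set the window origin,
# and the resolution-reducing circuits skip samples.

def crtc_addresses(frame_width, slice_x, slice_y, out_w, out_h,
                   h_step=1, v_step=1):
    """Yield linear VRAM addresses for one displayed window.

    slice_x/slice_y model the horizontal/vertical slicer circuits
    (start of the displayed window); h_step/v_step model the
    resolution-reducing circuits (skipping samples lowers resolution).
    """
    for row in range(0, out_h, v_step):          # vertical counter
        for col in range(0, out_w, h_step):      # horizontal counter
            # the adder circuits superimpose the slice offsets
            yield (slice_y + row) * frame_width + (slice_x + col)

# Example: a 4x2 window starting at (2, 1) in an 8-pixel-wide frame
addrs = list(crtc_addresses(8, 2, 1, 4, 2))
```

As the sketch suggests, a conventional CRTC of this kind generates one fixed address sweep per frame, which is why it cannot interleave several independently sliced images into a single screen.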
In view of the state of the art illustrated above, it is an object of the present invention to provide an apparatus that generates addresses, an apparatus that displays images, a method that generates addresses and a method that displays images, by which multiple images can be displayed at multiple places on a single screen, and by which an image supplied from the outside can also be captured and displayed on the screen.

DISCLOSURE OF THE INVENTION

An address generating apparatus, in accordance with the present invention, includes an address generating element, for generating addresses for reading out the image signals written in an image memory based on synchronization signals, a plurality of buffers (intermediate memories), respectively supplied with the image signals read out from the image memory based on the addresses, and a control element, for independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen. In the address generating apparatus according to the present invention, at least one of the buffers preferably captures image signals supplied from the outside and guides the captured image signals to the image memory. An image display apparatus, in accordance with the present invention, includes address producing elements having an address generating element, for reading out the image signals written in an image memory based on synchronization signals, a plurality of buffers, respectively supplied with the image signals read out from the image memory based on the addresses, and a control element, for independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen, and a synthesizing element, for synthesizing the image signals output by the buffers.
In the image display apparatus according to the present invention, preferably at least one of the buffers captures the image signals supplied from the outside and guides these captured image signals to the image memory. In the image display apparatus according to the present invention, preferably the synthesizing element is program-controlled based on calculations previously established by the control element. The image display apparatus according to the present invention preferably includes one or more cache memories fed with the image signals read out from the image memory, for writing the supplied image signals. The control element sequentially controls the reading out of the image signals written in the cache memories, to cause a plurality of images of the same kind to be displayed on a single screen. In the image display apparatus according to the present invention, the buffer preferably consists of a line memory. An address generating method, according to the present invention, includes generating addresses for reading out the image signals written in an image memory, based on synchronization signals, supplying the image signals read out from the image memory, based on the addresses, to buffers, and independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen.
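The cache-based repetition described above (a plurality of images of the same kind on one screen) can be illustrated with a toy sketch. The data model below (a small tile repeated over a larger screen grid) is the editor's assumption for illustration, not the patent's internal representation.

```python
# Illustrative sketch: repeating one cached image across the screen,
# as sequentially re-reading the cache memories allows a plurality of
# images of the same kind to appear on a single screen.

def mosaic(cached, screen_w, screen_h):
    """Fill a screen_h x screen_w grid by repeating the cached tile."""
    tile_h, tile_w = len(cached), len(cached[0])
    return [[cached[y % tile_h][x % tile_w] for x in range(screen_w)]
            for y in range(screen_h)]

tile = [[1, 2],
        [3, 4]]
screen = mosaic(tile, 4, 4)
```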
An image display method, according to the present invention, includes generating addresses for reading out the image signals written in an image memory, based on synchronization signals, supplying the image signals read out from the image memory, based on the addresses, to buffers, independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen, and synthesizing the image signals output by the buffers for display.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram illustrating a conventional CRTC; Figure 2 illustrates a typical representation on a display of video signals produced by means of the CRTC; Figure 3 illustrates a schematic structure of a video game machine employing the present invention; Figure 4 illustrates typical examples of a texture image and a target color in the image display method in accordance with the present invention; Figure 5 illustrates a PCRTC employing an address generating apparatus according to the present invention; Figure 6 illustrates the conceptual structure of the CRTC; Figure 7 illustrates a typical representation on a display of the video signals produced via the PCRTC; Figure 8 illustrates a specific structure of the PCRTC; Figure 9 is a plan view of a video game machine which employs the present invention; Figure 10 is a rear side view of the video game machine; Figure 11 is a side view of the video game machine; and Figure 12 is a plan view showing a CD-ROM loaded in the video game machine.

BEST MODE FOR CARRYING OUT THE INVENTION

Referring to the drawings, the preferred embodiments of the present invention will be explained in detail. The present invention is applied to a video game machine configured as shown in Figure 3.
This video game machine is designed to read out and execute a game program stored, for example, on an optical disc, so as to play the game in response to user instructions, and is configured as shown in Figure 3. That is, the video game machine has two classes of buses, namely a main bus 1 and a secondary bus 2.
The main bus 1 and the secondary bus 2 are interconnected by means of a bus controller 16. To the main bus 1 are connected a main central processing unit (main CPU) 11, comprising a microprocessor, a main memory 12, comprising a random access memory (RAM), a main direct memory access controller (main DMAC) 13, an MPEG decoder (MDEC) 14 and an image processing unit or graphic processing unit (GPU) 15. To the secondary bus 2 are connected a secondary central processing unit (sub-CPU) 21, comprising a microprocessor, a sub-memory 22, comprising a random access memory (RAM), a subsidiary direct memory access controller (sub-DMAC) 23, a read-only memory (ROM) 24, in which programs such as an operating system are stored, a sound processing unit (SPU) 25, an asynchronous transmission mode (ATM) communication controller 26, a subsidiary memory 27, an input device 28 and a CD-ROM drive 30. The bus controller 16 is a device on the main bus 1 for executing the switching between this main bus 1 and the secondary bus 2, and is initially in an open state. The main CPU 11 is a device operated by a program in the main memory 12. Since the bus controller 16 is initially in the open state during startup, the main CPU 11 reads out the boot program from the ROM 24 on the secondary bus 2 and executes it, so as to load the application program and the necessary data from the CD-ROM, via the CD-ROM drive 30, into the main memory 12 and into the devices on the secondary bus 2. To the main CPU 11 is connected a geometry transfer engine (GTE) 17, configured to execute processes such as coordinate transformation. The GTE 17 has a parallel computing mechanism, for the parallel execution of multiple calculations, and executes calculations such as coordinate transformations, light source calculations, and matrix or vector calculations at high speed, in response to calculation requests from the main CPU 11.
The main CPU 11 defines a three-dimensional model as a combination of basic unit figures (polygons), such as triangles or quadrangles, and, based on the results of the calculations by the GTE 17, prepares the drawing instructions corresponding to the respective polygons in order to draw a three-dimensional image. The main CPU 11 also forms the drawing instructions into packets and sends the drawing instructions as a command packet to the GPU 15.
The main DMAC 13 is a device on the main bus 1 for controlling DMA transfer for the devices on the main bus 1. The main DMAC 13 can also take a device on the secondary bus 2 as an object when the bus controller 16 is in the open state. The GPU 15 is a device on the main bus 1 which functions as an auxiliary processor. This GPU 15 interprets the drawing instructions transmitted thereto as a command packet from the main CPU 11 or the main DMAC 13, to calculate the Z values and the colors of all the pixels making up a polygon from the color data of the vertex points and the Z values specifying the depth. In addition, the GPU 15 performs the auxiliary processing of writing the pixel data into a frame buffer 18, serving as an image memory, responsive to the Z values. The MDEC 14 is an input/output connection device, capable of operating in parallel with the CPU, and is a device on the main bus 1 which functions as an image expanding engine. This MDEC 14 decodes image data that has been encoded by an orthogonal transformation, such as the discrete cosine transformation. The sub-CPU 21 is a device on the secondary bus 2, operated by a program in the sub-memory 22.
The sub-DMAC 23 is a device on the secondary bus 2 for controlling DMA transfer for the devices on the secondary bus 2. The sub-DMAC 23 acquires bus rights only when the bus controller 16 is closed. The SPU 25 is a device on the secondary bus 2 that functions as a sound processor. This SPU 25 reads out and outputs sound source data from a sound memory 29, responsive to a sound command sent as a command packet from the sub-CPU 21 or the sub-DMAC 23. The ATM 26 is a communications device on the secondary bus 2. The subsidiary memory 27 is a data input/output device on the secondary bus 2 and is composed of a non-volatile memory, such as a flash memory.
This subsidiary memory 27 transiently stores data such as game progress or scores. The input device 28 is a device on the secondary bus 2 for input from other equipment, such as a control pad serving as a man/machine interface, a mouse, an image input or a voice input. In addition, the CD-ROM drive 30 is a data input device on the secondary bus 2 and reproduces the necessary data or application programs from the CD-ROM.
That is, with the present video game machine, the geometry processing system, which executes geometric processes such as coordinate transformation, clipping or light source calculations, defines a three-dimensional model as a combination of unit figures (polygons), such as triangles or quadrangles, prepares drawing instructions for drawing a three-dimensional image, and transmits the drawing instructions for the respective polygons as a command packet over the main bus 1; it consists of the main CPU 11 on the main bus 1 and the GTE 17. The rendering system, which generates the pixel data for the respective polygons based on the drawing instructions from the geometry processing system and writes a figure into the frame buffer 18 by this auxiliary processing, is composed of the GPU 15. This GPU 15 has a packet engine 31 connected to the main bus 1, as shown in Figure 4, which illustrates its basic structure, and executes the auxiliary processing of writing the pixel data of the respective pixels into the frame buffer 18, in accordance with the drawing instructions sent from the main CPU 11 or the main DMAC 13 to the packet engine 31 as a command packet, while reading out the pixel data of an image drawn in the frame buffer 18 to supply the pixel data, as video signals, via a display controller or CRT controller 34, to a television receiver or monitor receiver, not shown. The packet engine 31 develops the command packet, sent from the main CPU 11 or the main DMAC 13 over the main bus 1, in a register, not shown.
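The coordinate transformation that the geometry processing system performs can be sketched as a matrix-vector calculation. The 3x3 rotation matrix plus translation vector below is a standard form chosen for illustration; it is an assumption, not the GTE's actual register layout or command set.

```python
# Sketch of the kind of matrix/vector calculation the geometry
# processing system executes: v' = M*v + t for each 3-D vertex.

def transform(vertices, matrix, translation):
    """Apply a 3x3 matrix and a translation to each (x, y, z) vertex."""
    out = []
    for x, y, z in vertices:
        out.append(tuple(
            matrix[r][0] * x + matrix[r][1] * y + matrix[r][2] * z
            + translation[r]
            for r in range(3)))
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Translate one vertex by 10 units along x:
moved = transform([(1, 2, 3)], identity, (10, 0, 0))
```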
The preprocessor 32 generates polygon data in accordance with the drawing instructions sent to the packet engine 31 as a command packet, and applies pre-processing to the polygon data, such as the polygon division explained later, while generating various data required by the drawing engine 33, such as information on the coordinates of the vertex points of each polygon, address information on the texture or MIP map texture, and control information for pixel interleaving. The drawing engine 33 includes N polygon engines 33A1, 33A2, ..., 33AN, connected to the preprocessor 32, N texture engines 33B1, 33B2, ..., 33BN, connected to the polygon engines 33A1, 33A2, ..., 33AN, a first bus switch 33C, connected to the texture engines 33B1, 33B2, ..., 33BN, M pixel engines 33D1, 33D2, ..., 33DM, connected to the first bus switch 33C, a second bus switch 33E, connected to the pixel engines 33D1, 33D2, ..., 33DM, a texture cache 33F, connected to the second bus switch 33E, and a CLUT cache 33G, connected to the texture cache 33F. In the drawing engine 33, the N polygon engines 33A1, 33A2, ..., 33AN execute shading processing, polygon by polygon, by parallel processing, on polygons produced in sequence, responsive to the drawing instructions, based on the polygon data pre-processed by the preprocessor 32. The N texture engines 33B1, 33B2, ..., 33BN perform texture mapping or MIP mapping by parallel processing on the texture data supplied thereto from the texture cache 33F, via the color look-up table (CLUT) cache 33G, for each polygon generated by the polygon engines 33A1, 33A2, ..., 33AN.
It will be noted that the address information of the texture or MIP map texture attached to the polygons processed by the N texture engines 33B1, 33B2, ..., 33BN is supplied in advance from the preprocessor 32 to the texture cache 33F, and the texture data, as required, are transferred from the texture area in the frame buffer 18, based on the aforementioned address information. The CLUT cache 33G is supplied with the CLUT data to be referred to at the time the texture is drawn, transferred from the CLUT area in the frame buffer 18. The polygon data, processed with texture mapping or MIP mapping by the above-mentioned texture engines 33B1, 33B2, ..., 33BN, are transferred via the first bus switch 33C to the M pixel engines 33D1, 33D2, ..., 33DM. The M pixel engines 33D1, 33D2, ..., 33DM perform various image processing operations, such as Z-buffer processing or anti-aliasing, by parallel processing, to generate M pixel data. The M pixel data generated by the pixel engines 33D1, 33D2, ..., 33DM are written into the frame buffer 18 via the second bus switch 33E. The second bus switch 33E is fed with the control information for pixel interleaving from the preprocessor 32. The second bus switch 33E has the function of selecting L out of the M pixel data generated by the pixel engines 33D1, 33D2, ..., 33DM, based on the aforementioned control information, so as to write the pixel data with M storage locations, matching the configuration of the polygon drawn in the frame buffer 18, as access units, by performing pixel interleaving.
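The Z-buffer processing mentioned above can be sketched in a few lines: each new pixel's depth is compared against the stored depth, and only the nearer pixel survives. The smaller-Z-is-nearer convention below is an assumption for the sketch; the patent does not specify the comparison direction.

```python
# Minimal Z-buffer sketch: keep a pixel only if its Z value is nearer
# than the depth already stored for that screen position.

def z_write(frame, zbuf, x, y, color, z):
    """Write color at (x, y) only if z is nearer than the stored depth."""
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        frame[y][x] = color

INF = float("inf")
frame = [[0] * 2 for _ in range(2)]
zbuf = [[INF] * 2 for _ in range(2)]
z_write(frame, zbuf, 0, 0, color=7, z=5.0)   # accepted: nearer than INF
z_write(frame, zbuf, 0, 0, color=9, z=8.0)   # rejected: farther than 5.0
```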
The drawing engine 33 generates all the pixel data of each polygon, based on the polygon data pre-processed by the preprocessor 32, and writes the generated pixel data into the frame buffer 18, in order to draw, in the frame buffer 18, the image defined as the combination of polygons by the aforementioned drawing instructions. The drawing engine 33 also reads out the pixel data of the image drawn in the frame buffer 18, in order to send the read-out pixel data, as video signals, via a programmable cathode ray tube controller (PCRTC) 34, to a television receiver or monitor receiver, not shown. The PCRTC 34 reads out the image data written in the frame buffer 18, according to the synchronization signals, so as not only to display multiple images on a single screen, but also to capture image data from the outside.
That is, the PCRTC 34 generates previously established addresses from the horizontal synchronization signals and the vertical synchronization signals from a synchronization signal generating circuit 51, based on the count values of an H counter 52 and a V counter 53, as shown in Figure 5. The PCRTC 34 reads out image data from the VRAM 18 based on these addresses, and controls the output of the supplied image data to produce the video signals via a D/A converter 54. Specifically, the synchronization signal generating circuit 51 generates the horizontal synchronization signals and the vertical synchronization signals and sends them to the H counter 52 and the V counter 53, respectively. The H counter 52 counts the horizontal synchronization signals supplied thereto, while the V counter 53 is driven based on the counting operation of the H counter 52, to count the vertical synchronization signals supplied thereto. After the H counter 52 and the V counter 53 have counted the numbers previously set to adjust the slice positions, the PCRTC 34 generates an address associated with a given image, frame by frame. Then, after counting a further previously established number to adjust the next slice position, the PCRTC 34 generates an address associated with another image. That is, since image data of a frame composed of multiple images have been written in the VRAM 18, an address associated with each respective image's data is generated within the period of one frame. The VRAM 18 is configured so that the image data are written in sequence therein within the frame period. Each time an address is output from the PCRTC 34, the image data associated with the supplied address are read out and supplied to the PCRTC 34.
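The per-frame switching between images can be sketched as follows: within one frame, the generator walks each image's region of VRAM in turn, switching source whenever the counters reach the next slice position. The dictionary-based data model and linear addressing are the editor's assumptions for illustration.

```python
# Sketch (names assumed, not from the patent): generating, within one
# frame period, the addresses of several images stored at different
# slice positions in VRAM.

def frame_addresses(images, frame_width):
    """images: list of dicts with slice position x, y and size w, h.
    Yields (image_index, vram_address) pairs, image by image."""
    for idx, img in enumerate(images):
        for row in range(img["h"]):
            for col in range(img["w"]):
                yield idx, (img["y"] + row) * frame_width + img["x"] + col

imgs = [{"x": 0, "y": 0, "w": 2, "h": 1},   # first sliced image
        {"x": 4, "y": 1, "w": 2, "h": 1}]   # second sliced image
out = list(frame_addresses(imgs, 8))
```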
After controlling the output of the supplied image data, so as to cause the previously set images to be displayed at previously established positions on the screen, the PCRTC 34 sends the image data to the D/A converter 54, which then converts the supplied image data into analog signals to produce the video signals. That is, the PCRTC 34 reads out, from the VRAM 18, image data corresponding to multiple images displayed as a single viewing screen, and controls the output of the read-out image data, in order to allow multiple images with different resolutions to be displayed on one screen. Meanwhile, the PCRTC 34 can capture image data from the outside and write the image data into the VRAM 18. In addition, the PCRTC 34 can generate addresses to read out such captured image data like any other image data, as will be explained later in detail. The configuration of the PCRTC according to a first embodiment will now be explained. A PCRTC 34a, according to the first embodiment, has multiple CRTC buffers, to display multiple images with different resolutions on one screen, and can control the CRTC buffers independently. Specifically, the PCRTC 34a has a controller 61, multiple CRTC buffers 62a to 62g and a selection synthesis unit 63, as shown, for example, in Figure 6. In the VRAM 18, image data of different resolutions are written, as shown in Figure 7. Once a previously established number of the synchronization signals has been counted and the desired slicing position has been reached, the controller 61 may decrease the resolution if high resolution image data have been captured in the VRAM 18 but the image data are to be displayed on a low resolution screen. The PCRTC 34a generates addresses for slicing a low resolution image stored in the VRAM 18, and sends the addresses to the VRAM 18. When the next slicing position has been reached, the PCRTC 34a generates addresses for slicing other, high resolution, image data stored in the VRAM 18.
In the VRAM 18, low resolution image data and high resolution image data, displayed in one frame, are written, as shown in Figure 7. Each time an address is supplied from the controller 61, the image data corresponding to the address are read out and sent to a CRTC buffer 62. Similarly to the image data written directly in the VRAM 18, the image data supplied from the outside via the CRTC buffer 62g are read out from the VRAM 18 by an address from the controller 61. The CRTC buffer 62 is comprised of multiple CRTC buffers 62a to 62g, as described above, which are fed with, and transiently store, image data of different resolutions of different images in each CRTC buffer 62a to 62g. The CRTC buffers 62a to 62g are controlled independently by the controller 61, to select and synthesize the image data in sequence from one horizontal scan line to another. This allows the PCRTC 34a to display images of different resolutions from one scan line to another, as in the case of the display representation shown in Figure 7. On the other hand, the CRTC buffer 62g of the CRTC buffer 62 has bidirectional functions. That is, the CRTC buffer 62g can capture image data supplied from the outside and transmit the captured data to the VRAM 18; when fed with an address from the controller 61, the VRAM 18 can read out the captured image data similarly to other image data. The image data thus read out are supplied via the CRTC buffer 62g to the selection synthesis unit 63. This selection synthesis unit 63 has a selector 64, for selecting among the supplied image data, a coefficient control circuit 65, and a filter 66; the respective image data are supplied via the CRTC buffers 62a to 62g to the selector 64. The selector 64 selects among the supplied image data, under control by the controller 61, and sends only the previously established image data to the filter 66.
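The scan-line-by-scan-line selection among buffers can be sketched as a schedule mapping each output line to the buffer whose contents are shown on that line. The data model (buffers as lists of pre-formed scan lines) is an assumption for illustration.

```python
# Sketch of per-scan-line selection: the controller switches, from one
# horizontal scan line to the next, which buffer's line is output, so
# images of different resolutions share one screen.

def compose_screen(buffers, schedule):
    """schedule[line] gives the index of the buffer supplying that
    scan line; each buffer is a list of scan lines."""
    return [buffers[schedule[line]][line] for line in range(len(schedule))]

low_res  = [["L"] * 4 for _ in range(4)]   # e.g. an upscaled low-res image
high_res = [["H"] * 4 for _ in range(4)]
# Top half from buffer 0, bottom half from buffer 1:
screen = compose_screen([low_res, high_res], [0, 0, 1, 1])
```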
When fed with the previously established image data from the selector 64, the coefficient control circuit 65 modifies part of the parameters of the image data, based on the results of calculations by the controller 61, or multiplies part or all of the parameters of the image data sent to the filter 66 by alpha values, which represent the opacity of the object. The filter 66 synthesizes the supplied image data to output synthesized image data. These synthesized output image data are converted by the D/A converter into analog video signals. With these analog video signals, multiple images can be displayed on one display screen, as shown in Figure 7. The configuration of the PCRTC according to a second embodiment is now explained. In the following description, the same reference numerals as those used in the first embodiment are used to denote similar components. A PCRTC 34b of the second embodiment, which has line buffers instead of the CRTC buffers, as shown in Figure 8, can perform the display in a similar way by the independent control of these line buffers. The PCRTC 34b includes a controller 71, a control program unit 72, a control register 73, cache memories 74a, 74b, line buffers 75a to 75d and a selection synthesis unit 63. The controller 71 modifies the parameters of part of the image data, as explained below, or executes calculations for the alpha values, based on a program stored in the control program unit 72. The controller 71 generates an address to be supplied to the VRAM 18 by means of the control register 73, while controlling the cache memories 74, the line buffers 75 and the selection synthesis unit 63. The VRAM 18 responds to the supplied address by reading out image data. The read-out image data are supplied via the line buffers 75a to 75d to the selection synthesis unit 63. The line buffer 75d is a bidirectional line buffer and can capture image data from the outside and send these image data to the VRAM 18.
This VRAM 18 can write the image data from the outside, supplied via the line buffer 75d, and read out these image data, based on addresses from the controller, like any other image data. The VRAM 18 also sends image data to the cache memories 74a, 74b.
The cache memories 74a, 74b each consist of multiple memories and can write the supplied image data. These caches 74a, 74b read out the image data, under control by the controller 71, and transmit these data to the selection synthesis unit 63. The selection synthesis unit 63 modifies part of the parameters of the supplied image data, or multiplies part or all of the parameters of the image data by the alpha values, which represent the opacity of the object. The selection synthesis unit 63 then selects among the supplied image data to synthesize the selected image data. The synthesized data are converted by a D/A converter into analog signals. A plurality of images may thus be displayed in a mosaic manner on the display screen. The PCRTC 34b can contribute to a reduction in production costs by the use of the line buffers 75a to 75d instead of the CRTC buffers. Further, since the image data read out from the VRAM 18 are supplied to the PCRTC 34b, and the output of multiple image data can be controlled independently via the line buffers 75a to 75d, multiple images can be displayed on a single display screen. Also, with the PCRTC 34b, since the external image data can be captured by the bidirectional line buffer 75d and written to the VRAM 18, if the previously established address is generated by the controller, the captured image data are read out from the VRAM 18 like any other image data. This enables the PCRTC 34b not only to display multiple images on a display screen, but also to capture and display an image from the outside. The video game machine embodying the present invention is configured as shown, for example, in the plan view of Figure 9, the front view of Figure 10 and the side view of Figure 11. That is, a video game machine 201, shown in Figure 9, is basically composed of a main body portion 202 and an operating unit 217, connected to the main body portion 202 via a cable 227.
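The alpha (opacity) weighting applied by the selection synthesis unit can be sketched as a blend of two pixels. The "over"-style formula below is a common compositing form chosen for illustration; the patent only states that parameters are multiplied by alpha values, so the exact arithmetic is an assumption.

```python
# Sketch of alpha-weighted synthesis: multiply the foreground pixel's
# parameters by alpha (its opacity) and the background's by (1 - alpha).

def alpha_blend(foreground, background, alpha):
    """Blend two color tuples: out = alpha*fg + (1 - alpha)*bg."""
    return tuple(alpha * f + (1 - alpha) * b
                 for f, b in zip(foreground, background))

# A half-opaque white pixel synthesized over black:
pixel = alpha_blend((255, 255, 255), (0, 0, 0), 0.5)
```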
A disk loading unit 203 is provided in the middle portion of the upper surface of the main body portion 202, and a CD-ROM 251, shown in Figure 12, is loaded into the interior of the unit 203. On the left side of the disk loading unit 203 are provided a power switch 205, operated to turn the power of the apparatus on and off, and a reset switch 204 for temporarily resetting the game. On the right side of the disk loading unit 203 is provided a disk operating switch 206, operated when the CD-ROM 251 is loaded into or unloaded from the disk loading unit 203. On the front side of the main body portion 202, connection portions 207A, 207B are provided, as shown in Figure 10. Each of these connection portions 207A, 207B is provided with a terminal insertion portion 212 for connecting the connection terminal 226 at the distal end of the cable 227 leading from the operating unit 217, and a recording insertion unit 208 for loading a recording unit 228, such as a memory card. That is, two operating units 217 and two recording units 228 can be connected to the main body portion 202. The front view of Figure 10 shows the state in which the connection terminal 226 and the recording unit 228 are connected to the connection portion 207B on the right side, while no connection terminal 226 or recording unit 228 is loaded in the connection portion 207A on the left side. Referring to Figure 10, a shutter 209 is provided in the recording insertion unit 208, so that, when the recording unit 228 is loaded onto the main body portion 202, the shutter 209 is pushed into the interior by the distal end of the recording unit 228 for loading this recording unit 228. A grip portion 231A of the connection terminal 226 and a grip portion 242A of the recording unit 228 are treated for anti-slip, for example with a knurled surface.
The length of the connection terminal 226 and that of the recording unit 228 are selected to be substantially the same, as shown in the side view of Figure 11. The operating unit 217 has support portions 220, 221 held by the left and right hands. The distal ends of the support portions 220, 221 are provided with operating portions 218, 219. The operating portions 224, 225 can be operated by the index finger of the left or right hand, while the operating portions 218, 219 can be operated by the thumb of the left or right hand. A selection switch 222, operated when a selection operation is performed during the game, and a start switch 223, operated to start the game, are provided between the operating portions 218, 219.
With the present video game machine 201, the CD-ROM 251 loaded in the disk loading unit 203 is reproduced by the aforementioned CD-ROM unit 30. The operating unit 217 is equivalent to the input device 28, while the recording unit 228 is equivalent to the subsidiary memory 27. With the address generating apparatus described above, previously established addresses are generated based on the synchronization signals, so that the image data written in the image memory is extracted in sequence. The image data thus extracted is sent to the multiple line buffers in the address generating apparatus. Therefore, this apparatus independently controls the output of the respective image data via each line buffer, so that multiple images can be displayed on one and the same screen. Likewise, with the address generating apparatus described above, at least one of the multiple line buffers can capture image data from the outside and write it to the image memory, so that, when a previously established address is produced, the image data captured from the outside is extracted from the image memory in the same way as other image data. Therefore, the address generating apparatus can extract an image captured from the outside in the same way as the image data written in the image memory, thus enabling multiple images to be displayed on the same screen. With the image displaying apparatus described above, previously established addresses are generated based on the synchronization signals, so that the image data written in the image memory is extracted in sequence. The image data thus extracted is sent to the multiple line buffers in the address generating apparatus. Therefore, the image displaying apparatus independently controls the output of the respective image data via each line buffer to produce video signals, so that multiple images can be displayed on one and the same screen.
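The independent control of several line-buffer outputs so that multiple images share one screen can be illustrated with a toy per-scanline selector. This is a hypothetical sketch invented for illustration; the window layout, priority rule, and names are not from the patent.

```python
def compose_scanline(buffers, windows, width, background=0):
    """Select, pixel by pixel, which line buffer drives the output.
    'windows' is a list of (buffer_index, (x_start, x_end)) pairs
    giving each buffer's span on the screen line; later windows
    take priority where spans overlap."""
    out = [background] * width
    for buf_index, (x0, x1) in windows:
        buf = buffers[buf_index]
        for x in range(x0, min(x1, width)):
            out[x] = buf[x - x0]
    return out

# Two 4-pixel source lines shown side by side on an 8-pixel screen
buffers = [[1, 1, 1, 1], [2, 2, 2, 2]]
line = compose_scanline(buffers, [(0, (0, 4)), (1, (4, 8))], 8)
print(line)  # [1, 1, 1, 1, 2, 2, 2, 2]
```

Running the selector once per horizontal line, with a different window table per image, is the sense in which the outputs are "controlled independently" to build a multi-image screen.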
Likewise, with the image displaying apparatus, at least one of the multiple line buffers can capture image data from the outside and write it to the image memory, so that, when a previously established address is produced, the image data captured from the outside is extracted from the image memory in the same way as other image data. Therefore, the image displaying apparatus can extract an image captured from the outside in the same way as the image data written in the image memory, to output video signals, thus enabling multiple images to be displayed on the same screen. With the image displaying apparatus described above, since the synthesizing element is program-controlled, it becomes possible to display a clear image by partially modifying the parameters of the image data or by calculating with the alpha values. Also, with the image displaying apparatus described above, multiple images of the same kind can be displayed on the same screen by writing image signals to the cache memory and controlling the extraction in sequence of the image signals written in the cache memory by means of the control element.
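The calculation with alpha values mentioned here — multiplying image data by an opacity factor before synthesis — corresponds to conventional per-channel alpha blending, which can be sketched as follows. This is an illustrative sketch under assumed conventions (8-bit channels, alpha as a fraction in [0, 1]), not the synthesis unit's actual arithmetic.

```python
def alpha_blend(src, dst, alpha):
    """Blend a source pixel over a destination pixel, channel by
    channel. 'alpha' in [0.0, 1.0] is the opacity of the source
    object: 1.0 fully covers the destination, 0.0 leaves it intact."""
    return tuple(round(alpha * s + (1.0 - alpha) * d)
                 for s, d in zip(src, dst))

# A half-opaque red pixel composited over a blue background
print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
```

Applying such a blend to only part of an image's parameters (for instance, only within a selected window) is one way a program-controlled synthesizer could realize the partial modification the text describes.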

Claims (9)

CLAIMS
1. An apparatus for generating addresses, the apparatus comprising: an address generating element for generating addresses and extracting the image signals written in an image memory, based on synchronization signals; a plurality of buffers respectively supplied with the image signals extracted from the image memory based on the addresses; and a control element for independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen.
2. The address generating apparatus as claimed in claim 1, wherein at least one of the buffers captures image signals supplied from the outside, to guide the captured image signals to the image memory.
3. An apparatus for displaying images, the apparatus comprising: an address producing element having an address generating element for generating addresses to extract the image signals written in an image memory, based on synchronization signals, a plurality of buffers respectively supplied with the image signals extracted from the image memory based on the addresses, and a control element for independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen; and a synthesizer element for synthesizing the image signals output by the buffers.
4. The apparatus for displaying images as claimed in claim 3, wherein at least one of the buffers captures image signals supplied from the outside, to guide the captured image signals to the image memory.
5. The apparatus for displaying images as claimed in claim 3, wherein the synthesizer element is of a program-controlled type, based on calculations previously established by the control element.
6. The apparatus for displaying images as claimed in claim 3, further comprising one or more cache memories fed with the image signals extracted from the image memory; the cache memory writes the supplied image signals, and the control element controls the extraction in sequence of the image signals written in the cache memory, to cause a plurality of images of the same kind to be displayed on a single screen.
7. The apparatus for displaying images as claimed in claim 3, wherein the buffer is composed of a line memory.
8. A method for generating addresses, the method comprising: generating addresses for extracting the image signals written in an image memory, based on synchronization signals; supplying the image signals extracted from the image memory, based on the addresses, to buffers; and independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen.
9. A method for displaying images, the method comprising: generating addresses to extract the image signals written in an image memory, based on synchronization signals; supplying the image signals extracted from the image memory, based on the addresses, to buffers; independently controlling the image signals output by the buffers, so that the image signals supplied to the buffers are displayed on a single screen; and synthesizing the image signals output by the buffers for display.
MXPA/A/1997/007536A 1996-02-06 1997-10-01 Apparatus for generating addresses, apparatus for displaying images, method for generating addresses and method for displaying images (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JPP8-020333 1996-02-06
JP8020333A JPH09212146A (en) 1996-02-06 1996-02-06 Address generation device and picture display device
JP8-020333 1996-02-06
PCT/JP1997/000298 WO1997029476A1 (en) 1996-02-06 1997-02-06 Address generator, image display, address generation method and image display method

Publications (2)

Publication Number Publication Date
MX9707536A MX9707536A (en) 1997-11-29
MXPA97007536A true MXPA97007536A (en) 1998-07-03


Similar Documents

Publication Publication Date Title
EP0821339B1 (en) Address generating apparatus, picture display apparatus, address generation method and picture display method
KR100422082B1 (en) Drawing device and drawing method
US6561906B2 (en) Game apparatus, method of reproducing movie images and recording medium recording program thereof
KR100482391B1 (en) Video signal processing apparatus and method
EP0620532B1 (en) Method and apparatus for synthesizing a three-dimensional image signal and producing a two-dimensional visual display therefrom
JP3625184B2 (en) 3D image processing method and apparatus for game, readable recording medium recording game 3D image processing program, and video game apparatus
EP0992267B1 (en) Image creating apparatus, displayed scene switching method for the image creating apparatus, computer-readable recording medium containing displayed scene switching program for the image creating apparatus, and video game machine
KR100471905B1 (en) Memory access method and data processing device
US6339430B1 (en) Video game machine and method for changing texture of models
EP0992945B1 (en) Video game apparatus, model display and readable recording medium therefor
US6151035A (en) Method and system for generating graphic data
MXPA97007536A (en) Apparatus for general directions, apparatus for exhibiting images, method for generating addresses and method for exhibiting image
US5987190A (en) Image processing system including a processor side memory and a display side memory
JP3548648B2 (en) Drawing apparatus and drawing method
AU4614400A (en) Method and apparatus for generating images
EP1249791B1 (en) 3-D game image processing method and device for drawing border lines
JP3468985B2 (en) Graphic drawing apparatus and graphic drawing method
JP3437166B2 (en) 3D image processing method, apparatus thereof, computer-readable recording medium recording 3D image processing program, and video game apparatus
JP2004133956A (en) Drawing system and drawing method
MXPA97007540A Apparatus and method for drawing images