MXPA01008900A - Image rendering method and apparatus

Image rendering method and apparatus

Info

Publication number
MXPA01008900A
MXPA01008900A MXPA/A/2001/008900A
Authority
MX
Mexico
Prior art keywords
image
focus
images
original image
value
Prior art date
Application number
MXPA/A/2001/008900A
Other languages
Spanish (es)
Inventor
Tadashi Nakamura
Simon Dylan Cuthbert
Original Assignee
Sony Computer Entertainment Inc
Priority date
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Publication of MXPA01008900A

Abstract

A depth-of-field display method displays a sense of distance on a two-dimensional screen. The depth-of-field display method turns objects corresponding to a preset Z value to just-in-focus states and overwrites images whose levels of out-of-focus states are sequentially increased corresponding to an increase in their positional distances to one of a farther direction and a nearer direction. Also, this method uses a bilinear filter method to perform sequential reductions of the original image, and thereafter performs magnification of the individual reduced images, thereby generating the out-of-focus images. Furthermore, the depth-of-field display method controls the levels of the out-of-focus states according to levels of the sequential reductions.

Description

IMAGE RENDERING METHOD AND APPARATUS

BACKGROUND OF THE INVENTION

FIELD OF THE INVENTION

The present invention relates to an image rendering apparatus and an image rendering method for displaying depth of field, and to a storage medium storing a data processing program for the image rendering method.
DESCRIPTION OF THE RELATED ART

Conventionally, to render three-dimensional objects, entertainment systems such as TV game apparatus perform perspective transformations so that the objects can be displayed on two-dimensional screens. In this case, processing such as light-source computation is also performed to display the objects three-dimensionally. To date, however, there has been no method of displaying depth of field, that is, of giving a sense of distance in the direction from the reference point to the object (the Z direction).
BRIEF DESCRIPTION OF THE INVENTION

In view of the problems described above, the present invention provides an image rendering apparatus capable of displaying depth of field. In addition, the invention provides a method of displaying depth of field so as to convey a sense of distance from a reference point to objects on a two-dimensional screen. The present invention further provides a data processing program for displaying that sense of distance from the reference point to the objects on the two-dimensional screen. An image rendering apparatus of the present invention comprises a device for generating an image in an out-of-focus state from an original image in a just-in-focus state by reducing the original image and subsequently magnifying the reduced image. The image rendering apparatus of the invention also comprises a Z buffer for setting the depth direction of pixels and a pixel interpolation algorithm, and further comprises a device for presetting a Z value of the Z buffer; a device for generating an image in an out-of-focus state by reducing an original image in a just-in-focus state and subsequently magnifying the reduced image; and a device for overwriting the out-of-focus image on the original image using the preset Z value. This image rendering apparatus converts an image field of an object located at the point represented by the Z value to the just-in-focus state and concurrently converts an image field of an object other than that object to the out-of-focus state, thus displaying depth of field. The image rendering apparatus described above, using the device for generating the out-of-focus image by reducing and then magnifying the original image, sequentially reduces the original image and subsequently magnifies the reduced images, thus generating images in out-of-focus states. In this case, the pixel interpolation algorithm is preferably a bilinear filter method. The image rendering apparatus of the present invention further comprises alpha planes for selectively masking pixels. Here, the image rendering apparatus uses the preset Z value to sequentially reduce the original image, to overwrite the blurred, out-of-focus images obtained by magnifying the reduced images on the original image, and to convert to out-of-focus states the image fields of objects located farther away than the point represented by the preset Z value. The image rendering apparatus also uses the alpha planes to mask the image fields of the objects located beyond the point represented by the preset Z value, subsequently overwriting the blurred, out-of-focus images on the original image and thus converting the image fields located nearer than the point represented by the preset Z value to out-of-focus states. Preferably, the image rendering apparatus further comprises a video RAM (VRAM) having a rendering area and a texture area in the same memory space, so that the original image is sequentially reduced in the VRAM and the reduced images are then magnified, thus generating the blurred, out-of-focus images.
The image rendering apparatus of the invention also comprises a Z buffer for setting the depth direction of pixels and a pixel interpolation algorithm, and further comprises a device for presetting a Z value of the Z buffer; a device for generating multiple out-of-focus images, each having a unique out-of-focus level, by reducing an original image in a just-in-focus state to images each having a unique linear ratio and subsequently magnifying the images thus reduced; and a device for using the preset Z value to overwrite on the original, just-in-focus image each out-of-focus image whose out-of-focus level increases correspondingly to an increase in its distance from the point represented by the preset Z value.
Here, the image rendering apparatus converts an image field of an object located at the point corresponding to the Z value to the just-in-focus state and concurrently converts an image field of an object other than that object to an out-of-focus state whose out-of-focus level is increased correspondingly to an increase in its positional distance from the point represented by the Z value, thus generating images showing depth of field. In addition, an image generation method of the present invention comprises the steps of preparing an original image in a just-in-focus state, reducing the original image, and magnifying the reduced image, thus generating an image in an out-of-focus state. The image generation method of the invention also comprises the steps of preparing an original image in a just-in-focus state, sequentially reducing the original image and magnifying the reduced images, thus generating images in out-of-focus states. The image generation method of the present invention further comprises the steps of preparing an original image in a just-in-focus state, sequentially reducing the original image to images each having a unique linear ratio, and individually magnifying the reduced images, each having its unique linear ratio, thus generating a plurality of out-of-focus images each having a unique blur level.
The depth-of-field display method of the invention comprises the steps of using a pixel interpolation algorithm to reduce an original image and subsequently magnify the reduced image, thus generating a blurred, out-of-focus image; and using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite the out-of-focus image on the original image. Also, the depth-of-field display method of the invention comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently magnify the reduced images, thus generating out-of-focus images; and using alpha planes having a masking function to mask the image fields of objects located beyond a point represented by a preset Z value and to overwrite the out-of-focus images on the original image thus masked, converting the image fields that have not been masked to out-of-focus states. Also, the depth-of-field display method of the present invention comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently magnify the reduced images, thus generating images in out-of-focus states; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite those images on the original image, thus converting image fields of objects beyond the point represented by a preset Z value to out-of-focus states; and using alpha planes having a masking function to mask the image fields of the objects located beyond the point represented by the preset Z value and to overwrite the out-of-focus images on the original image, thus converting the image fields that have not been masked to out-of-focus states. In addition, the depth-of-field display method comprises the steps of converting image fields of objects located at a position corresponding to a preset Z value to a just-in-focus state and overwriting images whose out-of-focus levels are sequentially increased corresponding to an increase in their positional distances, in either the farther or the nearer direction, from the point represented by the preset Z value; using a pixel interpolation algorithm to perform sequential reductions of an original image and subsequently to magnify the reduced images, thus generating the out-of-focus images; and controlling the levels of the out-of-focus states according to the levels of the sequential reductions. In addition, a storage medium of the present invention stores an image generation program readable and executable by a computer, the image generation program comprising the steps of preparing an original image in a just-in-focus state, reducing the original image and magnifying the reduced images, thus generating images in out-of-focus states.
Also, the storage medium of the present invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of preparing an original image in a just-in-focus state, sequentially reducing the original image to images each having a unique linear ratio and individually magnifying the reduced images, each having its unique linear ratio, thus generating a plurality of out-of-focus images each having a unique blur level. Also, the storage medium of the present invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently magnify the reduced images, thus generating images in out-of-focus states; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite the out-of-focus images on the original image, thus converting image fields of objects beyond a point represented by a preset Z value to out-of-focus states; and using alpha planes having a masking function to mask the image fields of the objects located beyond the point represented by the preset Z value and to overwrite the out-of-focus images on the original image, thus converting the image fields that have not been masked to out-of-focus states. Also, the storage medium of the invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of using a pixel interpolation algorithm to reduce an original image and subsequently magnify the reduced image, thus generating a blurred, out-of-focus image; and using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite the out-of-focus image on the original image. Also, the storage medium of the invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently magnify the reduced images, thus generating out-of-focus images; and using alpha planes having a masking function to mask the image fields of objects located beyond a point represented by a preset Z value and to overwrite the out-of-focus images on the original image thus masked, converting the image fields that have not been masked to out-of-focus states.
Also, the storage medium of the present invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently magnify the reduced images, thus generating out-of-focus images; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite the out-of-focus images on the original image, thus converting image fields of objects beyond a point represented by a preset Z value to out-of-focus states; and using alpha planes having a masking function to mask image fields of objects located beyond the point represented by the preset Z value and to overwrite the out-of-focus images on the original image, thus converting the image fields that have not been masked to out-of-focus states. Also, the storage medium of the present invention stores the image generation program readable and executable by a computer, wherein the program comprises the steps of converting the image fields of objects located at a position corresponding to a preset Z value to a just-in-focus state and overwriting images whose out-of-focus levels are sequentially increased corresponding to an increase in their positional distances, in either the farther or the nearer direction, from the point represented by the preset Z value; using a pixel interpolation algorithm to perform sequential reductions of an original image and subsequently to magnify the reduced images, thus generating the out-of-focus images; and controlling the levels of the out-of-focus states according to the levels of the sequential reductions. Furthermore, in the foregoing, the preset Z value can be varied serially from image to image.
BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1A to 1C are views used to explain the alpha bits; Figures 2A and 2B are views used to explain a point sampling method; Figures 3A to 3C are views used to explain a bilinear filter method; Figures 4A and 4B are views used to explain individual cases in which bilinear filters are used to sequentially reduce images and subsequently to magnify the reduced images; Figure 5 shows views used to explain a case in which the bilinear filters are used to sequentially reduce images and subsequently to magnify the reduced images; Figure 6 is a view used to explain a case in which a Z value is used to overwrite a smoothed, blurred image on an original image; Figure 7 shows an original image stored in a VRAM rendering area; Figure 8 shows a case in which an original image is sequentially reduced; Figure 9 shows a case in which an original image is sequentially reduced to 1/16 and subsequently magnified; Figure 10 shows a case in which a Z buffer is used so that the image of Figure 9, sequentially reduced to 1/16 and then magnified, is overwritten on the original image of Figure 7; Figure 11 shows a case in which a near Z value is specified to paint mask planes in the VRAM rendering area (the area is painted in red for the explanation; the area other than the nearby object shown in this figure is not visible on a real screen); Figure 12 shows a case in which the image of Figure 9 is overwritten on a masked image; Figure 13 is a view used to explain a depth-of-field display method; Figure 14 is a flowchart used to explain the steps of the depth-of-field display method in cooperation with the flowcharts of Figures 15 and 16; Figure 15 is a flowchart used to explain the steps of the depth-of-field display method in cooperation with the flowcharts of Figures 14 and 16; and Figure 16 is a flowchart used to explain the steps of the depth-of-field display method in cooperation with the flowcharts of Figures 14 and 15.
DESCRIPTION OF THE PREFERRED EMBODIMENT

Hardware Configuration

As an image generation method, the depth-of-field display method of the present invention is implemented on an image processing system, such as an entertainment system (for example, a game apparatus), that satisfies the following conditions: (i) the system must have a Z buffer that can control the Z direction (depth direction) of individual pixels; (ii) the system must be able to process alpha masking (that is, it has alpha planes); and (iii) the system must have a bilinear filter. However, as described below, when an image to which predetermined processing has been applied (referred to below as a processed image) is overwritten in such an entertainment system, if the system is free of hardware restrictions whereby the Z buffer allows selective overwriting only when the stored Z value is less than (or greater than) a preset Z value, that is, if the Z buffer allows optional, selective overwriting both when the stored value is less than and when it is greater than the preset Z value, then condition (ii), meaning that the system is capable of alpha masking, may be unnecessary.
Z Buffer

The hardware employed in this embodiment preferably includes a video RAM (VRAM) with a Z buffer. As shown in Figure 1A, individual points (pixels) are specified in the VRAM. For example, in addition to 8-bit values for the elementary colors R (red), G (green) and B (blue), the VRAM has areas per point for storing a Z value of up to 32 bits and an A value ("alpha bits" or "alpha planes") of up to 8 bits.
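For illustration, one such point might be sketched in C as follows. This is an assumed layout for exposition only, not the actual VRAM register format.

```c
#include <stdint.h>

/* One VRAM point (pixel) as described above. Hypothetical C layout for
 * exposition; the real hardware packs these fields differently. */
typedef struct {
    uint8_t  r, g, b;   /* elementary colors, 8 bits each              */
    uint8_t  a;         /* alpha bits ("alpha planes"), up to 8 bits   */
    uint32_t z;         /* Z value, up to 32 bits; larger means nearer */
} Pixel;
```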
The Z value is a parameter representing the distance from the reference point to an object. When a perspective transformation in a graphics processing unit (GPU) projects a three-dimensional object onto a two-dimensional screen, the distance (depth information) from the reference point is also computed, and the Z value is determined according to that depth information. At the same time, the R, G and B values are determined by light-source computation. For example, as shown in Figures 1B and 1C, when an object 100 is disposed relatively close to the reference point, the Z values of the points making up the object 100 are set large. Conversely, when the object 100 is disposed relatively far from the reference point, the Z values of the points making up the object 100 are set small. Using the Z value, the depth of the object from the reference point (its coordinate along the Z axis of the reference coordinate system) can thus be determined. Accordingly, when generating a composite image of multiple objects on a screen, a Z value is preset (for example, a value between the Z value of the object 100 and the Z value of an object 102), and a processed image (an image created by processing the original image) is overwritten on the original. A portion of the image corresponding to the object 100, whose Z value is determined to be relatively large (that is, relatively close to the reference point) by comparison with the preset Z value, remains unchanged (that portion is not overwritten, pixel by pixel). On the other hand, a portion of the image corresponding to the object 102, whose Z value is determined to be relatively small (that is, relatively far from the reference point), is overwritten, pixel by pixel, with the processed image. In this way, the Z buffer can be used to composite multiple objects along the depth direction.
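As a rough illustration of this convention (nearer points receive larger Z values), a hypothetical mapping from camera-space depth to a stored Z value could be sketched as follows. The function name, the far_plane parameter, the linear mapping and the full 32-bit range are all illustrative assumptions, not the patent's formula.

```c
#include <stdint.h>

#define Z_MAX 0xFFFFFFFFu   /* assumed full 32-bit Z range */

/* Hypothetical depth-to-Z mapping following the convention above: an
 * object at the reference point gets the largest Z value, and the value
 * falls toward zero as the object recedes. */
uint32_t depth_to_z(float depth, float far_plane) {
    if (depth <= 0.0f)      return Z_MAX;   /* at the reference point     */
    if (depth >= far_plane) return 0u;      /* at or beyond the far plane */
    return (uint32_t)((1.0f - depth / far_plane) * (float)Z_MAX);
}
```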
Alpha Planes

The alpha value A shown in Figure 1A is a parameter for controlling the blending of pixel colors and also for masking all or part of an image. Masking here refers to selective execution whereby a masked image area is left uncolored when an image is drawn. Thus, when a processed image is overwritten on the original image, points that have been masked according to the alpha values set in the original image (points whose mask bits are turned on) are not overwritten with the processed image.
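In code, this masking rule amounts to a conditional write. The sketch below assumes the Pixel struct from the earlier sketch and an arbitrary choice of mask bit within the 8 alpha bits.

```c
#define MASK_BIT 0x01   /* assumed bit assignment within the alpha bits */

/* Masked write: a point whose mask bit is turned on is protected and
 * keeps its original color; only unmasked points take the new color. */
void write_unless_masked(Pixel *dst, Pixel src) {
    if (dst->a & MASK_BIT)
        return;                       /* masked: leave untouched */
    dst->r = src.r;
    dst->g = src.g;
    dst->b = src.b;
}
```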
First-Stage Depth-of-Field Display

Point Sampling Method

In general, conventional TV game apparatus employ a point sampling method to magnify or reduce images. The point sampling method is briefly described below with reference to Figures 2A and 2B. Figure 2A shows the original image (the pre-reduction image); Figure 2B shows the image after reduction at a linear ratio of 1/2 (an area ratio of 1/4). To keep the explanation concise, the image in Figure 2A is assumed to be composed of 4 x 4 points, that is, 16 pixels, and the image in Figure 2B of 2 x 2 points, that is, 4 pixels. Also, to specify pixel positions, the horizontal direction of each image is taken as the X-axis direction and the vertical direction as the Y-axis direction, with the position of the upper-left point of each image represented as (x, y) = (1, 1). In Figure 2A, each of the 16 points is represented by R, G and B values; to keep the explanation concise, only four of the points are distinguished, by the symbols O, D, x and Δ. In the point sampling method, to generate the post-reduction point at (x, y) = (1, 1), the position of that point is first computed; then, of the four points in the corresponding pre-reduction area at (x, y) = (1, 1), (1, 2), (2, 1) and (2, 2), the point whose position is closest is taken. If the position closest to the post-reduction point at (x, y) = (1, 1) is assumed to be that of the pre-reduction point at (x, y) = (1, 1), the point represented by O becomes the content of the post-reduction point at (x, y) = (1, 1). Similarly, to generate the post-reduction point at (x, y) = (2, 1), its position is first computed; then, of the four points in the corresponding pre-reduction area at (x, y) = (3, 1), (4, 1), (3, 2) and (4, 2), the closest point is taken. If the position closest to the post-reduction point at (x, y) = (2, 1) is assumed to be that of the pre-reduction point at (x, y) = (3, 1), the point represented by the symbol D becomes the content of the post-reduction point at (x, y) = (2, 1). Likewise, the points represented by the symbols x and Δ become the post-reduction points at (x, y) = (1, 2) and (x, y) = (2, 2), respectively. That is, the point sampling method removes points (pixels) from a pre-reduction image to generate a post-reduction image: of the points making up the pre-reduction image, those not used for the post-reduction image are discarded. Consequently, even when an image reduced by the point sampling method is magnified again, because part of the pre-reduction image information has been discarded, a noisy image is simply magnified, producing a mosaic image.
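A minimal sketch of a 1/2 point-sampled reduction, using the Pixel type from the earlier sketch, makes the information loss visible: three of every four source points are simply dropped.

```c
#include <stdlib.h>

typedef struct { int w, h; Pixel *p; } Image;   /* Pixel as sketched above */

/* Point-sampling 1/2 reduction: each post-reduction point keeps only the
 * nearest pre-reduction point (here the upper-left one of each 2 x 2
 * block); the other three are discarded, which is why magnifying the
 * result again yields a mosaic. Assumes even dimensions. */
Image halve_point_sample(Image s) {
    Image d = { s.w / 2, s.h / 2, NULL };
    d.p = malloc(sizeof(Pixel) * (size_t)d.w * (size_t)d.h);
    for (int y = 0; y < d.h; y++)
        for (int x = 0; x < d.w; x++)
            d.p[y * d.w + x] = s.p[(2 * y) * s.w + (2 * x)];
    return d;
}
```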
Bilinear Filter Method

A description is now given of a bilinear filter method, which performs reduction and magnification processing differently from the point sampling method. Figures 3A to 3C are used to explain the bilinear filter method: Figure 3A shows the original image (pre-reduction image) composed of 4 x 4 points; Figure 3B shows a first-stage post-reduction image composed of 2 x 2 points; and Figure 3C shows a second-stage post-reduction image composed of a single point. The 16 points of the image in Figure 3A are likewise represented by R, G and B values; to keep the explanation concise, the average value of the four upper-left points is represented by the symbol •, the average of the four lower-left points by ▲, the average of the four upper-right points by □, and the average of the four lower-right points by ■. To generate the point of the first-stage post-reduction image at (x, y) = (1, 1), the bilinear filter method uses the average value (•) of the four points in the corresponding pre-reduction area of Figure 3A, that is, at (x, y) = (1, 1), (2, 1), (1, 2) and (2, 2). Similarly, to generate the post-reduction point at (x, y) = (2, 1), the method uses the average value (□) of the four points in the corresponding pre-reduction area of Figure 3A, that is, at (x, y) = (3, 1), (4, 1), (3, 2) and (4, 2).
Similarly, the bilinear filter method uses the average value (▲) of the four points in the pre-reduction area at (x, y) = (1, 3), (2, 3), (1, 4) and (2, 4) to generate the post-reduction point at (x, y) = (1, 2), and the average value (■) of the four points in the pre-reduction area at (x, y) = (3, 3), (4, 3), (3, 4) and (4, 4) to generate the post-reduction point at (x, y) = (2, 2). To perform a second-stage reduction (a linear ratio of 1/4), the bilinear filter method then uses the average of the four average values (•, ▲, □ and ■) shown in Figure 3B, which becomes the single point of Figure 3C. In the point sampling method described first, the points corresponding to the difference between the number of points making up the pre-reduction image and the number making up the post-reduction image are discarded during reduction; that is, a post-reduction image is generated in which tens of percent of the pre-reduction image information goes unused. The bilinear filter method differs from the point sampling method in that it uses all the points (all the information) making up the pre-reduction image to generate the post-reduction image. The bilinear filter method is, however, carried out with certain restrictions in its algorithm. For example, when the scale is smaller than a linear ratio of 1/2, the algorithm uses the average value of only four points of the pre-reduction image for each point of the post-reduction image. Therefore, to reduce a pre-reduction image directly to, for example, a linear ratio of 1/16, a single reduction step loses dozens of percent of the pre-reduction image information; as a result, when the image reduced to a ratio of 1/16 is magnified back to an image of the original size, the magnified image includes noise. However, by reducing the pre-reduction image step by step at linear ratios of 1/2, 1/4, 1/8, 1/16 and so on relative to the original image, it can theoretically be said that an image is produced in which the information of the pre-reduction image is carried down to the intended resolution. Then, for example, as shown in Figure 4A, to magnify an image reduced at a ratio of 1/4 back to the original size (the magnification processing is done all at once), points are blended according to distance and interpolation is carried out. In the case of a real image, an image is never composed of a single point.
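The corresponding bilinear 1/2 reduction replaces the dropped points with an average, so every source point contributes. A sketch, with Pixel and Image as in the earlier sketches:

```c
#include <stdint.h>
#include <stdlib.h>

/* Bilinear-filter 1/2 reduction: each post-reduction point is the average
 * of the four corresponding pre-reduction points, so all of the source
 * information is sampled. Assumes even dimensions. */
Image halve_bilinear(Image s) {
    Image d = { s.w / 2, s.h / 2, NULL };
    d.p = malloc(sizeof(Pixel) * (size_t)d.w * (size_t)d.h);
    for (int y = 0; y < d.h; y++)
        for (int x = 0; x < d.w; x++) {
            int r = 0, g = 0, b = 0;
            for (int j = 0; j < 2; j++)
                for (int i = 0; i < 2; i++) {
                    Pixel q = s.p[(2 * y + j) * s.w + (2 * x + i)];
                    r += q.r; g += q.g; b += q.b;
                }
            Pixel *o = &d.p[y * d.w + x];
            o->r = (uint8_t)(r / 4);
            o->g = (uint8_t)(g / 4);
            o->b = (uint8_t)(b / 4);
        }
    return d;
}
```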
For example, when magnifying the 2 x 2 point image shown in Figure 3B to the 4 x 4 size shown in Figure 3A, the positions of the four points corresponding to the post-reduction image are individually set to •, ▲, □ and ■, and the individual points around these four points are generated by interpolation according to their distances from them. Compared with images reduced or magnified by the point sampling method currently in use, the method using the bilinear filter can thus produce images that are blurred (that is, out of focus) and smoothed by the interpolation processing. A description of the above is given with reference to Figure 5 and the example of a combined VRAM having a rendering area 501, a plurality of work areas 0 to n (reference numerals 502 to 505) and the like in the same memory space. First, the image in the VRAM rendering area 501 (the original image) is sequentially reduced in multiple steps at linear ratios of 1/2, 1/4, 1/8 and so on, using the work areas 502 to 505, thus creating an image with a final resolution of (1/2)^(n+1) of the original image. With these step-by-step reductions at a ratio of 1/2, when a pixel interpolation algorithm such as the bilinear filter method is applied as the image reduction algorithm, a reduced image can be generated in which all the information of the original image has been sampled. In a sense, this is similar to a case in which, for example, an image (the original image) displayed at 600 dpi (dots per 2.54 cm) is rendered at 400 dpi (without physically reducing the image), then at 200 dpi (again without physical reduction), and subsequently the 200 dpi image is reproduced as a 600 dpi image: although the 200 dpi image contains the complete information of the original image, the reproduced 600 dpi image is softer than the original. Similarly, in the present case the image is blurred because its resolution has once been reduced. The linear ratio of 1/2 is simply an example, and there is no inherent restriction to it. That is, the rule that, when the scale is smaller than a linear ratio of 1/2, an average value of four pre-reduction points is used per post-reduction point is merely a restriction of the algorithm used in this embodiment. Suppose, for example, an algorithm in which, when an image is reduced at a linear ratio of 1/4, an average value of 16 pre-reduction points is used, so that the complete point information of the pre-reduction image is used in the reduction; in that case, the restriction to ratios of 1/2 and below per step is unnecessary. The inventors have discovered that an image showing depth of field can be generated by reducing an image step by step, magnifying and reproducing the image, and using the reproduced image, which is smoothed and blurred.
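Putting the two halves together, the blur generator described here can be sketched as n successive bilinear halvings followed by one bilinear magnification back to the original size. halve_bilinear is the routine sketched above; magnify_bilinear below is a plain software stand-in for the hardware's interpolating magnification, not the GPU's actual implementation.

```c
#include <stdint.h>
#include <stdlib.h>

/* Bilinear magnification of s back to w x h. */
Image magnify_bilinear(Image s, int w, int h) {
    Image d = { w, h, malloc(sizeof(Pixel) * (size_t)w * (size_t)h) };
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            float fx = (x + 0.5f) * s.w / w - 0.5f;
            float fy = (y + 0.5f) * s.h / h - 0.5f;
            int x0 = fx < 0 ? 0 : (int)fx, y0 = fy < 0 ? 0 : (int)fy;
            int x1 = x0 + 1 < s.w ? x0 + 1 : x0;
            int y1 = y0 + 1 < s.h ? y0 + 1 : y0;
            float u = fx > x0 ? fx - x0 : 0.0f;   /* horizontal weight */
            float v = fy > y0 ? fy - y0 : 0.0f;   /* vertical weight   */
            Pixel p00 = s.p[y0 * s.w + x0], p10 = s.p[y0 * s.w + x1];
            Pixel p01 = s.p[y1 * s.w + x0], p11 = s.p[y1 * s.w + x1];
            Pixel *o = &d.p[y * w + x];
            o->r = (uint8_t)((p00.r * (1 - u) + p10.r * u) * (1 - v) + (p01.r * (1 - u) + p11.r * u) * v);
            o->g = (uint8_t)((p00.g * (1 - u) + p10.g * u) * (1 - v) + (p01.g * (1 - u) + p11.g * u) * v);
            o->b = (uint8_t)((p00.b * (1 - u) + p10.b * u) * (1 - v) + (p01.b * (1 - u) + p11.b * u) * v);
        }
    return d;
}

/* Blur an image by reducing it step by step n times (1/2, 1/4, ...) and
 * magnifying the final reduction back: larger n gives a stronger blur. */
Image blur_by_reduction(Image original, int n) {
    Image cur = original;
    for (int i = 0; i < n; i++) {             /* work areas 502 to 505 */
        Image next = halve_bilinear(cur);
        if (cur.p != original.p) free(cur.p);
        cur = next;
    }
    Image out = magnify_bilinear(cur, original.w, original.h);
    if (cur.p != original.p) free(cur.p);
    return out;
}
```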
It is known that when a reproduced image, created by reducing the original image (the pre-reduction image) step by step and then magnifying the reduced image, is overwritten over the whole screen, the whole screen is displayed in an out-of-focus (blurred) state. (1) First, consider a case in which the Z buffer is used to generate an image containing a plurality of objects, each having a unique Z value (that is, the distance to each object from the reference point is unique), where a relatively farther object is displayed in a blurred (out-of-focus) state and a relatively nearer object is displayed just in focus. As shown in Figure 6, assume a screen containing at least an object 102 located relatively far away and an object 100 located relatively near. In this case, the points making up the nearer object 100 have a large Z value on average, while the points making up the farther object 102 have a small Z value on average. Here, an image (image 1) in the just-in-focus state containing the multiple objects is prepared (refer to Figure 7; the Chinese characters of the objects located at the back, middle and front mean far, intermediate and near, respectively). This image is then reduced step by step, as described with reference to Figure 5 (refer to Figure 8), and subsequently magnified, thus generating a smoothed, blurred image (image 2) (refer to Figure 9). The Z buffer described above compares the relative magnitudes of the Z values of individual points. Using these Z values, a Z value close to that of the nearer object 100 is preset. According to the preset value, the blurred image (image 2) is then overwritten on the just-in-focus image (image 1). In a Z buffer of an entertainment system currently under development, single-direction processing is implemented such that an image field of an object located beyond the point represented by the preset Z value (small Z values) is overwritten, while an image field of an object located relatively nearer (the point represented by a large Z value) remains without being overwritten. As a result of this single-direction processing, a screen can be generated on which, among multiple objects, an object located relatively farther away is in the blurred state while an object located relatively nearer is in the just-in-focus state (refer to Figure 10). The final image is produced into the original VRAM rendering area 501 by use of a pixel interpolation algorithm (for example, the bilinear filter method).
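A sketch of the single-direction overwrite that produces Figure 10 follows: only pixels whose stored Z value marks them as farther than the preset depth take the blurred color. Image and Pixel are the earlier sketches; the hardware performs this test per pixel on output, whereas here it is an explicit loop.

```c
#include <stdint.h>

/* Far-field blur composition: with the convention larger Z = nearer, a
 * stored Z below preset_z means "beyond the preset point", so that pixel
 * is overwritten with the blurred image (image 2); nearer pixels keep
 * the just-in-focus image (image 1). */
void compose_far_blur(Image frame, Image blurred, uint32_t preset_z) {
    for (int i = 0; i < frame.w * frame.h; i++)
        if (frame.p[i].z < preset_z) {
            frame.p[i].r = blurred.p[i].r;
            frame.p[i].g = blurred.p[i].g;
            frame.p[i].b = blurred.p[i].b;
        }
}
```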
At this time, by setting an appropriate Z value, only the pixels located beyond the point represented by that Z value are overwritten with the reduced-resolution image. This allows the production of images in which everything located beyond the Z-value boundary is in a blurred state. Next, a discussion is given of the case opposite to that discussed in point (1) above. (2) Here, the Z buffer is used to generate an image containing a plurality of objects, each having a unique Z value (that is, the distance to each object from the reference point is unique), where a relatively farther object is displayed in the just-in-focus state and a relatively nearer object is displayed in the blurred state. Originally, in a processing system that allows inverse setting of the Z test (a processing system that does not overwrite relatively farther pixels), an image field located nearer than the point represented by the preset Z value could simply be overwritten with an image having a reduced resolution, so that the nearest portion of the image is displayed in the blurred state. However, as described above, with the Z buffer currently being developed by the inventors, the comparison of Z values can leave an image located relatively nearer (where the Z value is large) without being overwritten while overwriting an image located relatively farther away (where the Z value is small); the Z buffer is restricted in the opposite case and cannot overwrite only the object located relatively nearer (where the Z value is large) while leaving the farther object untouched. In such a processing system, which does not allow inverse setting of Z values, the masking procedure can be implemented by using the alpha bits (refer to Figure 1A) to control whether or not overwriting is performed. This allows the near side to be blurred in the following way. Paint only the alpha mask planes of the pixels in the VRAM rendering area whose Z values place them farther away than the point represented by the preset Z value; that is, mask those pixels. Set the alpha-plane test condition so that only pixels not covered by masks are overwritten. Then overwrite the image whose resolution has been reduced. As a result of the foregoing, the pixels not covered by masks in the alpha planes, that is, only the pixels located nearer than the point represented by the preset Z value, are overwritten with the reduced-resolution image. This is described below in more detail. First, the masking procedure is carried out. In this step, the alpha value A is used to draw masks over the image, and the masks are applied to the original image only in the image field located farther away than the point represented by the preset Z value (the field where the Z value is small). In this case, the R, G and B values are not involved and only the alpha values A are written (which is also expressed as turning the mask bits on). Even with the mask bits turned on, no difference in the image is recognizable to the human eye.
In this step, only the image field of the object located relatively farther away is masked, while the image field of the object located relatively nearer is left unmasked (refer to Figure 11; note that the areas covered with mask planes are shown in red so that they can be more readily identified). Subsequently, a blurred image is overwritten on the image field of the object located relatively nearer. First, the smoothed, blurred images obtained by the step-by-step reductions and the subsequent magnification are prepared. In the original image, only the image field of the object located relatively farther away is masked, and the image field of the object located relatively nearer is not masked; the image field of the nearer object is therefore overwritten with the smoothed, blurred image. The procedure described above allows the generation of a screen on which, among multiple objects each having a unique Z value, an object located relatively farther away is displayed in the just-in-focus state and an object located relatively nearer is displayed in the blurred state (refer to Figure 12).
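The near-field counterpart, following the three mask steps listed above, can be sketched as a single pass over the frame; MASK_BIT and the types are the assumptions from the earlier sketches.

```c
#include <stdint.h>

/* Near-field blur via alpha masks: (1) clear all mask bits, (2) turn on
 * the mask bits of every pixel located beyond the preset point (stored Z
 * smaller than preset_z), (3) overwrite only unmasked pixels, that is,
 * only the near field, with the blurred image. */
void compose_near_blur(Image frame, Image blurred, uint32_t preset_z) {
    int n = frame.w * frame.h;
    for (int i = 0; i < n; i++)
        frame.p[i].a &= (uint8_t)~MASK_BIT;            /* step 1 */
    for (int i = 0; i < n; i++)
        if (frame.p[i].z < preset_z)
            frame.p[i].a |= MASK_BIT;                  /* step 2 */
    for (int i = 0; i < n; i++)
        if (!(frame.p[i].a & MASK_BIT)) {              /* step 3 */
            frame.p[i].r = blurred.p[i].r;
            frame.p[i].g = blurred.p[i].g;
            frame.p[i].b = blurred.p[i].b;
        }
}
```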
Blurred-State Profiles

In the previous step, however, the blurred image is overwritten on the original image only within the image field located nearer than the point represented by the Z value. In this case, the boundary, or profile, of the relatively nearer object is determined by the Z value with a single sharp definition: the profile of the nearer object remains crisp, and the smoothed, blurred image of the object is fitted inside that profile. Yet, for a realistic display of the nearer object in the blurred image, it is more natural for the object to be displayed with a blurred profile that expands outward.
According to this concept, a further blurring procedure is performed so that the profile, too, is blurred.
In this step, the original image of the object is reduced step by step and the reduced image is then magnified, yielding a blurred image of the object. Here, an object located relatively nearer is conceptually separated into an inner portion and a profile portion. The inner portion is located relatively near (its Z value is large) and therefore is not overwritten. The profile portion, on the other hand, slightly expanded because of the reduction and magnification, does not originally carry the Z value of the nearer object, and so it is overwritten. Using the blurred profile portion thus obtained together with the blurred inner portion described above, both the profile portion and the inner portion of the nearer object are displayed as a smoothed, blurred image. According to the bilinear filter method described above, using the Z value and the alpha value A with a preset Z value (depth), an image field of an object located farther away than the point represented by the preset Z value can be blurred, and subsequently an image field of an object located nearer than that point can be blurred.
Multi-Step Depth-of-Field Display

By performing the processing described above in multiple steps, a pseudo depth-of-field display comparable to that produced by an optical lens can be implemented.
The multi-step depth-of-field display is described in detail below. By reducing an image step by step as described with reference to Figure 3 and then magnifying the reduced images as described with reference to Figure 4, the image can be smoothed and blurred (refer to Figures 8 and 9). As shown in Figure 5, the blur level differs depending on the level of reduction. Suppose there are two images. One is an image A (with a resolution of 1/16) produced by reducing an image step by step at ratios of 1/2, 1/4, 1/8 and 1/16 and then magnifying the reduced image. The other is an image B (with a resolution of 1/4) produced by reducing an image step by step at ratios of 1/2 and 1/4 and then magnifying the reduced image. Comparing these two images, the blur level of image A is greater than that of image B. By performing reduction and magnification so as to produce multiple images, each having a unique blur level, and using these images, images showing multi-step depth of field can be obtained as described below. In a single image containing multiple objects, each having a unique Z value, an object having, for example, an intermediate Z value (representing an intermediate depth) is placed in the just-in-focus state. In the depth direction, the blur level of an object whose Z value is smaller than that value (representing a point beyond it) is sequentially increased according to its depth; conversely, the blur level of an object whose Z value is larger (representing a nearer point) is sequentially increased according to its nearness. In this way, when an object having the preset intermediate Z value is displayed in the just-in-focus state, an image can be generated whose blur level increases sequentially as objects depart, in either the farther or the nearer direction, from the point represented by that Z value. Figure 13 shows the principles of a method for presenting multi-step depth of field. As shown in Figure 13, in the original image, the image field in area A, between near Z[0] and far Z[0], is in the just-in-focus state. In a first procedural step, the original image is sequentially reduced at ratios of 1/2, 1/4 and 1/8; the reduced image is then magnified, and the image thus significantly blurred (blur level 2) is overwritten on the image fields in areas D, which are located nearer than near Z[2] and farther than far Z[2].
In a second procedural step, the original image is sequentially reduced at ratios of 1/2 and 1/4; the reduced image is then magnified, and the image thus blurred at an intermediate level (blur level 1) is overwritten on the image fields in areas C, which are located between near Z[2] and near Z[1] and between far Z[1] and far Z[2]. In a third procedural step, the original image is reduced at a ratio of 1/2; the reduced image is then magnified, and the image thus slightly blurred (blur level 0) is overwritten on the image fields in areas B, which are located between near Z[1] and near Z[0] and between far Z[0] and far Z[1]. This allows the generation of images that present objects in increasingly blurred states, step by step, in two directions, that is, in the depth direction and in the nearer direction from the preset just-in-focus position. The number of blur steps in Figure 13 can be set optionally to suit the characteristics of the images to be produced. Likewise, the blur levels of the individual steps can be set optionally according to factors such as the producer's optical knowledge (approximate characteristics of optical levels, image angles and other characteristics), and the lengths of the individual steps can also be set optionally.
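Conceptually, the band layout of Figure 13 assigns each pixel a blur level from its Z value. The sketch below restates that assignment; it is not the patent's pass-by-pass procedure (which the flowcharts below describe), and the near_z[]/far_z[] threshold arrays follow the assumed convention that larger Z means nearer.

```c
#include <stdint.h>

/* Return -1 for the just-in-focus area A (between far_z[0] and near_z[0]),
 * 0 for areas B, 1 for areas C, and so on out to the deepest level. */
int blur_level_for_z(uint32_t z, const uint32_t near_z[],
                     const uint32_t far_z[], int steps) {
    for (int i = 0; i < steps; i++)
        if (z >= far_z[i] && z <= near_z[i])   /* inside band i's bounds */
            return i - 1;
    return steps - 1;                          /* outermost areas (D)    */
}
```

A compositor would then, for each pixel, take the color from the pre-blurred image of the returned level, or keep the original color when the level is -1.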
Changing the Just-in-Focus Position

The just-in-focus position in Figure 13 can be set at a point of any desired depth from the reference point, that is, at any desired Z value. Also, as in Figure 13, the depth direction and the opposite (nearer) direction may be blurred symmetrically or asymmetrically, or in one direction only. By serially displaying multiple images in which the just-in-focus position is varied serially, images whose depth of field varies serially can be obtained. Such images closely resemble the case in which test pieces are observed under a microscope while the focal point is shifted serially to match portions of the test pieces at different depths.
Flowcharts

Figure 14 is a flowchart of the generation of a still image of an object in which the image is displayed in increasingly blurred states sequentially in two directions, the depth direction and the nearer direction, from a preset just-in-focus position. In the flowchart, FL represents the depth of the filtering processing (note that FL > 0). The filtering processing depth represents the number of filters applied from the just-in-focus area (far Z[0] to near Z[0]) shown in Figure 13 toward either the farther or the nearer direction, that is, the depth level of the out-of-focus states. Far Z[0, ..., FL-1] represents the filter positions to be applied beyond the just-in-focus position, with far Z[FL-1] indicating that its filter is located beyond far Z[0]. Similarly, near Z[0, ..., FL-1] represents the filter positions to be applied nearer than the just-in-focus position. The area sandwiched between near Z[0] and far Z[0] is not blurred, since it is in the just-in-focus state (the just-in-focus area). As example definitions, FL = 1 in the one-step depth-of-field display, and the smallest number of steps in the multi-step depth-of-field display is two, in which case FL = 2. These example definitions are used here to keep the explanation concise. Step S100 determines whether FL, representing the filter processing depth, is positive. If FL is positive, processing proceeds to the next step; if not, the processing ends. For example, since FL = 1 in the one-step depth-of-field display and FL = 2 in the simplest multi-step depth-of-field display, processing proceeds to step S200 in both cases. Step S200 decrements FL, which represents the filtering processing depth, by 1. For example, FL = 0 is set in the one-step depth-of-field display, while FL = 1 is set in the two-step depth-of-field display. Step S300 sets the level equal to FL. Therefore, for example, level = 0 in the one-step depth-of-field display, while level = 1 in the first pass of the two-step depth-of-field display. Step S400 executes the PROC1 processing. After executing PROC1, control returns to step S100. For example, since FL = 0 in the one-step depth-of-field display, processing then ends. In the case of the two-step depth-of-field display, since FL = 1 after the first pass, step S200 sets FL = 0, step S300 sets level = 0, and step S400 executes the PROC1 processing again with these values. After this re-execution of PROC1, processing returns to step S100 and then ends. Figure 15 is a flowchart of the PROC1 processing of step S400 shown in Figure 14. The flowchart covers the filtering processing steps, looping until a count value M exceeds the level. Step S410 resets the count value M to zero. Step S420 compares the count value M with the level. If M is equal to or less than the level, processing proceeds to step S430; if M is greater than the level, processing proceeds to step S422 in Figure 16. For example, in the one-step depth-of-field display, since level = 0 and M starts at 0, the processing executes one iteration of the loop and then proceeds to the processing of Figure 16. In the two-step depth-of-field display, since level = 1, the processing proceeds to step S430 again.
Step S430 determines whether the count value M is zero. If M is zero, processing proceeds to step S440; otherwise it proceeds to step S450. Thus the first iteration proceeds to step S440, and subsequent iterations proceed to step S450. Step S440 reduces the VRAM rendering area vertically and horizontally at a ratio of 1/2 and sends the resulting image to a work area M (work area 0 in this case). This is done because, on the first iteration, the image is stored in the VRAM rendering area. For example, in the one-step depth-of-field display, the processing proceeds to step S440, reducing the VRAM rendering area vertically and horizontally at a ratio of 1/2, and sends the resulting image to work area 0. Step S450 reduces work area M-1 vertically and horizontally at a ratio of 1/2 and sends the resulting image to work area M. For example, in the two-step depth-of-field display, the processing proceeds to step S440 in the first iteration of the loop and to step S450 in the second iteration; step S450 then reduces the 1/2-reduced image stored in work area 0 vertically and horizontally at a ratio of 1/2 and sends the 1/4-reduced image to work area 1. Step S460 increases the count value M by one, and control returns to step S420. Figure 16 is a flowchart of the image overwriting procedure. The procedure shown there first uses the Z value described above to blur an object located beyond the point represented by the preset Z value; subsequently it uses the alpha bits (mask bits) to blur an object located nearer than the point represented by the preset Z value. Step S422 magnifies work area M-1 (which contains the finally reduced image) back to the original image size and outputs the resulting image so that the image field located beyond the point represented by far Z[FL] in the VRAM rendering area is overwritten. For example, in the one-step depth-of-field display, since FL = 0 and M = 1, step S422 magnifies work area 0 to the original image size to make the image slightly blurred (a resolution of 1/2), so that the image field located farther away than the point represented by far Z[0] in the rendering area is overwritten and blurred.
In the pass of the loop with FL = 1 and M = 2 in the two-step depth-of-field display, work area 1 is magnified back to the original image size to blur the image more significantly (a blur from an image resolution of 1/4), so that the image field located beyond the point represented by far Z[1] is overwritten and blurred. Step S424 always (unconditionally) clears the mask bits over the whole VRAM rendering area; this is the preliminary step required before the mask bits are used. Step S426 paints the alpha planes in the VRAM rendering area so that the mask bits of all pixels located beyond the point represented by near Z[FL] are turned on. For example, in the one-step depth-of-field display, since FL = 0 and M = 1, the pixels located beyond the point represented by near Z[0] in the VRAM rendering area are masked so that they are not overwritten. In the pass with FL = 1 and M = 2 in the two-step depth-of-field display, the pixels located beyond the point represented by near Z[1] in the VRAM rendering area are masked, thus preventing the overwritten area from spreading farther. Step S428 magnifies work area M-1 to an image of the original size and outputs the resulting image so that only the pixels whose mask bits are not turned on in the VRAM rendering area are overwritten. In this way, the unmasked area is overwritten, and a blurred image of a nearer object is overwritten onto it. Control then returns to the main flow of Figure 14 (step S100). As described above, the procedure of the flowcharts shown in Figures 14 to 16 allows the generation of a still image in which the image field near the point represented by the preset Z value is displayed in the just-in-focus state, while, in the same still image, the image field located nearer than that point is blurred sequentially in a manner corresponding to its distance from the reference point, and the image field located beyond that point is likewise blurred sequentially in a manner corresponding to its distance from the reference point. By generating a number of such images while varying the preset Z value step by step and displaying the multiple images serially on a display (monitor), images can be displayed in which the just-in-focus state varies serially. This makes it possible to provide realistic simulation effects.
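As a compact transliteration of Figures 14 to 16, the whole procedure might read as follows. This is a sketch under the assumptions of the earlier sketches: halve_bilinear and magnify_bilinear are the routines sketched before (declared here as prototypes), MASK_BIT is the assumed mask bit, and the image dimensions are assumed divisible by the reduction factors.

```c
#include <stdint.h>
#include <stdlib.h>

Image halve_bilinear(Image src);                    /* sketched earlier */
Image magnify_bilinear(Image src, int w, int h);    /* sketched earlier */

void depth_of_field(Image frame, int fl,
                    const uint32_t far_z[], const uint32_t near_z[]) {
    while (fl > 0) {                                   /* S100 */
        fl -= 1;                                       /* S200 */
        int level = fl;                                /* S300 */
        /* PROC1 (Figure 15): step-by-step 1/2 reductions, S410 to S460 */
        Image work = frame;
        for (int m = 0; m <= level; m++) {             /* S420 */
            Image next = halve_bilinear(work);         /* S440 / S450 */
            if (work.p != frame.p) free(work.p);
            work = next;
        }
        /* Figure 16: overwrite procedure */
        Image blur = magnify_bilinear(work, frame.w, frame.h);
        free(work.p);
        for (int i = 0; i < frame.w * frame.h; i++) {
            Pixel *o = &frame.p[i];
            if (o->z < far_z[fl]) {                    /* S422: far side   */
                o->r = blur.p[i].r; o->g = blur.p[i].g; o->b = blur.p[i].b;
            }
            o->a &= (uint8_t)~MASK_BIT;                /* S424: clear mask */
            if (o->z < near_z[fl])
                o->a |= MASK_BIT;                      /* S426: mask the far field */
            if (!(o->a & MASK_BIT)) {                  /* S428: near side  */
                o->r = blur.p[i].r; o->g = blur.p[i].g; o->b = blur.p[i].b;
            }
        }
        free(blur.p);
    }
}
```

Note that each pass reduces the current contents of the frame, so later, weaker-blur passes act on fields already blurred by earlier passes, and the in-focus band between far_z[fl] and near_z[fl] is protected in every pass.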
Applicability to an Actual Data Processing Program

The depth-of-field display method can be applied to an actual data processing program in several ways, for example as described below. (1) The depth-of-field display method can be executed on information processing systems, such as entertainment systems, including personal computers, image processing computers and TV game apparatus, that have a Z buffer and bilinear filters. The above embodiment has been described with reference to processing carried out in the VRAM, but the CPU of a computer can also perform the processing. When the CPU is used to implement the method, because the processing related to the Z value can be determined freely, there is no restriction such that the Z buffer works only in a single direction, as described above. Therefore, the CPU does not require the masks (alpha planes) used in the embodiment to blur an image field located nearer than the point represented by the preset Z value. (2) At present, the display method that serially varies the depth of field in real time can be implemented by using a combined VRAM in a rendering apparatus. In currently available personal computers, the speeds of data transfer between main memory and the VRAM are too low to implement the method of the present invention; current techniques still lag in the development of buses for high-speed transfer of image data (for example, R, G and B data at 640 x 480 points) between graphics circuits.
Similarly, in entertainment systems such as TV game apparatuses, although reducing an image within the VRAM rendering area poses no problem, systems in which the texture areas are physically separate from the VRAM are not suitable for implementing the method according to the present invention. At the present technical level, the described display method that varies the depth of field serially can be implemented only on a system having a VRAM on the graphics processing unit (GPU) circuit, with a texture area, a rendering area, and a work area in that VRAM (that is, the three areas share the same memory space). Such a system can implement the described display method because the GPU can perform the serially varying depth-of-field processing at high speed. The depth-of-field display method of the present invention should become implementable not only on the system using the combined VRAM but also on other regular processors once the required high-speed data transfer becomes available through progress in the related technical fields. Therefore, it should be understood that the present invention can be implemented not only with the system using the combined VRAM but also with other processors, such as those of personal computers.

(3) The depth-of-field display method can be provided, for example, to software manufacturers as a library within software tools. This allows the creation of software that provides game screens showing depth of field simply by defining parameters such as Z values, FL values, level values, and Z-value variations; a hypothetical usage sketch is given below.

According to the present invention, a rendering apparatus capable of showing depth of field can be provided. In addition, the present invention can provide a method for displaying depth of field that gives a sense of distance from the reference point to objects. Moreover, the present invention can provide a storage medium for storing a data processing program that displays depth of field, giving a sense of distance from the reference point to objects on a two-dimensional screen.

The present invention has been described above with reference to what is currently considered to be the preferred embodiment. However, it should be understood that the invention is not limited to the described embodiment and modifications; on the contrary, the invention is intended to cover equivalent modifications and arrangements included within the spirit and scope of the invention.
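To illustrate point (3), the following sketches how such a library might be driven by the parameters named above. Every identifier here (DofParams, the dof namespace, the field names) is invented for illustration; the patent does not define an API.

```cpp
// Hypothetical parameter block for a depth-of-field library: a game defines
// only these values and the library performs the reductions, magnifications,
// and overwrites described in the embodiment.
#include <vector>

struct DofParams {
    std::vector<float> farZ;   // far threshold per blur level (indexed by FL)
    std::vector<float> nearZ;  // near threshold per blur level (indexed by FL)
    int levels;                // number of sequential reductions (blur levels)
    float zStepPerFrame;       // serial variation of the preset Z value
};

int main() {
    DofParams p;
    p.farZ = {100.0f, 200.0f};
    p.nearZ = {50.0f, 25.0f};
    p.levels = 2;
    p.zStepPerFrame = 5.0f;    // shifts the just-in-focus plane each image
    // dof::apply(frame, zBuffer, p);  // hypothetical per-frame library call
    return 0;
}
```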

Claims (35)

NOVELTY OF THE INVENTION
CLAIMS

1. An image rendering apparatus comprising a device for generating an image in an out-of-focus state by using a pixel interpolation algorithm to reduce an original image in a just-in-focus state and subsequently magnify the reduced image.

2. An image rendering apparatus comprising a Z buffer for setting the pixel depth direction and a pixel interpolation algorithm, further comprising: a device for presetting a Z value of said Z buffer; a device for generating an image in an out-of-focus state by using a pixel interpolation algorithm to reduce an original image in a just-in-focus state and subsequently magnify the reduced image; and a device for overwriting said image in the out-of-focus state on the original image using said preset Z value; wherein said image rendering apparatus converts an image field of an object corresponding to the point represented by said Z value to the just-in-focus state and concurrently converts an image field of an object other than said object to an out-of-focus state, thus showing depth of field.
3. An image rendering apparatus according to claim 2, further characterized in that said device, which generates the image in the out-of-focus state by reducing the original image and then magnifying the reduced image, sequentially reduces the original image and subsequently magnifies the reduced images, thus generating the out-of-focus images.
4. An image rendering apparatus according to claim 2, further characterized in that said pixel interpolation algorithm is a bilinear filter method.
5. An image rendering apparatus according to claim 2, further characterized in that it comprises alpha planes for selectively covering pixels; wherein said image rendering apparatus uses said preset Z value to sequentially reduce the original image, to overwrite the out-of-focus, blurred images obtained by magnifying the reduced images on the original image, and to convert image fields of objects located farther than a point represented by the Z value to out-of-focus states; and said apparatus uses said alpha planes to cover the image fields of the objects located farther than the point represented by said Z value, to subsequently overwrite said out-of-focus, blurred images on the original image, and to convert image fields located nearer than the point represented by the Z value to out-of-focus states.
6. An image rendering apparatus according to claim 2, further characterized in that it comprises a video RAM (VRAM) having an image rendering area and a texture area in the same memory space, wherein the image rendering apparatus sequentially reduces the original image in said VRAM and subsequently magnifies the reduced images, thus generating the out-of-focus, blurred images.
7. An image rendering apparatus comprising a Z buffer for setting the pixel depth direction and a pixel interpolation algorithm, further comprising: a device for presetting a Z value of said Z buffer; a device for generating multiple out-of-focus images, each having a unique out-of-focus level, by using said pixel interpolation algorithm to reduce an original image in a just-in-focus state to images each having a unique reduction ratio and subsequently to magnify the images thus reduced; and a device for using said preset Z value to overwrite, on the original image in the just-in-focus state, the out-of-focus images whose out-of-focus levels increase correspondingly to an increase in their positional distances from a point represented by the Z value; wherein the image rendering apparatus converts an image field of an object located at a point corresponding to the Z value to the just-in-focus state and concurrently converts an image field of an object other than said object to an out-of-focus state whose out-of-focus level increases correspondingly to an increase in its positional distance from the point represented by the Z value, thus generating images showing depth of field.
8. An image generation method comprising the steps of preparing an original image in a just-in-focus state, and using a pixel interpolation algorithm to reduce said original image and to magnify the reduced image, thus generating an image in an out-of-focus state.
9. An image generation method comprising the steps of preparing an original image in a just-in-focus state, and using a pixel interpolation algorithm to sequentially reduce said original image and to magnify the reduced images, thus generating images in out-of-focus states.
10. An image generation method comprising the steps of preparing an original image in a just-in-focus state, and using a pixel interpolation algorithm to sequentially reduce said original image to images each having a unique reduction ratio, thus generating a plurality of out-of-focus images each having a unique blur level.
11. A depth-of-field display method comprising the steps of using a pixel interpolation algorithm to reduce an original image and subsequently to magnify the reduced image, thus generating a blurred, out-of-focus image; and using a Z buffer capable of controlling the distance in the depth direction from a reference point, thus overwriting said out-of-focus image on the original image.
12. A depth-of-field display method according to claim 11, further characterized in that said steps, which generate the out-of-focus image by reducing the original image and magnifying the reduced image, generate said out-of-focus image by sequentially reducing the original image and then magnifying the reduced images.
13. A depth-of-field display method according to claim 11, further characterized in that said out-of-focus image is overwritten on an image field of an object located farther than a point represented by a preset Z value, in accordance with said steps of overwriting said out-of-focus image on the original image.
14. A depth-of-field display method according to claim 11, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
15. A depth-of-field display method according to claim 11, further characterized in that said depth-of-field display method is executed in a video RAM (VRAM) having an image rendering area and a texture area in the same memory space.
16. A depth-of-field display method comprising the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently to magnify the reduced images, thus generating out-of-focus images; and using alpha planes, which have a function of covering image fields of objects located farther than a point represented by a preset Z value, to overwrite said out-of-focus images on the original image that has been covered, thus converting image fields that have not been covered to out-of-focus states.
17. A depth-of-field display method according to claim 16, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
18. A depth-of-field display method according to claim 16, further characterized in that said depth-of-field display method is executed in a video RAM (VRAM) having an image rendering area and a texture area in the same memory space.
19. A depth-of-field display method according to claim 16, further characterized by the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently to magnify the reduced images, thus generating images in out-of-focus states; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite said images in out-of-focus states on the original image, thus converting image fields of objects located farther than a point represented by a Z value to out-of-focus states; and using alpha planes, which have a function of covering the image fields of objects located farther than the point represented by said Z value, to overwrite said images in out-of-focus states on the original image that has been covered, thus converting image fields that have not been covered to out-of-focus states.
20. A depth-of-field display method according to claim 16, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
21. A depth-of-field display method according to claim 16, further characterized in that it comprises the steps of converting the image of objects located at a position corresponding to a Z value to a just-in-focus state, and overwriting images whose levels of out-of-focus states are sequentially increased correspondingly to an increase in their positional distances, toward one of a farther direction and a nearer direction, from a point represented by said Z value; using a pixel interpolation algorithm to perform sequential reductions of an original image and subsequently to magnify the reduced images, thus generating the images in out-of-focus states; and controlling the levels of said out-of-focus states according to the levels of said sequential reductions.
22. A depth-of-field display method according to claim 21, further characterized in that said preset Z value is varied serially from image to image.
23. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of preparing an original image in a just-in-focus state, and using a pixel interpolation algorithm to sequentially reduce said original image and to individually magnify the reduced images, thus generating images in out-of-focus states.
24. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of preparing an original image in a just-in-focus state, and using a pixel interpolation algorithm to sequentially reduce said original image to images each having a unique reduction ratio and to individually magnify the reduced images each having a unique reduction ratio, thus generating a plurality of out-of-focus images each having a unique blur level.
25. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently to magnify the reduced images, thus generating images in out-of-focus states; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite said out-of-focus images on the original image, thus converting image fields of objects located farther than a point represented by a Z value to out-of-focus states; and using alpha planes, which have a covering function, to cover the image fields of the objects located farther than the point represented by said Z value and to overwrite said images in the out-of-focus states on the original image that has been covered, thus converting image fields that have not been covered to out-of-focus states.
26. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of using a pixel interpolation algorithm to reduce an original image and subsequently to magnify the reduced image, thus generating a blurred, out-of-focus image; and using a Z buffer capable of controlling the distance in the depth direction from a reference point, thus overwriting said out-of-focus image on the original image.
27. A storage medium according to claim 26, further characterized in that said steps, which generate the out-of-focus image by reducing the original image and magnifying the reduced image, generate said out-of-focus image by sequentially reducing the original image and then magnifying the reduced images.
28. A storage medium according to claim 26, further characterized in that said out-of-focus image is overwritten on an image field of an object located farther than a point represented by a preset Z value, in accordance with said steps of overwriting said out-of-focus image on the original image.
29. A storage medium according to claim 26, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
30. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently to magnify the reduced images, thus generating out-of-focus images; and using alpha planes, which have a covering function, to cover the image fields of objects located farther than a point represented by a preset Z value and to overwrite said images in the out-of-focus states on the original image that has been covered, thus converting image fields that have not been covered to out-of-focus states.
31. A storage medium according to claim 30, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
32. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of using a pixel interpolation algorithm to sequentially reduce an original image and subsequently to magnify the reduced images, thus generating out-of-focus images; using a Z buffer capable of controlling the distance in the depth direction from a reference point to overwrite said out-of-focus images on the original image, thus converting image fields of objects located farther than a point represented by a Z value to out-of-focus states; and using alpha planes, which have a function of covering image fields of objects located farther than the point represented by said Z value, to overwrite said out-of-focus images on the original image, thus converting image fields that have not been covered to out-of-focus states.
33. A storage medium according to claim 32, further characterized in that the pixel interpolation algorithm is a bilinear filter method.
34. A storage medium storing an image generation program so as to be readable and executable by a computer, wherein said image generation program comprises the steps of converting the image of objects located at a position corresponding to a preset Z value to a just-in-focus state, and overwriting images whose levels of out-of-focus states are sequentially increased correspondingly to an increase in their positional distances, toward one of a farther direction and a nearer direction, from a point represented by said Z value; using a pixel interpolation algorithm to perform sequential reductions of an original image and subsequently to magnify the reduced images, thus generating images in out-of-focus states; and controlling the levels of said out-of-focus states according to the levels of said sequential reductions.
35. A storage medium according to claim 34, further characterized in that said Z value is varied serially from image to image.
MXPA/A/2001/008900A 1999-03-01 2001-09-03 Image rendering method and apparatus MXPA01008900A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP11/53397 1999-03-01

Publications (1)

Publication Number Publication Date
MXPA01008900A 2002-05-09
