WO2007013492A1 - Multilayer reflection shading image creating method and computer - Google Patents
Multilayer reflection shading image creating method and computer
- Publication number
- WO2007013492A1 (PCT/JP2006/314737)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- color
- layer
- blending
- surface layer
- coefficient
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
Definitions
- the present invention relates to a multilayer reflection shading image generation method for computer graphics, a computer, and the like.
- more specifically, the present invention relates to a multilayer reflection shading image generation method and a computer for real-time 3D computer graphics that efficiently perform luminance calculation in consideration of light scattering at the surface of a translucent object (for example, human skin or marble).
- Non-Patent Document 1: A Practical Model for Subsurface Light Transport, Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, Pat Hanrahan, Stanford University, in the SIGGRAPH conference proceedings
- Non-Patent Document 2: An Inexpensive BRDF Model for Physically-based Rendering, Christophe Schlick, Eurographics '94, Computer Graphics Forum, v13, n3, p233-246, 1994
Disclosure of the invention
- An object of the present invention is to provide a multilayer reflection shading image generation method for computer graphics, a computer, and the like that can calculate shadows and the like at high speed.
- the present invention combines the blend coefficients for synthesizing the surface colors with the shading results of a single light source or a plurality of light sources for a specific pixel, and determines the blended surface colors for at least two layers.
- a computer receives the shading results of a single light source or a plurality of light sources for a specific pixel, and a surface layer setting value indicating to which surface layer each shading result belongs. Based on the surface layer setting value, the shading results are distributed to the surface layers to generate at least two surface colors, and the surface colors for at least two layers are combined with the blend coefficients for synthesizing the surface colors.
- the present invention relates to an image generation method for generating image data for computer graphics related to the specific pixel.
- another aspect of the present invention is an image generation apparatus for computer graphics comprising:
- layer color generation means for receiving the shading results of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs, and generating at least two surface colors;
- layer coefficient blending means for receiving the surface colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, combining the blend coefficients with each of the surface colors, and generating blended surface colors for at least two layers; and
- color blending means for synthesizing the blended surface colors for at least two layers output from the layer coefficient blending means into a single color.
- the layer color generation means receives the shading results of a single light source or a plurality of light sources for the specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs, and generates at least two surface colors; the layer coefficient blending means receives the at least two surface colors generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, combines the blend coefficients with each of the surface colors, and generates blended surface colors for at least two layers; and the color blending means synthesizes the blended surface colors for at least two layers output from the layer coefficient blending means
- into a single color, so that the present invention relates to an image generation device that generates image data for computer graphics. Furthermore, other aspects of the present invention relate to a computer, a game machine, a mobile phone, a navigation system, a computer program, and a computer-readable recording medium storing the computer program, each using such an image generation apparatus.
- according to the present invention, blend coefficients for synthesizing the surface colors are combined with the shading results of a single light source or a plurality of light sources for a specific pixel to generate blended surface colors for at least two layers, which are then synthesized into a single color.
- FIG. 1 is a block diagram showing a basic configuration of an image generation apparatus according to the present invention.
- FIG. 2 is a flowchart for explaining the image generation method of the present invention.
- FIG. 3 is a block diagram showing a basic configuration of a computer system according to an embodiment of the present invention.
- FIG. 4 is a block diagram showing a graphics device according to an embodiment of the present invention.
- FIG. 5 is a block diagram showing shading means according to an embodiment of the present invention.
- FIG. 6 is a block diagram for explaining layer color generation means and layer coefficient blending means.
- FIG. 7 is a block diagram showing an example of layer color generation means (or part thereof).
- FIG. 8 is a block diagram showing a configuration example of layer coefficient blending means.
- FIG. 9 is a block diagram showing an example of color blending means.
- FIG. 10 is a block diagram showing the circuit configuration of the layer 0 blend inside the color blending means.
- FIG. 11 is a block diagram of the image generation apparatus in which the shading results and blend coefficients for each light source output from the shading means are processed as serial data and the number of surface layers is 4.
- FIG. 12 is a diagram showing a flow of blending processing of one pixel.
- FIG. 13 is a block diagram showing an embodiment (computer) of the present invention.
- FIG. 14 is a block diagram of an embodiment (game machine) according to the present invention.
- FIG. 15 shows an embodiment of the present invention (a mobile phone with a computer graphic function).
- FIG. 16 is a block diagram of an embodiment (navigation system) of the present invention.
- FIG. 1 is a block diagram showing the basic configuration of the image generation apparatus of the present invention.
- the image generation device (1) of the present invention is a computer for computer graphics or a device constituting such a computer, and comprises:
- layer color generation means (2) for receiving the shading results of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs, and generating at least two surface colors;
- layer coefficient blending means (3) for receiving the surface colors generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, combining the blend coefficients with each of the surface colors, and generating blended surface colors for at least two layers; and
- color blending means (4) for synthesizing the blended surface colors for at least two layers output from the layer coefficient blending means into a single color.
- the shading result means, for example, the result of a luminance calculation such as diffuse reflection (diffuse), specular reflection (specular), or indirect light (ambient) from each light source; that is, the luminance calculation result for each light source.
- for example, with two light sources there are four types: the diffuse from the first light source, the specular from the first light source, the diffuse from the second light source, and the specular from the second light source.
- in this way, the shading results may be of multiple types.
- the light source may be a parallel light source, a point light source, or a spot light source.
- the relationship between the light sources in the multiple light sources is arbitrary.
- the surface layer means each layer of the object surface having different characteristics with respect to incident light, such as reflectance and refractive index.
- a surface layer is also simply called a layer.
- “Surface layer setting value” means a setting value or a distribution command for distributing the shading result to the surface layer.
- the surface layer setting value is information that indicates which shading result is to be assigned among multiple operation results that are shading results for each surface layer.
- the surface layer setting value may be obtained by a program that causes the computer to assign predetermined shading results among a plurality of calculation results that are shading results for each surface layer.
- the surface layer setting value may also be obtained by a circuit that assigns predetermined shading results among a plurality of calculation results, a memory that stores such assignment information, and the like. For example, when there are eight shading results A to H and three surface layers, A, B, D, F, and H may be assigned to the first surface layer; A, C, E, and G to the second surface layer; and B and H to the third surface layer.
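As a rough illustration of this distribution step, the following is a minimal sketch; the labels, values, and layer assignments are hypothetical, following the A-to-H example above:

```python
# Minimal sketch of distributing shading results to surface layers using a
# surface layer setting value. Labels A-H and all values are assumptions.

shading_results = {  # per-light-source luminance results for one pixel
    "A": 0.10, "B": 0.05, "C": 0.20, "D": 0.08,
    "E": 0.12, "F": 0.03, "G": 0.07, "H": 0.02,
}

# Surface layer setting value: which shading results belong to which layer.
layer_setting = {
    "layer0": ["A", "B", "D", "F", "H"],
    "layer1": ["A", "C", "E", "G"],
    "layer2": ["B", "H"],
}

# Layer color generation: sum the shading results assigned to each layer.
layer_colors = {layer: sum(shading_results[k] for k in keys)
                for layer, keys in layer_setting.items()}
print(layer_colors)  # e.g. {'layer0': 0.28, 'layer1': 0.49, 'layer2': 0.07}
```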
- the blend coefficient is used to obtain the blended surface layer color by multiplying it with the shading result; it means, for example, an inner product value of vectors.
- the blend coefficient may be obtained, for example, by calculating the inner product of various vectors included in the vertex data with an inner product calculation circuit. In addition, since the inner products of various vectors are obtained when the shading means described later performs its calculations, those inner products may be used as appropriate as blend coefficients.
- the blend coefficient may be input to the layer coefficient blending means described later via a bus. In addition, the blend coefficient may be stored in a memory or the like and read as appropriate.
- the “single color” to be combined means a calculation result such as luminance at a pixel of a multilayer reflection shading image.
- each constituent unit constituting the image generating apparatus (1) according to the first embodiment of the present invention will be described.
- the layer color generation means (2) is a means for receiving the shading results of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs, and generating surface colors for at least two layers. "Generating" a surface color includes selecting shading results to obtain the surface color.
- the shading result of a single light source or multiple light sources for a specific pixel is obtained by the shading means in the computer, which will be described later, and is transmitted to the layer color generation means (2) via, for example, a bus.
- the shading result is preferably a shading result considering light from multiple light sources.
- the CPU can read the surface layer setting value from the memory that stores it or from the program in the main memory, and input it to the layer color generation means via the bus.
- the layer color generation means (2) receives a shading result of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer the shading result belongs, and receives at least two surface layers. It may be implemented by hardware such as a circuit for generating colors.
- the layer color generation means (2) may also be implemented by software, such as a program that causes a computer to function as a means for receiving the shading results of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs and generating surface colors for at least two layers, or by both hardware and software.
- examples of the hardware include a control unit, a calculation unit, a storage unit, an input/output unit, and a system bus that connects them. When predetermined information is input to the input unit, the calculation unit reads the input data, the data stored in the storage unit, the surface layer setting value, and the like in response to a command from the control unit.
- the memory is used as a work area and predetermined calculations are performed. The calculation results are, for example, temporarily stored in the memory and then output from the output unit. These data may be transmitted via a bus, for example.
- examples of the components of the layer color generation means include light selection setting means for outputting an assignment command that assigns the input shading results to each surface layer using the surface layer setting value,
- and layer color selection means for selecting the input shading results for each surface layer in accordance with the assignment command. That is, the shading results of a single light source or a plurality of light sources for a specific pixel and the surface layer setting value indicating to which surface layer each shading result belongs are received, and the light selection setting means assigns the input shading results to the surface layers
- using the setting value; the layer color selection means then selects the shading results input for each surface layer in accordance with the assignment command from the light selection setting means, thereby generating surface colors for at least two layers.
- for example, when eight shading results A to H are input from the shading means, the surface layer setting values are read from the memory; A, B, D, F, and H are assigned to the first surface layer, A, C, E, and G to the second surface layer, and B and H to the third surface layer; and each layer color is generated (selected) accordingly.
- the layer coefficient blending means (3) is a means for receiving the surface colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, combining the blend coefficients with each of the surface colors, and generating blended surface colors for at least two layers.
- the layer coefficient blending means (3) may be implemented by hardware, such as a circuit that receives the surface colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, combines the blend coefficients with each of the surface colors, and generates the blended surface colors for at least two layers.
- the layer coefficient blending means (3) may also be implemented by software, such as a program for causing a computer to function as a means for receiving the at least two surface colors generated by the layer color generation means and the blend coefficients for synthesizing the surface colors and generating blended surface colors for at least two layers by combining the blend coefficients, or by both hardware and software.
- the hardware includes a control unit, arithmetic unit, storage unit, input / output unit, and a system bus that connects them.
- the calculation unit reads out the input data and data such as the blend coefficients stored in the storage unit.
- the memory is used as a work area and predetermined calculations are performed. Specific operations include multiplication of the blend coefficient and the surface color, and addition that sums several multiplication results.
- the calculation result is temporarily stored in the memory, for example, and then output from the output unit. These data may be transmitted via a bus, for example.
- Color blending means (4) is a means for synthesizing at least two layers of the blended surface layer color output from the layer coefficient blending means into a single color. This determines the color or brightness at a pixel.
- the color blending means (4) may be implemented by hardware, such as a circuit for synthesizing the blended surface colors for at least two layers output from the layer coefficient blending means into a single color; by software, such as a program for causing a computer to function as a means for synthesizing the blended surface colors for at least two layers output from the layer coefficient blending means into a single color; or by both.
- examples of the hardware include a control unit, a calculation unit, a storage unit, an input/output unit, and a system bus that connects them. When predetermined information is input from the input unit, the calculation unit receives an instruction from the control unit, reads the input data and the data stored in the storage unit, uses the memory of the storage unit as a work area, and performs predetermined calculations. The calculation results are, for example, temporarily stored in a memory and then output from the output unit. These data may be transmitted via a bus, for example.
- the image generation device (1) of the present invention obtains image data as follows, for example.
- FIG. 2 is a flowchart for explaining the image generation method of the present invention.
- the layer color generation means receives the shading results of a single light source or a plurality of light sources for the specific pixel and the surface layer setting value indicating to which surface layer each shading result belongs (step 1). Specifically, for example, it receives the shading results, such as diffuse and specular, obtained by the shading means via a bus.
- the CPU, in response to a command from a program stored in the main memory, reads the surface layer setting value stored in the memory or the like and transmits it to the layer color generation means via a bus or the like; the layer color generation means receives the surface layer setting value transmitted in this way.
- the layer color generation means generates surface colors for at least two layers (for example, three layers) (step 2). Specifically, an assignment command for assigning to each surface layer is output using the surface layer setting value, and the input shading results are selected for each surface layer according to the assignment command, thereby generating the surface color for each surface layer (for example, containing one or more shading results).
- the layer coefficient blending means receives the surface colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, and combines the blend coefficients with each of the surface colors to generate blended surface colors for at least two layers (step 3).
- the blend coefficient is stored in a memory as, for example, an inner product value of predetermined vectors; the CPU, in response to a program command stored in the main memory, reads the blend coefficient stored in the memory or the like and transmits it to the layer coefficient blending means via a bus.
- the layer coefficient blending means generates the blended surface color by multiplying the received blend coefficient with the surface color generated by the layer color generation means.
- finally, the color blending means synthesizes the blended surface colors for at least two layers into a single color (step 4). For example, the single color may be synthesized by calculating the sum of the surface colors using an adder circuit, an adder, or an addition program.
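Putting steps 1 to 4 together, the following is a hedged sketch of the whole method for one pixel; the three-layer weighting with α0 and α1 anticipates the embodiment described later, and all names and values are illustrative assumptions:

```python
# Sketch of steps 1-4 for one pixel with three surface layers.
# All inputs (shading results, layer assignments, alphas) are assumptions.

def generate_layer_colors(shading_results, layer_setting):
    """Steps 1-2: distribute the shading results to the layers and sum them."""
    return [sum(shading_results[i] for i in idx) for idx in layer_setting]

def blend_layers(layer_colors, a0, a1):
    """Step 3: combine each layer color with its blend coefficient."""
    weights = [a0, (1 - a0) * a1, (1 - a0) * (1 - a1)]  # per-layer weights
    return [w * c for w, c in zip(weights, layer_colors)]

def combine(blended):
    """Step 4: synthesize the blended layer colors into a single color."""
    return sum(blended)

shading_results = [0.30, 0.10, 0.25, 0.05]  # e.g. diffuse/specular of 2 lights
layer_setting = [[0, 1], [0, 2], [3]]       # result indices assigned per layer
layer_colors = generate_layer_colors(shading_results, layer_setting)
print(combine(blend_layers(layer_colors, a0=0.6, a1=0.4)))
```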
- FIG. 3 is a block diagram showing the basic configuration of a computer system according to an embodiment of the present invention.
- the computer system 10 includes a CPU 11, a memory 12, an I/O 13, a graphics device 14, and a display 15.
- the CPU 11, memory 12, I/O 13, graphics device 14, and display 15 are connected to the system bus 16 and transfer data to each other.
- since the graphics device 14 incorporates the above-described image generation apparatus of the present invention, the predetermined processing described above can be performed to efficiently obtain a shaded image, and an image that takes shading into account is displayed on the display.
- FIG. 4 is a block diagram illustrating a graphics device according to an embodiment of the present invention.
- This graphic device may function as a graphic device indicated by reference numeral 14 in FIG. 3 or a part thereof, and may be a part constituting a computer.
- the graphics device according to this embodiment includes:
- a geometry engine 22 (geometric operation circuit, device, part, etc.) that receives vertex data 21 and the like and performs geometric operations;
- vertex data interpolation means 23 (vertex data interpolation circuit, device, part, etc.) that performs vertex data interpolation;
- shading means 24 (shading circuit, device, part, etc.);
- texture means 25 (texture synthesis circuit, etc.) for texture synthesis;
- a texture memory 26 for storing textures, etc.;
- layer color generation means 2, layer coefficient blending means 3, and color blending means 4; each element is connected by the system bus 16 or the like.
- the geometry engine 22 is connected to the system bus 16 of the computer system 10 and receives vertex data 21 as input data.
- the geometry engine 22 performs geometric transformation on the vertex data, performs perspective transformation and viewport mapping, and then outputs the transformed vertex data to the vertex data interpolation means 23.
- the vertex data interpolation means 23 interpolates each parameter associated with the inside of the polygon (usually a triangle) constituted by the input vertex data, and outputs the results to the shading means 24 and the texture means 25, respectively.
- the texture means 25 calculates the access address to the texture memory 26 from the u and v coordinates input from the vertex data interpolation means 23, and receives the corresponding data from the texture memory 26.
- the data input from the texture memory 26 is, for example, one or more of color data, bump mapping data, and tangent vector data.
- the texture means 25 determines whether the data input from the texture memory 26 is bump mapping data or tangent vector data; if so, it outputs the data to the shading means 24. On the other hand, if the data input from the texture memory 26 is color data, the data is output to the color blending means 4.
- the shading means 24 performs the luminance calculation for each pixel in units of light sources on the interpolation data input from the vertex data interpolation means 23 and the bump mapping data or tangent vector data input from the texture means 25, and outputs the result to the layer color generation means 2. Thereafter, color information for shading is obtained, for example, in accordance with the operation of the image generation apparatus described above.
- FIG. 5 is a block diagram showing shading means according to an embodiment of the present invention.
- the shading means according to this embodiment includes a system interface (I/F) 31; bump rotation / tangent rotation / half vector generation means 32 for performing bump rotation, tangent rotation, and half vector generation calculations; light attenuation means (light attenuation circuit) 33 for determining attenuation coefficients and the like; inner product calculation means 34 (an arithmetic circuit such as a multiplication circuit) for performing inner product calculations and the like; light shading means 35 (35a to 35d) (arithmetic circuits or the like) for performing shading calculations and the like;
- and light blending means 36 (36a to 36d) for obtaining the shading results using the attenuation coefficients from the light attenuation means and the input values from the light shading means.
- Each element constituting the shading means may be implemented by hardware such as a circuit, may be implemented by software such as a program, or may be implemented by both hardware and software. . Also, each may be configured as a device, a part, or the like.
- examples of the hardware include a control unit, a calculation unit, a storage unit, an input/output unit, and a system bus that connects them.
- the calculation unit receives a command from the control unit and reads the input data or the data stored in the storage unit.
- the memory of the storage unit is used as a work area to perform the specified calculations.
- the calculation result is temporarily stored in the memory, for example, and then output from the output unit. These data may be transmitted through a bus, for example.
- the shading means 24 receives a plurality of light vectors Ln (41), a line-of-sight vector V (42), and a normal vector N (43) from the vertex data interpolation means 23 (Fig. 4), and receives data from the texture means 25 (Fig. 4).
- the luminance calculation is performed for the number of light vectors, and the results are output to the light blending means 36.
- when the number of light vectors Ln is 4, the luminance calculation and the resulting output correspond to 4 light sources. In this example, the number of light vectors is four.
- when the tangent vector T (44) or the bump vector B (45) input from the texture means 25 (Fig. 4) is valid, the normal vector N (43) input from the vertex data interpolation means 23 (Fig. 4) is rotated by the bump rotation / tangent rotation / half vector generation means 32, and the result N' is output to the inner product calculation means 34. The bump rotation / tangent rotation / half vector generation means 32 also generates a half vector H, which is the bisection vector of the sum of each light vector Ln (41) and the line-of-sight vector V (42), and outputs it to the inner product calculation means 34.
- the inner product calculation means 34 performs calculation (mainly inner product calculation) for luminance calculation from various input vectors.
- the results of the four types of inner product calculations are output to the light shading means 35a to 35d for each light source, and, for example, are also output as the blend coefficients 46.
- the blend coefficient 46 is also transmitted to the layer coefficient blending means 3.
- for the luminance calculation, a bidirectional reflectance distribution function (BRDF) model may be used, for example.
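As a numeric illustration of these vector operations, the following sketch computes the half vector and the inner products named above under assumed vector values:

```python
# Sketch of the half vector and the inner products used as blend
# coefficients (NL, NH, NV, VH). All vector values are assumptions.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

N = normalize([0.0, 0.0, 1.0])    # normal vector (or bump-rotated N')
L = normalize([0.5, 0.0, 1.0])    # light vector Ln
V = normalize([-0.3, 0.0, 1.0])   # line-of-sight vector
H = normalize([l + v for l, v in zip(L, V)])  # half (bisection) vector of L+V

NL, NH, NV, VH = dot(N, L), dot(N, H), dot(N, V), dot(V, H)
print(NL, NH, NV, VH)  # candidate blend coefficients B0, B1
```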
- the light attenuation means 33 obtains the attenuation coefficients (48a to 48h) of each light vector using the light vectors Ln and the information input via the system I/F 31, and outputs them to the light blending means 36a to 36d for each light source. In addition, the attenuation coefficient for each layer may be output to the color blending means.
- the light blending means 36a to 36d for each light source attenuate the inputs from the light shading means 35a to 35d according to the attenuation coefficients input from the light attenuation means 33, obtain the shading results (47a to 47h), and output them to the layer color generation means.
- 47a and 47b indicate the diffuse component and specular component of light source 0, respectively, and 47c and 47d indicate the diffuse component and specular component of light source 1, respectively.
- 47e and 47f indicate the diffuse component and specular component of light source 2, respectively, and 47g and 47h indicate the diffuse component and specular component of light source 3, respectively.
- the blend coefficients 46 such as inner product values NV, VH, NH, and NL output from the inner product calculation means 34 are output as blend coefficients 46 of a plurality of layers.
- the luminance calculation results output from the light blending means 36a to 36d may be output separately as the diffuse reflection (diffuse) component and the specular reflection (specular) component, or a combination of these may be output.
- the diffuse and specular of a single light source may be represented by 8 bits each for RGB.
- FIG. 6 is a block diagram for explaining the layer color generation means 2 and the layer coefficient blending means 3.
- the layer color generation means 2 includes light selection setting means 51 for outputting an assignment command that assigns the input shading results to each surface layer using the surface layer setting values, and layer color selection means 52 for selecting the input shading results for each surface layer according to the assignment command from the light selection setting means.
- the layer color generation means 2 assigns the luminance calculation results (47a to 47h) for a plurality of light sources inputted from the shading means 24 to a plurality of surface layers approximating the internal scattering of the object surface, The surface color (61a to 61f) for each layer is output.
- the layer coefficient blending means 3 multiplies the surface color of each layer by the blend coefficient assigned to that layer, and outputs the blended surface colors (62a to 62f) for each surface layer (layer) to the color blending means 4 (Fig. 4).
- the shading results (47a to 47h), which are the luminance calculation results of the diffuse and specular of the four light sources input from the shading means 24, are assigned by the layer color generation means 2 to the surface layers that approximate, for example, the internal scattering of the object surface.
- the surface layer setting values are stored in, for example, a program in the main memory, a predetermined memory, a table, or the like; the CPU or the like, in response to the program in the main memory, reads the surface layer setting value information as appropriate and can transmit it to the light selection setting means 51 via the bus 16 or the system I/F 53.
- the light selection setting means 51 outputs an assignment command for each surface layer to the layer color selection means 52 using the surface layer setting value.
- for example, the shading results 47a, 47b, and 47h may be assigned to the first layer.
- the surface color (61a to 61f) for each surface layer (layer) is output.
- 61a and 61b indicate the diffuse component and the specular component of layer 0, respectively.
- 61c and 61d indicate the diffuse component and the specular component of layer 1, respectively.
- 61e and 61f indicate the diffuse component and the specular component of layer 2, respectively.
- in this embodiment, each command to the layer color selection means 52 is composed of, for example, 4 bits, each bit indicating whether the corresponding light source is assigned to that surface layer.
- the assignment of light sources to each surface layer may be performed for each RGB component, for example, as follows. The following is for layer 0 only, but the same formulas apply to layer 1 and layer 2; a sketch follows this list.
- the diffuse component 61a of layer 0 is any one of the diffuse components (47a, 47c, 47e, and 47g) of light source 0 to light source 3, or the sum of two or more of them.
- the layer 0 specular component 61b may be any one of the specular components (47b, 47d, 47f, and 47h) of light source 0 to light source 3, or the sum of two or more of them. Which components are used can be designated by the assignment command based on the surface layer setting value. The same applies to the other layers. In other words, the light components for each light source are converted into light components for each layer.
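The following sketch expresses this assignment with a hypothetical 4-bit mask per layer, bit n meaning that light source n contributes to the layer; all masks and component values are assumptions:

```python
# Sketch of per-layer light source assignment with 4-bit masks.
# Bit n of a layer's mask = light source n is assigned to that layer.
# All masks and component values are assumptions.

diffuse = [0.20, 0.10, 0.05, 0.15]   # 47a, 47c, 47e, 47g (lights 0..3)
specular = [0.08, 0.02, 0.04, 0.06]  # 47b, 47d, 47f, 47h (lights 0..3)

layer_masks = [0b1011, 0b0101, 0b1000]  # layers 0..2 (hypothetical setting)

def layer_component(components, mask):
    # Sum the components of every light source whose bit is set in the mask.
    return sum(c for n, c in enumerate(components) if mask & (1 << n))

for layer, mask in enumerate(layer_masks):
    d = layer_component(diffuse, mask)   # e.g. 61a for layer 0
    s = layer_component(specular, mask)  # e.g. 61b for layer 0
    print(f"layer {layer}: diffuse={d:.2f} specular={s:.2f}")
```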
- FIG. 7 is a block diagram showing an example of the layer color generation means (or part thereof); specifically, it shows the portion of the layer color generation means (layer color selection means) for layer 0 in FIG. 6.
- this layer color generation means comprises selectors 71 such as AND gates and adders 72 such as adder circuits, and is connected to the light selection setting means 51 by a system bus or the like.
- each light blending means may likewise be connected by a system bus or the like.
- the selector may be mounted so that only the data set in advance is selected and passed, and specific data is passed according to the selection command from each light selection setting means 51.
- each selector may receive a different shading result 47a to 47h, or a given shading result may be input to two or more selectors; a shading result may also be input to all selectors.
- each selector determines whether or not to output its input signal in accordance with the command from the light selection setting means 51, and the output signals are added by an adder circuit or the like and output as the surface colors 61a and 61b.
- for example, the selector 71a, to which the shading result 47a and the command from the light selection setting means 51 are input, outputs the shading result 47a if the command is ON (1),
- and does not output the shading result 47a if the command is OFF (0). The same applies to the other selectors.
- predetermined ones of the diffuse components are output to the adder 72a, added by the adder 72a, and output as, for example, the diffuse component 61a of layer 0 (one component of the surface color of surface layer 0).
- the color components synthesized for each surface layer by the layer color selection means 52 are then input to the layer coefficient blending means 3, where the coefficient calculation for combining each surface layer is performed according to, for example, the blend coefficients 46 output from the inner product calculation means 34.
- the operation of the layer coefficient blending means 3 will be described in detail below with reference to Fig. 8, which is a block diagram showing a configuration example of the layer coefficient blending means. As shown in Fig. 8, the layer coefficient blending means is implemented by, for example, RAM tables and multiplication circuits.
- examples of the blend coefficient in the layer coefficient blending means include: the inner product NV of the normal vector N and the line-of-sight vector V; the inner product NH of the normal vector N and the half vector H (the bisection vector of the line-of-sight vector V and the light vector L); the inner product VH of the line-of-sight vector V and the half vector H; and the inner product NL of the normal vector N and the light vector L.
- the blending coefficient may be one type, but it is preferable that multiple types are prepared and blended appropriately.
- the color of each surface layer is multiplied by a coefficient derived from the blend coefficients, for example as follows.
- the layer 0 diffuse output (62a) is the product of α0 and the layer 0 diffuse input (61a), and the layer 0 specular output (62b) is the product of α0 and the layer 0 specular input (61b).
- the layer 1 diffuse output (62c) is the product of (1 − α0), α1, and the layer 1 diffuse input (61c), and the layer 1 specular output (62d) is the product of (1 − α0), α1, and the layer 1 specular input (61d).
- the layer 2 diffuse output (62e) is the product of (1 − α0), (1 − α1), and the layer 2 diffuse input (61e), and the layer 2 specular output (62f) is the product of (1 − α0), (1 − α1), and the layer 2 specular input (61f).
- α0 and α1 are expressed as functions with the blend coefficients B0 and B1 (46a, 46b) as arguments.
- B0 and B1 are inner products such as VH (VH0 to VH3 for each light source), NV, NL (NL0 to NL3 for each light source), NH (NH0 to NH3 for each light source), and the like,
- and any of these may be used as the blend coefficient. When VH is used as the blend coefficient, the blending of each layer is executed in relation to the light source and the line of sight; in the case of NV, the blending of each layer is executed in relation to the normal of the object and the line of sight.
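A compact sketch of this coefficient blending follows; the α function here is only a stand-in for the RAM-table lookup described next, and all input values are assumptions:

```python
# Sketch of the layer coefficient blending: each layer's diffuse and specular
# (61a-61f) are scaled by weights built from alpha0(B0) and alpha1(B1),
# yielding the blended outputs (62a-62f). Values are assumptions.

def alpha(b):
    # Stand-in for the RAM-table function alpha(B); clamp to [0, 1].
    return max(0.0, min(1.0, b))

B0, B1 = 0.7, 0.4                 # blend coefficients, e.g. inner products
a0, a1 = alpha(B0), alpha(B1)

layers = [(0.20, 0.05), (0.15, 0.03), (0.10, 0.02)]  # (diffuse, specular)
weights = [a0, (1 - a0) * a1, (1 - a0) * (1 - a1)]

blended = [(w * d, w * s) for w, (d, s) in zip(weights, layers)]  # 62a-62f
print(blended)
```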
- the functions α0(B0) and α1(B1) are implemented as RAM tables, for example.
- the RAM tables 81 and 82 are preferably provided for each of RGB, for example. That is, the layer coefficient blending means according to this aspect includes, for each blend coefficient and for each of RGB, RAM tables 81 and 82 to which the predetermined blend coefficient is input, and multipliers 83 such as multiplication circuits for multiplying the outputs (61a to 61f) of the layer color generation means by the coefficient corresponding to the blend coefficient. A coefficient (α, 1 − α, etc.) corresponding to the blend coefficient is read from the RAM table, and the multiplier multiplies the output of the layer color generation means by that coefficient. Each element is connected by a bus or the like so that signals can be exchanged.
- the blend coefficients B0 (46a) and B1 (46b) are connected to the address lines of the independent RAM tables 81 and 82, respectively, which output the values α and 1 − α corresponding to B0 (46a) and B1 (46b) according to the values set in the RAM in advance.
- for example, the bit-inverted value of α may be output as an approximation of 1 − α, but the invention is not limited to this. Any blend coefficient may be used to derive α or 1 − α, including the inner products calculated by the inner product calculation means.
- the multipliers 83 multiply the shading results (surface colors) of each layer, such as diffuse and specular, input from the layer color selection means 52, so that the blend coefficient is reflected in the diffuse and specular of each layer. In this way, the blended surface colors (62a to 62f) for each surface layer are obtained.
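As a small numeric check of the bit-inversion approximation of 1 − α mentioned above (8-bit values assumed):

```python
# With 8-bit coefficients, the bitwise inversion ~a & 0xFF equals 255 - a,
# which approximates 255 * (1 - alpha). Values are assumptions.

alpha = 0.3
a8 = int(alpha * 255)        # 8-bit alpha as read from the RAM table: 76
inv8 = ~a8 & 0xFF            # bit inversion: 255 - 76 = 179
print(a8, inv8, inv8 / 255)  # 179/255 = 0.702 ~ 1 - alpha = 0.7
```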
- the blended surface colors (62a to 62f) of each surface layer are synthesized by the color blending means 4 as RGB representing the final color of one pixel. The operation of the color blending means 4 will be described in detail below with reference to FIGS. 9 and 10.
- FIG. 9 is a block diagram showing an example of color blending means.
- Fig. 10 is a block diagram showing the circuit configuration of the layer 0 blend inside the color blending means.
- as an example of the color blending means, there is one that includes blending means 86a to 86c for each layer, connected to the preceding means via the system interface 85, and an adder circuit 87.
- as an example of the blend portion of each layer, there is one that includes a multiplier 88 and an adder 89, as shown in FIG. 10.
- the diffuse, specular, and attenuation adjustment values such as the attenuation rate for each layer (obtained by the light attenuation means 33 and input via, for example, the system bus 16 and the system I/F 85) are multiplied by the multiplier 88;
- the diffuse and specular values are then added, and finally the color values of all the surface layers are added by the adder 87.
- the texture color input from the texture means 25 (FIG. 4) to the blending means 86a to 86c via the bus 16 or the like may also be synthesized. Usually, the texture color affects only the diffuse and not the specular.
- in this embodiment, the texture color is input to the color blending means 4, and no texture color is input to the layer color generation means 2 or the layer coefficient blending means 3. If the texture color were synthesized immediately after the shading means 24, it would not be necessary to keep the diffuse and the specular separate as in this embodiment.
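A sketch of this final blend for one RGB component follows; the per-layer attenuation values and the point at which the texture color is applied are assumptions consistent with the description above:

```python
# Sketch of the color blending of Figs. 9-10 for one RGB component:
# per layer, diffuse is modulated by the texture color (specular is not),
# the pair is attenuated, and the adder 87 sums all layers. Values assumed.

layers = [(0.20, 0.05),   # (diffuse, specular) after coefficient blending
          (0.12, 0.03),
          (0.06, 0.02)]
attenuation = [1.0, 0.9, 0.8]  # per-layer attenuation adjustment values
texture = 0.75                 # texture color from the texture means 25

pixel = sum(att * (d * texture + s)
            for (d, s), att in zip(layers, attenuation))
print(pixel)  # final color value of the pixel for this component
```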
- Figure 11 is a block diagram of the image generation device in the case where the shading results and blend coefficients for each light source output from the shading means are processed as serial data and the number of surface layers is 4.
- the image generation device in this case includes: layer color selection means; α/(1 − α) generation means for generating αn and 1 − αn;
- intermediate register t update means for updating a predetermined value t based on the generated 1 − αn; t value storage means, such as a buffer, for storing the t value; multiplication means, such as a multiplication circuit, for multiplying the output of the layer color selection means by the αn generated by the α/(1 − α) generation means and the t value stored in the t value storage means;
- addition means, such as an adder circuit, for adding the output values Mn of the multiplication means; Csum value storage means, such as a buffer, for storing the value added by the addition means; and control means for inputting the initial value init to the intermediate register t update means and the initial value clear to the addition means.
- the layer color selection means functions as the layer color generation means; the α/(1 − α) generation means, the intermediate register t update means, and the multiplication means function as the layer coefficient blending means; and the multiplication means and the addition means function as the color blending means.
- the final RGB data generation rate per unit time decreases, but the same processing can be performed with a small number of computing resources.
- the combination of the blend coefficients with the surface color is composed of only three multipliers, one for each of RGB.
- the means for combining the surface colors into a single color consists, for each of RGB, of adding the input value to its stored value.
- the diffuse and specular are grouped for each of RGB at the output stage of the shading means, and one set of RGB data is input to the layer color selection means for each light source.
- the input signal LC in Fig. 11 is the RGB data of the shading result for each light source, input serially from the shading means.
- BF is the coefficient information (NV, etc.) for blending each layer. LC and BF are valid while the VALID_IN signal is asserted, and the final RGB color output Csum is valid when VALID_OUT is asserted.
- FIG. 12 is a diagram showing the flow of a certain pixel blend process.
- the layer color selection means converts the data for the four light sources into data for each layer according to the preset surface layer setting values and outputs the data to the multiplication means.
- BF0 to BF2 are input to the α/(1 − α) generation means, which outputs αn and 1 − αn in the same manner as the RAM tables (81, 82) in Fig. 8. At this time, the control means indicates, via the Ln input to the α/(1 − α) generation means, for which layer αn and 1 − αn are to be generated.
- the layer data output from the layer color selection means and the αn output from the α/(1 − α) generation means are input to the multiplication means, and 1 − αn is input to the intermediate register t update means.
- the intermediate register t update means is initialized to 1 by the init signal input via the control means; thereafter, it multiplies the value stored in the t value storage means by the input 1 − αn, and the result is stored as the new t value in the t value storage means.
- the multiplication means multiplies the layer data by αn and t, and outputs the result as Mn.
- the addition means initializes the Csum value storage means to 0 by the clear signal, adds the sequentially input Mn to the value stored in the Csum value storage means, and stores the result as the new value in the Csum value storage means. The addition means then outputs the value Csum stored in the Csum value storage means as the final RGB color.
- the final RGB color is Csum = L1·α1 + L2·(1 − α1)·α2 + L3·(1 − α1)·(1 − α2)·α3 + L4·(1 − α1)·(1 − α2)·(1 − α3).
- Ln is the data indicating each layer, and α1, α2, and α3 are the values generated by the α/(1 − α) generation means.
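The serial datapath of Figs. 11-12 can be sketched as the following loop, where the intermediate register t carries the running product of (1 − αn); the input values are assumptions, and the last layer is given the remaining weight as in the formula above:

```python
# Sketch of the serial accumulation of Csum with the intermediate register t.
# L holds the serially arriving layer data L1..L4; alphas are alpha1..alpha3.
# All input values are assumptions.

L = [0.30, 0.20, 0.10, 0.05]   # layer data, one value per clock step
alphas = [0.6, 0.5, 0.4]       # generated by the alpha/(1 - alpha) means

t = 1.0      # intermediate register t, set to 1 by the init signal
csum = 0.0   # Csum value storage, set to 0 by the clear signal

for n, ln in enumerate(L):
    a = alphas[n] if n < len(alphas) else 1.0  # last layer takes the rest
    csum += ln * a * t                         # Mn accumulated by the adder
    if n < len(alphas):
        t *= 1.0 - alphas[n]                   # intermediate register update

print(csum)  # final RGB color Csum (one component)
```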
- the image generation apparatus of the present invention is basically implemented by hardware, but may be implemented by software.
- that is, the present invention may be a computer graphics program for causing a computer to function as: layer color generation means for receiving the shading results of a single light source or a plurality of light sources for a specific pixel and a surface layer setting value indicating to which surface layer each shading result belongs, and generating surface colors for at least two layers;
- layer coefficient blending means for receiving the surface colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing the surface colors, and combining the blend coefficients with each of the surface colors
- to generate blended surface colors for at least two layers; and color blending means for synthesizing the blended surface colors for at least two layers output from the layer coefficient blending means into a single color. Since each means realized by this program has the same function as described above, the above description applies mutatis mutandis.
- the present invention can also provide a recording medium storing the above-mentioned program and readable by a computer.
- recording media include CD-ROMs, DVDs, hard disks, or memory in computers.
- the program is read in response to a command from the control unit, and the arithmetic unit, receiving commands from the control unit, uses the read program to process the input data
- and the data stored in the storage unit, using, for example, the memory of the storage unit as a work area to perform predetermined calculations. The calculation results are, for example, temporarily stored in the memory and then output from the output section. These data may be transmitted through a bus, for example. In this way, the hardware resources and the program work together.
- FIG. 13 is a block diagram showing an embodiment (computer) of the present invention.
- This embodiment relates to a computer for computer graphics (a graphics computer or the like).
- the computer 101 includes a central processing unit (CPU) 102, a geometry calculation unit such as a geometry calculation circuit 103, a rendering unit such as a renderer 104, a texture generation unit such as a texture generation circuit 105,
- an illumination processing unit such as an illumination processing circuit 107, a display information creation unit such as a display circuit 108, a frame buffer 109, and a monitor 110. These elements are connected by a bus and can transmit data to each other.
- the storage unit may be composed of RAM such as VRAM, a CD-ROM, a DVD, a hard disk, or the like.
- the central processing unit (CPU) 102 is a device for controlling a program for generating an image.
- the work memory 111 may store data used by the CPU 102, a display list, and the like.
- the CPU 102 may read a program stored in the main memory and perform predetermined processing. However, the predetermined processing may be performed only by hardware processing.
- the CPU 102 reads polygon data as world coordinate three-dimensional object data from the work memory 111 and outputs the polygon data to the geometry calculation circuit 103.
- Specific examples include a main processor, a coprocessor, a data processor, four arithmetic circuits, or a general-purpose arithmetic circuit. These are connected by a bus, etc., so that signals can be exchanged.
- a data decompression processor for decompressing compressed information may be provided.
- the geometry calculation circuit 103 is a circuit for performing coordinate conversion of the input polygon data into data in the viewpoint coordinate system with the viewpoint as the origin.
- the geometry calculation circuit 103 outputs the processed polygon data to the renderer 104.
- Specific geometry operation circuits include a geometry processor, coprocessor, data processor, four arithmetic operation circuit, or general-purpose operation circuit connected to the main processor via a bus.
- the renderer 104 is a circuit or device for converting polygon unit data into pixel unit data.
- the renderer 104 outputs the pixel unit data to the texture generation circuit 105.
- Specific examples of the renderer 104 include a data processor, four arithmetic circuits, or a general-purpose arithmetic circuit connected to the main processor via a bus.
- the texture generation circuit 105 is a circuit for generating a texture color in units of pixels based on the texture data stored in the texture memory 112.
- the texture generation circuit 105 outputs pixel unit data having texture color information to the illumination processing circuit 107.
- Specific examples of the texture generation circuit 105 include a data processing processor, four arithmetic operation circuits, or a general-purpose operation circuit connected to the main processor through a bus.
- the illumination processing circuit 107 is a circuit for performing shading on the polygon having the texture color information using a normal vector, a barycentric coordinate, and the like in units of pixels.
- the illumination processing circuit 107 outputs the shaded image data to the display circuit 108.
- Specific examples of the illumination processing circuit 107 include a data processing processor, four arithmetic operation circuits, and a general-purpose operation circuit connected to the main processor through a bus. Then, the information on the light, such as a table stored in the memory, can be read and shaded as appropriate.
- the display circuit 108 is a circuit that writes the image data input from the illumination processing circuit 107 into the frame buffer 109, reads out the image data written into the frame buffer 109, and controls it to obtain display image information.
- the display circuit 108 outputs display image information to the monitor 110.
- Specific display circuits include a drawing processor, a data processor, four arithmetic operation circuits, or a general-purpose operation circuit connected to the main processor via a bus.
- the monitor 110 is a device that displays a computer graphics image according to the input display image information.
- since the computer according to the present invention includes the image generation device according to the present invention in the illumination processing unit, such as the illumination processing circuit, it can efficiently generate a shaded image and the like.
- the CPU 102 reads the polygon data from the work memory 111 and outputs the polygon data to the geometry calculation circuit 103.
- the geometry calculation circuit 103 performs processing such as coordinate conversion of the input polygon data to data in a viewpoint coordinate system with the viewpoint as the origin.
- the geometry calculation circuit 103 outputs the processed polygon data to the renderer 104.
- the renderer 104 converts polygon data into pixel data.
- the renderer 104 outputs the pixel unit data to the texture generation circuit 105.
- the texture generation circuit 105 generates a texture color for each pixel based on the texture data stored in the texture memory 112.
- the texture generation circuit 105 outputs pixel unit data having texture color information to the illumination processing circuit 107.
- Illumination processing circuit 107 shades polygons having texture color information using normal vectors, barycentric coordinates, etc. in pixel units.
- the illumination processing circuit 107 outputs the shaded image data to the display circuit 108.
- the display circuit 108 writes the image data input from the illumination processing circuit 107 into the frame buffer 109 and reads out the image data written into the frame buffer 109 to obtain display image information.
- the display circuit 108 outputs display image information to the monitor 110.
- the monitor 110 displays a computer graphics image according to the input display image information.
- since the computer of the present invention includes the image generation device of the present invention in an illumination processing unit such as an illumination processing circuit, luminance calculation and the like can be performed efficiently without considering incident light to pixels other than the specific pixel, making it possible to calculate shadows at high speed.
- FIG. 14 is a block diagram of an embodiment (game machine) according to the present invention.
- the embodiment shown in this block diagram can be suitably used as, in particular, a portable, home, or arcade game machine. Therefore, in the following it will be described as a game machine.
- the game machine shown in FIG. 14 may include at least the processing unit 200 (or the processing unit 200 and the storage unit 270, or the processing unit 200, the storage unit 270, and the information storage medium 280).
- the other blocks (for example, the operation unit 260, the display unit 290, the sound output unit 292, the portable information storage device 294, and the communication unit 296) can be optional components.
- the processing unit 200 performs various kinds of processing, such as control of the entire system, issuing instructions to each block in the system, game processing, image processing, and sound processing.
- the functions of the processing unit 200 can be realized by hardware such as various processors (CPU, DSP, etc.) or ASIC (gate array, etc.) and a given program (game program).
- the operation unit 260 is used by the player to input operation data.
- the function of the operation unit 260 can be realized by, for example, a controller equipped with levers, buttons, a housing, and hardware.
- the operation unit 260 may be formed integrally with the game machine body. Processing information from the controller is transmitted to the main processor via a serial interface (I / F) or bus.
- the storage unit 270 serves as a work area for the processing unit 200, the communication unit 296, and the like. It may also store programs and various tables.
- the storage unit 270 may include, for example, a main memory 272, a frame buffer 274, and a texture storage unit 276, and may also store various tables.
- the functions of the storage unit 270 can be realized by hardware such as ROM and RAM.
- RAM examples include VRAM, DRAM, and SRAM, which can be selected appropriately according to the application.
- VRAM that constitutes the frame buffer 274 is used as a work area for various processors.
- Information storage medium (storage medium usable by computer) 280 stores information such as programs and data.
- the information storage medium 280 can be sold as a so-called game cassette.
- the functions of the information storage medium 280 can be realized by hardware such as an optical disk (CD, DVD), a magneto-optical disk (MO), a magnetic disk, a hard disk, a magnetic tape, or a memory (ROM).
- the processing unit 200 performs various processes based on the information stored in the information storage medium 280.
- the information storage medium 280 stores the information (a program, or a program and data) for executing the means of the present invention (the embodiment), particularly the blocks included in the processing unit 200. Note that the information storage medium 280 is not necessarily required when such information is stored in the storage unit.
- Part or all of the information stored in the information storage medium 280 is transferred to the storage unit 270, for example, when the system is powered on.
- the stored information includes program code for performing predetermined processing, image data, sound data, shape data of display objects, table data, list data, information for instructing the processing of the present invention, and the like.
- the display unit 290 outputs the image generated by the present embodiment, and its function can be realized by a CRT, an LCD (liquid crystal display), an OEL (organic electroluminescent) display, a PDP (plasma display panel), an HMD (head-mounted display), or the like.
- the sound output unit 292 outputs sound.
- the functions of the sound output unit 292 can be realized by hardware such as speakers. Sound output is processed by a sound processor connected to the main processor via a bus, for example, and output from a sound output unit such as a speaker.
- the portable information storage device 294 stores player personal data, save data, and the like.
- Examples of the portable information storage device 294 include a memory card and a portable game device.
- the functions of the portable information storage device 294 can be achieved by known storage means such as a memory card, a flash memory, a hard disk, and a USB memory.
- the communication unit 296 is an optional unit that performs various controls for communicating with the outside (for example, a host device or another image generation system).
- the functions of the communication unit 296 can be realized by hardware such as various processors or communication ASICs, and programs.
- a program or data for causing the game machine to execute the present invention may be distributed to the information storage medium 280 from an information storage medium of a host device (server) via a network and the communication unit 296.
- the processing unit 200 includes a game processing unit 220, an image processing unit 230, and a sound processing unit 250.
- examples of the hardware of the processing unit 200 include a main processor, a coprocessor, a geometry processor, a drawing processor, a data processor, arithmetic circuits for the four basic operations, and general-purpose arithmetic circuits. These are appropriately connected by a bus or the like so that signals can be exchanged.
- a data decompression processor may also be provided to decompress the compressed information.
- the game processing unit 220 performs various game processes based on operation data from the operation unit 260, personal data and save data from the portable information storage device 294, a game program, and the like. These processes include coin (payment) acceptance processing, various mode setting processing, game progress processing, selection screen setting processing, processing for determining the position and rotation angle (rotation angle about the X, Y, or Z axis) of an object, processing for making an object move (motion processing), processing for determining the viewpoint position (virtual camera position) and line-of-sight angle (virtual camera rotation angle), processing for placing a map object and other objects in the object space, hit check processing, processing for calculating game results, processing for allowing multiple players to play in a common game space, and game-over processing.
- the image processing unit 230 performs various types of image processing in accordance with instructions from the game processing unit 220 and the like.
- the sound processing unit 250 performs various types of sound processing in accordance with instructions from the game processing unit 220.
- the functions of the game processing unit 220, the image processing unit 230, and the sound processing unit 250 may all be realized by hardware, all by a program, or by both hardware and a program.
- examples of the components of the image processing unit 230 include a geometry calculation unit 232 (three-dimensional coordinate calculation unit) and a drawing unit 240 (rendering unit).
- the geometry calculation unit 232 performs various geometry calculations (three-dimensional coordinate calculations) such as coordinate transformation, clipping, perspective transformation, and light source calculation. The object data after geometry processing (after perspective transformation), such as object vertex coordinates, vertex texture coordinates, and luminance data, is then stored in, for example, the main memory 272 of the storage unit 270.
- the drawing unit 240 draws an object in the frame buffer 274 based on the object data after the geometry calculation (after perspective transformation) and the texture stored in the texture storage unit 276.
- examples of the components of the drawing unit 240 include a texture mapping unit 242 and a shading processing unit 244. Specifically, the drawing unit can be implemented by a drawing processor. The drawing processor is connected to the texture storage unit, various tables, the frame buffer, VRAM, and the like via a bus, and further to the display.
- the texture mapping unit 242 reads the environment texture from the texture storage unit 276, and maps the read environment texture to the object.
- the shading processing unit 244 performs shading processing on the object.
- the geometry calculation unit 232 performs light source calculation, obtaining the luminance (RGB) of each vertex of the object based on the light source information for shading processing, the lighting model, and the normal vector of each vertex of the object.
- the shading processing unit 244 obtains the luminance of each dot on the primitive surface (polygon, curved surface) by, for example, Phong shading or Gouraud shading.
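As a rough illustration of this step, the following sketch (an assumption for illustration; the patent gives no code, and Vec3, vertexLuminance, and gouraudShade are hypothetical names) computes a per-vertex Lambert luminance from the light source calculation and then interpolates it across the primitive surface in the manner of Gouraud shading:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-vertex luminance for one light source (Lambert diffuse term).
float vertexLuminance(const Vec3& normal, const Vec3& lightDir) {
    return std::max(0.0f, dot(normal, lightDir));
}

// Gouraud shading: luminance obtained at the vertices is linearly
// interpolated for each dot (pixel) along the primitive surface.
float gouraudShade(float lumA, float lumB, float t /* 0..1 along the edge */) {
    return lumA + t * (lumB - lumA);
}
```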
- the geometry calculation unit 232 includes a normal vector processing unit 234.
- the normal vector processing unit 234 may perform a process of rotating the normal vector of each vertex of the object (in a broad sense, the normal vector of the surface of the object) from the local coordinate system to the world coordinate system using a rotation matrix.
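A minimal sketch of this rotation, assuming a plain 3x3 local-to-world rotation matrix (Vec3 and Mat3 are illustrative types, repeated here so the block stands alone; because a pure rotation is orthonormal, the normal can be transformed by the same matrix as positions, with no inverse-transpose correction):

```cpp
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // local-to-world rotation matrix

// Rotate a vertex normal from the local coordinate system
// to the world coordinate system.
Vec3 rotateNormal(const Mat3& r, const Vec3& n) {
    return { r.m[0][0]*n.x + r.m[0][1]*n.y + r.m[0][2]*n.z,
             r.m[1][0]*n.x + r.m[1][1]*n.y + r.m[1][2]*n.z,
             r.m[2][0]*n.x + r.m[2][1]*n.y + r.m[2][2]*n.z };
}
```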
- the shading processing unit 244 includes the image generation device of the present invention.
- when the system is powered on, some or all of the information stored in the information storage medium 280 is transferred to the storage unit 270, for example.
- a game processing program is stored in the main memory 272, for example, and various data are stored in the texture storage unit 276, a table (not shown), or the like.
- the operation information from the operation unit 260 is transmitted to the processing unit 200 via, for example, a serial interface and a bus (not shown), and sound processing and various image processing are performed.
- the sound information processed by the sound processing unit 250 is transmitted to the sound output unit 292 via the bus and output as sound.
- save information stored in the portable information storage device 294, such as a memory card, is transmitted to the processing unit 200 via a serial interface or bus (not shown), and predetermined data is read from the storage unit 270.
- the image processing unit 230 performs various image processing in accordance with instructions from the game processing unit 220. Specifically, the geometry calculation unit 232 performs various geometry calculations (three-dimensional coordinate calculations) such as coordinate transformation, clipping, perspective transformation, and light source calculation, and the object data after geometry processing (after perspective transformation), such as object vertex coordinates, vertex texture coordinates, and luminance data, is stored in, for example, the main memory 272 of the storage unit 270. The drawing unit 240 then draws the object in the frame buffer 274 based on the object data after the geometry calculation (after perspective transformation) and the texture stored in the texture storage unit 276.
- the information stored in the frame buffer 274 is transmitted to the display unit 290 via the bus and displayed. In this way, the system functions as a game machine with computer graphics.
- since the game machine of the present invention includes, for example, the image generation device of the present invention in the shading processing unit 244, luminance calculations and the like can be performed effectively without considering light incident on pixels other than a specific pixel. Therefore, it is possible to calculate shadows and the like at high speed.
- FIG. 15 is a block diagram of an embodiment of the present invention (a mobile phone with a computer graphic function).
- the embodiment shown in this block diagram can be suitably used as a mobile phone with a three-dimensional computer graphics function, particularly a mobile phone with a game function or a navigation function.
- this mobile phone includes a control unit 221; a memory unit that stores programs and image data for the control unit 221 and serves as a work area for the control unit, the communication unit, and the like; a wireless communication function unit 223 for wireless communication; an imaging unit 224, an optional element such as a CCD camera, that captures still images and video and converts them into digital signals; a display unit 225, such as an LCD, that displays images and characters; an audio input unit 227, such as a microphone, for voice calls; and an audio output unit 228 for outputting sound.
- the control unit 221 controls the entire mobile phone system and issues instructions to each block in the system, performing various processes such as display, game processing, image processing, and sound processing.
- the functions of the control unit 221 can be realized by hardware such as various processors (CPU, DSP, etc.) or ASIC (gate array, etc.) and a given program (game program).
- the control unit includes an image processing unit (not shown), and the image processing unit includes a geometry calculation unit, such as a geometry calculation circuit, and a drawing unit (renderer). A texture generation circuit, an illumination processing circuit, or a display circuit may further be provided, and the drawing processing circuits of the computer or game machine described above may be provided as appropriate.
- the mobile phone of the present invention includes the image generating device of the present invention in, for example, a drawing unit (renderer).
- the voice input to the voice input unit 227 is converted into digital information by the interface, subjected to predetermined processing by the control unit 221, and output as a radio signal from the wireless communication function unit 223.
- the wireless communication function unit 223 receives a radio signal, which, after predetermined conversion processing, is output as sound from the audio output unit 228 under the control of the control unit 221.
- the operations and processes for processing an image are basically the same as the operations and processes in the computer and game machine described above.
- operation information is input from the operation unit via an interface or bus (not shown); based on commands from the image processing unit in the control unit 221, a geometry processor or the like performs geometry calculations using a work area such as a RAM and the appropriate information from various tables. The renderer of the control unit 221 then performs rendering processing based on the commands of the image processing unit in the control unit 221.
- since the mobile phone of the present invention includes, for example, the image generation device of the present invention in a drawing unit (renderer), luminance calculations and the like can be performed effectively without considering light incident on pixels other than a specific pixel, so shadows and the like can be calculated at high speed. The level of computer graphics, especially 3D computer graphics, in mobile phones has not been very high, so shading techniques such as that of the present invention have not been required; by incorporating the image generation device of the present invention, however, it becomes possible to display beautiful images effectively.
- FIG. 16 is a block diagram of an embodiment (navigation system) of the present invention.
- the embodiment represented by this block diagram can be suitably used particularly as car navigation with a three-dimensional computer graphic function.
- this navigation system includes a GPS unit 241, an autonomous positioning unit 242 as an optional element, a map storage unit 243, a control unit 244, a display unit 245, and a map matching unit 246 as an optional element.
- the GPS unit 241 is equipped with a GPS receiver and obtains vehicle positioning data by simultaneously receiving radio waves from multiple GPS satellites.
- the GPS unit 241 obtains the absolute position of the vehicle from the data received by the GPS receiver.
- this positioning data includes vehicle direction information and elevation angle information in addition to the vehicle position information.
- the autonomous positioning unit 242 includes autonomous sensors and calculates the travel distance and direction of the vehicle from their output data.
- autonomous sensors include wheel speed sensors that detect signals according to the number of wheel rotations, acceleration sensors that detect vehicle acceleration, and gyro sensors that detect vehicle angular velocity.
- a three-dimensional gyro sensor that can also detect the attitude angle in the pitch direction of the vehicle (hereinafter referred to as the "pitch angle") is used as the gyro sensor. Therefore, the positioning data output from the autonomous positioning unit 242 also includes the pitch angle of the vehicle.
- the map storage unit 243 is a map storage unit that stores digital map data having 2D map information, 3D road information, and 3D building information.
- CD-ROMs and hard disks are examples of storage media that make up the map storage unit 243.
- the map data is preferably divided and stored in multiple blocks because it takes a long time to read when the amount of data is large.
- the road information may include information indicating major points (nodes) such as intersections and inflection points.
- the node information includes coordinate data at the points, and road information.
- the road may be approximated as a straight line (link) connecting the nodes.
- 3D road information means that the node information is provided as 3D coordinate data.
- based on the vehicle position information obtained from the GPS unit 241 or the autonomous positioning unit 242, the control unit 244 performs control such as reading out map data of a predetermined area corresponding to the vehicle position from the map storage unit 243.
- the display unit 245 is for displaying the map data read out by the control unit 244.
- the map matching unit 246 corrects the position of the vehicle on the road based on the vehicle positioning data and the 3D road information of the map data.
- the car navigation system includes, for example, a control unit comprising a geometry calculation unit and a drawing unit (renderer), and the drawing unit (renderer) includes the image generation apparatus of the present invention.
- the GPS unit 241 simultaneously receives radio waves from multiple GPS satellites to obtain vehicle positioning data.
- the autonomous positioning unit 242 calculates the travel distance and direction of the vehicle from the output data of the autonomous sensor.
- the control unit 244 performs predetermined processing on the data obtained from the GPS unit 241 or the autonomous positioning unit 242 to obtain vehicle position information. Then, based on the vehicle position information, map data of a predetermined area related to the vehicle position is read from the map storage unit 243. In addition, in response to operation information from an operation unit (not shown), the display mode is changed and map data corresponding to the display mode is read. Further, the control unit 244 performs predetermined drawing processing based on the position information, and displays a 3D building image, a 3D map image, a 3D car image, and the like. In addition, culling processing is performed based on the Z value, as sketched below.
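The text only states that culling is performed based on the Z value; a minimal per-pixel depth-test sketch under that assumption (names and the smaller-is-closer convention are illustrative) might look like this:

```cpp
// Returns true if the pixel at `index` is visible and should be drawn;
// hidden pixels (farther than the stored depth) are culled.
bool depthTestAndWrite(float* zbuffer, int index, float z) {
    if (z < zbuffer[index]) {   // smaller Z assumed to mean closer to the viewer
        zbuffer[index] = z;     // keep the nearest depth seen so far
        return true;
    }
    return false;
}
```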
- the display unit 245 displays the map data read by the control unit 244.
- since the car navigation system according to the present invention includes, for example, the image generation device according to the present invention in a drawing unit (renderer), luminance calculations and the like can be performed effectively without considering light incident on pixels other than a specific pixel.
- the car navigation system of the present invention can therefore calculate shadows and the like at high speed. The level of computer graphics, especially 3D computer graphics, in car navigation systems has not been very high, so shading technology such as that of the present invention has not been required; by deliberately incorporating the image generation device of the present invention, however, beautiful shadows and images can be displayed effectively.
- the image generating apparatus of the present invention can generate three-dimensional computer graphics, and can be used in the field of computers. Further, it can be suitably used in the fields of game machines, mobile phones, navigation devices and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Abstract
[PROBLEMS] To provide a multilayer reflection shading image creating method and device for computer graphics enabling high-speed shading computation, and a computer. [MEANS FOR SOLVING PROBLEMS] An image creating device (1) comprises layer color creating means (2) for receiving the results of shading by a single light source or by multiple light sources and surface layer set values representing to which surface layer each shading result belongs and creating surface layer colors for at least two layers, layer coefficient blending means (3) for receiving the surface layer colors and blend coefficients and creating the blended surface layer colors of at least two layers, and color blending means (4) for combining the blended surface layer colors into a single color. Even without considering the light incident on pixels other than a specific pixel, brightness computation can be efficiently conducted. Therefore, a multilayer reflection shading image creating device for computer graphics enabling high-speed computation of shading is provided.
Description
Specification
Multilayer reflection shading image generation method and computer
Technical field
[0001] The present invention relates to a multilayer reflection shading image generation method for computer graphics, a computer, and the like. In particular, the present invention relates to a multilayer reflection shading image generation method and computer for real-time 3D computer graphics that effectively perform luminance calculations taking into account the scattering of light at the surface of a translucent object (for example, human skin or marble).
Background art
[0002] In computer graphics, the pixels of a translucent object (such as human skin or marble) are shaded taking into account that light incident on the object surface scatters in a complex way inside the object. One method for computing such complex reflection at the object surface uses the BSSRDF (bidirectional surface scattering reflectance distribution function) (see, for example, Non-Patent Documents 1 and 2 below).
[0003] However, in a method using the BSSRDF, shading a given pixel requires considering not only the light incident on that pixel but also the light incident on other pixels. For this reason, conventional methods cannot compute shadows and shading at high speed.
[0004] Non-Patent Document 1: A Practical Model for Subsurface Light Transport, Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, Pat Hanrahan, Stanford University, to appear in the SIGGRAPH conference proceedings
Non-Patent Document 2: An Inexpensive BRDF Model for Physically-based Rendering, Christophe Schlick, Eurographics '94, Computer Graphics Forum, v13, n3, p233-246, 1994
Disclosure of the invention
Problems to be solved by the invention
[0005] An object of the present invention is to provide a multilayer reflection shading image generation method for computer graphics, a computer, and the like that can compute shadows and shading at high speed.
Means for solving the problem
[0006] The present invention is basically based on the finding that, by combining the shading results of a single light source or multiple light sources for a specific pixel with blend coefficients for synthesizing surface layer colors, generating blended surface layer colors for at least two layers, and synthesizing them into a single color, luminance calculations and the like can be performed effectively without considering light incident on pixels other than the specific pixel, so that shading image data for computer graphics can be generated.
[0007] That is, a first aspect of the present invention is an image generation method in which a computer receives the shading results of a single light source or multiple light sources for a specific pixel and surface layer setting values indicating to which surface layer each shading result belongs; distributes the shading results to the surface layers based on the surface layer setting values to generate surface layer colors for at least two layers; combines the surface layer colors for the at least two layers with blend coefficients for synthesizing surface layer colors to generate blended surface layer colors for at least two layers; and synthesizes the blended surface layer colors for the at least two layers into a single color, thereby generating image data for computer graphics for the specific pixel without considering light incident on pixels other than the specific pixel.
[0008] Another aspect of the present invention is an image generation apparatus for computer graphics comprising: layer color generation means for receiving the shading results of a single light source or multiple light sources for a specific pixel and surface layer setting values indicating to which surface layer each shading result belongs, and generating surface layer colors for at least two layers; layer coefficient blending means for receiving the surface layer colors for at least two layers generated by the layer color generation means and blend coefficients for synthesizing surface layer colors, combining a blend coefficient with each of the surface layer colors, and generating blended surface layer colors for at least two layers; and color blending means for synthesizing the blended surface layer colors for at least two layers output from the layer coefficient blending means into a single color, whereby image data for computer graphics is generated. Further aspects of the present invention relate to a computer, a game machine, a mobile phone, a navigation system, a computer program, and a computer-readable recording medium storing the computer program that use such an image generation apparatus.
Effects of the invention
[0009] In the present invention, the shading results of a single light source or multiple light sources for a specific pixel are combined with blend coefficients for synthesizing surface layer colors, blended surface layer colors for at least two layers are generated, and these are synthesized into a single color. As a result, according to the present invention, luminance calculations and the like can be performed without considering light incident on pixels other than the specific pixel, so that a multilayer reflection shading image generation method for computer graphics, a computer, and the like capable of computing shadows and shading at high speed can be provided.
Brief Description of Drawings
[0010]
[FIG. 1] FIG. 1 is a block diagram showing the basic configuration of the image generation apparatus of the present invention.
[FIG. 2] FIG. 2 is a flowchart for explaining the image generation method of the present invention.
[FIG. 3] FIG. 3 is a block diagram showing the basic configuration of a computer system according to an embodiment of the present invention.
[FIG. 4] FIG. 4 is a block diagram showing a graphics device according to an embodiment of the present invention.
[FIG. 5] FIG. 5 is a block diagram showing shading means according to an embodiment of the present invention.
[FIG. 6] FIG. 6 is a block diagram for explaining the layer color generation means and the layer coefficient blending means.
[FIG. 7] FIG. 7 is a block diagram showing an example of the layer color generation means (or a part thereof).
[FIG. 8] FIG. 8 is a block diagram showing a configuration example of the layer coefficient blending means.
[FIG. 9] FIG. 9 is a block diagram showing an example of the color blending means.
[FIG. 10] FIG. 10 is a block diagram showing the circuit configuration of the layer 0 blend inside the color blending means.
[FIG. 11] FIG. 11 is a block diagram of the image generation apparatus in the case where the shading results and blend coefficients for each light source output from the shading means are processed as serial data and the number of surface layers is 4.
[FIG. 12] FIG. 12 is a diagram showing the flow of the blending process for one pixel.
[FIG. 13] FIG. 13 is a block diagram showing an embodiment (computer) of the present invention.
[FIG. 14] FIG. 14 is a block diagram of an embodiment (game machine) of the present invention.
[FIG. 15] FIG. 15 is a block diagram of an embodiment (mobile phone with a computer graphics function) of the present invention.
[FIG. 16] FIG. 16 is a block diagram of an embodiment (navigation system) of the present invention.
Explanation of symbols
[0011]
1 Image generation apparatus
2 Layer color generation means
3 Layer coefficient blending means
4 Color blending means
BEST MODE FOR CARRYING OUT THE INVENTION
[0012] [Outline of the image generation apparatus]
FIG. 1 is a block diagram showing the basic configuration of the image generation apparatus of the present invention. As shown in FIG. 1, the image generation apparatus (1) of the present invention is a computer for computer graphics, or an apparatus constituting such a computer, comprising: layer color generation means (2) for receiving the shading results of a single light source or multiple light sources for a specific pixel and surface layer setting values indicating to which surface layer each shading result belongs, and generating surface layer colors for at least two layers; layer coefficient blending means (3) for receiving the surface layer colors for at least two layers generated by the layer color generation means and blend coefficients for synthesizing surface layer colors, combining a blend coefficient with each of the surface layer colors, and generating blended surface layer colors for at least two layers; and color blending means (4) for synthesizing the blended surface layer colors for at least two layers output from the layer coefficient blending means into a single color.
[0013] In this specification, a "shading result" means a per-light-source luminance calculation result, such as the result of a luminance calculation for diffuse reflection (diffuse), specular reflection (specular), or indirect light (ambient) from each light source. The shading results preferably comprise several kinds, for example the four kinds consisting of the diffuse from a first light source, the specular from the first light source, the diffuse from a second light source, and the specular from the second light source. Each light source may be a directional light, a point light, or a spotlight, and the light sources may stand in any relationship to one another.
[0014] A surface layer means each layer of the surface having different characteristics with respect to light, such as reflectance and refractive index for light incident on the object. A surface layer is also simply called a layer.
[0015] A "surface layer setting value" means a setting value or distribution command for distributing the shading results to the surface layers. For each surface layer, the surface layer setting value is information indicating which of the shading results (a plurality of calculation results) are assigned to that layer. The surface layer setting values may be obtained from a program that causes the computer to assign predetermined shading results to each surface layer, from a circuit that performs such assignment, or from a memory that stores such assignment information. For example, if there are eight shading results A to H and three layers, the surface layer setting values might assign A, B, D, F, and H to the first surface layer, A, C, E, and G to the second surface layer, and B and H to the third surface layer.
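As an illustration of this example, the following sketch (an assumption; the patent describes circuits and programs, not code) encodes each surface layer's setting value as a bitmask over the eight shading results A to H (indices 0 to 7) and accumulates the selected results into that layer's color:

```cpp
#include <cstdint>

struct Color { float r, g, b; };

Color add(const Color& a, const Color& b) { return {a.r+b.r, a.g+b.g, a.b+b.b}; }

// Bit i of layerMask set => shading result i belongs to this surface layer.
Color layerColor(const Color shadingResults[8], std::uint8_t layerMask) {
    Color sum{0, 0, 0};
    for (int i = 0; i < 8; ++i)
        if (layerMask & (1u << i)) sum = add(sum, shadingResults[i]);
    return sum;
}

// Example from the text (A = bit 0 ... H = bit 7):
//   layer 1: A,B,D,F,H -> 0b10101011
//   layer 2: A,C,E,G   -> 0b01010101
//   layer 3: B,H       -> 0b10000010
```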
[0016] A "blend coefficient" means a value, such as an inner product value, that is multiplied with a shading result or the like to obtain a surface layer color. The blend coefficients may be obtained, for example, by an inner product calculation circuit computing the inner products of the various vectors contained in the vertex data. Since the inner products of various vectors are also computed when the shading means described later performs its calculations, those inner products may be reused as blend coefficients as appropriate. The blend coefficients may be input to the layer coefficient blending means described later via a bus or the like, or may be stored in a memory or the like and read out as needed.
[0017] The synthesized "single color" means the calculation result, such as the luminance, for a given pixel of the multilayer reflection shading image. Each constituent means of the image generation apparatus (1) according to the first embodiment of the present invention is described below.
[0018] [Layer color generation means]
The layer color generation means (2) is means for receiving the shading results of a single light source or multiple light sources for a specific pixel and the surface layer setting values indicating to which surface layer each shading result belongs, and generating surface layer colors for at least two layers. "Generating a surface layer color" includes selecting shading results to obtain a surface layer color. The shading results of a single light source or multiple light sources for a specific pixel are obtained by shading means in the computer, described later, and are transmitted to the layer color generation means (2) via, for example, a bus. Shading results that take light from multiple light sources into account are preferred. The surface layer setting values may be input to the layer color generation means by, for example, the CPU reading them from a memory storing them or from a program in the main memory and transmitting them via a bus or the like.
[0019] The layer color generation means (2) may be implemented by hardware, such as a circuit that receives the shading results of a single light source or multiple light sources for a specific pixel and the surface layer setting values indicating to which surface layer each shading result belongs and generates surface layer colors for at least two layers; by software, such as a program that causes a computer to function as such means; or by both hardware and software. The hardware may comprise a control unit, an arithmetic unit, a storage unit, an input/output unit, and a system bus connecting them. When predetermined information is input to the input unit, the arithmetic unit, in response to a command from the control unit, reads the input data and the data and surface layer setting values stored in the storage unit, uses, for example, the memory of the storage unit as a work area, and performs predetermined calculations. The calculation results are, for example, temporarily stored in the memory and then output from the output unit. These data may be transmitted via a bus, for example.
[0020] The layer color generation means (2) may comprise light selection setting means for outputting assignment commands for assigning the input shading results to the surface layers using the surface layer setting values, and layer color selection means for selecting the input shading results for each surface layer in accordance with the assignment commands from the light selection setting means. Upon receiving the shading results of a single light source or multiple light sources for a specific pixel and the surface layer setting values indicating to which surface layer each shading result belongs, the light selection setting means assigns the input shading results to the surface layers using the surface layer setting values, and the layer color selection means selects the input shading results for each surface layer in accordance with the assignment commands from the light selection setting means, thereby generating surface layer colors for at least two layers. For example, when eight shading results A to H are input from the shading means, surface layer setting values assigning A, B, D, F, and H to the first surface layer, A, C, E, and G to the second surface layer, and B and H to the third surface layer may be read out from memory, and each layer color generated (selected) by assigning the shading results accordingly.
[0021] [Layer coefficient blending means]
The layer coefficient blending means (3) is means for receiving the surface layer colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing surface layer colors, combining a blend coefficient with each of the surface layer colors, and generating blended surface layer colors for at least two layers.
[0022] The layer coefficient blending means (3) may be implemented by hardware, such as a circuit that receives the surface layer colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing surface layer colors, combines a blend coefficient with each of the surface layer colors, and generates blended surface layer colors for at least two layers; by software, such as a program that causes a computer to function as such means; or by both hardware and software. The hardware may comprise a control unit, an arithmetic unit, a storage unit, an input/output unit, and a system bus connecting them. When predetermined information is input from the input unit, the arithmetic unit, in response to a command from the control unit, reads the input data and data such as the blend coefficients stored in the storage unit, uses, for example, the memory of the storage unit as a work area, and performs predetermined calculations. Specific calculations include multiplying a blend coefficient by a surface layer color and summing several calculation results. The calculation results are, for example, temporarily stored in the memory and then output from the output unit. These data may be transmitted via a bus, for example.
[0023] [Color blending means]
The color blending means (4) is means for synthesizing the blended surface layer colors for at least two layers output from the layer coefficient blending means into a single color. This determines the color or luminance of a given pixel.
[0024] The color blending means (4) may be implemented by hardware, such as a circuit for synthesizing the blended surface layer colors for at least two layers output from the layer coefficient blending means into a single color; by software, such as a program that causes a computer to function as such means; or by both hardware and software. The hardware may comprise a control unit, an arithmetic unit, a storage unit, an input/output unit, and a system bus connecting them. When predetermined information is input from the input unit, the arithmetic unit, in response to a command from the control unit, reads the input data and the data stored in the storage unit, uses, for example, the memory of the storage unit as a work area, and performs predetermined calculations. The calculation results are, for example, temporarily stored in the memory and then output from the output unit. These data may be transmitted via a bus, for example.
[0025] [Basic operation of the image generation apparatus]
The image generation apparatus (1) of the present invention obtains image data, for example, as follows. FIG. 2 is a flowchart for explaining the image generation method of the present invention. As shown in FIG. 2, the layer color generation means receives the shading results of a single light source or multiple light sources for the specific pixel and the surface layer setting values indicating to which surface layer each shading result belongs (step 1). Specifically, it receives the shading results obtained by the shading means, such as diffuse and specular, via a bus or the like. The CPU, in response to commands from a program stored in the main memory, reads the surface layer setting values stored in a memory or the like and transmits them to the layer color generation means via a bus or the like, and the layer color generation means receives the surface layer setting values transmitted in this way.
[0026] The layer color generation means then generates surface layer colors for at least two layers (for example, three layers) (step 2). Specifically, assignment commands for assigning to each surface layer are output using the surface layer setting values, and the input shading results are selected for each surface layer in accordance with the assignment commands, whereby a surface layer color (containing, for example, one or more shading results) is generated for each surface layer.
[0027] Next, the layer coefficient blending means receives the surface layer colors for at least two layers generated by the layer color generation means and the blend coefficients for synthesizing surface layer colors, combines a blend coefficient with each of the surface layer colors, and generates blended surface layer colors for at least two layers (step 3). Since the blend coefficients are stored in a memory or the like, for example as inner product values of predetermined vectors, the CPU, in response to commands from a program stored in the main memory, reads the blend coefficients from the memory and transmits them to the layer coefficient blending means via a bus or the like. The layer coefficient blending means then generates the blended surface layer colors by multiplying the received blend coefficients with the surface layer colors generated by the layer color generation means.
[0028] Next, the color blending means receives the blended surface layer colors for at least two layers output from the layer coefficient blending means and synthesizes them into a single color, for example by addition, thereby generating image data for computer graphics (step 4). Specifically, the sum of the surface layer colors may be computed using an adder circuit, an adder, or an addition program to synthesize the single color.
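Putting steps 1 to 4 together, a compact sketch of one pixel's flow might look like the following (reusing the illustrative Color, add, and layerColor helpers from the sketch in [0015]; the three-layer count and the per-layer scalar coefficient are assumptions for illustration):

```cpp
Color shadePixel(const Color shadingResults[8],     // step 1: per-light shading results
                 const std::uint8_t layerMasks[3],  // surface layer setting values
                 const float blendCoeffs[3]) {      // one blend coefficient per layer
    Color out{0, 0, 0};
    for (int layer = 0; layer < 3; ++layer) {
        // Step 2: distribute the shading results into this layer's color.
        Color c = layerColor(shadingResults, layerMasks[layer]);
        // Step 3: combine the layer color with its blend coefficient.
        float k = blendCoeffs[layer];
        c = {c.r * k, c.g * k, c.b * k};
        // Step 4: accumulate the blended layer colors into a single color.
        out = add(out, c);
    }
    return out;
}
```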
[0029] In this way, the shading results for a specific pixel are divided among a plurality of surface layers, combined with blend coefficients, and then merged into a single color, so that a color reflecting the differences between light sources can be calculated, and luminance calculations and the like can be performed effectively without considering light incident on pixels other than the specific pixel. As a result, the image generation apparatus of the present invention can compute shadows and shading at high speed.
[0030] [Computer]
FIG. 3 is a block diagram showing the basic configuration of a computer system according to an embodiment of the present invention. As shown in FIG. 3, the computer system 10 according to this embodiment comprises a CPU 11, a memory 12, an I/O 13, a graphics device 14, and a display 15. The CPU 11, memory 12, I/O 13, graphics device 14, and display 15 are connected to a system bus 16 and transfer data to one another. Since the graphics device 14 incorporates the image generation apparatus of the present invention described above, it performs the predetermined processing described above and can effectively obtain a shading image, and an image reflecting the shading is displayed on the display.
[0031] [Graphics device]
FIG. 4 is a block diagram showing a graphics device according to an embodiment of the present invention. This graphics device functions as the graphics device denoted by reference numeral 14 in FIG. 3, or as a part of it, and may be a component of a computer. As shown in FIG. 4, the graphics device according to this embodiment comprises a geometry engine (geometry calculation circuit, device, unit, etc.) 22 that receives vertex data 21 and performs geometric calculations; vertex data interpolation means (vertex data interpolation circuit, device, unit, etc.) 23 for interpolating vertex data; shading means (shading circuit, device, unit, etc.) 24 for performing shading calculations; texture means (texture synthesis circuit, device, unit, etc.) 25 for performing texture synthesis; a texture memory 26 for storing textures and the like; the layer color generation means 2; the layer coefficient blending means 3; and the color blending means 4, the elements being connected by a system bus 16 or the like.
[0032] [Basic operation of the graphics device]
The basic operation of the graphics device is described below. The geometry engine 22 is connected to the system bus 16 of the computer system 10 and receives vertex data 21 as input data. The geometry engine 22 performs geometric transformation on the vertex data, followed by perspective transformation and viewport mapping, and then outputs the transformed vertex data to the vertex data interpolation means 23. The vertex data interpolation means 23 interpolates the associated parameters over the interior of the polygon (usually a triangle) formed by the input vertex data, and outputs the results to the shading means 24 and the texture means 25.
[0033] The texture means 25 calculates an access address into the texture memory 26 from the u, v coordinates input from the vertex data interpolation means 23, and receives the corresponding data from the texture memory 26. The data input from the texture memory 26 is, for example, one or more of color data, bump mapping data, and tangent vector data. The texture means 25 determines whether the data input from the texture memory 26 is bump mapping or tangent vector data; if so, it outputs the data to the shading means 24. If the data input from the texture memory 26 is color data, it outputs the data to the color blending means 4.
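A minimal sketch of such an address calculation, assuming a simple linear (row-major) texel layout; the actual addressing scheme of the texture means is not specified in this document, and all names are illustrative:

```cpp
#include <cstdint>

// Address of the texel at integer coordinates (u, v) in a texture that is
// widthTexels wide and whose first texel is stored at `base`.
std::uint32_t texelAddress(std::uint32_t base, std::uint32_t u, std::uint32_t v,
                           std::uint32_t widthTexels, std::uint32_t bytesPerTexel) {
    return base + (v * widthTexels + u) * bytesPerTexel;
}
```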
[0034] The shading means 24 performs a per-pixel luminance calculation for each light source using the interpolated data input from the vertex data interpolation means 23 and the bump mapping data or tangent vector data input from the texture means 25, and outputs the results to the layer color generation means 2. Thereafter, color information for shading and the like is obtained, for example, in accordance with the operation of the image generation apparatus described above.
[0035] [Shading means]
A basic configuration example and operation example of the shading means 24 are described next. FIG. 5 is a block diagram showing shading means according to an embodiment of the present invention. As shown in FIG. 5, the shading means according to this embodiment comprises a system interface (I/F) 31; bump rotation, tangent rotation, and half vector generation means 32 for performing bump rotation, tangent rotation, half vector generation, and similar calculations; light attenuation means (light attenuation circuit, etc.) 33 for determining attenuation factors and the like; inner product calculation means (arithmetic circuits such as multiplication circuits) 34 for performing inner product calculations and the like; light shading means (arithmetic circuits, etc.) 35 (35a to 35d) for performing shading calculations; and light blending means 36 (36a to 36d) for obtaining shading results using the attenuation coefficients from the light attenuation means and the input values from the light shading means.
[0036] Each element constituting the shading means may be implemented by hardware such as circuits, by software such as a program, or by a combination of both, and each may be configured as a separate device, unit, or the like. A hardware implementation may comprise, for example, a control unit, an arithmetic unit, a storage unit, an input/output unit, and a system bus connecting them. When predetermined information is supplied to the input unit, the arithmetic unit, under command of the control unit, reads the input data or the data stored in the storage unit and performs the prescribed calculations, using, for example, the memory of the storage unit as a work area. The calculation results are temporarily stored in memory, for example, and then output from the output unit. These data may be transferred over a bus, for example.
[0037] [Shading process]
Next, an example of the shading process will be described. The shading means 24 performs luminance calculations, one per light vector, using the plurality of light vectors Ln (41), the view vector V (42), and the normal vector N (43) supplied by the vertex data interpolation means 23 (Fig. 4), together with the tangent vector T (44) and the bump vector B (45) supplied by the texture means 25 (Fig. 4), and outputs the results to the light blending means 36. In the example shown in Fig. 5 the number of light vectors Ln is four, so the luminance calculations and their output likewise cover four light sources; the number of light vectors is not, however, limited to four.
[0038] When the tangent vector T (44) or the bump vector B (45) supplied by the texture means 25 (Fig. 4) is valid, the normal vector N (43) supplied by the vertex data interpolation means 23 (Fig. 4) is rotated by the bump rotation, tangent rotation, and half vector generation means 32, and the result N' is output to the inner product calculation means 34. The bump rotation, tangent rotation, and half vector generation means 32 also generates the half vector H, the vector bisecting the sum of each light vector Ln (41) and the view vector V (42), and likewise outputs it to the inner product calculation means 34. The inner product calculation means 34 performs the calculations required for the luminance computation (mainly inner products) on the various input vectors; for example, the results of the four kinds of inner product calculations NV, VH, NH, and NL are output, for example as blend coefficients 46, to the per-light-source light shading means 35a to 35d. The blend coefficients 46 are also passed to the layer coefficient blending means 3. The light shading means 35a to 35d evaluate a bidirectional reflectance distribution function (BRDF) such as Phong shading, and output the results to the per-light-source light blending means 36a to 36d.
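By way of illustration only, this stage might be sketched in software as follows; the vector types and function names are illustrative and not part of the disclosure, which describes this stage as hardware means.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

struct InnerProducts { float NV, VH, NH, NL; };

// For one light vector L: build the half vector H bisecting L and the
// view vector V, then form the four inner products NV, VH, NH and NL
// that serve both as BRDF inputs and as blend coefficients (46).
InnerProducts computeInnerProducts(const Vec3& N, const Vec3& V, const Vec3& L) {
    Vec3 H = normalize({ L.x + V.x, L.y + V.y, L.z + V.z });
    return { dot(N, V), dot(V, H), dot(N, H), dot(N, L) };
}
```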
[0039] The light attenuation means 33 obtains the attenuation coefficients (48a to 48h) of the individual light vectors, using the light vectors Ln and information supplied via the system I/F 31, and outputs them to the per-light-source light blending means 36a to 36d. Attenuation coefficients may also be output to the color blending means as per-layer attenuation coefficients.
[0040] The per-light-source blending means 36a to 36d attenuate the outputs of the light shading means 35a to 35d according to the attenuation coefficients supplied by the light attenuation means 33, producing the shading results (47a to 47h), which are output to the layer color generation means. In the figure, 47a and 47b denote the diffuse and specular components of light source 0, 47c and 47d the diffuse and specular components of light source 1, 47e and 47f the diffuse and specular components of light source 2, and 47g and 47h the diffuse and specular components of light source 3. The inner product values NV, VH, NH, and NL output from the inner product calculation means 34 are output as blend coefficients 46 for the plurality of layers. As shown in the figure, the luminance results output from the light blending means 36a to 36d may be output separately as a diffuse reflection component and a specular reflection component, or as their combination. The diffuse and specular values of a single light source may each be represented by 8 bits per RGB channel.
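The attenuation step admits a similarly hedged sketch; the coefficients attDiffuse and attSpecular below stand in for the per-light attenuation coefficients 48a to 48h, and the type names are illustrative.

```cpp
struct RGB { float r, g, b; };

struct LightShadingResult { RGB diffuse, specular; };

// Attenuate one light source's BRDF output by its two attenuation
// coefficients, yielding one pair of the shading results 47a..47h.
LightShadingResult attenuate(const LightShadingResult& in,
                             float attDiffuse, float attSpecular) {
    return {
        { in.diffuse.r * attDiffuse,   in.diffuse.g * attDiffuse,   in.diffuse.b * attDiffuse },
        { in.specular.r * attSpecular, in.specular.g * attSpecular, in.specular.b * attSpecular },
    };
}
```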
[0041] [Layer color generation means, layer coefficient blending means]
Fig. 6 is a block diagram illustrating the layer color generation means 2 and the layer coefficient blending means 3. In this embodiment, the layer color generation means 2 comprises light selection setting means 51, which uses the surface layer setting values to output assignment commands for assigning the input shading results to the individual surface layers, and layer color selection means 52, which selects the input shading results for each surface layer in accordance with the assignment commands from the light selection setting means. In this embodiment, the layer color generation means 2 assigns the luminance results (47a to 47h) for the plurality of light sources supplied by the shading means 24 to a plurality of surface layers that approximate the internal scattering at the object surface, and outputs a surface layer color (61a to 61f) per surface layer. The layer coefficient blending means 3, in turn, multiplies the shading results assigned to each layer by the blend coefficients, for example, and outputs the blended per-layer surface colors (62a to 62f) to the color blending means 4 (Fig. 4).
[0042] The shading results (47a to 47h), i.e. the diffuse and specular luminance results for the four light sources supplied by the shading means 24, are thus assigned by the layer color generation means 2 to surface layers that approximate, for example, the internal scattering at the object surface.
[0043] The surface layer setting values are stored, for example, in a program in main memory, in a dedicated memory, or in a table; the CPU or the like, under command of the program in main memory, reads out the surface layer setting information as needed and passes it to the light selection setting means 51 via the bus 16 and the system I/F 53. Using these surface layer setting values, the light selection setting means 51 outputs assignment commands for the individual surface layers to the layer color selection means 52. An assignment command may state, for example, that shading results 47a, 47b, and 47h go to the first layer. In this way the surface layer colors (61a to 61f) are output per surface layer. In the figure, 61a and 61b denote the diffuse and specular components of layer 0, 61c and 61d the diffuse and specular components of layer 1, and 61e and 61f the diffuse and specular components of layer 2.
[0044] Since the number of surface layers in this embodiment is three, there may be three light selection setting means 51 (Lsel0, Lsel1, Lsel2). That is, there may be as many light selection setting means 51 as there are surface layers, there may be a single one, or there may be more than the number of surface layers. The number of surface layers itself may be any number of two or more and is not limited to three. Each light selection setting means 51 in this embodiment consists, for example, of 4 bits, each bit indicating whether the corresponding light source is assigned to that surface layer. The assignment of light sources to each surface layer may be carried out per RGB component, for example as follows. The description below covers only layer 0, but the same expressions apply to layer 1 and layer 2.
[0045] That is, the diffuse component 61a of layer 0 may be any one of, or the sum of two or more of, the diffuse components of light sources 0 to 3 (47a, 47c, 47e, and 47g), and the specular component 61b of layer 0 may be any one of, or the sum of two or more of, the specular components of light sources 0 to 3 (47b, 47d, 47f, and 47h). Which components are adopted is determined by the assignment commands based on the surface layer setting values. The same applies to the other layers. In other words, the per-light-source light components are converted into per-layer light components.
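An illustrative software reading of this selection, continuing the types of the earlier sketches, is shown below; the 4-bit mask lsel stands in for a surface layer setting value such as Lsel0 (an assumed encoding).

```cpp
#include <array>
#include <cstdint>

// Shading results 47a..47h for the four light sources.
struct LightOutputs { std::array<RGB, 4> diffuse, specular; };

// One layer's color selection: bit n of lsel set means light source n
// contributes to this layer.
LightShadingResult selectLayerColor(const LightOutputs& lights, std::uint8_t lsel) {
    LightShadingResult layer{ { 0, 0, 0 }, { 0, 0, 0 } };
    for (int n = 0; n < 4; ++n) {
        if (lsel & (1u << n)) {                       // selector 71: pass source n
            layer.diffuse.r  += lights.diffuse[n].r;  // adder 72 sums the
            layer.diffuse.g  += lights.diffuse[n].g;  // selected sources
            layer.diffuse.b  += lights.diffuse[n].b;
            layer.specular.r += lights.specular[n].r;
            layer.specular.g += lights.specular[n].g;
            layer.specular.b += lights.specular[n].b;
        }
    }
    return layer;
}
```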
[0046] Fig. 7 is a block diagram showing an example of the layer color generation means (or a part thereof); it shows the portion of Fig. 6 that constitutes the layer color generation means, or the layer color selection means, for layer 0. As shown in Fig. 7, this layer color generation means comprises selectors 71, such as AND gates, and adders 72, such as adder circuits; it is connected to each light selection setting means 51 by a system bus or the like, and likewise to each light blending means. A selector may be implemented so as to select and pass only preset data, or so as to pass specific data in response to a selection command from the light selection setting means 51. Each selector (eight in the figure) may receive a different one of the shading results 47a to 47h, a given shading result may be input to two or more selectors, or all shading results may be input to all selectors.
[0047] Then, for example, each selector decides whether to output its input signal in response to the command from the respective light selection setting means 51, and the output signals are summed by an adder circuit or the like and output as the surface layer colors 61a, 61b. For example, at the selector 71a, which receives the shading result 47a and the command from the light selection setting means 51, the shading result 47a is output from the selector when the command is ON (1) and is not output when the command is OFF (0). The other selectors operate in the same way. Then, for example, the selected diffuse values are output to the adder 72a, summed by it, and output as, for example, the diffuse component 61a of layer 0 (one component of the surface layer color of surface layer 0).
[0048] The color components combined per surface layer by the layer color selection means 52 are next input to the layer coefficient blending means 3, where the coefficient calculations for combining the surface layers are performed, for example according to the blend coefficients 46 output by the inner product calculation means 34. The operation of the layer coefficient blending means 3 is described in detail below with reference to Fig. 8, a block diagram showing an example configuration of the layer coefficient blending means. As shown in Fig. 8, the layer coefficient blending means may be implemented, for example, by RAM tables and multiplication circuits.
[0049] As blend coefficients in the layer coefficient blending means, the following may be used: the inner product NV of the normal vector N and the view vector V; the inner product NH of the normal vector N and the half vector bisecting the view vector V and the light vector L; the inner product VH of the view vector V and the half vector bisecting the view vector V and the light vector L; or the inner product NL of the normal vector N and the light vector L. A single kind of blend coefficient may be used, but it is preferable that several kinds be prepared and blended as appropriate.
[0050] In the layer coefficient blending means, the color of each surface layer is multiplied by coefficients such as the blend coefficients, for example as follows.
[0051] For layer 0, the diffuse output (62a) is the product of α0 and the layer 0 diffuse input (61a), and the specular output (62b) is the product of α0 and the layer 0 specular input (61b).
[0052] For layer 1, the diffuse output (62c) is the product of (1 - α0), α1, and the layer 1 diffuse input (61c), and the specular output (62d) is the product of (1 - α0), α1, and the layer 1 specular input (61d).
[0053] For layer 2, the diffuse output (62e) is the product of (1 - α0), (1 - α1), and the layer 2 diffuse input (61e), and the specular output (62f) is the product of (1 - α0), (1 - α1), and the layer 2 specular input (61f).
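Taken together, paragraphs [0051] to [0053] amount to a simple weighting scheme, restated below in software (names illustrative, continuing the earlier types).

```cpp
struct BlendedLayers { LightShadingResult layer0, layer1, layer2; };

static RGB scale(const RGB& c, float k) { return { c.r * k, c.g * k, c.b * k }; }

// Weights per [0051]-[0053]: layer 0 gets alpha0, layer 1 gets
// (1 - alpha0) * alpha1, layer 2 gets (1 - alpha0) * (1 - alpha1).
BlendedLayers blendLayerCoefficients(const LightShadingResult& l0,
                                     const LightShadingResult& l1,
                                     const LightShadingResult& l2,
                                     float alpha0, float alpha1) {
    float w0 = alpha0;
    float w1 = (1.0f - alpha0) * alpha1;
    float w2 = (1.0f - alpha0) * (1.0f - alpha1);
    return {
        { scale(l0.diffuse, w0), scale(l0.specular, w0) },
        { scale(l1.diffuse, w1), scale(l1.specular, w1) },
        { scale(l2.diffuse, w2), scale(l2.specular, w2) },
    };
}
```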
[0054] Here, α0 and α1 are expressed as functions taking the blend coefficients B0 and B1 (46a, 46b) as arguments: α0 = f0(B0), α1 = f1(B1), and so on. In this embodiment, inner product values such as VH (VH0 to VH3 per light source), NV, NL (NL0 to NL3 per light source), or NH (NH0 to NH3 per light source) may be used as B0 and B1 (that is, as any of the blend coefficients). When VH is used as the blend coefficient, the layers are blended in relation to the light source and the line of sight; when NV is used, they are blended in relation to the object normal and the line of sight.
[0055] In the example of Fig. 8, the functions f0(B0) and f1(B1) above are implemented as RAM tables. The RAM tables 81, 82 are preferably provided per RGB channel, for example. That is, the layer coefficient blending means according to this aspect comprises: RAM tables 81, 82, provided per blend coefficient and per RGB channel, to which the respective blend coefficient is input; and multipliers 83, such as multiplication circuits, for multiplying the outputs (61a to 61f) of the layer color generation means by the coefficients corresponding to the blend coefficients. The coefficients corresponding to the blend coefficients (α, 1 - α, and so on) are read from the RAM tables, and the multipliers multiply the outputs of the layer color generation means by them. The elements are connected by a bus or the like so that signals can be exchanged.
[0056] The blend coefficients B0 (46a) and B1 (46b) are connected to the address lines of the independent RAM tables 81 and 82, respectively, which output the values α and 1 - α corresponding to B0 (46a) and B1 (46b) according to values set in the RAM in advance. In a preferred embodiment of the present invention, the inverted value of α is output as an approximation of 1 - α, but the invention is not limited to this. Preferably, one of the blend coefficients, such as an inner product calculated by the inner product calculation means, underlies α or 1 - α. The multipliers 83 then multiply the per-layer shading results (surface layer colors), such as the diffuse and specular values supplied by the layer color selection means 52, by α and 1 - α, so that the blend coefficients are reflected in the diffuse and specular values of each layer. In this way the blended per-layer surface colors (62a to 62f) are obtained.
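A minimal software model of one such RAM table might read as follows, assuming 8-bit fixed-point values and reading the "inverted value" as the bitwise complement (both assumptions; the disclosure fixes neither the word width nor the exact inversion).

```cpp
#include <array>
#include <cstdint>
#include <utility>

// One RAM table (81 or 82): the blend coefficient B addresses the table,
// which returns alpha; 1 - alpha is approximated by the bitwise
// complement ~alpha, i.e. 255 - alpha in 8-bit fixed point.
struct AlphaTable {
    std::array<std::uint8_t, 256> ram{};  // contents are set in advance

    std::pair<std::uint8_t, std::uint8_t> lookup(std::uint8_t b) const {
        std::uint8_t alpha = ram[b];
        auto oneMinusAlpha = static_cast<std::uint8_t>(~alpha);
        return { alpha, oneMinusAlpha };
    }
};
```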
[0057] After the diffuse and specular values of each surface layer have been multiplied by the blend coefficients, the blended per-layer surface colors (62a to 62f) are composed by the color blending means 4 into the RGB values, or the like, representing the final color of the single pixel. The operation of the color blending means 4 is described in detail below with reference to Figs. 9 and 10.
[0058] [Color blending means]
Fig. 9 is a block diagram showing an example of the color blending means, and Fig. 10 is a block diagram showing the circuit configuration of the layer 0 blend inside the color blending means. As shown in Fig. 9, the color blending means may comprise per-layer blending means 86a to 86c, connected to the preceding means via the system interface 85, and an adder circuit 87. The blend section for each layer may comprise a multiplier 88 and an adder 89, as shown in Fig. 10.
[0059] In the blending means 86a to 86c, the per-layer diffuse and specular values are multiplied by attenuation adjustment values such as attenuation factors (obtained, for example, by the light attenuation means 33 and input to the blending means 86a to 86c via the system bus 16 and the system I/F 85); the diffuse and specular values are then added, and finally the color values of all surface layers are summed by the adder 87. The result may also be composed with the texture color input to the blending means 86a to 86c from the texture means 25 (Fig. 4) via the bus 16 or the like. Normally the texture color affects only the diffuse component, not the specular component; this is why the color of each layer was kept separated into diffuse and specular up to the data input stage of the color blending means 4. That is, in a preferred aspect of the present invention, the texture color is input to the color blending means 4 but not to the layer color generation means 2 or the layer coefficient blending means 3. If the texture color is composed immediately after the shading means 24, there is no need to separate diffuse and specular as in this embodiment.
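An illustrative software restatement of this final composition, continuing the earlier types, is given below; the per-layer array attn is an assumed stand-in for the attenuation adjustment values.

```cpp
// Final composition per [0059]: per layer, the texture color modulates
// the diffuse term only, attn[i] scales the layer, and all layers are
// summed into the pixel color (adder 87).
RGB blendColors(const BlendedLayers& layers, const RGB& texColor,
                const float (&attn)[3]) {
    const LightShadingResult* l[3] = { &layers.layer0, &layers.layer1, &layers.layer2 };
    RGB sum{ 0, 0, 0 };
    for (int i = 0; i < 3; ++i) {
        sum.r += attn[i] * (l[i]->diffuse.r * texColor.r + l[i]->specular.r);
        sum.g += attn[i] * (l[i]->diffuse.g * texColor.g + l[i]->specular.g);
        sum.b += attn[i] * (l[i]->diffuse.b * texColor.b + l[i]->specular.b);
    }
    return sum;
}
```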
[0060] [Another embodiment]
Next, an image generation apparatus of the present invention according to an embodiment different from those described so far will be explained. Fig. 11 is a block diagram of an image generation apparatus in which the per-light-source shading results and blend coefficients output from the shading means are processed as serial data and the number of surface layers is four. This embodiment comprises: layer color selection means; α/1-α generation means for generating αn and 1-αn; intermediate register t update means for updating a given value t based on the 1-αn generated by the α/1-α generation means; t value storage means, such as a buffer, for storing the t value; multiplication means, such as a multiplication circuit, for multiplying together the output of the layer color generation means, the αn generated by the α/1-α generation means, and the t value stored by the t value storage means; addition means, such as an adder circuit, for adding the output value Mn from the multiplication means and the initial value clear; Csum value storage means, such as a buffer, for storing the value added by the addition means; and control means for obtaining the initial value init and supplying it to the intermediate register t update means, and for obtaining the initial value clear and supplying it to the addition means. The layer color selection means functions as the layer color generation means; the α/1-α generation means, the intermediate register t update means, and the multiplication means function as the layer coefficient blending means; and the multiplication means and the addition means function as the color blending means.
[0061] With the image generation apparatus according to this embodiment, the final RGB data generation rate per unit time decreases, but the same processing can be performed with fewer computing resources. The combination of the blend coefficients with the surface layer colors is implemented with only three multipliers, one per RGB channel, and the means for composing the surface layer colors into a single color consists, per RGB channel, of adding the input value to its own accumulated value. In this example, the diffuse and specular values are combined per RGB channel at the output stage of the shading means, so that one set of RGB data per light source is input to the layer blending means.
[0062] In Fig. 11, the input signal LC is the RGB data input of the per-light-source shading results supplied serially by the shading means, while BF is the coefficient information (NV or the like) for composing the layers. LC and BF carry valid data while the VALID_IN signal is 1, and the final RGB color output Csum carries valid data while VALID_OUT is 1.
[0063] Fig. 12 shows the flow of the blend processing for one pixel. In this example, while VALID_IN is 1, the data LC0 to LC3 for the four light sources and the blend coefficients BF0 to BF2 for the layers are first input to the layer color selection means. The layer color selection means converts the four light sources' data into per-layer data in accordance with the preset surface layer setting values and outputs it to the multiplication means. BF0 to BF2 are input to the α/1-α generation means, which outputs the values αn and 1-αn by the same operation as the RAM tables (81, 82) of Fig. 8. At this time, the signal Ln supplied by the control means to the α/1-α generation means determines for which layer the values αn and 1-αn are output. Next, the layer data output from the layer color selection means and the αn output from the α/1-α generation means are input to the multiplication means, and 1-αn is input to the intermediate register t update means. The intermediate register t update means, after being initialized to 1 by the init signal from the control means, multiplies the value stored by the t value storage means by the incoming 1-αn and stores the result in the t value storage means as the new t value. The multiplication means multiplies the layer data by αn and t and outputs the product as Mn. The addition means, after the Csum storage means has been initialized to 0 by the clear signal, adds each incoming Mn to the value stored by the Csum storage means and stores the result there as the new value. The addition means then outputs the value Csum stored by the Csum storage means as the final RGB color.
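The serial data path of Figs. 11 and 12 can be mimicked in software as a single accumulation loop; in the sketch below (names illustrative, continuing the earlier types), t plays the role of the intermediate register and csum that of the Csum storage means. Run over the four layers, the loop reproduces the cycle table of paragraph [0065] below.

```cpp
// layerData[n] is the per-layer RGB data Ln; alpha[n] is produced by the
// alpha/1-alpha generation means.  The last layer is weighted by t alone,
// as in Cycle 3 (M3 = L3 * t).
RGB serialBlend(const RGB (&layerData)[4], const float (&alpha)[3]) {
    float t = 1.0f;                           // intermediate register t (init)
    RGB csum{ 0, 0, 0 };                      // Csum storage means (clear)
    for (int n = 0; n < 4; ++n) {
        float a = (n < 3) ? alpha[n] : 1.0f;  // no alpha factor for the last layer
        csum.r += layerData[n].r * t * a;     // Mn accumulated into Csum
        csum.g += layerData[n].g * t * a;
        csum.b += layerData[n].b * t * a;
        if (n < 3) t *= (1.0f - alpha[n]);    // t update for the next cycle
    }
    return csum;                              // final RGB when VALID_OUT = 1
}
```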
[0064] Next, an example of the cycle-by-cycle operation when processing one pixel is shown. The initial state is t = 1, Csum = 0. The value calculated by the multiplication means, the t value held in the t storage means, and the final RGB color Csum held in the Csum storage means develop per cycle as follows; during Cycle 4, VALID_OUT = 1 is output.
[0065]
          Multiplication means    t storage means               Final RGB color
Cycle 0:  M0 = L0 × t × α0        t = t × (1 - α0)              Csum = 0
Cycle 1:  M1 = L1 × t × α1        t = t × (1 - α1)              Csum = Csum + M0
Cycle 2:  M2 = L2 × t × α2        t = t × (1 - α2)              Csum = Csum + M1
Cycle 3:  M3 = L3 × t             t = 1 [initialized by init]   Csum = Csum + M2
Cycle 4:                                                        Csum = Csum + M3
(At Cycle 4, VALID_OUT = 1)
[0066] As a result, the following data is output as the final RGB color Csum when VALID_OUT = 1:

Csum = L0 × α0 + L1 × (1 - α0) × α1 + L2 × (1 - α0) × (1 - α1) × α2 + L3 × (1 - α0) × (1 - α1) × (1 - α2)

Here, Ln is the data for each layer, and α0, α1, and α2 are the values generated by the α/1-α generation means.
[0067] [Program]
The image generation apparatus of the present invention is basically implemented in hardware, but it may also be implemented in software. Such software is a computer graphics program that causes a computer to function as: layer color generation means for receiving the shading results of a single light source or a plurality of light sources for a specific pixel, together with surface layer setting values indicating to which surface layer each shading result belongs, and generating surface layer colors for at least two layers; layer coefficient blending means for receiving the surface layer colors for at least two layers generated by the layer color generation means and the blend coefficients for composing the surface layer colors, combining the blend coefficients with each of the surface layer colors, and generating blended surface layer colors for at least two layers; and color blending means for composing the blended surface layer colors for at least two layers output from the layer coefficient blending means into a single color. Each means obtained by this program has the same functions as described above, so the above description applies here mutatis mutandis.
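Under the assumptions of the earlier sketches (three layers, four light sources, illustrative names throughout), such a program might be organized as follows.

```cpp
// End-to-end sketch for one pixel, tying the earlier pieces together.
RGB shadePixel(const LightOutputs& lights,
               const std::uint8_t (&lsel)[3],  // surface layer setting values
               float alpha0, float alpha1,     // derived from blend coefficients
               const RGB& texColor,
               const float (&attn)[3]) {
    // Layer color generation means: distribute per-light results to layers
    LightShadingResult l0 = selectLayerColor(lights, lsel[0]);
    LightShadingResult l1 = selectLayerColor(lights, lsel[1]);
    LightShadingResult l2 = selectLayerColor(lights, lsel[2]);
    // Layer coefficient blending means: weight each layer
    BlendedLayers blended = blendLayerCoefficients(l0, l1, l2, alpha0, alpha1);
    // Color blending means: compose into a single color
    return blendColors(blended, texColor, attn);
}
```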
[0068] [Recording medium]
The present invention can also provide a computer-readable recording medium storing the above program. Such recording media include CD-ROMs, DVDs, hard disks, and memory within a computer. When predetermined information is input to the input unit of the computer, the program is read under command of the control unit, and the arithmetic unit, likewise under command of the control unit, uses the read program to read the input data and the data stored in the storage unit, and performs the prescribed calculations using, for example, the memory of the storage unit as a work area. The calculation results are temporarily stored in memory, for example, and then output from the output unit. These data may be transferred over a bus, for example. In this way, processing is carried out through the cooperation of the hardware resources and the program.
[0069] [Computer configuration]
Fig. 13 is a block diagram showing an embodiment (a computer) of the present invention. This embodiment relates to a computer for computer graphics (a graphics computer or the like). As shown in Fig. 13, the computer 101 comprises a central processing unit (CPU) 102, a geometry calculation unit such as a geometry calculation circuit 103, a drawing unit such as a renderer 104, a texture generation unit such as a texture generation circuit 105, an illumination processing unit such as an illumination processing circuit 107, a display information creation unit such as a display circuit 108, a frame buffer 109, and a monitor 110. These elements are connected by buses or the like and can exchange data with one another. The computer may further have a storage section comprising a main memory (not shown), various tables, a work memory 111 serving as a work area, a texture memory 112 for storing textures, and the like. The hardware constituting each unit is connected, for example, via buses. The storage section may be constituted by RAM such as VRAM, or by CD-ROM, DVD, hard disk, and the like.
[0070] The central processing unit (CPU) 102 is a device for controlling the program for generating images and the like. The work memory 111 may store data used by the CPU 102, display lists, and the like. The CPU 102 may read programs stored in main memory and perform predetermined processing, although the predetermined processing may also be performed by hardware alone. The CPU 102 reads, for example, polygon data as three-dimensional object data in world coordinates from the work memory 111 and outputs the polygon data to the geometry calculation circuit 103. Concretely, it may comprise a main processor, a coprocessor, a data processing processor, four-function arithmetic circuits, general-purpose arithmetic circuits, and the like, connected by buses or the like so that signals can be exchanged. A data decompression processor for decompressing compressed information may also be provided.
[0071] The geometry calculation circuit 103 is a circuit for applying coordinate transformations and the like to the input polygon data, converting it into data in a viewpoint coordinate system whose origin is the viewpoint. The geometry calculation circuit 103 outputs the processed polygon data to the renderer 104. Concrete examples of the geometry calculation circuit include a geometry processor, a coprocessor, a data processing processor, a four-function arithmetic circuit, or a general-purpose arithmetic circuit connected to the main processor via a bus or the like.
[0072] The renderer 104 is a circuit or device for converting polygon-unit data into pixel-unit data. The renderer 104 outputs the pixel-unit data to the texture generation circuit 105. Concrete examples of the renderer 104 include a data processing processor, a four-function arithmetic circuit, or a general-purpose arithmetic circuit connected to the main processor via a bus or the like.
[0073] The texture generation circuit 105 is a circuit for generating per-pixel texture colors based on the texture data stored in the texture memory 112. The texture generation circuit 105 outputs the pixel-unit data carrying texture color information to the illumination processing circuit 107. Concrete examples of the texture generation circuit 105 include a data processing processor, a four-function arithmetic circuit, or a general-purpose arithmetic circuit connected to the main processor via a bus or the like.
[0074] The illumination processing circuit 107 is a circuit for shading the polygons carrying texture color information, per pixel, using normal vectors, barycentric coordinates, and the like. The illumination processing circuit 107 outputs the shaded image data to the display circuit 108. Concrete examples of the illumination processing circuit 107 include a data processing processor, a four-function arithmetic circuit, or a general-purpose arithmetic circuit connected to the main processor via a bus or the like; information about the light may be read as needed from tables or the like stored in memory in order to perform the shading.
[0075] The display circuit 108 is a circuit for writing the image data input from the illumination processing circuit 107 into the frame buffer 109, reading out the image data written into the frame buffer 109, and controlling it to obtain display image information. The display circuit 108 outputs the display image information to the monitor 110. Concrete examples of the display circuit include a drawing processor, a data processing processor, a four-function arithmetic circuit, or a general-purpose arithmetic circuit connected to the main processor via a bus or the like.
[0076] The monitor 110 is a device for displaying computer graphics images in accordance with the input display image information.
[0077] Since the computer of the present invention includes the image generation apparatus of the present invention in an illumination processing unit such as the illumination processing circuit, it can generate shaded images and the like effectively.
[0078] [Computer operation]
An example of the operation of generating an image using the computer is described below. The CPU 102 reads polygon data from the work memory 111 and outputs it to the geometry calculation circuit 103. The geometry calculation circuit 103 performs processing such as coordinate transformation of the input polygon data into a viewpoint coordinate system whose origin is the viewpoint, and outputs the processed polygon data to the renderer 104. The renderer 104 converts the polygon-unit data into pixel-unit data and outputs it to the texture generation circuit 105. The texture generation circuit 105 generates per-pixel texture colors based on the texture data stored in the texture memory 112 and outputs the pixel-unit data carrying texture color information to the illumination processing circuit 107. The illumination processing circuit 107 shades the polygons carrying texture color information, per pixel, using normal vectors, barycentric coordinates, and the like, and outputs the shaded image data to the display circuit 108. The display circuit 108 writes the image data input from the illumination processing circuit 107 into the frame buffer 109, reads out the image data written into the frame buffer 109 to obtain display image information, and outputs the display image information to the monitor 110. The monitor 110 displays a computer graphics image in accordance with the input display image information.
[0079] Since the computer of the present invention includes the image generation apparatus of the present invention in an illumination processing unit such as the illumination processing circuit, it can perform the luminance calculations and the like effectively without having to consider light incident on pixels other than the specific pixel, and can therefore compute shading and the like at high speed.
[0080] [Game machine configuration]
Fig. 14 is a block diagram of an embodiment (a game machine) of the present invention. The embodiment represented by this block diagram can be used particularly advantageously as a portable, home, or arcade game machine, and is therefore described below as a game machine. The game machine shown in the figure need only include at least the processing unit 200 (or the processing unit 200 and the storage unit 270, or the processing unit 200, the storage unit 270, and the information storage medium 280); the other blocks (for example the operation unit 260, display unit 290, sound output unit 292, portable information storage device 294, and communication unit 296) may be treated as optional components.
[0081] The processing unit 200 performs various kinds of processing such as control of the entire system, issuing of commands to the blocks within the system, game processing, image processing, and sound processing. The functions of the processing unit 200 can be realized by hardware such as various processors (CPU, DSP, etc.) or an ASIC (gate array, etc.), or by a given program (game program).
[0082] The operation unit 260 is used by the player to input operation data. Its functions can be realized, for example, by a controller provided with levers, buttons, a housing, and the associated hardware. In the case of a portable game machine in particular, the operation unit 260 may be formed integrally with the game machine body. Processing information from the controller is conveyed to the main processor or the like via a serial interface (I/F) or a bus.
[0083] The storage unit 270 serves as a work area for the processing unit 200, the communication unit 296, and the like, and may also store programs, various tables, and so on. The storage unit 270 may include, for example, a main memory 272, a frame buffer 274, and a texture storage unit 276, and may additionally store various tables and the like. The functions of the storage unit 270 can be realized by hardware such as ROM and RAM. As the RAM, VRAM, DRAM, SRAM, or the like may be used, selected as appropriate for the application. The VRAM or the like constituting the frame buffer 274 is used as a work area for the various processors.
[0084] The information storage medium (a storage medium usable by a computer) 280 stores information such as programs and data, and may be sold as a so-called game cartridge or the like. Its functions can be realized by hardware such as an optical disc (CD, DVD), a magneto-optical disc (MO), a magnetic disk, a hard disk, a magnetic tape, or a memory (ROM). The processing unit 200 performs its various kinds of processing based on the information stored in the information storage medium 280, which stores the information (a program, or a program and data) for executing the means of the present invention (this embodiment), in particular the blocks included in the processing unit 200. If information such as programs and data is stored in the storage unit, the information storage medium 280 is not strictly necessary. Part or all of the information stored in the information storage medium 280 is transferred to the storage unit 270, for example when power is applied to the system. The information stored in the information storage medium 280 may include at least two of the following: program code for performing the prescribed processing, image data, sound data, shape data of display objects, table data, list data, information for directing the processing of the present invention, information for performing processing in accordance with those directions, and so on.
[0085] The display unit 290 outputs the images generated according to this embodiment; its functions can be realized by hardware such as a CRT (cathode-ray tube), LCD (liquid crystal display), OEL (organic electroluminescent display), PDP (plasma display panel), or HMD (head-mounted display).
[0086] The sound output unit 292 outputs sound. Its functions can be realized by hardware such as speakers. The sound is processed, for example, by a sound processor connected to the main processor via a bus, and is output from a sound output unit such as a speaker.
[0087] The portable information storage device 294 stores the player's personal data, save data, and the like. Examples of the portable information storage device 294 include memory cards and portable game devices. Its functions can be achieved by known storage means such as a memory card, flash memory, hard disk, or USB memory.
[0088] The communication unit 296 is an optional unit that performs various kinds of control for communicating with the outside (for example a host device or another image generation system). Its functions can be realized by hardware such as various processors or a communication ASIC, or by programs.
[0089] The program or data for running the game machine may be distributed to the information storage medium 280 from an information storage medium of a host device (server) via a network and the communication unit 296.
[0090] The processing unit 200 may include a game processing unit 220, an image processing unit 230, and a sound processing unit 250. Concretely, these may comprise a main processor, a coprocessor, a geometry processor, a drawing processor, a data processing processor, four-function arithmetic circuits, or general-purpose arithmetic circuits, connected as appropriate by buses or the like so that signals can be exchanged. A data decompression processor for decompressing compressed information may also be provided.
[0091] The game processing unit 220 performs various kinds of game processing based on operation data from the operation unit 260, personal data and save data from the portable information storage device 294, the game program, and so on, such as: accepting coins (payment); setting the various modes; advancing the game; setting selection screens; determining the position and rotation angle (rotation angle about the X, Y, or Z axis) of objects; making objects move (motion processing); determining the viewpoint position (virtual camera position) and view angle (virtual camera rotation angle); placing objects such as map objects in the object space; hit checking; computing game results (outcomes, scores); processing for allowing a plurality of players to play in a common game space; and game-over processing.
[0092] The image processing unit 230 performs various kinds of image processing in accordance with instructions from the game processing unit 220 and the like, and the sound processing unit 250 likewise performs various kinds of sound processing in accordance with instructions from the game processing unit 220.
[0093] The functions of the game processing unit 220, the image processing unit 230, and the sound processing unit 250 may all be realized in hardware, all in a program, or by a combination of hardware and a program. The image processing unit 230 may include a geometry calculation unit 232 (three-dimensional coordinate calculation unit) and a drawing unit 240 (rendering unit).
[0094] The geometry calculation unit 232 performs various kinds of geometry calculation (three-dimensional coordinate calculation) such as coordinate transformation, clipping, perspective transformation, and light source calculation. The object data after geometry processing (after perspective transformation), such as object vertex coordinates, vertex texture coordinates, and luminance data, is then stored and saved, for example, in the main memory 272 of the storage unit 270.
[0095] The drawing unit 240 draws objects into the frame buffer 274 based on the object data after the geometry calculations (after perspective transformation) and the textures stored in the texture storage unit 276.
[0096] The drawing unit 240 may include, for example, a texture mapping unit 242 and a shading processing unit 244. Concretely, it can be implemented as a drawing processor. The drawing processor is connected via a bus or the like to the texture storage unit, various tables, the frame buffer, VRAM, and so on, and is further connected to the display.
[0097] The texture mapping unit 242 reads an environment texture from the texture storage unit 276 and maps the read environment texture onto an object.
[0098] The shading processing unit 244 performs shading processing on objects. For example, the geometry processing unit 232 performs the light-source calculation, obtaining the luminance (RGB) of each vertex of an object from the light-source information for shading, the lighting model, the normal vector of each vertex, and so on. Based on these vertex luminances, the shading processing unit 244 then obtains the luminance of each dot on a primitive surface (polygon or curved surface) by, for example, Phong shading or Gouraud shading.
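As an illustration of the Gouraud variant, here is a minimal sketch under assumed conventions (hypothetical names; not the actual shading circuit): a diffuse light-source calculation gives each vertex a luminance, and each dot inside the triangle receives the barycentric interpolation of the three vertex luminances.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct RGB  { float r, g, b; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Light-source calculation: per-vertex luminance from one directional light,
// diffuse (Lambert) term only; the geometry unit does this for every vertex.
RGB vertexLuminance(const Vec3& normal, const Vec3& toLight, const RGB& lightColor) {
    float nl = std::max(0.0f, dot(normal, toLight));  // back-facing clamps to zero
    return { lightColor.r * nl, lightColor.g * nl, lightColor.b * nl };
}

// Gouraud shading: the luminance of a dot (pixel) inside a triangle is the
// barycentric interpolation of the three vertex luminances c0, c1, c2.
RGB gouraud(const RGB& c0, const RGB& c1, const RGB& c2,
            float w0, float w1, float w2) {  // barycentric weights, sum to 1
    return { w0*c0.r + w1*c1.r + w2*c2.r,
             w0*c0.g + w1*c1.g + w2*c2.g,
             w0*c0.b + w1*c1.b + w2*c2.b };
}
```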
[0099] The geometry computation unit 232 may include a normal vector processing unit 234. The normal vector processing unit 234 may perform processing that rotates the normal vector of each vertex of an object (in a broad sense, the normal vector of the object's surface) by the rotation matrix from the local coordinate system to the world coordinate system.
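A minimal sketch of that rotation, reusing the Vec3 type from the sketch above (again hypothetical, and assuming the matrix is a pure rotation, so no inverse-transpose correction for non-uniform scaling is needed):

```cpp
// Rotate a local-space normal into world space with a 3x3 rotation matrix
// (local -> world); unit length is preserved because r is a pure rotation.
Vec3 rotateNormal(const float r[3][3], const Vec3& n) {
    return { r[0][0]*n.x + r[0][1]*n.y + r[0][2]*n.z,
             r[1][0]*n.x + r[1][1]*n.y + r[1][2]*n.z,
             r[2][0]*n.x + r[2][1]*n.y + r[2][2]*n.z };
}
```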
[0100] In the game machine of the present invention, the image generation device of the present invention is provided, for example, in the shading processing unit 244.
[0101] [Basic Operation of the Game Machine]
When the system is powered on, part or all of the information stored in the information storage medium 280 is transferred to, for example, the storage unit 270. The game processing program is then stored, for example, in the main memory 272, and various data are stored in the texture storage unit 276, in tables (not shown), and so on.
[0102] Operation information from the operation unit 260 is conveyed to the processing unit 200 via, for example, a serial interface and a bus (not shown), and sound processing and various kinds of image processing are performed. Sound information processed by the sound processing unit 250 is conveyed via the bus to the sound output unit 292 and emitted as sound. Save information stored in the portable information storage device 294, such as a memory card, is likewise conveyed to the processing unit 200 via a serial interface or bus (not shown), and the prescribed data is read from the storage unit 270.
[0103] The image processing unit 230 performs various kinds of image processing in accordance with instructions from the game processing unit 220. Specifically, the geometry computation unit 232 performs various geometry computations (three-dimensional coordinate computations) such as coordinate transformation, clipping, perspective transformation, and light-source calculation. The object data after geometry processing (after perspective transformation), such as vertex coordinates, vertex texture coordinates, and luminance data, is stored, for example, in the main memory 272 of the storage unit 270. The drawing unit 240 then draws the objects into the frame buffer 274 based on the object data after the geometry computations (after perspective transformation) and on the textures stored in the texture storage unit 276.
[0104] The information stored in the frame buffer 274 is conveyed to the display unit 290 via the bus and rendered on screen. In this way, the apparatus functions as a game machine with computer graphics.
[0105] Because the game machine of the present invention provides the image generation device of the present invention in, for example, the shading processing unit 244, luminance calculation and the like can be performed effectively without considering light incident on pixels other than the specific pixel, so shading and the like can be computed at high speed.
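To illustrate what that per-pixel computation can look like, here is a minimal sketch of two-layer reflection shading: the shading result of each light source is distributed to a surface layer according to its surface-layer setting value, each layer is weighted by a blend coefficient, and the weighted layers are composited into a single color. The Fresnel-like split on N·V is one choice among the dot products the claims below allow (NV, NH, VH, NL); all names here are hypothetical and this is not the actual hardware.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct RGB  { float r, g, b; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct LightResult {
    RGB color;  // shading result (diffuse or specular) of one light source
    int layer;  // surface-layer setting value: 0 = outer layer, 1 = inner layer
};

// Per-pixel two-layer reflection shading. Only this pixel's own shading
// results are consulted; no light incident on neighbouring pixels is needed.
RGB shadePixel(const std::vector<LightResult>& lights,
               const Vec3& N, const Vec3& V) {  // unit normal and eye vectors
    // 1. Distribute the shading results to the surface layers.
    RGB layer[2] = { {0.f, 0.f, 0.f}, {0.f, 0.f, 0.f} };
    for (const LightResult& l : lights) {
        layer[l.layer].r += l.color.r;
        layer[l.layer].g += l.color.g;
        layer[l.layer].b += l.color.b;
    }
    // 2. Blend coefficients: a Fresnel-like split on N.V is assumed here, so
    //    grazing angles favour the outer layer.
    float nv = std::max(0.0f, dot(N, V));
    float kOuter = 1.0f - nv;
    float kInner = nv;
    // 3. Composite the blended layer colors into a single output color.
    return { kOuter*layer[0].r + kInner*layer[1].r,
             kOuter*layer[0].g + kInner*layer[1].g,
             kOuter*layer[0].b + kInner*layer[1].b };
}
```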
[0106] [Configuration of the Mobile Phone]
FIG. 15 is a block diagram of an embodiment of the present invention (a mobile phone with a computer graphics function). The embodiment represented by this block diagram can be used to particular advantage as a mobile phone with a three-dimensional computer graphics function, in particular a mobile phone with a game function or a mobile phone with a navigation function.
[0107] As shown in FIG. 15, this mobile phone includes: a control unit 221; a memory unit 222 that stores programs and image data for the control unit 221 and serves as a work area for the control unit, the communication unit, and so on; a wireless communication function unit 223 for wireless communication; an imaging unit 224, an optional element such as a CCD camera that captures still images and video and converts them into digital signals; a display unit 225 such as an LCD for displaying images and characters; an operation unit 226 including numeric keys and various function keys; an audio input unit 227 such as a microphone for voice calls; an audio output unit 228 such as a receiver or speaker for outputting sound; a battery 229 for operating the mobile phone terminal; and a power supply unit 230 that stabilizes the output of the battery 229 and distributes it to the functional units.
[0108] The control unit 221 controls the mobile phone system as a whole, issues instructions to each block in the system, and performs various kinds of processing such as game processing, image processing, and sound processing. The functions of the control unit 221 can be realized by hardware such as various processors (CPU, DSP, etc.) or an ASIC (gate array, etc.) together with a given program (game program).
[0109] More specifically, the control unit includes an image processing unit (not shown), and the image processing unit includes a geometry computation unit, such as a geometry computation circuit, and a drawing unit (renderer). It may further include a texture generation circuit, a lighting circuit, a display circuit, and so on. Beyond that, the drawing processing circuits of the computer and game machine described above may be provided as appropriate.
[0110] In the mobile phone of the present invention, the image generation device of the present invention is provided, for example, in the drawing unit (renderer).
[0111] [Operation Example of the Mobile Phone]
First, voice communication. Voice input to the audio input unit 227 is converted into digital information by an interface, subjected to prescribed processing by the control unit 221, and output as a radio signal from the wireless communication function unit 223. When receiving the other party's sound information, the wireless communication function unit 223 receives the radio signal; after prescribed conversion processing, the sound is output from the audio output unit 228 under the control of the control unit 221.
[0112] The operations and processing for images are basically the same as those of the computer and game machine described above. When processing information is input from the operation unit 226 via an interface or bus (not shown), a geometry processor or the like performs the geometry computations based on commands from the image processing unit in the control unit 221, making use of a work area such as RAM and various tables as appropriate. The renderer of the control unit 221 then performs rendering based on commands from the image processing unit in the control unit 221. Image information that has undergone culling, clipping, anti-aliasing, and other processing as appropriate is subjected to the prescribed drawing processing by the drawing processor, stored in the frame buffer, and displayed as an image on the display unit. In this way, three-dimensional computer graphics are displayed.
[0113] Because the mobile phone of the present invention provides the image generation device of the present invention in, for example, the drawing unit (renderer), luminance calculation and the like can be performed effectively without considering light incident on pixels other than the specific pixel, so shading and the like can be computed at high speed.
Because the level of computer graphics, and of three-dimensional computer graphics in particular, on mobile phones is not very high, a shading technique like that of the present invention has not been called for; by deliberately incorporating the image generation device of the present invention, however, images with beautiful shading can be displayed effectively.
[0114] [Configuration of the Car Navigation System]
FIG. 16 is a block diagram of an embodiment of the present invention (a navigation system). The embodiment represented by this block diagram can be used to particular advantage as a car navigation system with a three-dimensional computer graphics function. As shown in FIG. 16, the navigation system includes a GPS unit 241, an autonomous positioning unit 242 as an optional element, a map storage unit 243, a control unit 244, a display unit 245, and a map matching unit 246 as an optional element.
[0115] The GPS unit 241 includes a GPS receiver and obtains positioning data for the vehicle by simultaneously receiving radio waves from a plurality of GPS satellites. The GPS unit 241 obtains the absolute position of the vehicle from the data received by the GPS receiver; in addition to the vehicle's position, this positioning data includes the vehicle's heading and elevation angle.
[0116] The autonomous positioning unit 242 includes autonomous sensors and calculates the distance and direction travelled by the vehicle from their output data. The autonomous sensors include a wheel sensor that detects a signal corresponding to the rotation of the wheels, an acceleration sensor that detects the vehicle's acceleration, and a gyro sensor that detects the vehicle's angular velocity. In this example, the gyro sensor is a three-dimensional gyro sensor that can also detect the attitude angle in the vehicle's pitching direction (hereinafter, the "pitch angle"), so the positioning data output from the autonomous positioning unit 242 also includes the vehicle's pitch angle.
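For illustration, the core of such an autonomous (dead-reckoning) position update might look like the following sketch. It is a simplification under assumed sensor inputs, and it ignores the pitch angle the 3D gyro sensor additionally provides.

```cpp
#include <cmath>

struct Pose {
    double x, y;     // position in metres
    double heading;  // heading in radians
};

// Advance the pose by one sensor sample: the wheel sensor gives the distance
// travelled over the interval, the gyro sensor the change in heading.
Pose deadReckon(Pose p, double wheelDistance, double headingDelta) {
    p.heading += headingDelta;
    p.x += wheelDistance * std::cos(p.heading);
    p.y += wheelDistance * std::sin(p.heading);
    return p;
}
```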
[0117] The map storage unit 243 stores digital map data comprising two-dimensional map information, three-dimensional road information, and three-dimensional building information. Examples of storage media for the map storage unit 243 are CD-ROMs and hard disks. Because large map data takes time to read, the map data is preferably divided into blocks for storage. The road information may carry information on major points (nodes) such as intersections and bends; the node information includes coordinate data for each point, and a road may be approximated as the straight lines (links) connecting its nodes. Three-dimensional road information in this system means that the node information carries three-dimensional coordinate data.
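As a sketch of the block-structured lookup implied here (a hypothetical scheme in which blocks tile the map in a regular grid with non-negative coordinates), the block holding a position can be found directly from its coordinates:

```cpp
// Index of the map block containing point (x, y), assuming the map is tiled
// into square blocks of side blockSize with its origin at (0, 0) and
// non-negative map coordinates.
struct BlockIndex { int col, row; };

BlockIndex blockFor(double x, double y, double blockSize) {
    return { static_cast<int>(x / blockSize),
             static_cast<int>(y / blockSize) };
}
```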
[0118] The control unit 244 performs prescribed control, such as reading out, from the map storage unit 243, the map data of the predetermined area containing the vehicle's position, based on the vehicle position information obtained from the GPS unit 241 or the autonomous positioning unit 242.
[0119] The display unit 245 displays the map data read out by the control unit 244.
[0120] The map matching unit 246 corrects the vehicle's position onto a road based on the vehicle's positioning data and the three-dimensional road information in the map data.
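Under the node/link road model described above, a minimal form of this correction is to project the measured position onto the nearest road link. The following sketch (hypothetical names, two dimensions only) shows that projection for a single link:

```cpp
#include <algorithm>

struct Point { double x, y; };

// Project point p onto the road link from node a to node b, clamped to the
// segment's endpoints; the result is the corrected on-road position.
Point projectOntoLink(const Point& p, const Point& a, const Point& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx*dx + dy*dy;
    if (len2 == 0.0) return a;  // degenerate link: both nodes coincide
    double t = ((p.x - a.x)*dx + (p.y - a.y)*dy) / len2;
    t = std::clamp(t, 0.0, 1.0);  // stay within the segment
    return { a.x + t*dx, a.y + t*dy };
}
```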
[0121] The car navigation system of the present invention includes, for example, a geometry computation unit and a drawing unit in its control unit, and the image generation device of the present invention is provided in the drawing unit (renderer).
[0122] [Operation Example of the Car Navigation System]
The GPS unit 241 simultaneously receives radio waves from a plurality of GPS satellites and obtains positioning data for the vehicle. The autonomous positioning unit 242 calculates the distance and direction travelled by the vehicle from the output data of the autonomous sensors. The control unit 244 applies prescribed processing to the data obtained from the GPS unit 241 or the autonomous positioning unit 242 to obtain the vehicle's position information and, based on that position information, reads out from the map storage unit 243 the map data of the predetermined area related to the vehicle's position. It also changes the display mode in response to operation information from an operation unit (not shown) and reads out map data according to the display mode. Further, the control unit 244 performs prescribed drawing processing based on the position information to display three-dimensional images of buildings, the map, cars, and so on, performing culling and the like based on Z values. The display unit 245 displays the map data read out by the control unit 244.
[0123] Because the car navigation system of the present invention provides the image generation device of the present invention in, for example, the drawing unit (renderer), luminance calculation and the like can be performed effectively without considering light incident on pixels other than the specific pixel, so the system can compute shading and the like at high speed. Because the level of computer graphics, and of three-dimensional computer graphics in particular, in car navigation systems is not very high, a shading technique like that of the present invention has not been called for; by deliberately incorporating the image generation device of the present invention, however, images with beautiful shading can be displayed effectively.
Industrial Applicability
The image generation apparatus of the present invention can generate, in particular, three-dimensional computer graphics, and can therefore be used in fields such as computing. It can also be used to advantage in the fields of game machines, mobile phones, navigation devices, and the like.
The image generating apparatus of the present invention can generate three-dimensional computer graphics, and can be used in the field of computers. Further, it can be suitably used in the fields of game machines, mobile phones, navigation devices and the like.
Claims
[1] An image generation method in which a computer:
receives a shading result of a single light source or a plurality of light sources for a specific pixel, together with a surface-layer setting value indicating to which surface layer the shading result belongs, distributes the shading result to the surface layers based on the surface-layer setting value, and generates surface-layer colors for at least two layers;
combines the surface-layer colors for the at least two layers with blend coefficients for compositing surface-layer colors, and generates blended surface-layer colors for at least two layers; and
composites the blended surface-layer colors for the at least two layers into a single color, thereby generating image data for computer graphics for the specific pixel without considering light incident on pixels other than the specific pixel.
[2] The image generation method according to claim 1, wherein the color composited into the single color is a color for obtaining a multilayer reflection shading image for computer graphics.
[3] An image generation device for computer graphics, comprising:
layer color generation means for receiving a shading result of a single light source or a plurality of light sources for a specific pixel, together with a surface-layer setting value indicating to which surface layer the shading result belongs, and generating surface-layer colors for at least two layers;
layer coefficient blend means for receiving the surface-layer colors for at least two layers generated by the layer color generation means, together with blend coefficients for compositing surface-layer colors, combining a blend coefficient with each of the surface-layer colors, and generating blended surface-layer colors for at least two layers; and
color blend means for compositing the blended surface-layer colors for at least two layers output from the layer coefficient blend means into a single color;
wherein the layer color generation means receives the shading result of the single light source or the plurality of light sources for the specific pixel and the surface-layer setting value indicating to which surface layer the shading result belongs, and generates the surface-layer colors for at least two layers;
the layer coefficient blend means receives the surface-layer colors for at least two layers generated by the layer color generation means and the blend coefficients for compositing surface-layer colors, combines a blend coefficient with each of the surface-layer colors, and generates the blended surface-layer colors for at least two layers; and
the color blend means generates image data for computer graphics by compositing the blended surface-layer colors for at least two layers output from the layer coefficient blend means into a single color.
[4] The image generation device according to claim 3, which obtains a multilayer reflection shading image for computer graphics.
[5] The image generation device according to claim 3, wherein the layer color generation means comprises:
light selection setting means for outputting assignment commands for assigning input shading results to the surface layers using the surface-layer setting value; and
layer color selection means for selecting the input shading results for each surface layer in accordance with the assignment commands from the light selection setting means;
wherein, on receiving the shading result of a single light source or a plurality of light sources for the specific pixel and the surface-layer setting value indicating to which surface layer the shading result belongs, the light selection setting means outputs assignment commands assigning the input shading results to the surface layers using the surface-layer setting value, and
the layer color selection means generates the surface-layer colors for at least two layers by selecting the input shading results for each surface layer in accordance with the assignment commands from the light selection setting means.
[6] The image generation device according to claim 3, wherein the shading result is the computed diffuse or specular contribution from each light source.
[7] The image generation device according to claim 3, wherein the blend coefficient is one or more of: the inner product NV of the normal vector N and the eye vector V; the inner product NH of the normal vector N and the bisector (half vector) of the eye vector V and the light vector L; the inner product VH of the eye vector V and the bisector of the eye vector V and the light vector L; and the inner product NL of the normal vector N and the light vector L.
[8] The image generation device according to claim 3, wherein the layer coefficient blend means comprises:
a RAM table provided for each blend coefficient and for each of R, G, and B, into which the prescribed blend coefficient is loaded; and
a multiplier for multiplying the output of the layer color generation means by a coefficient corresponding to the blend coefficient;
wherein a coefficient corresponding to the blend coefficient is read from the RAM table, and
the multiplier multiplies the output of the layer color generation means by the coefficient corresponding to the blend coefficient.
[9] A computer comprising the image generation device according to claim 3.
[10] A game machine comprising the image generation device according to claim 3.
[11] A mobile phone comprising the image generation device according to claim 3.
[12] A navigation system comprising the image generation device according to claim 3.
[13] A program for computer graphics that causes a computer to function as:
layer color generation means for receiving a shading result of a single light source or a plurality of light sources for a specific pixel, together with a surface-layer setting value indicating to which surface layer the shading result belongs, and generating surface-layer colors for at least two layers;
layer coefficient blend means for receiving the surface-layer colors for at least two layers generated by the layer color generation means, together with blend coefficients for compositing surface-layer colors, combining a blend coefficient with each of the surface-layer colors, and generating blended surface-layer colors for at least two layers; and
color blend means for compositing the blended surface-layer colors for at least two layers output from the layer coefficient blend means into a single color.
[14] A computer-readable recording medium storing the program according to claim 13.
[14] A recording medium readable by a computer storing the program according to claim 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2007526868A JP4886691B2 (en) | 2005-07-26 | 2006-07-26 | Multilayer reflection shading image generation method and computer |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005215248 | 2005-07-26 | ||
JP2005-215248 | 2005-07-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007013492A1 true WO2007013492A1 (en) | 2007-02-01 |
Family
ID=37683382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/314737 WO2007013492A1 (en) | 2005-07-26 | 2006-07-26 | Multilayer reflection shading image creating method and computer |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP4886691B2 (en) |
WO (1) | WO2007013492A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011129133A (en) * | 2009-12-21 | 2011-06-30 | Korea Electronics Telecommun | Apparatus and method for processing complex material texture information |
JP2022540722A (en) * | 2019-07-19 | 2022-09-16 | ビーエーエスエフ コーティングス ゲゼルシャフト ミット ベシュレンクテル ハフツング | Method and system for simulating texture characteristics of coatings |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7034828B1 (en) * | 2000-08-23 | 2006-04-25 | Nintendo Co., Ltd. | Recirculating shade tree blender for a graphics system |
- 2006-07-26 WO PCT/JP2006/314737 patent/WO2007013492A1/en active Application Filing
- 2006-07-26 JP JP2007526868A patent/JP4886691B2/en active Active
Non-Patent Citations (2)
Title |
---|
DOBASHI T. ET AL.: "Hikari no Nijimi to Kansho ni Chakumoku shita Shinju Visual Simulator no Jitsugen", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRICAL ENGINEERS OF JAPAN C. IEEJ TRANSACTIONS ON ELECTRONICS, INFORMATION AND SYSTEMS A, vol. 117-C, 1997, pages 1370-1376, XP003008252 * |
TERADO I. ET AL.: "Bishiteki Kyoshiteki Kozo ni Motozuku Hana no Graphics Model ni Kansuru Kenkyu", INFORMATION PROCESSING SOCIETY OF JAPAN KENKYU HOKOKU, vol. 99, no. 3, 1999, pages 145-152, XP003008253 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011129133A (en) * | 2009-12-21 | 2011-06-30 | Korea Electronics Telecommun | Apparatus and method for processing complex material texture information |
JP2022540722A (en) * | 2019-07-19 | 2022-09-16 | ビーエーエスエフ コーティングス ゲゼルシャフト ミット ベシュレンクテル ハフツング | Method and system for simulating texture characteristics of coatings |
JP7387867B2 (en) | 2019-07-19 | 2023-11-28 | ビーエーエスエフ コーティングス ゲゼルシャフト ミット ベシュレンクテル ハフツング | Method and system for simulating texture characteristics of coatings |
Also Published As
Publication number | Publication date |
---|---|
JPWO2007013492A1 (en) | 2009-02-12 |
JP4886691B2 (en) | 2012-02-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1189173B1 (en) | Achromatic lighting in a graphics system and method | |
US7755626B2 (en) | Cone-culled soft shadows | |
US8004522B1 (en) | Using coverage information in computer graphics | |
JP4305903B2 (en) | Image generation system, program, and information storage medium | |
JP4187188B2 (en) | Image generation system, program, and information storage medium | |
JP4749198B2 (en) | Program, information storage medium, and image generation system | |
JP4223244B2 (en) | Image generation system, program, and information storage medium | |
JP4886691B2 (en) | Multilayer reflection shading image generation method and computer | |
JP4318240B2 (en) | Image generation system and information storage medium | |
JP4159082B2 (en) | Image generation system, program, and information storage medium | |
JP4761541B2 (en) | Image generation device | |
JP2010033297A (en) | Program, information storage medium, and image generation system | |
US6340972B1 (en) | Graphics adapter having a versatile lighting engine | |
JP4704615B2 (en) | Image generation system, program, and information storage medium | |
JP3748451B1 (en) | Program, information storage medium, and image generation system | |
JP4832152B2 (en) | System used in a three-dimensional computer graphics device for displaying a gaseous object on a two-dimensional display | |
JP4391632B2 (en) | Image generation system and information storage medium | |
JP4073031B2 (en) | Program, information storage medium, and image generation system | |
US7724255B2 (en) | Program, information storage medium, and image generation system | |
JP2006323512A (en) | Image generation system, program, and information storage medium | |
JP4787662B2 (en) | System used in a three-dimensional computer graphics device for displaying a gaseous object on a two-dimensional display | |
JP3538392B2 (en) | GAME SYSTEM, PROGRAM, AND INFORMATION STORAGE MEDIUM | |
JP4480322B2 (en) | GAME SYSTEM AND INFORMATION STORAGE MEDIUM | |
JP4592087B2 (en) | Image generation system, program, and information storage medium | |
JP2010033295A (en) | Image generation system, program and information storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 2007526868; Country of ref document: JP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 06781649; Country of ref document: EP; Kind code of ref document: A1 |