US20030184546A1 - Image processing method - Google Patents

Image processing method

Info

Publication number
US20030184546A1
US10/359,384 (application) · US 2003/0184546 A1 (publication)
Authority
US
United States
Prior art keywords
modeling
pixel position
post
shading
color data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/359,384
Other languages
English (en)
Inventor
Ikuyo Kitamura
Current Assignee
NEC Electronics Corp
Original Assignee
NEC Electronics Corp
Priority date
Filing date
Publication date
Application filed by NEC Electronics Corp
Assigned to NEC ELECTRONICS CORPORATION. Assignors: KITAMURA, IKUYO (ASSIGNMENT OF ASSIGNORS INTEREST; SEE DOCUMENT FOR DETAILS)
Publication of US20030184546A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
    • A63F 13/812: Ball games, e.g. soccer or baseball
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/80: Shading

Definitions

  • The present invention relates to image processing technology, and more particularly to an image processing method for rendering that includes shading.
  • Such image processing generally includes geometry processing and rendering. Geometry processing transforms the coordinates of a polygon's vertices and calculates the colors of the vertices through light-source calculation, representing how the polygon looks depending on the locations of the vertices, the light source, and the observer's eye position, as well as on colors and normal vectors. Rendering draws the polygon at the specified coordinates, going through hidden-surface elimination, shading, and texture mapping, and writes the polygon in bitmap format into video memory (see, for example, Japanese Patent Laid-Open No. 9-198525, paragraphs 0005 to 0018).
  • Image processing procedures are divided into modeling, which defines the shape of an object; rendering, which configures the viewing direction and how the object looks; and drawing, which produces output to a display. Shading is done as part of rendering, and ray tracing is a rendering technique.
  • Ray tracing is a technique for depicting a mirror, a transparent object, or the like by calculating the processes (reflections, refractions, shadows, etc.) undergone by rays before they reach an eye from a light source.
  • However, the rays reaching the eye are almost impossible to track if they are traced from the light source.
  • Instead, virtual rays emitted from the eye are traced in the direction opposite to the direction of arrival of the real rays.
  • FIG. 1 is a diagram illustrating an example of modeling, namely a method for modeling a fish-eye lens. The effect of a fish-eye lens is modeled by cutting out part of a circle, denoting the center of the circle as point O, and placing (or displaying) the original picture on a screen located at a distance R 1 from point O. Using this model, the address of each pixel position on the screen is converted: through modeling, point A 1 , which corresponds to the pixel position to be relocated to point A 2 on the screen, is determined, and its address is converted to the address of point A 2 . In this way, an image is processed.
  • a pixel position obtained after modeling (corresponds to point A 2 in FIG. 1) will be referred to as a post-modeling pixel position and a corresponding pixel position before modeling (corresponds to point A 1 in FIG. 1) will be referred to as a pre-modeling pixel position.
  • Distance R 2 is referred to as the distance component at point A 2 . It is defined by the length of the line segment between point A 2 and point P, the two points where a straight line drawn perpendicular to the screen from the post-modeling pixel position A 2 intersects the screen and the model surface.
  • Although FIG. 1 shows an example of 2-dimensional modeling, a similar method can also be used for 3-dimensional modeling by adding a dimension.
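The back-calculation of FIG. 1 can be sketched in the 2-dimensional case as follows. The concrete geometry (point P on the model arc directly above A 2, and A 1 on the ray from O through P) and all function and parameter names are illustrative assumptions for this sketch, not the patent's specification:

```python
import math

def fisheye_backmap(x2, r_screen, r_lens):
    """For a post-modeling screen offset x2 (point A2), return the
    pre-modeling offset x1 (point A1) and the distance component R2.

    r_screen: distance R1 from the circle center O to the screen.
    r_lens:   radius of the model arc (assumed larger than r_screen).
    """
    if abs(x2) >= r_lens:
        return x2, 0.0                       # outside the target area
    depth = math.sqrt(r_lens ** 2 - x2 ** 2)  # axial position of P on the arc
    distance_component = depth - r_screen     # R2: gap between screen and arc
    x1 = x2 * r_screen / depth                # trace back A2 -> P -> A1
    return x1, distance_component
```

Note that the distance component is largest at the center of the lens and shrinks toward the periphery, which is the property the shading described below relies on.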
  • FIG. 2 is a flowchart illustrating modeling procedures.
  • In Step 1101 , a post-modeling pixel position is specified.
  • In Step 1102 , a pre-modeling pixel position is calculated from the post-modeling pixel position specified in Step 1101 , together with a distance component (which corresponds to R 2 in FIG. 1).
  • In Step 1103 , the pixel is moved. Steps 1101 to 1103 are repeated for modeling.
  • Shading deals with the phenomenon that the larger the incident angle of rays on a surface of an object, the smaller the intensity of the rays hitting the surface compared with normal incidence.
  • The intensity of the rays per unit area of a surface of an object is given by X cos θ, where θ is the angle between the vector 1202 of rays from a light source S and the normal 1201 , and X is the quantity of light per unit area from the light source S.
  • a cosine calculation will be described with reference to FIG. 4.
  • cos θ = (A v · B v ) / (|A v | |B v |)
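The conventional cosine calculation of FIG. 4 — the step the invention seeks to avoid — can be sketched as follows; function names are illustrative:

```python
import math

def cos_angle(a, b):
    """cos(theta) between vectors a and b: (A . B) / (|A| |B|)."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

def lambert_intensity(light_qty, light_vec, normal):
    """Intensity per unit area X * cos(theta) for light quantity X."""
    return light_qty * cos_angle(light_vec, normal)
```

The dot product and magnitudes require per-pixel multiplications (and a square root), which is the cost the distance-component method below eliminates.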
  • The present invention has been made in view of the above circumstances. Its object is to provide an image processing method which can produce shading effects similar to those of ray tracing without the need for cosine calculations, which require high-speed multiplications, and which can thereby simplify the configuration of a processing apparatus and reduce processing time.
  • an image processing method comprises a first step of calculating a distance component which is distance between a screen and model surface; and a second step of adding a shading value obtained based on the distance component and a brightness value in pre-shading color data to obtain a brightness value in post-shading color data.
  • the first step may comprise a modeling step of calculating correspondence between pre-modeling pixel position and post-modeling pixel position on the screen as well as a distance component at the post-modeling pixel position; and the second step may comprise a shading value generation step of generating a shading value of a pixel at the post-modeling pixel position based on the distance component at the post-modeling pixel position, and a shading data preparation step of preparing post-shading color data of the pixel at the post-modeling pixel position by adding the shading value of the pixel at the post-modeling pixel position to a brightness value in pre-shading color data of the pixel at the post-modeling pixel position.
  • FIG. 1 is a diagram illustrating a modeling method
  • FIG. 2 is a flowchart illustrating modeling procedures
  • FIG. 3 is a diagram illustrating shading
  • FIG. 4 is a diagram illustrating how to find a cosine value
  • FIG. 5 is a diagram showing a configuration example of an image processing apparatus to illustrate an image processing method according to a first embodiment of the present invention
  • FIG. 6 is a flowchart of the image processing method according to the first embodiment of the present invention.
  • FIG. 7 is a diagram showing a configuration example of an image processing apparatus to illustrate an image processing method according to a second embodiment of the present invention.
  • FIG. 8 is a flowchart of the image processing method according to the second embodiment of the present invention.
  • FIG. 9 is a diagram showing a configuration example of an image processing apparatus to illustrate an image processing method according to a third embodiment of the present invention.
  • FIG. 10 is a flowchart of the image processing method according to the third embodiment of the present invention.
  • FIG. 11 is a diagram showing a configuration example of an image processing apparatus to illustrate an image processing method according to a fourth embodiment of the present invention.
  • FIG. 12 is a flowchart of the image processing method according to the fourth embodiment of the present invention.
  • FIG. 13 is a diagram showing a configuration example of an image processing apparatus to illustrate an image processing method according to a fifth embodiment of the present invention.
  • FIG. 14 is a flowchart of the image processing method according to the fifth embodiment of the present invention.
  • FIG. 15 is a diagram illustrating shading according to the present invention.
  • the present invention generates shading values of pixels at post-modeling pixel positions based on distance components each of which is the distance defined by the length of the line segment between two points where a straight line drawn perpendicular to a screen from a post-modeling pixel position intersects with the screen and a model surface (i.e., the distance between screen and model surface), adds the shading values to brightness values in color data (i.e., color data before shading) of the pixels at the post-modeling pixel positions, and outputs the resulting values as brightness values in color data after shading of pixels.
  • color data before shading will be referred to as pre-shading color data and color data after shading will be referred to as post-shading color data.
  • the shading values are determined based on a ratio between a distance component (R 4 in FIG. 15) and a predetermined distance component (R 3 in FIG. 15) which serves as a reference for brightness.
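The ratio-based shading value can be sketched as follows; the function name is illustrative, and the default maximum of 100 follows the fish-eye example in which the maximum value added to the brightness is taken as 100:

```python
def shading_value(distance, ref_distance, max_add=100):
    """Shading value from the ratio of a pixel's distance component
    (R4 in FIG. 15) to the reference distance component (R3) at the
    brightest point; max_add is the maximum brightness to be added."""
    return max_add * distance / ref_distance
```

Only one multiplication and one division per pixel are needed, with no trigonometric or vector arithmetic.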
  • FIGS. 5 and 6 are diagrams illustrating a first embodiment of an image processing method according to the present invention.
  • The image processing apparatus comprises a modeling block 101 which receives input of post-modeling pixel positions and back-calculates pre-modeling pixel positions; a distance component storage memory 102 which stores pre-modeling pixel positions and distance components, each of which is the distance defined by the length of the line segment between the two points where a straight line drawn perpendicular to a screen from a post-modeling pixel position intersects the screen and a model surface; a shading value generation circuit 103 which calculates shading values for color data using the distance components outputted from the distance component storage memory 102 ; a color data storage memory 105 which stores color data of an image for one frame corresponding to pre-modeling pixel positions; and an adder 104 which adds values determined by the shading value generation circuit 103 to color data that has been read out of the color data storage memory 105 using the addresses of the pre-modeling pixel positions (the color data read out is the same as the color data of the original picture and is also the pre-shading color data of the pixels at the post-modeling pixel positions).
  • The post-modeling pixel position (point A 2 in FIG. 1) is determined first, and then the corresponding pre-modeling pixel position (point A 1 in FIG. 1) is determined by going back along the path A 2 , P, A 1 .
  • A signal S 1 entering the modeling block 101 indicates the post-modeling pixel position.
  • A signal S 2 outputted from the modeling block 101 and entering the distance component storage memory 102 indicates the pre-modeling pixel position.
  • A signal S 3 outputted from the modeling block 101 and entering the distance component storage memory 102 indicates the distance component, which is the distance defined by the length of the line segment between the two points where a straight line drawn perpendicular to the screen from the post-modeling pixel position intersects the screen and the model surface.
  • A signal S 7 read out of the distance component storage memory 102 and entering the shading value generation circuit 103 indicates the same distance component.
  • A signal S 8 read out of the distance component storage memory 102 and entering the color data storage memory 105 indicates the pre-modeling pixel position.
  • A signal S 4 outputted from the shading value generation circuit 103 and entering the adder 104 indicates the shading value to be added to the color data of the pixel.
  • A signal S 5 read out of the color data storage memory 105 and entering the adder 104 indicates the color data of the pixel at the post-modeling pixel position (hereinafter referred to as the post-modeling pixel color data); the signal S 5 is also the pre-shading pixel color data.
  • A signal S 6 outputted from the adder 104 indicates the post-shading pixel color data.
  • The signal S 8 is used as an address signal of the color data storage memory 105 .
  • The signal S 5 , which is the color data read at that address of the color data storage memory 105 , is used as the pre-shading color data of the pixel at the post-modeling pixel position (the post-modeling pixel color data).
  • The modeling block 101 calculates and outputs the pre-modeling pixel positions S 2 and the distance components S 3 based on the post-modeling pixel positions S 1 , using predetermined modeling means (e.g., for a fish-eye lens, a circular cylinder, etc.).
  • the distance component storage memory 102 stores pre-modeling pixel positions S 2 and distance components S 3 for one frame. Also, the distance component storage memory 102 outputs the distance components S 7 and pre-modeling pixel positions S 8 upon a read request.
  • the shading value generation circuit 103 outputs the shading values S 4 for the color data of the pixels at the post-modeling pixel positions using the distance components S 7 outputted from the distance component storage memory 102 .
  • the color data storage memory 105 receives input of the pre-modeling pixel positions S 8 as addresses and outputs the color data at the addresses as the pre-shading color data S 5 of the pixels at the corresponding post-modeling pixel positions (i.e., as the post-modeling pixel color data).
  • the adder 104 adds the shading values S 4 outputted from the shading value generation circuit 103 to brightness values in the post-modeling pixel color data S 5 and outputs the resulting values as brightness values in post-shading pixel color data S 6 .
  • the post-shading pixel color data S 6 is inputted in a drawing block 110 and drawn on a display (not shown).
  • the image processing apparatus performs modeling by using the modeling block 101 in Steps 201 and 202 .
  • a known method such as the one described with reference to FIG. 1 is used for modeling.
  • the modeling block 101 judges in Step 201 whether each pixel falls within a target area. If it does, the modeling block 101 calculates the pre-modeling pixel position S 2 by tracing back from the post-modeling pixel position S 1 and calculates the distance component S 3 , in Step 202 . If it is judged in Step 201 that the pixel is outside the target area, the distance component S 3 assumes an initial value of 0.
  • In Step 203 , the image processing apparatus stores the pre-modeling pixel positions S 2 and the distance components S 3 calculated during modeling in the distance component storage memory 102 .
  • In Step 204 , using the addresses of the pre-modeling pixel positions S 8 in the distance component storage memory 102 , the image processing apparatus reads the color data at the corresponding addresses in the color data storage memory 105 and outputs it as the post-modeling pixel color data S 5 (i.e., as the color data of the pixel at the post-modeling pixel position) corresponding to the pre-modeling pixel positions S 8 .
  • In Step 205 , the image processing apparatus reads the distance components S 7 out of the distance component storage memory and calculates the shading values S 4 .
  • The calculation of the shading values S 4 based on the distance components S 7 is carried out by the shading value generation circuit 103 .
  • In Step 206 , the image processing apparatus makes the adder 104 add the shading values S 4 determined in Step 205 to the post-modeling pixel color data S 5 determined in Step 204 , uses the results of the addition as brightness values after shading, and outputs the post-shading pixel color data S 6 .
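The flow of Steps 201 through 206 can be sketched end to end as follows. This is a software approximation of the hardware pipeline: plain dicts stand in for the distance component storage memory 102 and the color data storage memory 105, and `backmap` stands in for the modeling block; all names are illustrative:

```python
def process_frame(color_data, backmap, ref_distance, max_add=100):
    """One-frame sketch of Steps 201-206.

    color_data: dict mapping pixel position -> brightness value.
    backmap:    modeling function returning (pre_position, distance
                component) for a post-modeling position.
    """
    # Steps 201-203: modeling; results go into the "distance memory".
    distance_memory = {pos: backmap(pos) for pos in color_data}
    shaded = {}
    for post_pos, (pre_pos, dist) in distance_memory.items():
        # Step 204: read the pre-shading color via the pre-modeling address.
        brightness = color_data[pre_pos]
        # Steps 205-206: generate the shading value and add it.
        shaded[post_pos] = brightness + max_add * dist / ref_distance
    return shaded
```

For example, with an identity mapping and a constant distance component of 50 against a reference of 100, every pixel's brightness is raised by 50.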
  • FIG. 15 is a diagram illustrating an example of shading for a fish-eye lens.
  • The modeled fish-eye lens is shaded in such a way that its brightness is highest at the center and decreases with increasing distance from the center.
  • R 3 is the distance component from point P 3 , on an arc which represents the model surface at the center of the lens, to point A 3 on the screen; R 4 is the distance component from point P 4 on the arc to point A 4 on the screen.
  • the distance component R 3 at the center of the lens corresponds to the brightest part.
  • the distance component R 3 in the brightest part is used as the maximum value to be added to the brightness of the screen in the shading process. If it is taken as 100, the shading value at point A 4 on the screen is given by 100*(R 4 /R 3 ), i.e., the ratio of the distance component R 4 to the distance component R 3 multiplied by 100.
  • The adder 104 adds the shading value thus obtained to the brightness value (denoted by B 4 ) of the pixel relocated to point A 4 , the post-modeling pixel position.
  • The brightness value of point A 4 after shading is given by B 4 + 100*(R 4 /R 3 ).
  • the brightness value of point A 4 after modeling is a brightness component of the post-modeling pixel color data S 5 read out of the color data storage memory 105 using the address of the pre-modeling pixel position S 8 .
  • the present invention can produce shading effects similar to those of ray tracing without cosine calculations.
  • the elimination of the need for cosine calculations makes it possible to simplify circuit configuration and reduce processing time, compared to conventional technologies.
  • FIGS. 7 and 8 are diagrams illustrating the second embodiment of an image processing method according to the present invention.
  • FIG. 7 is a diagram showing a configuration of a processing apparatus while FIG. 8 is a flowchart of image processing.
  • the second embodiment makes it easy to produce complex graphical representations.
  • In addition to the components of the first embodiment, the image processing apparatus shown in FIG. 7 comprises a second color data storage memory 307 , a selector 308 , and a distance component adder 306 .
  • the image processing apparatus comprises a modeling block 301 , the distance component adder 306 , a distance component storage memory 302 , a shading value generation circuit 303 , an adder 304 , a first color data storage memory 305 , the second color data storage memory 307 , and the selector 308 .
  • the modeling block 301 , distance component storage memory 302 , shading value generation circuit 303 , and adder 304 operate in a manner similar to the modeling block 101 , distance component storage memory 102 , shading value generation circuit 103 , and adder 104 shown in FIG. 5, respectively.
  • the distance component adder 306 is a circuit for calculating distance components. It adds a distance component S 3 outputted by the modeling block 301 and a distance component S 7 outputted by the distance component storage memory 302 and outputs the result as a distance component signal S 9 .
  • An input signal S 1 entering the modeling block 301 indicates a post-modeling pixel position.
  • a signal S 2 outputted from the modeling block 301 and entering the distance component storage memory 302 indicates a pre-modeling pixel position.
  • a signal S 3 outputted from the modeling block 301 and entering the distance component adder 306 indicates a distance component.
  • the signal S 9 outputted from the distance component adder 306 and entering the distance component storage memory 302 indicates a distance component (distance component obtained by adding the signal S 3 outputted from the modeling block 301 and signal S 7 outputted from the distance component storage memory 302 ).
  • the signal S 7 outputted from the distance component storage memory 302 and entering the shading value generation circuit 303 and distance component adder 306 indicates a distance component.
  • a signal S 8 outputted from the distance component storage memory 302 and entering the first color data storage memory 305 and the second color data storage memory 307 indicates a pre-modeling pixel position.
  • a signal S 4 outputted from the shading value generation circuit 303 and entering the adder 304 indicates a shading value to be added to pixel color data.
  • a signal S 11 read out of the first color data storage memory 305 and entering the second color data storage memory 307 and selector 308 indicates pixel color data which has been modeled based on color data stored in the first color data storage memory.
  • a signal S 10 read out of the second color data storage memory 307 and entering the first color data storage memory 305 and the selector 308 indicates pixel color data which has been modeled based on color data stored in the second color data storage memory.
  • a signal S 12 entering the selector 308 is a selection control signal for the selector. It controls which of the incoming signal S 11 and signal S 10 the selector 308 should select as its output.
  • A signal S 5 outputted from the selector 308 and entering the adder 304 indicates the post-modeling pixel color data selected by the selector 308 .
  • a signal S 6 outputted from the adder 304 indicates post-shading pixel color data.
  • the first color data storage memory 305 and second color data storage memory 307 have addresses corresponding to pixel positions and store color data for each pixel.
  • Color data read and write operations are performed alternately on the first color data storage memory 305 and the second color data storage memory 307 : while the first color data storage memory 305 is being read, the second color data storage memory 307 undergoes a write operation, and while the second color data storage memory 307 is being read, the first color data storage memory 305 undergoes a write operation.
  • the first color data storage memory 305 and second color data storage memory 307 each output the color data stored at addresses which correspond to the pre-modeling pixel positions S 8 outputted from the distance component storage memory 302 .
  • the first color data storage memory 305 and second color data storage memory 307 each write the color data outputted from the other memory to addresses which correspond to the post-modeling pixel positions S 1 .
  • the output signal S 5 of the selector 308 is color data actually outputted as the color data of the pixel at the post-modeling pixel position (post-modeling pixel color data), being selected by the selection control signal S 12 from the pixel color data S 10 and S 11 .
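The alternating (double-buffered) use of the two color data storage memories can be sketched as follows; dicts stand in for the memories, and the function and variable names are illustrative assumptions:

```python
def model_pass(read_mem, backmap):
    """One modeling pass over the double-buffered color memories:
    read from read_mem at the pre-modeling addresses and write the
    relocated pixels into a fresh buffer at the post-modeling
    addresses (the role of the other memory)."""
    write_mem = {}
    for post_pos in read_mem:
        pre_pos, _dist = backmap(post_pos)
        write_mem[post_pos] = read_mem[pre_pos]
    return write_mem
```

On successive passes the two buffers swap roles, e.g. `mem_b = model_pass(mem_a, backmap)` followed by `mem_a = model_pass(mem_b, backmap)`, so that the output of one modeling iteration becomes the original picture of the next.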
  • An image processing method according to the second embodiment of the present invention will now be described with reference to the flowchart of FIG. 8, in relation to the operation of the image processing apparatus shown in FIG. 7.
  • Step 204 in the flowchart of FIG. 6 is substituted with Step 404 in which post-modeling pixel color data is read out and color data is rewritten. Also, this embodiment additionally contains Step 407 in which distance components are added.
  • the image processing apparatus performs modeling by using the modeling block 301 in Steps 401 and 402 .
  • a known method such as the one described with reference to FIG. 1 is used for modeling.
  • the pre-modeling pixel positions S 2 are calculated by tracing back from the post-modeling pixel positions S 1 .
  • the distance components S 3 are calculated.
  • In Step 407 , the distance component adder 306 calculates the distance components S 9 by adding the distance components S 3 outputted by the modeling block 301 and the distance components S 7 read out of the distance component storage memory 302 .
  • In Step 403 , the image processing apparatus stores the pre-modeling pixel positions S 2 and the distance components S 9 calculated during modeling in the distance component storage memory 302 .
  • In Step 404 , the image processing apparatus reads and rewrites color data using the two color data storage memories 305 and 307 .
  • When the first color data storage memory 305 is selected as the read memory, it outputs the color data at the addresses which correspond to the pre-modeling pixel positions S 8 as the color data at the corresponding post-modeling pixel positions via the selector 308 .
  • the second color data storage memory 307 stores the color data read out of the first color data storage memory 305 to the addresses which correspond to the post-modeling pixel positions S 1 .
  • Conversely, when the second color data storage memory 307 is selected as the read memory, it outputs the color data at the addresses which correspond to the pre-modeling pixel positions S 8 as the color data at the corresponding post-modeling pixel positions via the selector 308 in Step 404 .
  • Meanwhile, the first color data storage memory 305 stores the color data read out of the second color data storage memory 307 , using the addresses which correspond to the post-modeling pixel positions S 1 . Then the image processing apparatus completes Step 404 and returns to Step 401 , where it runs the next modeling iteration using the modeling block 301 .
  • In Step 405 , the image processing apparatus reads the distance components S 7 out of the distance component storage memory and calculates the shading values S 4 .
  • the calculation of the shading values S 4 based on the distance components S 7 is carried out by the shading value generation circuit 303 .
  • In Step 406 , the image processing apparatus makes the adder 304 add the shading values S 4 determined in Step 405 to the post-modeling pixel color data S 5 determined in Step 404 and outputs the results of the addition.
  • the image processing apparatus outputs shaded pixel color data S 6 .
  • When Steps 401 to 406 are carried out with the first color data storage memory 305 selected as the read memory, the pixels composing the original picture are shaded in such a way that they become darker with increasing distance from the center as they approach the periphery, as in the first embodiment, resulting in an image which looks as if it were sticking to the surface of a sphere.
  • Steps 401 to 406 are carried out again with the second color data storage memory 307 selected as the read memory and with the first image used as the original (pre-shading) picture.
  • Again, the pixels composing the picture are shaded in such a way that they become darker with increasing distance from the center as they approach the periphery.
  • the resulting image is drawn in such a way that the contrast between the center and periphery is enhanced.
  • the resulting image is practically such that both modeling and shading have been repeated twice with respect to the original picture.
  • FIGS. 9 and 10 are diagrams illustrating the third embodiment of an image processing method according to the present invention.
  • FIG. 9 is a diagram showing a configuration of a processing apparatus while FIG. 10 is a flowchart of image processing.
  • the image processing apparatus according to this embodiment comprises another modeling block for repeating different types of modeling and displaying the results on one screen and a selector for selecting a modeling block.
  • the image processing apparatus comprises a first modeling block 501 , second modeling block 510 , selector 509 which receives outputs from the first modeling block 501 and second modeling block 510 and outputs one of them selectively, distance component storage memory 502 , distance component adder 506 , shading value generation circuit 503 , adder 504 , first color data storage memory 505 , second color data storage memory 507 , and selector 508 which receives outputs from the first color data storage memory 505 and second color data storage memory 507 and outputs one of them selectively.
  • the first modeling block 501 , distance component storage memory 502 , shading value generation circuit 503 , adder 504 , first color data storage memory 505 , distance component adder 506 , second color data storage memory 507 , and selector 508 are configured similarly to the modeling block 301 , distance component storage memory 302 , shading value generation circuit 303 , adder 304 , first color data storage memory 305 , distance component adder 306 , second color data storage memory 307 , and selector 308 , respectively.
  • the image processing circuit in FIG. 9 comprises the second modeling block 510 and selector 509 in addition to the configuration of the image processing circuit shown in FIG. 7.
  • Based on the post-modeling pixel position S 1 and using a different type of modeling than the first modeling block 501 , the second modeling block 510 outputs a pre-modeling pixel position S 14 and a distance component S 13 calculated during modeling. Even when the second modeling differs from the first, such as when modeling a fish-eye lens and a circular cylinder, the distance component S 13 is defined by the length of the line segment between the two points where a straight line drawn perpendicular to the screen from the post-modeling pixel position intersects the screen and the model surface.
  • the selector 509 outputs a pre-modeling pixel position S 15 and distance component S 16 depending on which is selected, the output from the first modeling (the distance component S 3 and pre-modeling pixel position S 2 ) or output from the second modeling (the distance component S 13 and pre-modeling pixel position S 14 ).
  • The distance component storage memory 502 stores distance components and pre-modeling pixel positions.
  • One of input signals to the distance component storage memory 502 represents the distance component S 9 and the other input signal represents the pre-modeling pixel position S 15 .
  • One of the output signals from the distance component storage memory 502 represents the distance component S 7 read out, and the other output signal represents the pre-modeling pixel position S 8 read out.
  • the distance component adder 506 receives the distance component S 16 outputted from the selector 509 and the distance component S 7 read out of the distance component storage memory 502 , adds the two distance components, and outputs the result as the distance component S 9 to the distance component storage memory 502 .
  • the shading value generation circuit 503 operates in a manner similar to the shading value generation circuit 303 shown in FIG. 7.
  • the adder 504 operates in a manner similar to the adder 304 .
  • the first color data storage memory 505 operates in a manner similar to the first color data storage memory 305 .
  • the second color data storage memory 507 operates in a manner similar to the second color data storage memory 307 .
  • Selector 508 operates in a manner similar to the selector 308 . Thus, description of these components will be omitted.
  • FIG. 10 is a flowchart of the image processing method according to the third embodiment of the present invention.
  • This embodiment differs from the second embodiment in that the modeling to be used is selected from a plurality of modeling processes (Steps 601 and 602 , which correspond to the modeling blocks 501 and 510 in FIG. 9). Except for the number of modeling blocks and Step 608 , in which a type of modeling is selected, Steps 607 , 603 , 604 , 605 , and 606 correspond to Steps 407 , 403 , 404 , 405 , and 406 in the second embodiment and perform operations similar to those of the corresponding steps.
  • In Steps 601 and 602 , the image processing apparatus performs modeling simultaneously by using the plurality of modeling blocks. Again, a known method such as the one described with reference to FIG. 1 is used for modeling.
  • In Step 601 , the pre-modeling pixel positions S 2 are calculated by tracing back from the post-modeling pixel positions S 1 , and the distance components S 3 are calculated.
  • In Step 602 , the pre-modeling pixel positions S 14 are calculated by tracing back from the post-modeling pixel positions S 1 , and the distance components S 13 are calculated.
  • In Step 608 , the image processing apparatus selects the modeling block to be used for modeling.
  • the distance components and pre-modeling pixel positions outputted from the selected modeling block are sent to Step 607 as the distance components S 16 and pre-modeling pixel positions S 15 to be used for rendering.
  • In Step 607 , the distance component adder 506 adds the distance components S 16 to the distance components S 7 outputted from the distance component storage memory 502 to produce the distance components S 9 .
  • In Step 603 , the image processing apparatus stores the distance components S 9 and the pre-modeling pixel positions S 15 in the distance component storage memory 502 .
  • In Step 604 , the image processing apparatus reads and rewrites color data using the two color data storage memories 505 and 507 .
  • When the first color data storage memory 505 is selected as the read memory, it outputs the color data at the addresses which correspond to the pre-modeling pixel positions S 8 as the color data at the corresponding post-modeling pixel positions via the selector 508 .
  • In this case, the second color data storage memory 507 stores the color data read out of the first color data storage memory 505 at the addresses which correspond to the post-modeling pixel positions S 1 .
  • Conversely, when the second color data storage memory 507 is selected as the read memory, it outputs the color data at the addresses which correspond to the pre-modeling pixel positions S 8 as the color data at the corresponding post-modeling pixel positions via the selector 508 in Step 604 .
  • the first color data storage memory 505 writes the color data read out of the second color data storage memory 507 to the addresses which correspond to the post-modeling pixel positions S 1 .
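  • The alternating read/write roles of the two color data memories can be sketched as follows; list indices stand in for memory addresses, and the data values and function name are arbitrary illustrations.

```python
def remap_colors(read_mem, write_mem, pre_positions):
    """Sketch of the ping-pong remapping of Step 604: for each
    post-modeling pixel position i, read the color at the corresponding
    pre-modeling position from the read memory and write it at address i
    in the write memory. The two memories swap roles each pass."""
    for post_pos, pre_pos in enumerate(pre_positions):
        write_mem[post_pos] = read_mem[pre_pos]
    return write_mem

mem_a = [10, 20, 30, 40]        # stand-in for color data storage memory 505
mem_b = [0, 0, 0, 0]            # stand-in for color data storage memory 507
# Pre-modeling pixel positions S8 for post-modeling positions 0..3.
remap_colors(mem_a, mem_b, [3, 2, 1, 0])   # mem_b becomes [40, 30, 20, 10]
```

On the next pass mem_b would serve as the read memory and mem_a as the write memory, matching the selection described above.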
  • In Step 605 , the image processing apparatus reads the distance components S 7 out of the distance component storage memory 502 and calculates the shading values S 4 .
  • the calculation of the shading values S 4 based on the distance components S 7 is carried out by the shading value generation circuit 503 .
  • In Step 606 , the image processing apparatus adds the shading values S 4 determined in Step 605 and the post-modeling pixel color data S 5 determined in Step 604 using the adder 504 and outputs the results of the addition.
  • the image processing apparatus outputs the shaded pixel color data S 6 in Step 606 , and then returns to Steps 601 and 602 .
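  • The per-pixel data flow of Steps 607 , 603 , 605 , and 606 might be sketched as below. The dict-based memories and the linear shading rule S 4 = k × S 7 are illustrative assumptions; the embodiment leaves the exact shading function to the shading value generation circuit 503 .

```python
def process_pixel(post_pos, pre_pos, dist, dist_mem, color_mem, k=2):
    """One pixel through Steps 607, 603, 605, and 606 (sketch only).
    dist_mem and color_mem are dicts indexed by pixel position."""
    s7 = dist_mem.get(pre_pos, 0)   # accumulated distance read out (S7)
    s9 = dist + s7                  # Step 607: S16 + S7 -> S9
    dist_mem[post_pos] = s9         # Step 603: store S9 for the next pass
    s4 = k * s7                     # Step 605: shading value S4 from S7
    return color_mem[pre_pos] + s4  # Step 606: S5 + S4 -> shaded color S6

dist_mem = {}
color_mem = {0: 100, 1: 50}
# First pass: no previously accumulated distance, so no shading is added.
s6 = process_pixel(1, 0, 5, dist_mem, color_mem)   # s6 == 100, dist_mem == {1: 5}
```

A second call that reads position 1 would pick up the stored distance of 5 and add k × 5 to the color, showing how the accumulated distance darkens (or brightens) successive frames.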
  • this embodiment makes it possible to generate a shaded image which contains overlapping or aligned objects of different shapes from an original picture.
  • FIGS. 11 and 12 are diagrams illustrating the fourth embodiment of an image processing method according to the present invention.
  • FIG. 11 is a diagram showing a configuration of a processing apparatus while FIG. 12 is a flowchart of image processing.
  • the fourth embodiment differs from the first embodiment in that post-shading color data is prepared by adding distance components as shading values to color data without generating special shading values.
  • a distance component S 7 read out of a distance component storage memory 702 is inputted as a shading value directly into an adder 704 .
  • the adder 704 generates post-shading color data by adding a distance component S 7 to color data S 5 outputted from a color data storage memory 705 .
  • a modeling block 701 , the distance component storage memory 702 , the adder 704 , the color data storage memory 705 , and a drawing block 710 in FIG. 11 respectively correspond to the modeling block 101 , distance component storage memory 102 , adder 104 , color data storage memory 105 , and drawing block 110 in FIG. 5.
  • Their operation is the same as the corresponding components in FIG. 5, and thus description thereof will be omitted.
  • FIG. 12 is a flowchart illustrating the fourth embodiment of the image processing method according to the present invention. This embodiment differs from the first embodiment shown in FIG. 6 in that Step 205 in FIG. 6 has been deleted and that Step 206 has been substituted with Step 206 A in which distance components are added to color data. The other steps are the same as those in FIG. 6, and thus description thereof will be omitted.
  • this embodiment can eliminate a shading value generation circuit, and thus allows for size reduction.
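  • In code, the simplified fourth-embodiment shading reduces to a single addition per pixel; the clamp to an 8-bit brightness range is an added assumption, not part of the embodiment.

```python
def shade_direct(color, distance):
    """Fourth-embodiment sketch: the distance component S7 itself serves
    as the shading value, so the post-shading color data is simply the
    sum, clamped here to an 8-bit range (an assumption)."""
    return min(color + distance, 255)

shade_direct(100, 30)   # → 130
```

Because no separate shading value is generated, this is the step that lets the shading value generation circuit be removed entirely.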
  • Although the fourth embodiment has been described as an application to the first embodiment, this technical idea is also applicable to the second and third embodiments.
  • When applied to the second embodiment, the shading value generation circuit 303 shown in FIG. 7 is removed and the signal S 7 is inputted into the adder 304 instead of the signal S 4 .
  • Step 405 in FIG. 8 is deleted and distance components are added instead of shading values in Step 406 .
  • When applied to the third embodiment, the shading value generation circuit 503 shown in FIG. 9 is removed and the signal S 7 is inputted into the adder 504 instead of the signal S 4 .
  • Step 605 in FIG. 10 is deleted and distance components are added instead of shading values in Step 606 .
  • FIGS. 13 and 14 are diagrams illustrating the fifth embodiment of an image processing method according to the present invention.
  • FIG. 13 is a diagram showing a configuration of a processing apparatus while FIG. 14 is a flowchart of image processing.
  • the fifth embodiment differs from the first embodiment in that shading values are stored in a table and retrieved based on corresponding distance components.
  • instead of the shading value generation circuit 103 shown in FIG. 5, the image processing apparatus according to this embodiment comprises a shading value memory 803 , whose addresses correspond to the distance components S 7 and which stores shading value S 4 data for the pixel color data.
  • the shading value memory 803 outputs the shading values S 4 corresponding to inputted distance components S 7 .
  • a modeling block 801 , distance component storage memory 802 , adder 804 , color data storage memory 805 , and drawing block 810 in FIG. 13 respectively correspond to the modeling block 101 , distance component storage memory 102 , adder 104 , color data storage memory 105 , and drawing block 110 in FIG. 5. Their operation is the same as the corresponding components in FIG. 5, and thus description thereof will be omitted.
  • FIG. 14 is a flowchart illustrating the fifth embodiment of the image processing method according to the present invention.
  • This embodiment differs from the first embodiment shown in FIG. 6 in that Step 205 , in which the shading values S 4 are calculated from the inputted distance components S 7 by the shading value generation circuit 103 , has been substituted with Step 205 A, in which the shading values S 4 stored in tabular form in the shading value memory 803 are read out using the addresses which correspond to the distance components S 7 .
  • the other steps are the same as those in FIG. 6, and thus description thereof will be omitted.
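  • A minimal sketch of the table lookup of Step 205 A follows, assuming an 8-bit distance range and an arbitrary d // 4 mapping for the stored shading values; both are illustrative choices, not taken from the embodiment.

```python
# Shading values precomputed into a table addressed by distance component,
# replacing the shading value generation circuit. The table size and the
# d // 4 mapping are illustrative assumptions.
MAX_DISTANCE = 256
shading_table = [d // 4 for d in range(MAX_DISTANCE)]

def shade_lut(color, distance):
    """Step 205A sketch: read the shading value at the address that
    corresponds to the distance component and add it to the color data."""
    return color + shading_table[distance]

shade_lut(100, 40)   # table entry 40 // 4 == 10, so the result is 110
```

The table trades a small memory for the arithmetic of the generation circuit, and also allows arbitrary, non-linear shading curves to be loaded without changing the hardware.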
  • Although the fifth embodiment has been described as an application to the first embodiment, this technical idea is also applicable to the second and third embodiments.
  • When applied to the second embodiment, the shading value generation circuit 303 shown in FIG. 7 is substituted with a shading value storage memory from which the shading values are read using addresses which correspond to the distance components S 7 .
  • Correspondingly, Step 405 in FIG. 8 is changed so that the shading values are read out of the shading value storage memory.
  • When applied to the third embodiment, the shading value generation circuit 503 shown in FIG. 9 is substituted with a shading value storage memory from which the shading values are read using addresses which correspond to the distance components S 7 .
  • Correspondingly, Step 605 in FIG. 10 is changed so that the shading values are read out of the shading value storage memory.
  • As described above, the distance components between the screen and the model surface calculated during modeling are reused during rendering.
  • Brightness is calculated during shading simply by adding values obtained from the distance components to the brightness values read as color data at the post-modeling pixel positions. Consequently, shading effects similar to those of ray tracing can be produced easily without the need for cosine calculations. This simplifies the image processing apparatus and reduces processing time.
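  • To make the "no cosine calculations" point concrete, compare a classic Lambert shading step, which needs a dot product (cosine) per pixel, with the distance-based addition described here; both functions and the constant k are illustrative sketches, not the claimed implementation.

```python
def lambert(brightness, normal, light):
    """Classic cosine shading: per-pixel dot product of unit vectors."""
    cos_t = sum(n * l for n, l in zip(normal, light))
    return brightness * max(cos_t, 0.0)

def distance_shade(brightness, distance, k=0.5):
    """Shading as in this method: add a value derived from the
    already-available distance component; no trigonometry needed."""
    return brightness + k * distance

lambert(200, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # → 200.0
distance_shade(200, 40)                           # → 220.0
```

The distance-based path replaces a per-pixel multiply-and-normal computation with one addition, which is the source of the simplification and speed-up claimed above.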
