WO2008014384A2 - Real-time scenery and animation - Google Patents

Real-time scenery and animation

Info

Publication number
WO2008014384A2
Authority
WO
WIPO (PCT)
Prior art keywords
cloud
real
grid
values
time
Prior art date
Application number
PCT/US2007/074441
Other languages
French (fr)
Other versions
WO2008014384A3 (en)
Inventor
Jonathan R. Klein
Andrew C. O'meara
Anthony Key Compton
Gary S. Gluck
Original Assignee
Soundspectrum, Inc.
Priority date
Filing date
Publication date
Application filed by Soundspectrum, Inc. filed Critical Soundspectrum, Inc.
Publication of WO2008014384A2 publication Critical patent/WO2008014384A2/en
Publication of WO2008014384A3 publication Critical patent/WO2008014384A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation

Definitions

  • the final image can be generated using control points on a 2D grid.
  • the same process is used as with the per-pixel rendering, but here, computation is done only at discrete points throughout the grid, and the pixels in between are filled in using texture interpolation.
  • Reasonable results can be produced using grids on the order of 100 x 100 points, but quality improves steadily as the grid size is increased.
  • the control-point approach is therefore adaptive and may be modified dynamically to favor either image quality or performance. Though the techniques are implemented independently, the control-point technique approaches the individual-pixel technique as the number of control points approaches the number of actual pixels to be rendered on screen.
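  • As a minimal sketch of this control-point approach (Python with NumPy; the grid sizes and the bilinear upsampling are illustrative assumptions, not taken from the disclosure), cloud values computed on a coarse grid can be interpolated up to screen resolution as follows:

    import numpy as np

    def upsample_control_points(control_values, out_h=1000, out_w=1000):
        """Bilinearly interpolate a coarse control-point grid (e.g. 100x100) to screen resolution."""
        gh, gw = control_values.shape
        ys = np.linspace(0, gh - 1, out_h)
        xs = np.linspace(0, gw - 1, out_w)
        y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
        fy = (ys - y0)[:, None]          # fractional row position of each output pixel
        fx = (xs - x0)[None, :]          # fractional column position of each output pixel
        top = (1 - fx) * control_values[np.ix_(y0, x0)] + fx * control_values[np.ix_(y0, x1)]
        bot = (1 - fx) * control_values[np.ix_(y1, x0)] + fx * control_values[np.ix_(y1, x1)]
        return (1 - fy) * top + fy * bot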
  • the cloud transparency values are computed at each point P by offsetting position values, using the following steps:
    a. Create coordinates P1 by adding independent noise values to P for the x- and y-dimensions;
    b. Sample the macro-scale image at point P1 to determine intensity I;
    c. Subject I to a subtractive noise function, which emulates decay and evaporation of cloud vapor.
  • the subtractive noise function (step c) is used to emulate the gradual decay of cloud vapor as vapor concentration drops off. Instead of fading out cloud opacity smoothly as vapor concentration approaches 0, the decay instead creates larger and larger gaps and disturbances in the cloud vapor.
  • the contrast between a smooth opacity fade and decay is illustrated in Figs. 2(a)-(d).
  • Fig. 2(a) shows the smooth opacity fade of the input image.
  • Fig. 2(c) shows the subtractive noise decay function applied.
  • the subtractive noise function works by subtracting noise values from the cloud opacity levels in proportion to (1.0 - opacity).
  • the subtractive noise has the effect of probabilistically setting opacity values to below 0.0 (meaning they are fully transparent and thus invisible) with a probability proportional to (1.0 - opacity). Points with opacity values of near 1.0 are only rarely set below 0.0 by the function, while those with small opacity values are almost always set below 0.0. Because the source noise used in the function is spatially continuous fractal noise, the resulting images show continuous regions of decay patterns.
  • the resulting opacity values are scaled up by large constant values such that regions which remain positive are set to high opacity values, while those that go negative are clamped to 0.
  • the result of this scaling is that even the sparse bits of cloud vapor that remain in regions of low overall vapor are set to high intensity values. This emulates the behavior of actual clouds in which even mostly empty regions of sky can have small, intense traces of cloud vapor that appear visually as intense or almost as intense as regions with high cloud concentration.
  • the resulting transparency at each point P is then used to render a semi-transparent cloud layer.
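  • A minimal sketch of the subtractive noise and scaling described above (Python with NumPy; the gain constant is an illustrative assumption, not a value from the disclosure):

    import numpy as np

    def subtractive_decay(opacity, noise, gain=4.0):
        """Subtract noise in proportion to (1.0 - opacity), then scale up and clamp.

        Points with opacity near 1.0 rarely go negative, while points with low
        opacity almost always do, leaving continuous 'evaporated' gaps; the gain
        pushes the surviving wisps back up toward full intensity.
        """
        decayed = opacity - noise * (1.0 - opacity)
        return np.clip(gain * decayed, 0.0, 1.0)   # negative values clamp to fully transparent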
  • Coloring applicator 48 simulates lighting by creating a two-dimensional lighting grid of scalar values for each grid point P by sampling cloud-field intensity at one or more other grid points representing paths from one or more directional or ambient light sources to the point P; and assigning output colors to the cloud images through interpolation between two input color fields according to values in the lighting grid. While the previous section deals with the overall shape, texture and appearance of clouds and cloud/non-cloud boundaries by producing layers of cloud data with varying levels of opacity, it does not address the color or shading of the clouds.
  • Realistic lighting effects for 2D or 3D surfaces are typically accomplished by tracing the path of rays of light as they travel from a light source and are reflected and absorbed by objects in a scene. This can be done in real-time for a small number of objects, but for realistic lighting of a 2D or 3D scalar field, each point in the grid must be treated as an individual object which can absorb and reflect rays of light in a scene. For the most realistic lighting effects, one would need to trace the path of each ray of light through the scalar field, calculating how the light is reflected and absorbed through each individual scalar grid point "particle" it encounters. For a sample grid size of 128x128, this corresponds to 16,384 objects in a simple scene, which is far too complex for a real-time application.
  • the lighting method described here emulates some of these effects with a real-time approach. Without considering the actual rays of light which illuminate the points in the scalar field, a shading map is constructed which indicates how much a given grid location is obscured from potential paths of light. Although actual paths of light are not explicitly determined, the scalar map gives an aggregate assessment of how much light a given grid point could be exposed to. Though the resulting images are not based on accurate lighting models and do not even explicitly model real lights, they do produce believable lighting results.
  • a lighting computation must be made for each point in the scalar field.
  • These lighting values are determined at each point P in the input grid by calculating a weighted average of cloud intensity values at a number of lighting points L which are found on potential lighting paths from ambient or directional light sources towards point P.
  • the specific weighting values and source locations are used to indicate the strength, direction and type of the light sources.
  • Fig. 3 shows a simple example of a desired output lighting map which can be used to color an image (diagram 120 of a lighting map).
  • the shading at point P is calculated as the weighted average of the cloud intensity at points L1, L2 and L3.
  • the weight associated with a point L indicates the type and brightness of the light source.
  • the strongly weighted L2 represents the path to P from a directional light source, while the weakly weighted points L1 and L3 represent an ambient light source.
  • Different configurations of sample points can be used for different types of light sources.
  • To represent ambient light, multiple sample points are used, roughly equidistant from P and arranged in a circular pattern.
  • To represent a far-off point source such as the sun, multiple sample points are used at varying distances along a straight line, or fanned within a small angle (to represent reflection or bending of light rays by unspecified objects).
  • Fig. 5 depicts a diagram 124 of lighting sample points for weak ambient lighting.
  • Fig. 6 depicts a diagram 126 of lighting sample points for a strong, distant point source (the sun).
  • the resulting lighting map is a 2D scalar field describing the darkness at each point in the macro-level grid. As with the cloud opacity layers, these points are perturbed in space to introduce additional texturing effects into the lighting. Then, at each point, the color is computed by interpolating between two input image fields, one representing the fully lit regions and the other representing the fully dark regions, according to the strength of the perturbed lighting map value corresponding to the point. In its simplest embodiment, the "lit" input image can be totally white and the "dark" input image can be totally black. The interpolation with the lighting map then generates a cloud image that is fully white when lit, fully black when dark, and shades of gray in between indicating the amount of light hitting a particular region.
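  • A minimal sketch of this lighting pass (Python with NumPy; the sample offsets, weights, and colors are illustrative assumptions): cloud intensity is averaged at a few points stepped toward the sun plus an ambient ring of samples, and the result is used to interpolate between the "lit" and "dark" colors.

    import numpy as np

    def lighting_map(cloud, sun_dir=(3, 1), sun_steps=(2, 4, 6), ambient_radius=3):
        """Weighted average of cloud intensity at sample points on paths toward the light sources."""
        h, w = cloud.shape
        ys, xs = np.mgrid[0:h, 0:w]
        samples, weights = [], []
        # Strong, distant point source (the sun): heavily weighted samples stepped along one direction.
        for step, weight in zip(sun_steps, (3.0, 2.0, 1.0)):
            sy = np.clip(ys + step * sun_dir[1], 0, h - 1)
            sx = np.clip(xs + step * sun_dir[0], 0, w - 1)
            samples.append(cloud[sy, sx]); weights.append(weight)
        # Weak ambient light: a ring of roughly equidistant, lightly weighted samples.
        for dy, dx in ((ambient_radius, 0), (-ambient_radius, 0), (0, ambient_radius), (0, -ambient_radius)):
            samples.append(cloud[np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)])
            weights.append(0.5)
        shade = sum(wgt * s for wgt, s in zip(weights, samples)) / sum(weights)
        return 1.0 - shade   # more cloud along the light paths means less light reaching the point

    def shade_colors(light, lit_rgb=np.array([1.0, 1.0, 1.0]), dark_rgb=np.array([0.0, 0.0, 0.0])):
        """Interpolate between the 'lit' and 'dark' input colors according to the lighting map."""
        return light[..., None] * lit_rgb + (1.0 - light[..., None]) * dark_rgb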
  • the process above describes a single layer of cloud imagery. Individual layers appear cloud-like, but generally flat.
  • An embodiment of the algorithm achieves improved results by rendering multiple layers of cloud imagery based on the same macro-level behaviors. Variation in the multiple layers is achieved by scaling and offsetting the animated noise textures used by each of the layers. Because the animated noise layers are responsible for determining the shape and decay patterns of the clouds on the micro-level, sampling from different areas of the noise textures for each layer of cloud data may change the shape and texture of each layer significantly, thus creating a diverse multi-layered appearance based on the same macro-level inputs.
  • the number of layers rendered can be adjusted according to the desired quality level or the available system resources. Typically, 3 layers of cloud imagery are sufficient to produce reasonable output quality, with more layers providing further enhancement.
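  • A minimal, self-contained sketch of this layering (Python with NumPy; the per-layer offsets, gain, and compositing are illustrative assumptions): each layer samples a differently offset region of the same noise volume, and the resulting semi-transparent layers are composited back to front.

    import numpy as np

    def composite_layers(macro, noise_volume, num_layers=3, base_color=np.array([1.0, 1.0, 1.0])):
        """Render several cloud layers from one macro field by offsetting the noise per layer."""
        h, w = macro.shape
        out_rgb = np.zeros((h, w, 3))
        out_alpha = np.zeros((h, w))
        for layer in range(num_layers):
            # Each layer reads a different, shifted slice of the shared noise volume.
            noise = np.roll(noise_volume[layer % len(noise_volume)], shift=17 * layer, axis=1)
            opacity = np.clip(4.0 * (macro - noise * (1.0 - macro)), 0.0, 1.0)
            # Simple back-to-front alpha compositing of the semi-transparent layer.
            out_rgb = opacity[..., None] * base_color + (1.0 - opacity)[..., None] * out_rgb
            out_alpha = opacity + (1.0 - opacity) * out_alpha
        return out_rgb, out_alpha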
  • the invention provides a computer-readable/useable medium that includes computer program code to enable a computer infrastructure to render algorithmically generated animated cloud imagery in real-time.
  • the computer-readable/useable medium includes program code that implements each of the various process steps of the invention. It is understood that the terms computer-readable medium or computer-useable medium comprise one or more of any type of physical embodiment of the program code.
  • the computer-readable/useable medium can comprise program code embodied on one or more portable storage articles of manufacture (e.g., a compact disc, a magnetic disk, a tape, etc.), on one or more data storage portions of a providing device, such as memory 22 (Fig. 1) and/or storage system 30 (Fig. 1) (e.g., a fixed disk, a read-only memory, a random access memory, a cache memory, etc.), and/or as a data signal (e.g., a propagated signal) traveling over a network (e.g., during a wired/wireless electronic distribution of the program code).
  • the invention provides a business method that performs the process steps of the invention on a subscription, advertising, and/or fee basis. That is, a service provider, such as a Solution Provider, could offer to render algorithmically generated animated cloud imagery in real-time.
  • the service provider can create, maintain, support, etc., a computer infrastructure, such as computerized infrastructure 12 (Fig. 1) that performs the process steps of the invention for one or more customers.
  • the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
  • the invention provides a computer-implemented method for rendering algorithmically generated animated cloud imagery in real-time.
  • In this case, a computerized infrastructure, such as computerized infrastructure 12 (Fig. 1), can be provided, and one or more systems for performing the process steps of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computerized infrastructure.
  • the deployment of a system can comprise one or more of (1) installing program code on a providing device, such as computer system 14 (Fig. 1), from a computer-readable medium; (2) adding one or more providing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computerized infrastructure to perform the process steps of the invention.
  • the invention provides a business method that performs the process steps of the invention for purposes of rendering an animation display on a device, such as computer system 14 (Fig. 1), such that the animation display can be recorded through an I/O interface 26 (Fig. 1) and the output can be stored on external devices 28 (Fig. 1), such as a hard drive, a CD-ROM, a DVD, or other recordable storage medium.
  • the invention provides a business method that performs the process steps of the invention for purposes of rendering animation display on devices such as mobile phones, personal digital assistants, gaming systems, and other devices capable of implementing the inventions described.
  • the invention provides a business method that performs the process steps of the invention for purposes of rendering animation display in response to input from a user regarding one or more variables that are used to create a customized animation display for the user.
  • "program code" and "computer program code" are synonymous and mean any expression, in any language, code or notation, of a set of instructions intended to cause a providing device having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
  • program code can be embodied as one or more of: an application/software program, component software/a library of functions, an operating system, a basic I/O system/driver for a particular providing and/or I/O device, and the like.
  • processing unit can refer to a single processing unit or multiple processing units, which may include one or multiple of the following: (i) central processing unit (CPU), (ii) graphic processing unit (GPU), (iii) math co-processing unit, or (iv) any other processing unit capable of interpreting instructions and processing data from computer program code or from another processing unit.
  • processing units may exist within a single computer environment connected to each other through a system memory controller or a memory bus such as bus 24 (Fig. 1) or an input/output interface such as I/O interfaces 26 (Fig. 1).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Real-time animation of cloud scenery is provided. Input of an initial image (62) provides a two-dimensional grid of scalar values that evolves over time to define the borders of the cloud and movement of the cloud on a macro level. A noise texture applicator (44) generates a cloud layer by perturbing positions of the input scalar values with noise textures. An opacity applicator (46) creates a cloud-like decay texture pattern by applying animated noise textures to opacity values to reduce cloud opacity in continuous regions of the image. A coloring applicator (48) simulates lighting effects by creating a two-dimensional lighting grid of scalar values for each grid point by sampling cloud-field intensity at one or more other grid points representing paths from one or more directional or ambient light sources. The result is real-time animated cloud scenery that is photorealistic in movement, cloud shape, cloud texture, and/or cloud behavior.

Description

REAL-TIME CLOUD SCENERY AND ANIMATION
Background of the Invention
1. Field of the Invention
This disclosure describes the creation of realistic animated cloud-scenes in real-time using animated noise textures applied to arbitrary macro-scale shapes and input patterns.
2. Related Art
Performing an atmospheric computer simulation in order to create photorealistic computer-generated cloudscapes in real-time quickly becomes impossible for even a super-computer (let alone a typical desktop computer) due to the typical dataset size and running time of the simulation algorithms. If simulating cloud behavior were not daunting enough, rendering cloud model data (i.e., drawing the model data coherently on screen) in real-time on modern computers is also an impossibility for similar reasons: dataset size and running time of the algorithms involved. For example, consider a model data set of 1000x1000x1000 points. If each point had just two single-precision floating-point values associated with it, then 8 GB of RAM would be needed just to store the point data, which is twice the maximum RAM a 32-bit desktop computer can even address. Even so, for a single modern-day computer with a 10 GB/s memory access rate, simply reading and then writing that amount of data just once would take more than a full second. To actually perform computations on multiple samples of the data would take orders of magnitude longer. Keep in mind that "real-time" frame rates imply about 20 or more frames per second.
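The arithmetic behind these figures can be checked directly; a minimal Python sketch, assuming 4-byte single-precision floats and the 10 GB/s access rate quoted above:

    # Back-of-the-envelope check of the dataset-size argument above.
    points = 1000 * 1000 * 1000                      # 1000x1000x1000 model grid
    bytes_per_point = 2 * 4                          # two single-precision floats per point
    total_bytes = points * bytes_per_point
    print(total_bytes / 1e9)                         # ~8.0 GB just to store the point data

    bandwidth = 10e9                                 # assumed 10 GB/s memory access rate
    seconds_per_pass = 2 * total_bytes / bandwidth   # one read plus one write of the dataset
    print(seconds_per_pass)                          # ~1.6 s for a single read/write pass

    print(seconds_per_pass * 20)                     # ~32 s of memory traffic per second of 20 fps animation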
Performing 2D atmospheric simulations in real-time may be feasible, but turning a 2D cloud scene into a 3D photorealistic scene poses a different set of challenges. If lighting effects and movement/flow in directions that cross the simulation plane is desired, the challenges are even more daunting. It's not clear that creating a photorealistic animated cloudscape from a 2D simulation is even possible, given the absence of model data in the cross-plane direction. Also, as with 3D simulations, rendering cloud model data derived from physically-inspired simulation is difficult to impossible for even high-level desktop computers.
Summary of the Invention
Aspects of the current invention provide a solution for drawing real-time, animated cloud scenery that is photorealistic in movement, cloud shape, cloud texture, and/or cloud behavior. In an embodiment, an initial image map is provided that defines the borders of the cloud and defines movement of the cloud on a macro level. Micro level noise texture, opacity and/or coloring/lighting attributes are then applied. The result is cloud shapes, characteristics, and movement that resemble what one would expect to see if a camera was aimed upwards into the sky. The system can be used to simulate "time-lapse" photography effects in order to produce cloud scenes with a time compression of about 5 to 20 times or more.
The main steps in the algorithm are as follows:
1. Procuring input of animated scalar field of "macro-level" features;
2. Computing lighting maps based on macro-level input fields; and
3. Rendering one or more layers of cloud imagery using the following steps (a code sketch of this pipeline appears after the optional steps below):
a. Macro-level cloud field is spatially perturbed using "noise" textures to yield a cloud transparency map;
b. Cloud transparency map is subjected to a subtractive noise algorithm to yield realistic cloud evaporation and decay effects;
c. If a lighting map is available, compute colors by interpolating input color fields according to the lighting map.
The algorithm can also include one or more of the following steps:
4. Enhancing a cloud-like decay texture pattern by subjecting opacity values to a function in which animated noise textures are used to reduce cloud opacity in continuous regions;
5. Simulating lighting effects using the following steps: creating a two-dimensional lighting grid of scalar values for each grid point P by sampling cloud-field intensity at one or more other grid points representing paths from one or more directional or ambient light sources to the point P;
6. Assigning output colors to the cloud images through interpolation between two input color fields according to values in the lighting grid; and
7. Animating the noise textures in time to simulate effects of wind, condensation and evaporation.
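To make the relationship between these steps concrete, the following minimal sketch (Python with NumPy; the function names, parameters, and constants are illustrative assumptions rather than the disclosed implementation) computes one frame from a macro-level field, precomputed noise fields, and a lighting map:

    import numpy as np

    def render_frame(macro, noise_x, noise_y, noise_decay, light_map, lit_rgb, dark_rgb,
                     perturb_scale=6.0, opacity_gain=4.0):
        """One frame of the cloud pipeline sketched in steps 1-3 above.

        macro                -- 2D scalar field of macro-level cloud amounts (step 1)
        noise_x, noise_y     -- 2D noise fields used to perturb sample positions (step 3a)
        noise_decay          -- 2D noise field used by the subtractive noise step (step 3b)
        light_map            -- 2D lighting map computed from the macro field (step 2)
        lit_rgb, dark_rgb    -- colors interpolated according to the lighting map (step 3c)
        """
        h, w = macro.shape
        ys, xs = np.mgrid[0:h, 0:w]

        # Step 3a: perturb sample positions with noise, then sample the macro field.
        px = np.clip(xs + perturb_scale * (noise_x - 0.5), 0, w - 1).astype(int)
        py = np.clip(ys + perturb_scale * (noise_y - 0.5), 0, h - 1).astype(int)
        intensity = macro[py, px]

        # Step 3b: subtractive noise emulating evaporation and decay, then scale and clamp.
        opacity = np.clip(opacity_gain * (intensity - noise_decay * (1.0 - intensity)), 0.0, 1.0)

        # Step 3c: color by interpolating between the "lit" and "dark" input colors.
        color = light_map[..., None] * lit_rgb + (1.0 - light_map[..., None]) * dark_rgb
        return color, opacity

In practice, this per-frame work would typically run in a per-pixel fragment shader, as discussed in the detailed description below.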
A first aspect of the present invention provides a real-time system for rendering algorithmically generated animated cloud imagery comprising: an initial image provider for providing an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and a noise texture applicator for generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
A second aspect of the present invention provides a real-time system for rendering algorithmically generated animated cloud imagery comprising: means for obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and means for generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
A third aspect of the present invention provides a real-time method for rendering algorithmically generated animated cloud imagery comprising: obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
A fourth aspect of the present invention provides a program product stored on a computer readable medium for rendering algorithmically generated animated cloud imagery in real time, the program product comprising program code for performing the following steps: obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
A fifth aspect of the present invention provides a method for deploying a real-time system for rendering algorithmically generated animated cloud imagery comprising: providing a computer infrastructure being operable to: obtain an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generate at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
Brief Description of the Drawings
These and other features of this invention will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings in which:
Fig. 1 depicts an illustrative computerized implementation in accordance with an embodiment of the present invention.
Fig. 2(a) depicts a test pattern "macro-level" input field in accordance with an embodiment of the present invention.
Fig. 2(b) depicts the test pattern with spatial perturbation applied only, in accordance with an embodiment of the present invention.
Fig. 2(c) depicts the test pattern with a subtractive noise function applied only, in accordance with an embodiment of the present invention.
Fig. 2(d) depicts the test pattern with a subtractive noise function and spatial perturbation applied, in accordance with an embodiment of the present invention.
Fig. 3 depicts a diagram of a simple lighting map in accordance with an embodiment of the present invention.
Fig. 4 depicts a diagram of lighting sample points from different types of light sources in accordance with the present invention.
Fig. 5 depicts a diagram of lighting sample points for weak ambient lighting in accordance with the present invention.
Fig. 6 depicts a diagram of lighting sample points for a strong, distant point source (the sun) in accordance with the present invention.
The drawings are not necessarily to scale. The drawings are merely schematic representations, not intended to portray specific parameters of the invention. The drawings are intended to depict only typical embodiments of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements.
Detailed Description of the Invention
As indicated above, aspects of the current invention provide a solution for drawing real-time, animated cloud scenery that is photorealistic in movement, cloud shape, cloud texture, and/or cloud behavior. In an embodiment, an initial image map is provided that defines the borders of the cloud and defines movement of the cloud on a macro level. Micro level noise texture, opacity and/or coloring/lighting attributes are then applied. The result is cloud shapes, characteristics, and movement that resemble what one would expect to see if a camera was aimed upwards into the sky. The system can be used to simulate "time-lapse" photography effects in order to produce cloud scenes with a time compression of about 5 to 20 times or more.
Computerized Implementation
Referring now to Fig. 1, a real-time system 10 for rendering algorithmically generated animated cloud imagery according to an embodiment of the present invention is shown. As depicted, system 10 includes a computer system 14 deployed within a computerized infrastructure/environment 12. This is intended to demonstrate, among other things, that the present invention could be implemented within a network environment (e.g., the Internet, a wide area network (WAN), a local area network (LAN), a virtual private network (VPN), etc.), or on a stand-alone computer system. In the case of the former, communication throughout the network can occur via any combination of various types of communications links. For example, the communication links can comprise addressable connections that may utilize any combination of wired and/or wireless transmission methods. Where communications occur via the Internet, connectivity could be provided by conventional TCP/IP sockets-based protocol, and an Internet service provider could be used to establish connectivity to the Internet. Still yet, computerized infrastructure 12 is intended to demonstrate that some or all of the components of system 10 could be deployed, managed, serviced, etc. by a service provider who offers to provide and/or deploy software that provides real-time rendering of algorithmically generated animated cloud imagery according to the present invention.
As shown, computer system 14 includes a processing unit 20, a graphic processing unit 21 (optional), a memory 22, a bus 24, and input/output (I/O) interfaces 26. Further, computer system 14 is shown in communication with external I/O devices/resources 28 and storage system 30. In general, processing unit 20 executes computer program code, such as imagery system 40, which is stored in memory 22 and/or storage system 30. While executing computer program code, processing unit 20 can read and/or write data to/from graphic processing unit 21, memory 22, storage system 30, and/or I/O interfaces 26. Bus 24 provides a communication link between each of the components in computer system 14. External interfaces 28 can comprise any devices (e.g., keyboard, pointing device, display, etc.) that enable a user to interact with computer system 14 and/or any devices (e.g., network card, modem, etc.) that enable computer system 14 to communicate with one or more other providing devices. Computerized infrastructure 12 is only illustrative of various types of computer infrastructures for implementing the invention. For example, in one embodiment, computer infrastructure 12 comprises two or more providing devices (e.g., a server cluster) that communicate over a network to perform the various process steps of the invention. Moreover, computer system 14 is only representative of various possible computer systems that can include numerous combinations of hardware. To this extent, in other embodiments, computer system 14 can comprise any specific purpose article of manufacture comprising hardware and/or computer program code for performing specific functions, any article of manufacture that comprises a combination of specific purpose and general purpose hardware/software, or the like. In each case, the program code and hardware can be created using standard programming and engineering techniques, respectively. Moreover, processing unit 20 and/or graphic processing unit 21 may comprise a single processing unit, or be distributed across one or more processing units in one or more locations, e.g., on a client and server. Similarly, memory 22 and/or storage system 30 can comprise any combination of various types of data storage and/or transmission media that reside at one or more physical locations. Further, I/O interfaces 26 can comprise any system for exchanging information with one or more external interfaces 28. Although not shown as such in Fig. 1, I/O interfaces 26 could be used to exchange information with GPU 21. Still further, it is understood that one or more additional components (e.g., system software, math co-processing unit, etc.) not shown in Fig. 1 can be included in computer system 14. However, if computer system 14 comprises a handheld device or the like, it is understood that one or more external interfaces 28 (e.g., a display) and/or storage system 30 could be contained within computer system 14, not externally as shown.
Storage system 30 can be any type of system (e.g., a database) capable of providing storage for information under the present invention such as use information, resource information, data decay probabilities etc. To this extent, storage system 30 could include one or more storage devices, such as a magnetic disk drive or an optical disk drive. In another embodiment, storage system 30 includes data distributed across, for example, a local area network (LAN), wide area network (WAN) or a storage area network (SAN) (not shown). In addition, although not shown, additional components, such as cache memory, communication systems, system software, etc., may be incorporated into computer system 14.
Shown in memory 22 of computer system 14 is imagery system 40, which enables computer system 14 to perform the method described herein. Imagery system 40 modifies an initial image 62 to produce a final image 64 that has real-time, animated cloud 66 scenery that is photorealistic in movement, cloud shape, cloud texture, and/or cloud behavior on a typical desktop computer. To this extent, imagery system 40 includes an initial image provider 42, a noise texture applicator 44, an opacity applicator 46 and a coloring applicator 48.
Low-Resolution Macro Behavior Inputs
Initial image provider 42 obtains an initial image 62 that is used to produce cloud 66 in final image 64. Initial image 62 described in this invention includes a two-dimensional grid of scalar values that evolves over time. To this extent, initial image 62 has several functions. First, initial image 62 is used to define boundaries for cloud 66 and/or the location of cloud 66 within final image 64. Further, initial image 62 may contain gradual opacity effects that are used in creating cloud-like opacity regions over the entire image. Still further, initial image 62 may contain data that describes macro level movement of cloud 66 within final image 64. Because of this, values in the initial image 62 grid are preferably scalar, i.e., are single real numbers representing the amount (or "scale") of cloud at each grid point.
Fig. 2(a) shows a sample test pattern frame 110 having initial image 62 (Fig. 1) used as the macro-level algorithm input. As shown, initial image 62 contains several lighter colored squares bordered by darkness. The larger of the squares has various levels of shading that range from lighter (on the left) to darker (on the right). Even though initial image 62 is illustrated as being square in shape, this shape is not limiting. On the contrary, the use of initial image 62 having a square shape is designed to illustrate that realistic cloud 66 images can be created even from initial images 62 having basic shapes. To this extent, any shape of initial image 62 may be used in this invention.
As such, initial image 62 represents the "macro-level" of the cloud simulation, meaning the overall position of bodies of cloud formations in the sky. This data does not capture the "micro-level" behaviors of the clouds, which include the cloud-like texture and shape of the clouds on a smaller scale. Separating the micro-level behaviors from macro-level behaviors allows for the creation of realistic cloud appearances applied to any shape or pattern, thus allowing artists or programmers a high level of precise control over the overall cloud behaviors and placement of clouds within a scene, while maintaining a realistic appearance for the clouds on the micro-level. The input data can be of any resolution, but is typically far smaller than the output cloud animation. Though quality does increase with larger input fields, even for output animations of over 1000x1000 pixels, an input field of 128x128 can produce reasonable results.
The macro behaviors may be created by any process capable of generating an evolving grid of 2D scalar values in real-time (a minimal code sketch of one such process follows this list), including:
• pre-rendered video created by artists;
• fluid-like computer simulations of gases or liquids moving through a space; and/or
• composites of shapes or image sprites moving through a space
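As an illustration of the last option, a minimal sketch (Python with NumPy; the blob positions, radii, and drift rates are illustrative assumptions) that composites a few soft shapes drifting through a 128x128 scalar field:

    import numpy as np

    def macro_field(t, size=128, blobs=((0.2, 0.3, 18.0), (0.6, 0.5, 30.0), (0.4, 0.8, 12.0))):
        """Evolving macro-level input: soft Gaussian 'sprites' drifting slowly across the grid."""
        ys, xs = np.mgrid[0:size, 0:size]
        field = np.zeros((size, size))
        for i, (cx, cy, radius) in enumerate(blobs):
            # Each blob drifts at its own slow rate and wraps around the grid edges.
            x0 = (cx * size + 3.0 * t * (i + 1)) % size
            y0 = (cy * size + 1.5 * t) % size
            d2 = (xs - x0) ** 2 + (ys - y0) ** 2
            field += np.exp(-d2 / (2.0 * radius ** 2))
        return np.clip(field, 0.0, 1.0)   # scalar "amount of cloud" at each grid point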
Though initial image 62 is described herein as including a 2D grid of values, it may also be generated by using 2D projections of 3D space. With respect to the examples above, the fluid-like simulations and the composites of shapes can be implemented in either 2D or 3D space. By using 2D projections of 3D space, the input field can take on more of a realistic 3D feel.
Static and Animated Noise Textures
Noise texture applicator 44 generates at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures. Because the algorithm input specifies only the macro-level behaviors of the cloud animation, noise texture applicator 44 dynamically generates the micro-level behaviors using perturbation and decay functions described in detail herein. These perturbation and decay functions each require "noise" data, which is used to produce irregular patterns in time, space and cloud vapor intensity similar to the patterns observed in the behavior of real clouds.
The ideal noise textures contain a desired value distribution as defined by the algorithm designer, illustrate irregular patterns and are spatially continuous. The ideal noise textures, in fact, illustrate a "cloud-like" appearance, though the generation of these textures is done with mathematical functions and does not relate to the cloud imagery generation technique described in this document.
Fig. 2(b) illustrates a sample application by noise texture applicator 44 to test pattern frame 110 (Fig. 2(a)). As illustrated, multiple layers of pattern 110 are subjected to spatial noise perturbation in order to change the shape and appearance of initial image 62, to yield frame 112.
One embodiment of the algorithm uses "fractal noise", which is generated by adding multiple layers of fields containing a uniform random distribution between 0.0 and 1.0, each with a different spatial frequency. Because the macro-scale input data is animated and moves over time, the macro-level behaviors of the cloud animation will likewise be dynamic. In order to capture realistic micro-scale cloud motions over time, the noise texture should be animated such that the values used for perturbation and subtractive noise evolve over time. In order to assure continuous animation and avoid the appearance of "jumping" or "skipping" artifacts, the noise textures must themselves be relatively smooth and continuous in time.
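A minimal sketch of that fractal-noise construction (Python with NumPy; the four octaves and the nearest-neighbor upsampling are simplifying assumptions, since practical implementations interpolate smoothly between coarse samples):

    import numpy as np

    def fractal_noise(size=256, octaves=4, rng=np.random.default_rng(0)):
        """Sum of uniform-random fields at increasing spatial frequency, normalized to [0, 1]."""
        total = np.zeros((size, size))
        for octave in range(octaves):
            cells = 2 ** (octave + 2)              # coarse resolution of this frequency layer
            layer = rng.uniform(0.0, 1.0, (cells, cells))
            layer = np.kron(layer, np.ones((size // cells, size // cells)))  # upsample to full size
            total += layer / (2 ** octave)         # higher frequencies contribute less
        return (total - total.min()) / (total.max() - total.min())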
Fig. 2(c) illustrates a sample application by noise texture applicator 44 to test pattern frame 112 (Fig. 2(b)). As illustrated, the subtractive noise function is applied to patterns 110 and/or 112 which simulates the decay of initial image 62 to yield frame 114.
Continuous animated noise textures can be generated in a number of ways, including the use of continuous 3-dimensional mathematical noise functions, smooth interpolation between two or more 2-dimensional noise fields, or by making incremental additions or other changes to an existing 2-dimensional noise field at each frame.
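For example, smooth interpolation between two static 2-dimensional noise fields could be sketched as follows; the cosine easing and the cycle length are illustrative assumptions, not part of the disclosure.

    import numpy as np

    def animated_noise(field_a, field_b, t, period=240.0):
        """Blend two 2D noise fields with a smooth weight that cycles over `period` frames."""
        w = 0.5 - 0.5 * np.cos(2.0 * np.pi * (t % period) / period)
        return (1.0 - w) * field_a + w * field_b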
The cloud rendering algorithm may procure the static or animated noise textures described here in a number of ways: the textures may be generated dynamically, generated once and stored in memory when the cloud rendering algorithm or program is started, or may be generated once and stored on disk and then loaded dynamically when the cloud rendering algorithm or program is started. Because the generation of these textures may require significant computation time and because hard disk space is often plentiful in modern desktop computers, our embodiment of the invention uses large pre-computed 3D fields of noise textures, which are saved to disk and loaded when the cloud rendering algorithm is initialized.
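A hedged sketch of the pre-computed approach is shown below, using NumPy's .npy format purely as a stand-in for whatever on-disk representation an implementation actually chooses, and plain uniform noise as a stand-in for the fractal noise generator.

    import numpy as np

    def build_and_cache_noise(path="noise3d.npy", frames=64, size=256):
        """Generate a 3D stack of noise slices once, save it, then memory-map it at start-up."""
        try:
            return np.load(path, mmap_mode="r")       # reuse the cached field if present
        except FileNotFoundError:
            field = np.random.default_rng(0).random((frames, size, size)).astype(np.float32)
            np.save(path, field)
            return np.load(path, mmap_mode="r")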
Spatial Perturbation and Subtractive Noise Function
Opacity applicator 46 enhances a cloud-like decay texture pattern by subjecting opacity values to a function in which animated noise textures are used to reduce cloud opacity in continuous regions of the image. In order to turn macro-scale shape images into detailed cloud animations, the position and intensity of the values in the macro-level scalar grid are perturbed and processed using data from the animated noise textures. This step of the cloud generation technique determines the transparency values for a layer of cloud imagery. The transparency values are used, along with blending, to render semi-transparent polygons that are opaque where clouds are visible in a scene. In areas of high opacity, the cloud images are at full intensity and cover up any background imagery; in areas of low opacity, the clouds are mostly or fully transparent and the background imagery is visible. This step of the invention is concerned only with transparency and does not explicitly specify the color of the clouds.
Fig. 2(d) illustrates a sample application by opacity applicator 46 to test pattern frame 114 (Fig. 2(c)). As illustrated, initial image 62 has now been modified by both noise texture applicator 44 and opacity applicator 46, applying micro-level detail to generate image 116. As stated earlier, while the macro-level input in this situation is a simple square and not a realistic cloud shape, the micro-level output shows realistic cloud-like shapes and internal texturing. As such, applying the same techniques to more realistic cloud-like shapes which evolve over time can yield realistic cloud animations.
To do so, each frame of the final cloud image animation is computed at a number of points on a 2D grid. These points may correspond to individual pixels when sufficient computational resources are available, or they may correspond to control points of a coarser grid, in which case individual pixel values at the higher output resolution are computed by interpolating between values at the control points.
In the individual pixel scenario, the system can be expected to process on the order of 1 million pixels per frame (for a computer display with a resolution of 1,000 x 1,000 pixels), which can be done using the per-pixel fragment-shader hardware available on much modern consumer-level hardware. The control-point approach can be used when computational and graphics hardware is more limited. By generating cloud images at the individual pixel level, the resulting image is sharp and filled with pixel-level details.
When per-pixel shading is not possible due to hardware limitations, the final image can be generated using control points on a 2D grid. The same process is used as with the per-pixel rendering, but here, computation is done only at discrete points throughout the grid, and the pixels in between are filled in using texture interpolation. Reasonable results can be produced using grids on the order of 100 x 100 points, but quality improves steadily as the grid size is increased. The control-point approach is therefore adaptive and may be modified dynamically to favor either image quality or performance. Though the techniques are implemented independently, the control-point technique approaches the individual-pixel technique as the number of control points approaches the number of actual pixels to be rendered on screen.
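A sketch of the interpolation used by the control-point fallback appears below, assuming plain bilinear interpolation from a coarse control-point grid up to display resolution; in practice this step would normally be performed by the texturing hardware rather than on the CPU.

    import numpy as np

    def upsample_bilinear(control, out_size):
        """Bilinearly interpolate a coarse control-point grid up to out_size x out_size pixels."""
        n = control.shape[0]
        coords = np.linspace(0, n - 1, out_size)
        i0 = np.floor(coords).astype(int)
        i1 = np.minimum(i0 + 1, n - 1)
        f = coords - i0
        rows = (1 - f)[:, None] * control[i0] + f[:, None] * control[i1]   # interpolate rows
        out = (1 - f)[None, :] * rows[:, i0] + f[None, :] * rows[:, i1]    # then columns
        return out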
The cloud transparency values are computed for each point P using the following steps:
a. Create perturbed coordinates P1 by adding independent noise values to P for the x- and y-dimensions.
b. Sample the macro-scale image at point P1 to determine intensity I.
c. Subject I to a subtractive noise function, which emulates the decay and evaporation of cloud vapor.
The subtractive noise function (step c) is used to emulate the gradual decay of cloud vapor as vapor concentration drops off. Instead of fading out cloud opacity smoothly as vapor concentration approaches 0, the decay creates larger and larger gaps and disturbances in the cloud vapor. The contrast between a smooth opacity fade and decay is illustrated in Figs. 1(a)-(d). Fig. 1(a) shows the smooth opacity fade of the input image, while Fig. 1(c) shows the subtractive noise decay function applied. The subtractive noise function works by subtracting noise values from the cloud opacity levels in proportion to (1.0 - opacity). Because the source noise is generated as a random map with values from 0.0 to 1.0, the subtractive noise has the effect of probabilistically setting opacity values below 0.0 (meaning they are fully transparent and thus invisible) with a probability proportional to (1.0 - opacity). Points with opacity values near 1.0 are only rarely set below 0.0 by the function, while those with small opacity values are almost always set below 0.0. Because the source noise used in the function is spatially continuous fractal noise, the resulting images show continuous regions of decay patterns.
In one embodiment of the invention, the resulting opacity values are scaled up by large constant values such that regions which remain positive are set to high opacity values, while those that go negative are clamped to 0. The result of this scaling is that even the sparse bits of cloud vapor that remain in regions of low overall vapor are set to high intensity values. This emulates the behavior of actual clouds in which even mostly empty regions of sky can have small, intense traces of cloud vapor that appear visually as intense or almost as intense as regions with high cloud concentration.
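Putting steps (a) through (c) together with the scaling just described, one possible per-layer sketch is shown below. The perturbation amplitude and the scale constant are illustrative assumptions, and the noise fields are assumed to be slices of the animated noise textures discussed earlier.

    import numpy as np

    def cloud_opacity(macro, noise_x, noise_y, decay_noise,
                      perturb_amp=6.0, scale=4.0):
        """Compute per-point transparency for one cloud layer.

        macro       -- 2D macro-scale intensity grid (values 0..1)
        noise_x/y   -- independent spatial-perturbation noise fields (0..1)
        decay_noise -- fractal noise used by the subtractive decay step (0..1)
        """
        n = macro.shape[0]
        yy, xx = np.mgrid[0:n, 0:n]

        # (a) perturb sample coordinates with independent noise for x and y
        px = np.clip(xx + (noise_x - 0.5) * perturb_amp, 0, n - 1).astype(int)
        py = np.clip(yy + (noise_y - 0.5) * perturb_amp, 0, n - 1).astype(int)

        # (b) sample the macro-scale image at the perturbed coordinates
        intensity = macro[py, px]

        # (c) subtractive noise: subtract in proportion to (1 - opacity), so sparse
        #     regions decay into gaps while dense regions mostly survive
        intensity = intensity - decay_noise * (1.0 - intensity)

        # scale up the survivors and clamp negatives to fully transparent
        return np.clip(intensity * scale, 0.0, 1.0)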
The resulting transparency at each point P is then used to render a semi-transparent cloud layer.
Lighting
Coloring applicator 48 simulates lighting by creating a two-dimensional lighting grid of scalar values for each grid point P by sampling cloud-field intensity at one or more other grid points representing paths from one or more directional or ambient light sources to the point P, and by assigning output colors to the cloud images through interpolation between two input color fields according to values in the lighting grid. While the previous section deals with the overall shape, texture and appearance of clouds and cloud/non-cloud boundaries by producing layers of cloud data with varying levels of opacity, it does not address the color or shading of the clouds. As with the macro-level algorithm inputs, it is typically desirable to give programmers, artists or users arbitrarily fine control over the coloring applied to clouds, while simultaneously integrating algorithmic effects to enhance realism and ensure that the resulting images appear cloud-like. This section describes an algorithm that allows for both realistic cloud behavior and fine control over cloud color.
In order to render realistic lighting effects for a cloud scene, it is desirable to have an algorithm that is capable of capturing a number of phenomena, including:
• ambient and directional light sources;
• self-shadowing; and/or
• absorption and reflection of light.
Realistic lighting effects for 2D or 3D surfaces are typically accomplished by tracing the path of rays of light as they travel from a light source and are reflected and absorbed by objects in a scene. This can be done in real-time for a small number of objects, but for realistic lighting of a 2D or 3D scalar field, each point in the grid must be treated as an individual object which can absorb and reflect rays of light in a scene. For the most realistic lighting effects, one would need to trace the path of each ray of light through the scalar field, calculating how the light is reflected and absorbed through each individual scalar grid point "particle" it encounters. For a sample grid size of 128x128, this corresponds to 16,384 objects in a simple scene, which is far too complex for a real-time application.
The lighting method described here emulates some of these effects with a real-time approach. Without considering the actual rays of light which illuminate the points in the scalar field, a shading map is constructed which indicates how much a given grid location is obscured from potential paths of light. Although actual paths of light are not explicitly determined, the scalar map gives an aggregate assessment of how much light a given grid point could be exposed to. Though the resulting images are not based on accurate lighting models and do not even explicitly model real lights, they do produce believable lighting results.
To obtain the shading map of values in the macro-level input grid, a lighting computation must be made for each point in the scalar field. These lighting values are determined at each point P in the input grid by calculating a weighted average of cloud intensity values at a number of lighting points L which are found on potential lighting paths from ambient or directional light sources towards point P. The specific weighting values and source locations are used to indicate the strength, direction and type of the light sources. Fig. 3 shows a simple example of a desired output lighting map which can be used to color an image (diagram 120 of a lighting map). Referring to Fig. 4 (diagram 122 of lighting sample points from different types of light sources), the shading at point P is calculated as the weighted average of the cloud intensity at points L1, L2 and L3. The weight associated with a point L indicates the type and brightness of the light source. In the diagram, the strongly weighted L2 represents the path to P from a directional light source, while the weakly weighted points L1 and L3 represent an ambient light source.
To effectively represent different kinds of lights, different patterns of sample points can be used. To represent ambient light, multiple sample points are used, roughly equidistant and arranged in a circular pattern. To represent a far-off point source, such as the sun, multiple sample points are used at varying distances along a straight line, or fanned across a small angle (to represent reflection or bending of light rays by unspecified objects).
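As an illustration, the following sketch builds such a shading map for a single distant directional light sampled at three offsets along the light direction, plus a weak ring of ambient samples; the offsets and weights are arbitrary example values, not values specified by the disclosure.

    import numpy as np

    def shading_map(cloud, light_dir=(1.0, 0.0), offsets=(3, 6, 9),
                    dir_weight=0.6, ambient_weight=0.1):
        """Weighted average of cloud intensity at sample points toward a directional light,
        plus a few roughly equidistant ambient samples around each point."""
        dx, dy = light_dir
        shade = np.zeros_like(cloud)
        weights = 0.0

        # directional samples: march toward the light source
        for d in offsets:
            shifted = np.roll(cloud, shift=(int(round(-dy * d)), int(round(-dx * d))), axis=(0, 1))
            shade += dir_weight * shifted
            weights += dir_weight

        # ambient samples: four weakly weighted points in a circle around P
        for ax, ay in [(2, 0), (-2, 0), (0, 2), (0, -2)]:
            shade += ambient_weight * np.roll(cloud, shift=(-ay, -ax), axis=(0, 1))
            weights += ambient_weight

        return shade / weights   # higher values mean more occlusion (darker shading)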
Fig. 5 depicts a diagram 124 of lighting sample points for weak ambient lighting. Fig. 6 depicts a diagram 126 of lighting sample points for a strong, distant point source (the sun).
In practice, realistic lighting effects can be created using as few as three sample points, though quality and nuance of the lighting effects improve continually as additional sample points are used. Using fewer than three sample points tends to produce poor results because cloud boundaries become clearly visible in the samples. Using multiple samples smoothes out these boundaries.
The resulting lighting map is a 2D scalar field describing the darkness at each point in the macro-level grid. As with the cloud opacity layers, these points are perturbed in space to introduce additional texturing effects into the lighting. Then, at each point, the color is computed by interpolating between two input image fields (one representing fully lit regions, the other fully dark regions) according to the strength of the perturbed lighting map value corresponding to the point. In its simplest embodiment, the "lit" input image can be totally white and the "dark" input image can be totally black. The interpolation with the lighting map then generates a cloud image that is fully white when lit, fully black when dark, and shades of gray in between, indicating the amount of light hitting a particular region.
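Assuming a lighting grid normalized so that 0 means fully lit and 1 means fully dark, the coloring step reduces to a per-point linear interpolation between the two input color fields, as in this sketch; the array shapes and RGB representation are illustrative assumptions.

    import numpy as np

    def color_clouds(lighting, lit_colors, dark_colors):
        """Interpolate per point between a fully-lit and a fully-dark color field.

        lighting    -- 2D grid of shading values, 0 = fully lit, 1 = fully dark
        lit_colors  -- (n, n, 3) RGB field used where lighting is 0
        dark_colors -- (n, n, 3) RGB field used where lighting is 1
        """
        w = np.clip(lighting, 0.0, 1.0)[..., None]   # broadcast the weight over RGB channels
        return (1.0 - w) * lit_colors + w * dark_colors

    # Simplest embodiment: plain white where lit, plain black where dark.
    # lit = np.ones((128, 128, 3)); dark = np.zeros((128, 128, 3))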
Rendering Multiple Layers of Cloud Imagery
The process above describes a single layer of cloud imagery. Individual layers appear cloud-like, but generally flat. An embodiment of the algorithm achieves improved results by rendering multiple layers of cloud imagery based on the same macro-level behaviors. Variation in the multiple layers is achieved by scaling and offsetting the animated noise textures used by each of the layers. Because the animated noise layers are responsible for determining the shape and decay patterns of the clouds on the micro-level, sampling from different areas of the noise textures for each layer of cloud data may change the shape and texture of each layer significantly, thus creating a diverse multi-layered appearance based on the same macro-level inputs.
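A minimal sketch of this layering follows, reusing the hypothetical cloud_opacity() helper from the opacity sketch above and simply offsetting each layer's noise fields; the per-layer offsets are arbitrary example values.

    import numpy as np

    def render_layers(macro, noise_x, noise_y, decay_noise, num_layers=3):
        """Build several cloud layers from the same macro input by offsetting the noise fields."""
        layers = []
        for i in range(num_layers):
            ox, oy = 17 * (i + 1), 31 * (i + 1)          # arbitrary per-layer offsets
            layers.append(cloud_opacity(macro,
                                        np.roll(noise_x, ox, axis=1),
                                        np.roll(noise_y, oy, axis=0),
                                        np.roll(decay_noise, (oy, ox), axis=(0, 1))))
        return layers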
The number of layers rendered can be adjusted according to the desired quality level or the available system resources. Typically, 3 layers of cloud imagery are sufficient to produce reasonable output quality, with more layers providing further enhancement.
While shown and described herein as a real-time method and system for rendering algorithmically generated animated cloud imagery, it is understood that the invention further provides various alternative embodiments. For example, in one embodiment, the invention provides a computer-readable/useable medium that includes computer program code to enable a computer infrastructure to render algorithmically generated animated cloud imagery in real-time. To this extent, the computer-readable/useable medium includes program code that implements each of the various process steps of the invention. It is understood that the terms computer-readable medium and computer-useable medium comprise one or more of any type of physical embodiment of the program code. In particular, the computer-readable/useable medium can comprise program code embodied on one or more portable storage articles of manufacture (e.g., a compact disc, a magnetic disk, a tape, etc.), on one or more data storage portions of a providing device, such as memory 22 (Fig. 1) and/or storage system 30 (Fig. 1) (e.g., a fixed disk, a read-only memory, a random access memory, a cache memory, etc.), and/or as a data signal (e.g., a propagated signal) traveling over a network (e.g., during a wired/wireless electronic distribution of the program code).
In another embodiment, the invention provides a business method that performs the process steps of the invention on a subscription, advertising, and/or fee basis. That is, a service provider, such as a Solution Provider, could offer to render algorithmically generated animated cloud imagery in real-time. In this case, the service provider can create, maintain, support, etc., a computer infrastructure, such as computerized infrastructure 12 (Fig. 1) that performs the process steps of the invention for one or more customers. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
In still another embodiment, the invention provides a computer-implemented method for rendering algorithmically generated animated cloud imagery in real-time. In this case, a computerized infrastructure, such as computerized infrastructure 12 (Fig. 1), can be provided and one or more systems for performing the process steps of the invention can be obtained (e.g., created, purchased, used, modified, etc.) and deployed to the computerized infrastructure. To this extent, the deployment of a system can comprise one or more of (1) installing program code on a providing device, such as computer system 14 (Fig. 1), from a computer-readable medium; (2) adding one or more providing devices to the computer infrastructure; and (3) incorporating and/or modifying one or more existing systems of the computer infrastructure to enable the computerized infrastructure to perform the process steps of the invention.
In another embodiment, the invention provides a business method that performs the process steps of the invention for purposes of rendering an animation display on a device, such as computer system 14 (Fig. 1), such that the animation display can be recorded through an I/O interface 26 (Fig. 1) and stored on external devices 28 (Fig. 1), such as a hard drive, a CD-ROM, a DVD, or other recordable storage medium.
In another embodiment, the invention provides a business method that performs the process steps of the invention for purposes of rendering animation display on devices such as mobile phones, personal digital assistants, gaming systems, and other devices capable of implementing the inventions described.
In another embodiment, the invention provides a business method that performs the process steps of the invention for purposes of rendering animation display in response to input from a user regarding one or more variables that are used to create a customized animation display for the user.
As used herein, it is understood that the terms "program code" and "computer program code" are synonymous and mean any expression, in any language, code or notation, of a set of instructions intended to cause a providing device having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form. To this extent, program code can be embodied as one or more of: an application/software program, component software/a library of functions, an operating system, a basic I/O system/driver for a particular providing and/or I/O device, and the like.
As used herein, it is understood that the term "processing unit" can refer to a single processing unit or multiple processing units, which may include one or multiple of the following: (i) a central processing unit (CPU), (ii) a graphics processing unit (GPU), (iii) a math co-processing unit, or (iv) any other processing unit capable of interpreting instructions and processing data from computer program code or from another processing unit. To the extent that there are multiple processing units involved in interpreting instructions and processing data from the computer program code, these processing units may exist within a single computer environment connected to each other through a system memory controller or a memory bus such as bus 110 or an input/output interface such as I/O interface 112.

The foregoing description of various aspects of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to a person skilled in the art are intended to be included within the scope of the invention as defined by the accompanying claims.

Claims

What is claimed is:
1. A real-time system for rendering algorithmically generated animated cloud imagery comprising: an initial image provider for providing an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and a noise texture applicator for generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
2. A real-time system of claim 1, further comprising an opacity applicator for enhancing a cloud-like decay texture pattern by subjecting opacity values to a function in which animated noise textures are used to reduce cloud opacity in continuous regions.
3. A real-time system of claim 1, further comprising a coloring applicator for simulating lighting effects using the following: a two-dimensional lighting grid of scalar values is created for each grid point P by sampling cloud-field intensity from at least one other grid point representing paths from at least one directional or ambient light source to the point P; and output colors assigned to the cloud images through interpolation between two input color fields according to values in the lighting grid.
4. The real-time system in claim 1, wherein the noise textures are animated in time to simulate effects of wind, condensation and evaporation.
5. A real-time system for rendering algorithmically generated animated cloud imagery comprising: means for obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and means for generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
6. A real-time system of claim 5, further comprising means for enhancing a cloud-like decay texture pattern by subjecting opacity values to a function in which animated noise textures are used to reduce cloud opacity in continuous regions.
7. A real-time system of claim 5, further comprising means for simulating lighting, wherein the means for simulating include: means for creating a two-dimensional lighting grid of scalar values for each grid point P by sampling cloud-field intensity from at least one other grid point representing paths from at least one directional or ambient light source to the point P; and means for assigning output colors to the cloud images through interpolation between two input color fields according to values in the lighting grid.
8. The real-time system in claim 5, further comprising means for animating the noise textures in time to simulate effects of wind, condensation and evaporation.
9. A real-time method for rendering algorithmically generated animated cloud imagery comprising: obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
10. A real-time method of claim 9, further comprising enhancing a cloud-like decay texture pattern by subjecting opacity values to a function in which animated noise textures are used to reduce cloud opacity in continuous regions.
11. A real-time method of claim 10, further comprising simulating lighting effects using the following: creating a two-dimensional lighting grid of scalar values for each grid point P by sampling cloud-field intensity from at least one other grid point representing paths from at least one directional or ambient light source to the point P; and assigning output colors to the cloud images through interpolation between two input color fields according to values in the lighting grid.
12. The real-time method in claim 10, further comprising animating the noise textures in time to simulate effects of wind, condensation and evaporation.
13. A program product stored on a computer readable medium for rendering algorithmically generated animated cloud imagery in real time, the program product comprising program code for performing the following: obtaining an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generating at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.
14. A method for deploying a real-time system for rendering algorithmically generated animated cloud imagery comprising: providing a computer infrastructure being operable to: obtain an input two-dimensional grid of scalar values that evolves over time to define macro-scale behaviors of a cloud scene; and generate at least one layer of cloud imagery by perturbing positions of the scalar values with noise textures.