WO2023277702A1 - Spectral uplifting converter using moment-based mapping - Google Patents


Info

Publication number
WO2023277702A1
WO2023277702A1, PCT/NZ2021/050156, NZ2021050156W
Authority
WO
WIPO (PCT)
Prior art keywords
lattice
color
coefficient
moment
computer
Prior art date
Application number
PCT/NZ2021/050156
Other languages
French (fr)
Inventor
Alexander Wilkie
Lucia TODOVA
Luca FASCIONE
Original Assignee
Weta Digital Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weta Digital Limited filed Critical Weta Digital Limited
Publication of WO2023277702A1 publication Critical patent/WO2023277702A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6058Reduction of colour to a range of reproducible colours, e.g. to ink-reproducible colour gamut

Definitions

  • the present disclosure generally relates to converters of color and reflectance data between various representations and more particularly to a converter that efficiently converts spectral representations to color vectors.
  • a renderer may compute a color of an illuminant that illuminates an object (real or virtual), compute a reflectance of the object, and from those, compute a final pixel color. This may need to be done for a million pixels or more, so efficient processing and memory storage can be important. For realistic appearance, considerable care may be needed when processing color values.
  • color of an illuminant (such as a real or virtual light source) may be represented by more than three values and may be represented by a spectral representation.
  • reflectance of an object or a portion of an object may also be represented by a spectral representation.
  • An example of a spectral representation may be a dataset that, when plotted over a range of frequencies, conveys the color spectrum of an illuminant's light or reflectance of an object.
  • By providing a renderer with the capability to simulate light transport in a physically accurate fashion using spectral representations, one can obtain an intrinsically realistic scene appearance via global illumination.
  • There are other useful capabilities of physically correct rendering, such as color accuracy, as working with spectral data allows one to predict object appearance under varying illuminations, something which is crucial for matching plate footage of real objects with rendered images of their digitized asset counterparts across different shots.
  • the spectral representation may comprise many more degrees of freedom than the three degrees of freedom afforded by an RGB representation. Consequently, a spectral representation can be more complicated to deal with.
  • a spectral representation may require more memory storage, and more computing cycles may be needed to compute a spectral representation of a color of an object from the spectral representation of the object's reflectance and the spectral representations of the illuminants that illuminate the object, compared to computing an RGB color vector from an RGB reflectance of the object and an RGB representation of the illuminant's color.
  • Spectral representations can also be more difficult to deal with when an image processing system allows for arbitrary artist-created textures.
  • Because a texture may correspond to a large number of pixels and/or a broad region of an object to be depicted in imagery, it may be quite tedious to enter an entire texture using spectral representations.
  • RGB color vectors representing colors in an RGB color space
  • it should be understood that other color spaces may be used. In any case, where processing is done using spectral representations and color vectors, there is often a need to convert between those.
  • a commercially-available renderer may internally represent color with color vectors, such as color vectors in the RGB color space.
  • many virtual reality, effects, and game engine assets represent textures and material data in RGB space, as this can be easier for the artists creating them.
  • Defining genuinely spectral assets may require artists to either specify spectral reflectances from reference data collections (e.g., color atlas data), or to measure them on a real asset with a spectrometer. Both options can be tedious, and the second one is not possible for fully virtual assets.
  • Different spectral representations that evaluate to the same perceived color under a particular illuminant are called metamers under that illuminant. Mathematically, there could be infinitely many metamers for a spectrum, although in reality the number of metamers is limited by the properties of real colorants and pigments.
  • appearance of assets defined via color vectors can be processed under changing illumination without creating undesirable artifacts that may be unacceptable for, e.g., workflows with very high demands on visual consistency between plate footage and visual effects renders.
  • if RGB color vectors are spectrally upsampled to spectral representations, illuminants are processed, and the resulting image is then represented in RGB color space, and two nearby pixels were similarly colored but not identically colored, it may be undesirable if the resulting colors of the nearby pixels are not close as a result of the upsampling, processing, and converting.
  • An improved converter for use with spectral uplifting can be useful. It is an object of at least preferred embodiments of the present invention to address at least some of the aforementioned disadvantages. An additional or alternative object is to at least provide the public with a useful choice.
  • the disclosure relates to a computer-implemented method for converting color vector data to spectrum data for use in image processing.
  • the method may comprise, under the control of one or more computer systems configured with executable instructions: obtaining a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing, obtaining a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation, allocating a lattice storage for a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector, the first color vector representing a first color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations, computing a first corresponding moment coefficient array,
  • the term ‘comprising’ as used in this specification means ‘consisting at least in part of’. When interpreting each statement in this specification that includes the term ‘comprising’, features other than that or those prefaced by the term may also be present. Related terms such as ‘comprise’ and ‘comprises’ are to be interpreted in the same manner.
  • the set of nearby lattice points of the moment-based uplift coefficient lattice that are within the predetermined color space distance from the first color location may be lattice points that bound a voxel of the color space wherein the voxel encloses the first color location.
  • the method may further comprise interpolating additional moment coefficient arrays for additional lattice points based on previously computed moment coefficient arrays for previously processed lattice points.
  • the first array size, which may represent a first number of coefficients in a first corresponding moment coefficient array corresponding to the first reference spectral representation, may be greater than a second array size representing a second number of coefficients in one of the additional moment coefficient arrays.
  • the method may further comprise obtaining the input color vector value, searching, in the lattice storage, for a corresponding lattice location, wherein the corresponding lattice location corresponds to the input color vector value, obtaining, for the corresponding lattice location, a corresponding moment coefficient array, computing, from the corresponding moment coefficient array, a corresponding spectral representation, and outputting the corresponding spectral representation in response to the input color vector value.
  • the method may include computing the corresponding moment coefficient array by interpolation from moment coefficient arrays of nearby lattice points in the moment-based uplift coefficient lattice that are within a second predetermined color space distance from the corresponding lattice location.
  • the possible color vectors may be vectors in an RGB color space.
  • the moment coefficient arrays may comprise a plurality of coefficient of moments and at least two moment coefficient arrays comprise different numbers of coefficients.
  • the numbers of the coefficients of the moment coefficient arrays may be determined based on round-trip error values. The numbers may be determined iteratively, comprising setting an initial coefficient count for a spectral representation, determining a sufficiency of the coefficient count for the spectral representation, if the coefficient count is insufficient, increasing the coefficient count, and repeating the determination of sufficiency until a sufficient coefficient count is reached or a predetermined maximum coefficient count is reached.
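  • The iterative count determination described above can be sketched in code. This is a minimal illustration, not the patent's implementation; `round_trip_error` is a hypothetical callable standing in for whatever error metric an implementation uses:

```python
def fit_coefficient_count(spectrum, round_trip_error,
                          initial_count=4, max_count=16, threshold=1e-3):
    """Increase the moment coefficient count until the round-trip error
    falls below a threshold or a predetermined maximum is reached (sketch)."""
    count = initial_count
    while True:
        if round_trip_error(spectrum, count) < threshold:
            return count  # sufficient coefficient count reached
        if count >= max_count:
            return count  # stop at the predetermined maximum
        count += 1
```

For example, with a toy error metric that decays as 1/count, the loop stops at the first count whose error drops below the threshold.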
  • a computer system for generating a moment-based uplift coefficient lattice array may comprise a) at least one processor, b) lattice storage for storing a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector representing a color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations, and c) a computer-readable medium storing instructions, which when executed by the at least one processor, causes the computer system to obtain a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing, obtain a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation and the first color vector represents a
  • a carrier medium can carry instructions, which when executed by one or more processors of a machine, cause the machine to carry out any one of the methods described above.
  • the carrier medium may comprise a storage medium or a transient medium, such as a signal.
  • FIG. 1 illustrates a conversion process between and among spectral representations and color vectors.
  • FIG. 2 illustrates a conversion process from color vectors to spectral representations.
  • FIG. 3 is a flowchart illustrating a method of processing image data using moment-based coefficients.
  • FIG. 4 is a flowchart illustrating a method for computing moment coefficients.
  • FIG. 5 is a flowchart illustrating a method for converting from a color vector to a set of moment coefficients in a spectral uplifting process.
  • FIG. 6 illustrates examples of constrained uplifting and round-trip performance.
  • FIG. 7 illustrates examples of accuracy of uplifting.
  • FIG. 8 is a table of experimental results.
  • FIG. 9 illustrates an example visual content generation system as may be used to generate imagery in the form of still images and/or video sequences of images.
  • FIG. 10 is a block diagram illustrating an example computer system upon which computer systems of the systems illustrated in FIGS. 1 and 9 may be implemented.
  • Spectral rendering can be used in appearance-critical rendering workflows due to its ability to predict color values under varying illuminants.
  • directly modelling assets via input of spectral data is a tedious process, and if asset appearance is defined via artist-created textures, these may be drawn in color space, i.e., the RGB color space. Converting these RGB values to equivalent spectral representations is an ambiguous problem and does not necessarily provide the user with further control over the resulting spectral shape.
  • a method for constraining a spectral uplifting process is provided so that for a finite number of input spectra that need to be preserved, it may always yield a correct uplifted spectrum for the corresponding RGB value. Due to constraints placed on the uplifting process, target RGB values that are in close proximity to one another uplift to spectra within the same metameric family, so that textures with color variations can be meaningfully uplifted.
  • FIG. 1 illustrates a system 100 usable for converting among spectral representations and color vectors.
  • a spectral representation storage 102 may store several spectral representations, one of which is illustrated by plot 104 showing reflectance ratio as a function of wavelength. In this non-limiting example, the spectral representation ranges from around 400 nm to around 700 nm.
  • a color vector generator 106 can generate a color vector 108 from the spectral representation, and color vector 108 can be stored in color vector storage 110.
  • color vector 108 comprises three color components labeled “R Value”, “G Value”, and “B Value” representing components in a three-dimensional RGB color space.
  • a spectral representation can have many more than the three degrees of freedom of an RGB color vector.
  • a spectral uplifting converter 120 can map a color vector from color vector storage 110 into a spectral representation for use in computer operations that operate on spectra rather than color vectors.
  • an artist or other user of an image processing system can operate in the spectral space or in the color space depending on the needs of an operation. Preferably, moving between the spectral space and the color space does not introduce undesirable artifacts.
  • While FIG. 1 depicts only one spectral representation and one color vector, it should be understood that color vector storage 110 may store multiple color vectors, as is the case when color vector storage 110 is used for image storage.
  • An image store may store a plurality of color vectors each associated with a pixel or voxel position of an image.
  • spectral representation storage 102 may store a plurality of spectra.
  • FIG. 2 illustrates a conversion process 200 from color vectors to spectral representations.
  • an input spectral representation 202 is provided to a color vector generator 204 that downsamples input spectral representation 202 to determine a color vector 206.
  • color vector 206 is a vector in a three-dimensional RGB color space, but other color spaces are known in the art and could be used.
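  • Downsampling a spectrum to a color vector amounts to integrating it against three response curves. The sketch below uses Gaussian lobes as crude stand-ins for real color matching functions; the centers and widths are illustrative assumptions, and a production converter would instead use the CIE observer curves and an illuminant spectrum:

```python
import numpy as np

def spectrum_to_rgb(wavelengths, reflectance):
    """Project a reflectance spectrum onto three response curves to get an
    (R, G, B) vector. Gaussian lobes stand in for real CMFs here."""
    def lobe(center, width=40.0):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    responses = [lobe(600.0), lobe(550.0), lobe(450.0)]  # R, G, B (assumed)
    # Normalized weighted average, so a flat unit reflectance maps to (1, 1, 1).
    return np.array([np.sum(reflectance * r) / np.sum(r) for r in responses])
```

A flat spectrum then downsamples to a neutral color vector, while a spectrum concentrated at long wavelengths yields a red-dominant vector.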
  • the input color can be processed and manipulated by human or computer users of color vectors 210. These may include image or animation manipulation tools that may provide, for example, for altering an image captured by a camera by the insertion of virtual objects into a scene.
  • It may be desirable for a spectral representation that represents a captured portion of a real object in a scene to map to a color vector, to have an artist create a virtual object based on that color vector, and then to be able to generate a spectral representation of a texture of that virtual object that appears, under various illuminations, to match that of the real object under those same illuminations.
  • This can be achieved at least in part by having a spectral uplifting converter 214 convert color vector 206 based on stored moment coefficient sets in a lattice storage 230 as described in more detail herein.
  • the goal of matching colors under varying illumination may be obtained if an output spectral representation 220, to which a color vector is uplifted, is a spectrum similar to that generated from a set of moment coefficients that are stored in association with that color vector or nearby points in a lattice structure.
  • lattice storage 230 can be storage for an array of seeds 232 and an array of lattice points 234.
  • An entry in array of lattice points 234 may contain an indication of a color vector, as may correspond to a lattice point of the lattice, and a color location of a voxel for that lattice point.
  • each lattice point could define one voxel, such as having the lattice point for (Rx, Gx, Bx) be the lattice point for the voxel bounded by the eight lattice points (Rx, Gx, Bx), (Rx, Gx, Bx+1), (Rx, Gx+1, Bx), (Rx, Gx+1, Bx+1), (Rx+1, Gx, Bx), (Rx+1, Gx, Bx+1), (Rx+1, Gx+1, Bx), and (Rx+1, Gx+1, Bx+1).
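  • The corner lookup just described can be sketched as follows, assuming RGB components normalized to [0, 1] and a cubic lattice with `resolution` points per axis (the names are illustrative, not the patent's):

```python
def voxel_corners(rgb, resolution=32):
    """Return the eight integer lattice indices bounding the voxel that
    contains the given RGB vector (components in [0, 1])."""
    cells = resolution - 1  # number of voxels per axis
    base = [min(int(c * cells), cells - 1) for c in rgb]
    return [(base[0] + i, base[1] + j, base[2] + k)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]
```

The clamp to `cells - 1` keeps colors on the upper cube face inside the last voxel rather than indexing past the lattice.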
  • an entry of array of lattice points 234 includes an indication of the lattice point (which in turn can define a voxel), a moment coefficients count (which can vary if needed so that some more significant spectra are more closely defined, while others are defined at lower precision to save on memory and processing), and a set of moment coefficients numbering as indicated by the lattice point's moment coefficients count.
  • a seed entry in array of seeds 232 may indicate a color vector for a seed (which need not be an integer and can be a floating point value), which lattice voxel that color vector may fall into, and the moment coefficients as may be computed as illustrated in FIG. 4.
  • a lattice is defined as the intersections of 32 evenly spaced red planes in an RGB color space, 32 evenly spaced green planes in the RGB color space, and 32 evenly spaced blue planes in the RGB color space.
  • the color vector (RA, GA, BA) may fall within the voxel V1(R,G,B) bounded by eight such intersections.
  • array of lattice points 234 could be populated from seeds, wherein a set of moment coefficients for a seed are known, and with several seeds (or possibly only one seed), the sets of moment coefficients for the lattice points of the voxels that contain seeds can be computed, and the sets of moment coefficients for other lattice points can be interpolated. Then, in use, a spectral uplifting can be done for a given color vector, by identifying a lattice location for that color vector, the color location, interpolating nearby moment coefficient sets, and reconstructing a spectrum from an interpolated moment coefficient set.
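  • The uplift query described above — locate the lattice voxel for a color, then blend the moment coefficient sets at its corners — can be sketched as trilinear interpolation. This assumes equal-length coefficient arrays at every lattice point, and the names are illustrative:

```python
import numpy as np

def trilinear_uplift(rgb, lattice, resolution=32):
    """Interpolate a moment coefficient array for an RGB color from the
    eight corner arrays of its voxel. `lattice` maps integer (r, g, b)
    lattice indices to equal-length coefficient arrays (sketch)."""
    cells = resolution - 1
    scaled = [c * cells for c in rgb]
    base = [min(int(s), cells - 1) for s in scaled]
    frac = [s - b for s, b in zip(scaled, base)]
    result = None
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Trilinear weight of this corner for the query point.
                w = ((frac[0] if i else 1 - frac[0]) *
                     (frac[1] if j else 1 - frac[1]) *
                     (frac[2] if k else 1 - frac[2]))
                corner = np.asarray(lattice[(base[0] + i, base[1] + j, base[2] + k)],
                                    dtype=float)
                result = w * corner if result is None else result + w * corner
    return result
```

Because the weights sum to one, a color exactly on a lattice point reproduces that point's coefficient array, which is what makes seeded constraints reconstruct exactly.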
  • a spectrum that is uplifted for a color vector that was a function of an input spectral representation will likely be a spectrum that appears similar to the input spectrum under different illumination conditions. This can be important where, for example, in animation or adding visual effects, that colors appear to match from frame to frame even though image manipulation was done in the color space and not with full spectral representations.
  • FIG. 3 is a flowchart illustrating a method of processing image data using moment-based coefficients.
  • a spectral uplifting converter obtains reference spectral representations. These reference spectral representations may be supplied by an artist, another person, or some computer process. A reference spectral representation may be structured so as to define a reflection spectrum over a range of wavelengths.
  • the spectral uplifting converter determines corresponding reference color vectors for the reference spectral representations.
  • the spectral uplifting converter may compute a set of moment coefficients for each reference spectral representation in step 303 and as explained in greater detail below.
  • the spectral uplifting converter may then store sets of moment coefficients, perhaps in lattice array 230 shown in FIG. 2. As the sets of moment coefficients each correspond to a specified color vector, they can serve as seeded values in a moment-based uplift coefficient lattice array.
  • the spectral uplifting converter can compute additional moment coefficients for color vectors other than those seeded values.
  • the spectral uplifting converter may use the moment-based uplift coefficient lattice array to determine an input spectral representation for an RGB color vector.
  • the spectral uplifting converter may receive an RGB color vector and return a set of moments that represent a suitable corresponding spectrum.
  • an image processing system or color processing system can move between a spectral space and a color vector space with certain preserving qualities.
  • An example of a preserving quality may be that a round-trip spectrum, computed by converting a spectrum to a color vector and then back to a spectrum, introduces only small errors.
  • Another example of a preserving quality is that color vectors that are perceptually close in a color space remain close when uplifted to spectra and downsampled back to color vectors.
  • In some embodiments, uplifting is constrained uplifting: a spectrum is converted to a set of moment coefficients wherein the mapping conforms to constraints that can be specified in advance, e.g., that a given moment coefficient set should map to a given color vector, another given moment coefficient set should map to another given color vector, and so on, while unspecified mappings for other color vectors are interpolated so that the overall mapping is smooth and matches the constraints of the given seeded moment coefficient sets.
  • the seeds may come from spectra measured using spectral photometers.
  • FIG. 4 is a flowchart illustrating a method for computing moment coefficients as may be used in step 303 of FIG. 3 by a processor that may be part of a spectral uplifting converter.
  • a processor may read in a spectral representation, which may be stored as computer-readable data from a computer memory.
  • the processor may convert the spectral representation into a signal data structure that can be processed as a signal, such as a digital representation of an analogue time-domain signal.
  • a mirrored signal is generated from the spectral representation and other processing may be done to reduce artifacts at signal boundaries.
  • In step 403, the processor computes a Fourier basis for the mirrored signal, wherein the number of coefficients in a set of moment coefficients is equal to a specified initial coefficient count.
  • In step 404, the processor determines a round-trip error, and in step 405, if the error is below a predetermined threshold, flow proceeds to step 406, where the processor stores the moment coefficient array and returns. On the other hand, if the error is not below the predetermined threshold, the processor checks, in step 407, whether the coefficient count can be increased. If not, flow continues to step 406; but if the count can be increased, the processor increases the coefficient count in step 408 and returns to step 403 to compute a Fourier basis for the mirrored signal using a larger number of coefficients.
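  • The mirroring and Fourier steps can be sketched numerically. The plain discrete Fourier sums below are an illustrative stand-in for the patent's actual basis computation, with all names being assumptions:

```python
import numpy as np

def mirrored_moments(signal, count):
    """First `count` trigonometric moments (Fourier coefficients) of the
    signal mirrored about its end, which makes the periodic extension
    continuous and reduces boundary artifacts."""
    mirrored = np.concatenate([signal, signal[::-1]])
    phases = 2.0 * np.pi * np.arange(len(mirrored)) / len(mirrored)
    return np.array([np.mean(mirrored * np.exp(-1j * k * phases))
                     for k in range(count)])

def reconstruct(moments, n):
    """Truncated Fourier reconstruction of a real mirrored signal of length n."""
    phases = 2.0 * np.pi * np.arange(n) / n
    out = np.full(n, np.real(moments[0]))
    for k in range(1, len(moments)):
        # Real signal: each positive-frequency term pairs with its conjugate.
        out += 2.0 * np.real(moments[k] * np.exp(1j * k * phases))
    return out
```

Comparing the reconstruction against the original signal gives a round-trip error that shrinks as the coefficient count grows, which is the quantity tested against the threshold in step 405.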
  • In the method of FIG. 5, a processor may read in an input RGB color vector in step 501, determine a set of lattice points for a lattice in step 502, obtain or determine moment coefficient sets for spectral representations at those lattice points in step 503, and, in step 504, if needed, interpolate and output a moment coefficient array from the set of lattice points.
  • In step 506, an output spectral representation can be computed from the interpolated moment coefficient array and output.
  • an uplifting model is provided with a capability to be constrained by a set of pre-defined mappings from RGB to specific spectral shapes, which can then be preserved during an uplifting process.
  • Other RGB input values created by the process return plausible synthetic spectral shapes that smoothly interpolate with the user-supplied target spectra. This allows one to, e.g., paint RGB textures that will reliably match specific spectra which were measured on a real production asset, while returning plausible values for all other areas of the RGB space.
  • RGB textures can be uplifted so that user-selected key RGB colors evaluate to specific spectra that can be measured on a real asset.
  • These pre-defined mappings between specific RGB colors and spectra are constraints of the uplifting system and may be represented as seeds in an array of uplifting seeds. RGB colors in close vicinity to the constraints will uplift to spectral shapes similar to the original, so that minor texture edits in terms of RGB color do not cause noticeable visual distortions. All the other parts of the RGB space may return plausible, smooth spectra. This can provide new functionality: an exact appearance match of renders with plate footage under varying illumination, thereby eliminating a substantial amount of tedious appearance fine-tuning work.
  • Reflectance spectra created by a spectral uplifting process should satisfy multiple constraints. Their values should fall within the [0, 1] range, the round-trip error caused by uplift and subsequent conversion to RGB should be negligible, and the resulting spectral shapes should qualitatively correspond to real-life materials, which are generally fairly smooth and simple. Although multiple techniques for spectral uplifting exist, not all of them satisfy the above criteria.
  • the low-dimensional parametric model for spectral representation proposed by Jakob and Hanika can be used to create a pre-built uplifting model that is employed during rendering.
  • the structure of that uplifting model is a cube-shaped 3D lookup table, comprising evenly spaced lattice points representing RGB values.
  • Each of the points contains a mapping to its respective spectral representation, which in the model by Jakob and Hanika comprises three sigmoid coefficients. Other variations may be used.
  • the acquisition of coefficient sets for individual points can be performed during the creation of the model by an optimization tool, such as the CERES solver.
  • the CERES solver is capable of modifying the coefficients to the point where they reconstruct a spectrum that evaluates to the desired RGB value.
  • This process is referred to herein as lattice point fitting.
  • Their approach produces smooth spectra that may satisfy spectral range restrictions with negligible error.
  • Jung provides a technique for wide gamut spectral uplifting by introducing new parameters for fluorescence.
  • Our uplifting model can be based on a pre-computed RGB coefficient cube that provides mappings of RGB values to their corresponding spectral representations. We can use moment-based spectral representations for our purposes.
  • MESE stands for Maximum Entropy Spectral Estimate.
  • a bounded MESE has been introduced, which utilises a duality between bounded and unbounded moment problems formulated in terms of the Herglotz transform. Based on this duality, trigonometric moments can be converted to their corresponding unbounded exponential moments, so that the bounded problem represented by the trigonometric moments has a solution if and only if the dual unbounded problem represented by the exponential moments has a solution.
  • Determining the optimal resolution of the coefficient cube can be a heuristic-driven process: for the sigmoid-based fitting process, cubes with 32³ lattice points deliver satisfactory results. The same seems to apply to the moment-based coefficient cubes, with the additional concern that a denser lattice may be needed if a very large number, or a densely spaced set, of spectral exemplars were to be used as input: our technique only works if there is at most a single exemplar spectrum per cube voxel.
  • the RGB values of the user-provided spectra do not lie exactly on the lattice points, but rather fall inside one of the cube voxels. Therefore, in order to ensure proper reconstruction upon uplifting, we assign the coefficients to all eight lattice points that are at the corners of the voxel, i.e., voxel corners. Additionally, to support larger sets of user-provided spectra, e.g., whole color atlases, we extend the cube by allowing multiple moment representations per lattice point. We distinguish them by constraint-specific IDs, which are then utilised during the uplifting process for the purposes of identifying the original seed of the voxel.
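  • The seeding step described above can be sketched as follows. The dict-of-dicts layout and names are illustrative assumptions, not the patent's data layout; each seed's coefficients are written to all eight corners of its voxel under a seed-specific ID:

```python
def seed_lattice(seeds, resolution=32):
    """Assign each seed's moment coefficients to the eight corners of the
    voxel its RGB value falls in, tagged with a seed-specific ID so that
    multiple representations can coexist per lattice point (sketch)."""
    cells = resolution - 1
    lattice = {}  # (r, g, b) lattice index -> {seed_id: coefficients}
    for seed_id, (rgb, coefficients) in enumerate(seeds):
        base = [min(int(c * cells), cells - 1) for c in rgb]
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    corner = (base[0] + i, base[1] + j, base[2] + k)
                    lattice.setdefault(corner, {})[seed_id] = coefficients
    return lattice
```

At query time, finding a common seed ID on all eight corners of a voxel signals that the voxel was seeded and that the matching representations should be used for reconstruction.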
  • Cost functions may be defined as in Equation 3, where target rgb is the RGB value of the lattice point, constraint represents a discretized reflectance spectrum of the input constraint, current spectrum represents the spectrum the current coefficients x reconstruct, current rgb is the RGB value of current spectrum, and s is the number of samples used for internal representation of reflectance curves.
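  • Since Equation 3 itself is not reproduced in this excerpt, the following is a plausible sketch of such a cost function under the stated definitions; the exact weighting and norms in the patent's Equation 3 may differ:

```latex
\operatorname{cost}(x) \;=\;
\underbrace{\frac{1}{s}\sum_{i=1}^{s}\bigl(\mathrm{constraint}_i - \mathrm{current\_spectrum}_i(x)\bigr)^2}_{\text{spectral fit to the input constraint}}
\;+\;
\underbrace{\bigl\lVert \mathrm{target\_rgb} - \mathrm{current\_rgb}(x) \bigr\rVert^2}_{\text{colorimetric fit to the lattice point}}
```

Minimizing a cost of this shape drives the coefficients x toward a spectrum that both resembles the input constraint and evaluates to the lattice point's RGB value.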
  • By rewriting the expression of Equation 5 and utilising the equality from Equation 4, we get Equation 6, which simplifies to Equation 7.
  • Whether the voxel has been seeded or not can be determined by means of the unique IDs assigned to the moment representations during the seeding process. If the voxel corners share a common ID, the representations corresponding to this ID can be used in order to properly reconstruct the seeds.
  • 82 of the atlas entries cannot be used as seeds due to the limitation that only one seed can be present in each voxel.
  • Simply increasing the cube resolution would allow use of the 82 missing entries: again, the choice of a 32³-sized coefficient cube was not made due to any intrinsic restrictions of our technique, but to save pre-computation time and to stay with a nice power of two as the cube dimension.
  • the choice of 32, or some other power of two may be done to simplify memory accesses, but other values, higher or lower than 32 may be used, and those may not be powers of two.
  • FIG. 6 illustrates problematic cases of constrained uplifting: reflectance spectra of darker colors.
  • the spectral curve labelled round-trip is a direct reconstruction of the spectrum from the moments used to seed the fitting process.
  • the dashed curve labelled uplift is the result of querying the final coefficient cube, including interpolation within the voxel the RGB value lies in. Ideally, all three curves are identical: separating “round-trip” and “uplift” shows whether any error is due to the moment-based representation or to the interpolation in the coefficient cube. While the round-trip from the moment representation used for the purposes of seeding is reasonable (in the areas where the round-trip plot is barely visible, it mimics the original spectrum), the uplifted spectrum may demonstrate visible deviation.
  • In an example process for determining round-trip error, as illustrated in system 600 of FIG. 6, a round-trip spectral representation such as representation 602(A) or 602(B) may be converted by a color vector generator 604 into a color vector 606.
  • Color vector 606 can then be spectrally uplifted by spectral uplifting converter 608 using the moment-based uplift coefficients in a lattice storage 610, and the resulting output spectral representation compared with the input spectral representation.
  • FIG. 7 illustrates accuracy of constrained uplifting on examples of input spectra that correspond to saturated RGB colors. These types of uplifted spectra follow the spectra reconstructed from the original coefficients quite precisely, and also match the input spectra quite well.
  • the memory used for storing our cube depends on its resolution (i.e., the number of lattice points), which is, in turn, dependent on both the size of our constraint set and on the position of its spectra within the RGB cube.
  • the Macbeth Color Checker (MCC), which contains only 24 entries that are spaced quite far apart from each other in RGB space (with one of them falling outside of sRGB), requires as little as a 13³-sized cube. Due to the close proximity of some seeds, the 1396 sRGB entries of the Munsell Book of Color would require as many as 340 lattice points per axis for all seeds to fall into a unique voxel.
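The dependence of resolution on seed spacing can be illustrated with a brute-force sketch that finds the smallest per-axis resolution at which every seed lands in its own voxel (an illustrative helper under the assumption of seeds as RGB triples in [0, 1), not the disclosed sizing procedure):

```python
def min_resolution(seeds):
    """Smallest per-axis lattice resolution n such that every RGB seed
    falls into a distinct voxel of the n^3 cube."""
    n = 2
    while True:
        voxels = {tuple(int(c * n) for c in s) for s in seeds}
        if len(voxels) == len(seeds):   # no two seeds share a voxel
            return n
        n += 1
```

Widely spaced seeds, as in the MCC, need a coarse cube; two nearby seeds, as in the Munsell atlas, force a much finer one.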
  • a coefficient cube with evenly spaced lattice points can be sub-optimal, in terms of both its memory requirements and its resulting colorimetric properties.
  • a dynamic structure, such as a k-d tree, which can split RGB space into variably-sized voxels according to the number of constraints, may be used instead.
  • the approach presented herein works well for target spectra sets of up to several dozen, or perhaps even a few hundred, data points. This is typically sufficient for usage in visual effects scenarios, where only a few key assets (like, e.g., the main colors of the costume of a lead character) are measured on set, in order to later constrain the spectral uplift of virtual doubles of this character.
  • FIG. 8 is a table of experimental timing results. Since the uplifting model can be created prior to the rendering process, we divide the evaluation of the execution time into two parts — the cube fitting process, and the rendering speed when utilising our cube for uplifting purposes. We tested the execution time of cube fitting on multiple sets of constraints in the form of color atlases. All the experiments were performed on an Intel Core i7-8750H CPU (12 logical cores), and the size of each cube is 32³. FIG. 8 shows fitting times of a 32³-sized coefficient cube for multiple color atlases. This cube resolution may be insufficient for the utilisation of all constraints in a given atlas, so the data shows the number of seeds that were placed on lattice points.
  • a method capable of constraining the spectral uplifting process with an arbitrary set of target spectra is provided and this can be implemented in computer software and hardware.
  • the RGB values of the target spectra are accurately uplifted to their original spectral shapes, while the rest of the RGB gamut uplifts to smooth spectra. This results in smooth transitions between the various metameric families that originate from the constraining process.
  • FIG. 9 illustrates an example visual content generation system 900 as may be used to generate imagery in the form of still images and/or video sequences of images.
  • Visual content generation system 900 may generate imagery of live action scenes, computer generated scenes, or a combination thereof.
  • users are provided with tools that allow them to specify, at high levels and low levels where necessary, what is to go into that imagery.
  • a user may be an animation artist and may use visual content generation system 900 to capture interaction between two human actors performing live on a sound stage and replace one of the human actors with a computer-generated anthropomorphic non-human being that behaves in ways that mimic the replaced human actor’s movements and mannerisms, and then add in a third computer-generated character and background scene elements that are computer-generated, all in order to tell a desired story or generate desired imagery.
  • Still images that are output by visual content generation system 900 may be represented in computer memory as pixel arrays, such as a two-dimensional array of pixel color values, each associated with a pixel having a position in a two-dimensional image array.
  • Pixel color values may be represented by three or more (or fewer) color values per pixel, such as a red value, a green value, and a blue value (e.g., in RGB format).
  • Dimensions of such a two-dimensional array of pixel color values may correspond to a preferred and/or standard display scheme, such as 1920-pixel columns by 1280-pixel rows or 4096-pixel columns by 2160-pixel rows, or some other resolution.
  • Images may or may not be stored in a certain structured format, but either way, a desired image may be represented as a two-dimensional array of pixel color values.
  • images are represented by a pair of stereo images for three-dimensional presentations and in other variations, an image output, or a portion thereof, may represent three-dimensional imagery instead of just two-dimensional views.
  • pixel values may be data structures and a pixel value can be associated with a pixel and can be a scalar value, a vector, or another data structure associated with a corresponding pixel. That pixel value may include color values, or not, and may include depth values, alpha values, weight values, object identifiers or other pixel value components.
  • a stored video sequence may include a plurality of images such as the still images described above, but where each image of the plurality of images has a place in a timing sequence and the stored video sequence is arranged so that when each image is displayed in order, at a time indicated by the timing sequence, the display presents what appears to be moving and/or changing imagery.
  • each image of the plurality of images is a video frame having a specified frame number that corresponds to an amount of time that would elapse from when a video sequence begins playing until that specified frame is displayed.
  • a frame rate may be used to describe how many frames of the stored video sequence are displayed per unit time.
  • Example video sequences may include 24 frames per second (24 FPS), 50 FPS, 140 FPS, or other frame rates. Frames may be interlaced or otherwise presented for display, but for clarity of description, in some examples, it is assumed that a video frame has one specified display time, but other variations may be contemplated.
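The frame-number-to-time model described above amounts to a simple conversion; a minimal sketch (function names are illustrative assumptions):

```python
def frame_display_time(frame_number, fps):
    """Seconds elapsed from the start of playback until the given frame
    (numbered from 0) is displayed, for a fixed frame rate."""
    return frame_number / fps

def frame_at_time(t, fps):
    """Frame number displayed at time t seconds, for a fixed frame rate."""
    return int(t * fps)
```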
  • One method of creating a video sequence is to simply use a video camera to record a live action scene, i.e., events that physically occur and can be recorded by a video camera.
  • the events being recorded can be events to be interpreted as viewed (such as seeing two human actors talk to each other) and/or can include events to be interpreted differently due to clever camera operations (such as moving actors about a stage to make one appear larger than the other despite the actors actually being of similar build, or using miniature objects with other miniature objects so as to be interpreted as a scene containing life-sized objects).
  • Creating video sequences for story-telling or other purposes often calls for scenes that cannot be created with live actors, such as a talking tree, an anthropomorphic object, space battles, and the like. Such video sequences may be generated computationally rather than capturing light from live scenes. In some instances, an entirety of a video sequence may be generated computationally, as in the case of a computer-animated feature film. In some video sequences, it is desirable to have some computer-generated imagery and some live action, perhaps with some careful merging of the two.
  • While computer-generated imagery may be creatable by manually specifying each color value for each pixel in each frame, this is likely too tedious to be practical.
  • a creator uses various tools to specify the imagery at a higher level.
  • an artist may specify the positions in a scene space, such as a three-dimensional coordinate system, of objects and/or lighting, as well as a camera viewpoint, and a camera view plane. From that, a rendering engine could take all of those as inputs, and compute each of the pixel color values in each of the frames.
  • an artist specifies position and movement of an articulated object having some specified texture rather than specifying the color of each pixel representing that articulated object in each frame.
  • a rendering engine performs ray tracing wherein a pixel color value is determined by computing which objects lie along a ray traced in the scene space from the camera viewpoint through a point or portion of the camera view plane that corresponds to that pixel.
  • a camera view plane may be represented as a rectangle having a position in the scene space that is divided into a grid corresponding to the pixels of the ultimate image to be generated, and if a ray defined by the camera viewpoint in the scene space and a given pixel in that grid first intersects a solid, opaque, blue object, that given pixel is assigned the color blue.
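The first-intersection rule described above can be sketched for a scene of opaque spheres. This is a minimal illustrative ray cast under an assumed scene format of (center, radius, color) triples, not the rendering engine's actual implementation:

```python
import math

def first_hit_color(ray_origin, ray_dir, spheres, background=(0, 0, 0)):
    """Return the color of the nearest opaque sphere the ray intersects,
    or the background color if the ray hits nothing."""
    best_t, best_color = math.inf, background
    for center, radius, color in spheres:
        # Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
        oc = [o - c for o, c in zip(ray_origin, center)]
        a = sum(d * d for d in ray_dir)
        b = 2.0 * sum(o * d for o, d in zip(oc, ray_dir))
        cc = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * a * cc
        if disc < 0:
            continue                      # ray misses this sphere
        t = (-b - math.sqrt(disc)) / (2.0 * a)
        if 0 < t < best_t:                # keep the closest hit in front
            best_t, best_color = t, color
    return best_color
```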
  • determining pixel colors - and thereby generating imagery - can be more complicated, as there are lighting issues, reflections, interpolations, and other considerations.
  • Live action capture system 902 captures a live scene that plays out on a stage 904.
  • Live action capture system 902 is described herein in greater detail, but may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
  • cameras 906(1) and 906(2) capture the scene, while in some systems, there may be other sensor(s) 908 that capture information from the live scene (e.g., infrared cameras, infrared sensors, motion capture (“mo-cap”) detectors, etc.).
  • stage 904 there may be human actors, animal actors, inanimate objects, background objects, and possibly an object such as a green screen 910 that is designed to be captured in a live scene recording in such a way that it is easily overlaid with computer generated imagery.
  • Stage 904 may also contain objects that serve as fiducials, such as fiducials 912(l)-(3), that may be used post-capture to determine where an object was during capture.
  • a live action scene may be illuminated by one or more lights, such as an overhead light 914.
  • live action capture system 902 may output live action footage to a live action footage storage 920.
  • a live action processing system 922 may process live action footage to generate data about that live action footage and store that data into a live action metadata storage 924.
  • Live action processing system 922 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
  • Live action processing system 922 may process live action footage to determine boundaries of objects in a frame or multiple frames, determine locations of objects in a live action scene, where a camera was relative to some action, distances between moving objects and fiducials, etc.
  • the metadata may include location, color, and intensity of overhead light 914, as that may be useful in post-processing to match computer-generated lighting on objects that are computer generated and overlaid on the live action footage.
  • Live action processing system 922 may operate autonomously, perhaps based on predetermined program instructions, to generate and output the live action metadata upon receiving and inputting the live action footage.
  • the live action footage can be camera-captured data as well as data from other sensors.
  • Animation creation system 930 is another part of visual content generation system 900.
  • Animation creation system 930 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
  • Animation creation system 930 may be used by animation artists, managers, and others to specify details, perhaps programmatically and/or interactively, of imagery to be generated.
  • animation creation system 930 may generate and output data representing objects (e.g., a horse, a human, a ball, a teapot, a cloud, a light source, a texture, etc.) to an object storage 934, generate and output data representing a scene into a scene description storage 936, and/or generate and output data representing animation sequences to an animation sequence storage 938.
  • Scene data may indicate locations of objects and other visual elements, values of their parameters, lighting, camera location, camera view plane, and other details that a rendering engine 950 may use to render CGI imagery.
  • scene data may include the locations of several articulated characters, background objects, lighting, etc. specified in a two-dimensional space, three-dimensional space, or other dimensional space (such as a 2.5- dimensional space, three-quarter dimensions, pseudo-3D spaces, etc.) along with locations of a camera viewpoint and view plane from which to render imagery.
  • scene data may indicate that there is to be a red, fuzzy, talking dog in the right half of a video and a stationary tree in the left half of the video, all illuminated by a bright point light source that is above and behind the camera viewpoint.
  • the camera viewpoint is not explicit, but can be determined from a viewing frustum.
  • the frustum would be a truncated pyramid.
  • Other shapes for a rendered view are possible and the camera view plane could be different for different shapes.
  • Animation creation system 930 may be interactive, allowing a user to read in animation sequences, scene descriptions, object details, etc. and edit those, possibly returning them to storage to update or replace existing data.
  • an operator may read in objects from object storage into a baking processor 942 that would transform those objects into simpler forms and return those to object storage 934 as new or different objects.
  • an operator may read in an object that has dozens of specified parameters (movable joints, color options, textures, etc.), select some values for those parameters and then save a baked object that is a simplified object with now fixed values for those parameters.
  • data from data store 932 may be used to drive object presentation. For example, if an artist is creating an animation of a spaceship passing over the surface of the Earth, instead of manually drawing or specifying a coastline, the artist may specify that animation creation system 930 is to read data from data store 932 in a file containing coordinates of Earth coastlines and generate background elements of a scene using that coastline data.
  • Animation sequence data may be in the form of time series of data for control points of an object that has attributes that are controllable.
  • an object may be a humanoid character with limbs and joints that are movable in manners similar to typical human movements.
  • An artist can specify an animation sequence at a high level, such as “the left hand moves from location (X1, Y1, Z1) to (X2, Y2, Z2) over time T1 to T2”, at a lower level (e.g., “move the elbow joint 2.5 degrees per frame”) or even at a very high level (e.g., “character A should move, consistent with the laws of physics that are given for this scene, from point P1 to point P2 along a specified path”).
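The high-level specification above can be realized, at its simplest, by linear interpolation of the control point between the two keyframes. A minimal sketch (production rigs may instead use splines or physics-based solvers; the function name is an assumption):

```python
def hand_position(t, t1, t2, p1, p2):
    """Position of a control point at time t, moving linearly from p1 at
    time t1 to p2 at time t2 (positions are (x, y, z) tuples)."""
    u = min(max((t - t1) / (t2 - t1), 0.0), 1.0)   # clamp to [0, 1]
    return tuple(a + u * (b - a) for a, b in zip(p1, p2))
```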
  • Animation sequences in an animated scene may be specified by what happens in a live action scene.
  • An animation driver generator 944 may read in live action metadata, such as data representing movements and positions of body parts of a live actor during a live action scene.
  • Animation driver generator 944 may generate corresponding animation parameters to be stored in animation sequence storage 938 for use in animating a CGI object. This can be useful where a live action scene of a human actor is captured while wearing mo-cap fiducials (e.g., high-contrast markers outside actor clothing, high-visibility paint on actor skin, face, etc.) and the movement of those fiducials is determined by live action processing system 922.
  • Animation driver generator 944 may convert that movement data into specifications of how joints of an articulated CGI character are to move over time.
  • a rendering engine 950 can read in animation sequences, scene descriptions, and object details, as well as rendering engine control inputs, such as a resolution selection and a set of rendering parameters. Resolution selection may be useful for an operator to control a trade-off between speed of rendering and clarity of detail, as speed may be more important than clarity for a movie maker to test some interaction or direction, while clarity may be more important than speed for a movie maker to generate data that will be used for final prints of feature films to be distributed.
  • Rendering engine 950 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
  • Visual content generation system 900 can also include a merging system 960 that merges live footage with animated content.
  • the live footage may be obtained and input by reading from live action footage storage 920 to obtain live action footage, by reading from live action metadata storage 924 to obtain details such as presumed segmentation in captured images segmenting objects in a live action scene from their background (perhaps aided by the fact that green screen 910 was part of the live action scene), and by obtaining CGI imagery from rendering engine 950.
  • a merging system 960 may also read data from rulesets for merging/combining storage 962.
  • a very simple example of a rule in a ruleset may be “obtain a full image including a two-dimensional pixel array from live footage, obtain a full image including a two-dimensional pixel array from rendering engine 950, and output an image where each pixel is a corresponding pixel from rendering engine 950 when the corresponding pixel in the live footage is a specific color of green, otherwise output a pixel value from the corresponding pixel in the live footage.”
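The example rule above can be sketched per pixel as a basic chroma-key test. This is an illustrative reduction of the rule, with an assumed exact-match keying green and an optional tolerance parameter that is not part of the quoted rule:

```python
def merge_pixel(live_px, cgi_px, key_color=(0, 255, 0), tol=0):
    """Output the rendered pixel wherever the live-action pixel matches
    the keying green (within tol per channel); otherwise keep the
    live-action pixel. Pixels are (r, g, b) tuples."""
    if all(abs(l - k) <= tol for l, k in zip(live_px, key_color)):
        return cgi_px    # green-screen area: use the rendered image
    return live_px       # foreground: keep the live footage
```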
  • Merging system 960 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
  • Merging system 960 may operate autonomously, following programming instructions, or may have a user interface or programmatic interface over which an operator can control a merging process.
  • An operator may specify parameter values to use in a merging process and/or may specify specific tweaks to be made to an output of merging system 960, such as modifying boundaries of segmented objects, inserting blurs to smooth out imperfections, or adding other effects.
  • merging system 960 can output an image to be stored in a static image storage 970 and/or a sequence of images in the form of video to be stored in an animated/combined video storage 972.
  • visual content generation system 900 can be used to generate video that combines live action with computer-generated animation using various components and tools, some of which are described in more detail herein. While visual content generation system 900 may be useful for such combinations, with suitable settings, it can be used for outputting entirely live action footage or entirely CGI sequences.
  • the code may also be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network.
  • the techniques described herein can be implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
  • Special-purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • a carrier medium may carry image data or other data having details generated using the methods described herein.
  • the carrier medium can comprise any medium suitable for carrying the image data or other data, including a storage medium, e.g., solid-state memory, an optical disk or a magnetic disk, or a transient medium, e.g., a signal carrying the image data such as a signal transmitted over a network, a digital signal, a radio frequency signal, an acoustic signal, an optical signal or an electrical signal.
  • FIG. 10 is a block diagram that illustrates a computer system 1000 upon which the computer systems of the systems described herein and/or visual content generation system 900 (see FIG. 9) may be implemented.
  • Computer system 1000 includes a bus 1002 or other communication mechanism for communicating information, and a processor 1004 coupled with bus 1002 for processing information.
  • Processor 1004 may be, for example, a general- purpose microprocessor.
  • Computer system 1000 also includes a main memory 1006, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 1002 for storing information and instructions to be executed by processor 1004.
  • Main memory 1006 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004.
  • Such instructions when stored in non-transitory storage media accessible to processor 1004, render computer system 1000 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 1000 further includes a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004.
  • ROM read only memory
  • a storage device 1010 such as a magnetic disk or optical disk, is provided and coupled to bus 1002 for storing information and instructions.
  • Computer system 1000 may be coupled via bus 1002 to a display 1012, such as a computer monitor, for displaying information to a computer user.
  • a display 1012 such as a computer monitor
  • An input device 1014 is coupled to bus 1002 for communicating information and command selections to processor 1004.
  • Another type of user input device is a cursor control 1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 1000 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1000 to be a special-purpose machine.
  • the techniques herein are performed by computer system 1000 in response to processor 1004 executing one or more sequences of one or more instructions contained in main memory 1006. Such instructions may be read into main memory 1006 from another storage medium, such as storage device 1010. Execution of the sequences of instructions contained in main memory 1006 causes processor 1004 to perform the process steps described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1010.
  • Volatile media includes dynamic memory, such as main memory 1006.
  • Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that include bus 1002.
  • Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Any type of medium that can carry the computer/processor implementable instructions can be termed a carrier medium and this encompasses a storage medium and a transient medium, such as a transmission medium or signal.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 1004 for execution.
  • the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a network connection.
  • a modem or network interface local to computer system 1000 can receive the data.
  • Bus 1002 carries the data to main memory 1006, from which processor 1004 retrieves and executes the instructions.
  • the instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor 1004.
  • Computer system 1000 also includes a communication interface 1018 coupled to bus 1002.
  • Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022.
  • network link 1020 may be a network card, a modem, a cable modem, or a satellite modem to provide a data communication connection to a corresponding type of telephone line or communications line.
  • Wireless links may also be implemented.
  • communication interface 1018 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • Network link 1020 typically provides data communication through one or more networks to other data devices.
  • network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026.
  • ISP 1026 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet” 1028.
  • Internet 1028 uses electrical, electromagnetic, or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from computer system 1000, are example forms of transmission media.
  • Computer system 1000 can send messages and receive data, including program code, through the network(s), network link 1020, and communication interface 1018.
  • a server 1030 may transmit a requested code for an application program through the Internet 1028, ISP 1026, local network 1022, and communication interface 1018.
  • the received code may be executed by processor 1004 as it is received, and/or stored in storage device 1010, or other non-volatile storage for later execution.
  • Processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
  • Processes described herein may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof.
  • the code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
  • the code may also be carried by any computer-readable carrier medium, such as a storage medium or a transient medium or signal, e.g. a signal transmitted over a communications network.
  • the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}.
  • conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A converter uplifts color vector data to spectrum data for use in image processing, using a moment-based uplift coefficient lattice in the uplifting. The moment-based uplift coefficient lattice comprises lattice storage for the moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector, the first color vector representing a first color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations. The moments can be computed from reference spectral representations, which may be captured from a live scene or otherwise, wherein the reference spectral representations comprise data representing illumination spectra. From reference spectra used as seeds, other lattice points can be populated with interpolated reference spectra.

Description

Spectral Uplifting Converter Using Moment-Based Mapping
CROSS-REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Patent Application Serial No. 17/361,212, filed on June 28, 2021, and U.S. Provisional Application No. 63/215,908, filed on June 28, 2021, each of which is incorporated by reference in its entirety for all purposes.
FIELD
[0001] The present disclosure generally relates to converters of color and reflectance data between various representations and more particularly to a converter that efficiently converts spectral representations to color vectors.
BACKGROUND
[0002] In image rendering, image processing, and other computer operations, there is often a need to represent colors using different representations. For displaying an image using an output device such as a computer monitor, it is sufficient to represent a color by storing values at the resolution of the computer monitor. Thus, where a computer monitor is able to present colors by setting pixel color values for each of three color components (e.g., R/red, G/green, B/blue) of a color vector, with each color component allowed to be an integer in the range of 0-255, it is sufficient to allocate 24 bits of memory per pixel for representing colors to be displayed.
[0003] In rendering an image, a renderer may compute a color of an illuminant that illuminates an object (real or virtual), compute a reflectance of the object, and from those, compute a final pixel color. This may need to be done for a million pixels or more, so efficient processing and memory storage can be important. For realistic appearance, considerable care may be needed when processing color values. In some cases, the color of an illuminant (such as a real or virtual light source) may be represented by more than three values and may be represented by a spectral representation. Likewise, reflectance of an object or a portion of an object may also be represented by a spectral representation. An example of a spectral representation may be a dataset that, when plotted over a range of frequencies, conveys the color spectrum of an illuminant's light or reflectance of an object.
[0004] By providing a renderer with the capability to simulate light transport in a physically accurate fashion using spectral representations, one can obtain an intrinsically realistic scene appearance via global illumination. There are other useful capabilities of physically correct rendering, such as color accuracy, as working with spectral data allows one to predict object appearance under varying illuminations, something which is crucial for matching plate footage of real objects with rendered images of their digitized asset counterparts across different shots.
[0005] For common operations that process color as spectra, the spectral representation may comprise many more degrees of freedom than the three degrees of freedom afforded by an RGB representation. Consequently, a spectral representation can be more complicated to deal with. For example, a spectral representation may require more memory storage, and more computing cycles may be needed to compute a spectral representation of a color of an object from the spectral representation of the object's reflectance and the spectral representations of illuminants that illuminate the object than to compute an RGB color vector from an RGB reflectance of the object and an RGB representation of the illuminant's color.
[0006] Spectral representations can also be more difficult to deal with when an image processing system allows for arbitrary artist-created textures. In some cases, it may be desirable for artist inputs to include spectral representations, and in some cases it may be convenient for artists to be able to input RGB textures. As a texture may correspond to a large number of pixels and/or a broad region of an object to be depicted in imagery, it may be quite tedious to enter an entire texture using spectral representations. Thus, in image processing systems, there is often a need to deal with both spectral representations and color vectors. While RGB color vectors, representing colors in an RGB color space, are a common example, it should be understood that other color spaces may be used. In any case, where processing is done using spectral representations and color vectors, there is often a need to convert between those.
[0007] A commercially-available renderer may internally represent color with color vectors, such as color vectors in the RGB color space. Additionally, many virtual reality, effects, and game engine assets represent textures and material data in RGB space, as this can be easier for the artists creating them. Defining genuinely spectral assets may require artists to either specify spectral reflectances from reference data collections (e.g., color atlas data), or to measure them on a real asset with a spectrometer. Both options can be tedious, and the second one is not possible for fully virtual assets.
[0008] For these and other reasons, there is often a need to convert between a spectral representation and a color vector. As a spectral representation typically involves more data and more degrees of freedom, an RGB color space is intrinsically smaller than the space of all possible spectra, and therefore multiple different spectral representations may convert to the same RGB color vector. Converting from an RGB color vector to a spectral representation is often referred to as “spectral uplifting”.
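The many-to-one direction (spectral representation down to a color vector) can be sketched as projecting a sampled spectrum against color-matching weights. This is an illustrative sketch only; `cmf` is a placeholder for real color-matching data (e.g., the CIE 1931 functions), which a production downsampler would load:

```python
import numpy as np

def downsample_to_color(spectrum, illuminant, cmf):
    """Project a sampled reflectance spectrum to a tristimulus vector.

    spectrum, illuminant: (N,) samples over a shared wavelength grid.
    cmf: (N, 3) color-matching weights (stand-in for real CIE data).
    Many distinct spectra project to the same vector, which is why the
    reverse direction ("spectral uplifting") is ambiguous.
    """
    radiance = spectrum * illuminant        # reflected light per sample
    tristimulus = radiance @ cmf            # integrate against each curve
    norm = float(illuminant @ cmf[:, 1])    # normalize by illuminant luminance
    return tristimulus / norm
```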
[0009] Different spectral representations that evaluate to the same perceived color under a particular illuminant are called metamers under that illuminant. Mathematically, there could be infinitely many metamers for a spectrum, although in reality the number of metamers is limited by the properties of real colorants and pigments.
[0010] Two spectra that evaluate to the same perceived color under one illuminant may not evaluate to the same perceived color under another illuminant. A converter that performs spectral uplifting may also provide for spectral downsampling from a spectral representation to a color vector, which may be a conventional downsampler. Preferably, the appearance of assets defined via color vectors can be processed under changing illumination without creating undesirable artifacts that may be unacceptable for, e.g., workflows with very high demands on visual consistency between plate footage and visual effects renders. As but one example, suppose a texture of an object is defined in an RGB color space, the RGB color vectors are spectrally upsampled to spectral representations, illuminants are processed, and the resulting image is then represented in RGB color space. If two nearby pixels were similarly colored but not identically colored, it may be undesirable if the resulting colors of the nearby pixels are not close as a result of the upsampling, processing, and converting.
[0011] Methods of spectral uplifting exist, but most have drawbacks that image processing systems may want to avoid. A naive uplifting technique may not be able to reproduce the fact that under one illuminant or type of light source, a texture provides a smooth gradient, while under another illuminant, the gradient breaks down. Sometimes RGB color vector values that are close to each other need to be uplifted to different metameric families, so that all parts of an asset behave as expected under changing illumination.
[0012] An improved converter for use with spectral uplifting can be useful. It is an object of at least preferred embodiments of the present invention to address at least some of the aforementioned disadvantages. An additional or alternative object is to at least provide the public with a useful choice.
SUMMARY
[0013] The disclosure relates to a computer-implemented method for converting color vector data to spectrum data for use in image processing. The method may comprise, under the control of one or more computer systems configured with executable instructions: obtaining a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing, obtaining a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation, allocating a lattice storage for a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector, the first color vector representing a first color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations, computing a first corresponding moment coefficient array, wherein the first corresponding moment coefficient array corresponds to the first reference spectral representation, determining a set of nearby lattice points of the moment-based uplift coefficient lattice that are within a predetermined color space distance from the first color location, computing moment coefficient arrays for nearby lattice points, for each lattice point of at least some of the nearby lattice points, storing an interpolated moment coefficient array in the lattice storage in association with the lattice point, and saving the lattice storage in a computer-readable format usable for converting from an input color vector value to a corresponding input spectral representation.
[0014] The term ‘comprising’ as used in this specification means ‘consisting at least in part of’. When interpreting each statement in this specification that includes the term ‘comprising’, features other than that or those prefaced by the term may also be present. Related terms such as ‘comprise’ and ‘comprises’ are to be interpreted in the same manner.
[0015] The set of nearby lattice points of the moment-based uplift coefficient lattice that are within the predetermined color space distance from the first color location may be lattice points that bound a voxel of the color space wherein the voxel encloses the first color location. The method may further comprise interpolating additional moment coefficient arrays for additional lattice points based on previously computed moment coefficient arrays for previously processed lattice points.
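For a regular lattice over a unit RGB cube, the eight bounding lattice points described above can be found by flooring each component. A sketch under the assumption of evenly spaced lattice planes (the resolution value is illustrative, not mandated by this disclosure):

```python
import itertools

def voxel_corners(rgb, resolution=32):
    """Return the eight lattice points bounding the voxel that encloses
    an RGB color with components in [0, 1]. `resolution` is the assumed
    number of lattice planes per axis; clamping keeps colors that lie on
    the upper faces inside the last voxel."""
    base = [min(int(c * (resolution - 1)), resolution - 2) for c in rgb]
    return [tuple(b + o for b, o in zip(base, offsets))
            for offsets in itertools.product((0, 1), repeat=3)]
```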
[0016] A first array size, representing a first number of coefficients in a first corresponding moment coefficient array corresponding to the first reference spectral representation, may be greater than a second array size representing a second number of coefficients in one of the additional moment coefficient arrays.
[0017] The method may further comprise obtaining the input color vector value, searching, in the lattice storage, for a corresponding lattice location, wherein the corresponding lattice location corresponds to the input color vector value, obtaining, for the corresponding lattice location, a corresponding moment coefficient array, computing, from the corresponding moment coefficient array, a corresponding spectral representation, and outputting the corresponding spectral representation in response to the input color vector value.
[0018] If the corresponding lattice location is other than a lattice point, the method may include computing the corresponding moment coefficient array by interpolation from moment coefficient arrays of nearby lattice points in the moment-based uplift coefficient lattice that are within a second predetermined color space distance from the corresponding lattice location.
[0019] Not all possible color vectors need to be represented by a lattice point, so there may be fewer lattice points in the moment-based uplift coefficient lattice than possible color vectors. The possible color vectors may be vectors in an RGB color space. The moment coefficient arrays may comprise a plurality of coefficients of moments, and at least two moment coefficient arrays may comprise different numbers of coefficients. The numbers of the coefficients of the moment coefficient arrays may be determined based on round-trip error values, wherein the numbers are determined iteratively, comprising setting an initial coefficient count for a spectral representation, determining a sufficiency of the coefficient count for the spectral representation, if the coefficient count is insufficient, increasing the coefficient count, and repeating determination of the sufficiency of the coefficient count until a sufficient coefficient count is reached or a predetermined maximum coefficient count is reached.
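The iterative coefficient-count determination described above can be sketched as a simple loop. Here `fit` and `round_trip_error` are assumed callables standing in for the moment fitting and error measurement described elsewhere in this disclosure; the default counts and threshold are illustrative:

```python
def choose_coefficient_count(spectrum, fit, round_trip_error,
                             initial=4, maximum=16, threshold=1e-3):
    """Grow the moment-coefficient count until the round-trip error is
    below `threshold` or the count reaches `maximum`.

    fit(spectrum, n) -> array of n coefficients;
    round_trip_error(spectrum, coeffs) -> scalar error."""
    count = initial
    while True:
        coeffs = fit(spectrum, count)
        if round_trip_error(spectrum, coeffs) < threshold or count >= maximum:
            return count, coeffs
        count += 1
```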
[0020] A computer system for generating a moment-based uplift coefficient lattice array may comprise a) at least one processor, b) lattice storage for storing a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector representing a color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations, and c) a computer-readable medium storing instructions, which when executed by the at least one processor, cause the computer system to obtain a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing, obtain a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation and the first color vector represents a first color location, compute a first corresponding moment coefficient array, wherein the first corresponding moment coefficient array corresponds to the first reference spectral representation, determine a set of nearby lattice points of the moment-based uplift coefficient lattice that are within a predetermined color space distance from the first color location, compute moment coefficient arrays for nearby lattice points, and save the lattice storage in a computer-readable format usable for converting from an input color vector value to a corresponding input spectral representation.
[0021] A carrier medium can carry instructions, which when executed by one or more processors of a machine, cause the machine to carry out any one of the methods described above. The carrier medium may comprise a storage medium or a transient medium, such as a signal.
[0022] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. A more extensive presentation of features, details, utilities, and advantages of the methods, as defined in the claims, is provided in the following written description of various embodiments of the disclosure and illustrated in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] Various embodiments and implementations in accordance with the present disclosure will be described with reference to the drawings, in which:
[0024] FIG. 1 illustrates a conversion process between and among spectral representations and color vectors.
[0025] FIG. 2 illustrates a conversion process from color vectors to spectral representations.
[0026] FIG. 3 is a flowchart illustrating a method of processing image data using moment-based coefficients.
[0027] FIG. 4 is a flowchart illustrating a method for computing moment coefficients.
[0028] FIG. 5 is a flowchart illustrating a method for converting from a color vector to a set of moment coefficients in a spectral uplifting process.
[0029] FIG. 6 illustrates examples of constrained uplifting and round-trip performance.
[0030] FIG. 7 illustrates examples of accuracy of uplifting.
[0031] FIG. 8 is a table of experimental results.
[0032] FIG. 9 illustrates an example visual content generation system as may be used to generate imagery in the form of still images and/or video sequences of images.
[0033] FIG. 10 is a block diagram illustrating an example computer system upon which computer systems of the systems illustrated in FIGS. 1 and 9 may be implemented.
DETAILED DESCRIPTION
[0034] In the following description, various embodiments and implementations will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the implementations and embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details and that various combinations and substitutions would be apparent after reading this disclosure. Furthermore, well-known features may be omitted or simplified in order not to obscure what is being described.
[0035] Spectral rendering can be used in appearance-critical rendering workflows due to its ability to predict color values under varying illuminants. However, directly modelling assets via input of spectral data is a tedious process, and if asset appearance is defined via artist-created textures, these may be drawn in color space, i.e., the RGB color space. Converting these RGB values to equivalent spectral representations is an ambiguous problem and does not necessarily provide the user with further control over the resulting spectral shape. A method for constraining a spectral uplifting process is provided so that for a finite number of input spectra that need to be preserved, it may always yield a correct uplifted spectrum for the corresponding RGB value. Due to constraints placed on the uplifting process, target RGB values that are in close proximity to one another uplift to spectra within the same metameric family, so that textures with color variations can be meaningfully uplifted.
[0036] Accurately reproducing colors has several uses. Visual effects artists often deal with the task of matching plate footage of real objects with rendered images of their digitized asset counterparts. In such scenarios, it is important to preserve visual continuity so that the viewer is not aware that a real asset is replaced with a virtual one, such as by seeing an abrupt color change under certain illumination. In order for the viewer to be oblivious to such an asset switch, color differences between the real asset and the virtual asset should be close enough that the differences are not visible.
[0037] For such work, 3D artists currently use standard color space modelling tools to carefully craft elaborate virtual doubles of a real asset, and obtain a perfect appearance match under the main target illumination (e.g., for daylight). If the same asset is later used in another scene with different illumination (e.g., fluorescent lamps, in an indoor setting), the entire virtual asset appearance has to be manually fine-tuned again. Such a process is, as expected, both tedious and time-consuming.
[0038] If the virtual double were modelled using spectral data, appearance continuity under varying illumination would be guaranteed in a spectral rendering system, but then artists often cannot easily work directly with spectral data. Even if core rendering technology uses spectral systems, asset textures may continue to be painted in color space, as this can be an intuitive way for artists to perform such work.
[0039] FIG. 1 illustrates a system 100 usable for converting among spectral representations and color vectors. As illustrated there, a spectral representation storage 102 may store several spectral representations, one of which is illustrated by plot 104 showing a plot of reflectance ratio as a function of wavelength. In this non-limiting example, the spectral representation ranges from around 400 nm to around 700 nm. A color vector generator 106 can generate a color vector 108 from the spectral representation, and color vector 108 can be stored in color vector storage 110. In this example, color vector 108 comprises three color components labeled “R Value”, “G Value”, and “B Value” representing components in a three-dimensional RGB color space. As reflected in the figure, a spectral representation can have many more than the three degrees of freedom of an RGB color vector. A spectral uplifting converter 120 can map a color vector from color vector storage 110 into a spectral representation for use in computer operations that operate on spectra rather than color vectors. Using system 100, an artist or other user of an image processing system can operate in the spectral space or in the color space depending on the needs of an operation. Preferably, moving between the spectral space and the color space does not introduce undesirable artifacts. While FIG. 1 depicts only one spectral representation and one color vector, it should be understood that color vector storage 110 may store multiple color vectors, as is the case when color vector storage 110 is used for image storage. An image store may store a plurality of color vectors, each associated with a pixel or voxel position of an image.
Likewise, spectral representation storage 102 may store a plurality of spectra.
[0040] FIG. 2 illustrates a conversion process 200 from color vectors to spectral representations. As illustrated there, an input spectral representation 202 is provided to a color vector generator 204 that downsamples input spectral representation 202 to determine a color vector 206. In this example, color vector 206 is a vector in a three-dimensional RGB color space, but other color spaces are known in the art and could be used. Once converted to a color vector, the input color can be processed and manipulated by human or computer users of color vectors 210. These may include image or animation manipulation tools that may provide, for example, for altering an image captured by a camera by the insertion of virtual objects into a scene. As such, it can be useful for a spectral representation that represents a captured portion of a real object in a scene to map to a color vector, have an artist create a virtual object based on that color vector, and then be able to generate a spectral representation of a texture of that virtual object that appears, under various illuminations, to match that of the real object under those same illuminations. This can be achieved at least in part by having a spectral uplifting converter 214 convert color vector 206 based on stored moment coefficient sets in a lattice storage 230, as described in more detail herein. The goal of matching colors under varying illumination may be obtained if an output spectral representation 220, to which a color vector is uplifted, is a spectrum similar to that generated from a set of moment coefficients that are stored in association with that color vector or nearby points in a lattice structure.
[0041] As shown in FIG. 2, lattice storage 230 can be storage for an array of seeds 232 and an array of lattice points 234. An entry in array of lattice points 234 may contain an indication of a color vector, as may correspond to a lattice point of the lattice, and a color location of a voxel for that lattice point. By some convention, each lattice point could define one voxel, such as having the lattice point for (Rx, Gx, Bx) be the lattice point for the voxel bounded by the eight lattice points (Rx, Gx, Bx), (Rx, Gx, Bx+1), (Rx, Gx+1, Bx), (Rx, Gx+1, Bx+1), (Rx+1, Gx, Bx), (Rx+1, Gx, Bx+1), (Rx+1, Gx+1, Bx), and (Rx+1, Gx+1, Bx+1). Other data may be provided for a lattice point, but as shown, an entry of array of lattice points 234 includes an indication of the lattice point (which in turn can define a voxel), a moment coefficients count (which can vary if needed so that some more significant spectra are more closely defined, while others are defined at lower precision to save on memory and processing), and a set of moment coefficients numbering as indicated by the lattice point's moment coefficients count.
[0042] A seed entry in array of seeds 232 may indicate a color vector for a seed (which need not be an integer and can be a floating point value), which lattice voxel that color vector may fall into, and the moment coefficients as may be computed as illustrated in FIG. 4, described below. As an example, if a lattice is defined as the intersections of 32 evenly spaced red planes in an RGB color space, 32 evenly spaced green planes in the RGB color space, and 32 evenly spaced blue planes in the RGB color space, when a seed spectrum is downsampled to a color vector, the color vector (RA, GA, BA) may fall within a voxel V1(R, G, B) bounded by eight such intersections.
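The two record layouts described for lattice storage 230 might be modelled as follows; the class and field names are illustrative, not taken from the specification:

```python
from dataclasses import dataclass, field

@dataclass
class LatticePointEntry:
    """One entry in the array of lattice points 234: the lattice point
    (which also identifies a voxel), a per-point coefficient count, and
    that many moment coefficients."""
    point: tuple                      # (r, g, b) lattice indices
    coeff_count: int
    moments: list = field(default_factory=list)

@dataclass
class SeedEntry:
    """One entry in the array of seeds 232: a (possibly non-integer)
    color vector, the lattice voxel it falls into, and its moment
    coefficients."""
    rgb: tuple
    voxel: tuple
    moments: list = field(default_factory=list)
```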
As explained herein, array of lattice points 234 could be populated from seeds, wherein a set of moment coefficients for a seed is known, and with several seeds (or possibly only one seed), the sets of moment coefficients for the lattice points of the voxels that contain seeds can be computed, and the sets of moment coefficients for other lattice points can be interpolated. Then, in use, a spectral uplifting can be done for a given color vector by identifying a lattice location for that color vector (the color location), interpolating nearby moment coefficient sets, and reconstructing a spectrum from an interpolated moment coefficient set. As metamers tend to have similar moment coefficient sets, a spectrum that is uplifted for a color vector that was a function of an input spectral representation will likely be a spectrum that appears similar to the input spectrum under different illumination conditions. This can be important where, for example, in animation or when adding visual effects, colors must appear to match from frame to frame even though image manipulation was done in the color space and not with full spectral representations.
[0043] FIG. 3 is a flowchart illustrating a method of processing image data using moment-based coefficients. In the example shown there, in step 301, a spectral uplifting converter obtains reference spectral representations. These reference spectral representations may be supplied by an artist, another person, or some computer process. A reference spectral representation may be structured so as to define a reflection spectrum over a range of wavelengths. In step 302, the spectral uplifting converter determines corresponding reference color vectors for the reference spectral representations. A corresponding reference color vector may be provided by an editor or in some other way determined to be a color vector that could be used to represent the spectrum of the corresponding reference spectral representation. For example, for a spectral representation that represents the spectrum of light given off by a pure green light source, the corresponding color vector may be specified as {R, G, B} = {0, 255, 0}.
[0044] For the reference spectral representations, the spectral uplifting converter may compute a set of moment coefficients for each reference spectral representation in step 303, as explained in greater detail below. In step 304, the spectral uplifting converter may then store the sets of moment coefficients, perhaps in lattice storage 230 shown in FIG. 2. As the sets of moment coefficients each correspond to a specified color vector, they can serve as seeded values in a moment-based uplift coefficient lattice array. In step 305, the spectral uplifting converter can compute additional moment coefficients for color vectors other than those seeded values.
[0045] The above steps may be performed once during setup, and then the moment-based uplift coefficient lattice array could be used repeatedly for spectral uplifting. In step 306, the spectral uplifting converter may use the moment-based uplift coefficient lattice array to determine an input spectral representation for an RGB color vector. For example, the spectral uplifting converter may receive an RGB color vector and return a set of moments that represent a suitable corresponding spectrum. Using such a converter, an image processing system or color processing system can move between a spectral space and a color vector space with certain preserving qualities. An example of a preserving quality may be that a round-trip spectrum, computed by converting a spectrum to a color vector and then back to a spectrum, introduces only small errors. Another example of a preserving quality is that two color vectors that are perceptually close in a color space remain close when uplifted to spectra and downsampled back to color vectors.
[0046] In effect, the uplifting is constrained uplifting, as a spectrum is converted to a set of moment coefficients wherein the mapping conforms to constraints that can be specified in advance, e.g., that a given moment coefficient set should map to a given color vector, another given moment coefficient set should map to another given color vector, and so on, and that unspecified mappings for other color vectors are filled in so that the overall mapping is smooth and matches the constraints of the given seeded moment coefficient sets. The seeds may come from spectra measured using spectral photometers.
[0047] FIG. 4 is a flowchart illustrating a method for computing moment coefficients as may be used in step 303 of FIG. 3 by a processor that may be part of a spectral uplifting converter. In the example there, in step 401, a processor may read in a spectral representation, which may be stored as computer-readable data in a computer memory. The processor may convert the spectral representation into a signal data structure that can be processed as a signal, such as a digital representation of an analogue time-domain signal. In step 402, a mirrored signal is generated from the spectral representation, and other processing may be done to reduce artifacts at signal boundaries.
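Steps 401 through 403 can be sketched as follows: treat the sampled spectrum as a signal, form an even (mirrored) extension to suppress boundary artifacts, and take its low-order Fourier coefficients as the moment coefficients. The exact wavelength-to-phase warp and normalization are implementation choices not specified here:

```python
import numpy as np

def trigonometric_moments(spectrum, n_moments):
    """Mirror a sampled spectrum (even extension) and return its first
    `n_moments` Fourier coefficients as the moment coefficient array."""
    mirrored = np.concatenate([spectrum, spectrum[::-1]])
    phases = np.linspace(-np.pi, np.pi, len(mirrored), endpoint=False)
    return np.array([np.mean(mirrored * np.exp(-1j * k * phases))
                     for k in range(n_moments)])
```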
[0048] In step 403, the processor computes a Fourier basis for the mirrored signal, wherein the number of coefficients in a set of moment coefficients is equal to a specified initial coefficient count. In step 404, the processor determines a round-trip error, and in step 405, if the error is below a predetermined threshold, flow proceeds to step 406 where the processor stores the moment coefficient array and returns. On the other hand, if the error is not below the predetermined threshold, the processor checks, in step 407, whether the coefficient count could be increased. If not, flow continues to step 406, but if the count can be increased, the processor increases the coefficient count in step 408 and returns to step 403 to compute a Fourier basis for the mirrored signal using a larger number of coefficients.
[0049] FIG. 5 is a flowchart illustrating a method for converting from a color vector to a set of moment coefficients in a spectral uplifting process, as may be performed by a processor executing the method of step 306 in FIG. 3. In this example, a processor may read in an input RGB color vector in step 501, determine a set of lattice points for a lattice in step 502, obtain or determine moment coefficient sets for spectral representations at lattice points in step 503, and, in step 504, if needed, interpolate an output moment coefficient array from the set of lattice points. In step 506, an output spectral representation can be computed from the interpolated moment coefficient array and output.
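The reconstruction of an output spectral representation from a moment coefficient array can be sketched as summing the truncated Fourier series back on the mirrored domain. This simple linear reconstruction is illustrative only; a production system might instead use a bounded reconstruction that guarantees values stay in the [0, 1] range:

```python
import numpy as np

def reconstruct_spectrum(moments, n_samples=64):
    """Rebuild a sampled spectral curve from trigonometric moments.
    For a real signal, c_{-k} is the conjugate of c_k, so the series
    reduces to c_0 + 2 * sum_k Re(c_k * exp(i * k * phase))."""
    phases = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    signal = np.full(n_samples, np.real(moments[0]))
    for k in range(1, len(moments)):
        signal += 2 * np.real(moments[k] * np.exp(1j * k * phases))
    return signal
```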
Detailed Examples
[0050] In examples described herein, an uplifting model is provided with a capability to be constrained by a set of pre-defined mappings from RGB to specific spectral shapes, which can then be preserved during an uplifting process. Other RGB input values created by the process return plausible synthetic spectral shapes that smoothly interpolate with the user-supplied target spectra. This allows one to, e.g., paint RGB textures that will reliably match specific spectra which were measured on a real production asset, while returning plausible values for all other areas of the RGB space.
[0051] An approach described herein can provide an advantage in that it allows RGB textures to be uplifted so that user-selected key RGB colors evaluate to specific spectra that can be measured on a real asset. These pre-defined mappings between specific RGB colors and spectra are constraints of the uplifting system and may be represented as seeds in an array of uplifting seeds. RGB colors in close vicinity to the constraints will uplift to spectral shapes similar to the original, so that minor texture edits in terms of RGB color do not cause noticeable visual distortions. All the other parts of the RGB space may return plausible, smooth spectra. This can provide new functionality of exact appearance matching of renders with plate footage under varying illumination, and thereby eliminates a substantial amount of tedious appearance fine-tuning work.
[0052] Reflectance spectra created by a spectral uplifting process should satisfy multiple constraints. Their values should fall within the [0, 1] range, the round-trip error caused by uplift and subsequent conversion to RGB should be negligible, and the resulting spectral shapes should qualitatively correspond to real-life materials, which are generally fairly smooth and simple. Although multiple techniques for spectral uplifting exist, not all of them satisfy the above criteria.
[0053] The technique by MacAdam is capable of creating only blocky spectra, which are not representative of the smooth reflectances usually found in nature. The widely used proposal by Smits is prone to minor round-trip errors, which arise from slightly out-of-range spectra: the process is based on a set of blocky basis functions, and the result is, while usually fairly close, not guaranteed to be in the [0, 1] range. An approach that produced smooth spectra was proposed by Meng. Unfortunately, it did not take energy conservation into account, which resulted in colors with no real physical counterpart, i.e., materials that violate the conservation of energy. Otsu introduced a technique that is capable of outperforming most of the existing approaches under specific conditions. Its drawback is its inability to satisfy the [0, 1] spectral range restrictions, which again causes color errors upon round trips.
[0054] The low-dimensional parametric model for spectral representation proposed by Jakob and Hanika can be used to create a pre-built uplifting model that is employed during rendering. The structure of that uplifting model is a cube-shaped 3D lookup table, comprising evenly spaced lattice points representing RGB values. Each of the points contains a mapping to its respective spectral representation, which in the model by Jakob and Hanika comprises three sigmoid coefficients. Other variations may be used. The acquisition of coefficient sets for individual points can be performed during the creation of the model by an optimization tool, such as the CERES solver. Requiring only a set of prior coefficients and the definition of the functions to minimize (in the case of Jakob and Hanika, the difference between the reconstructed and the target RGB), the CERES solver is capable of modifying the coefficients to the point where they reconstruct a spectrum that evaluates to the desired RGB value. This process is referred to herein as lattice point fitting. Jakob and Hanika initially fit the coefficients for the center of the cube (i.e., the lattice point with RGB = (0.5, 0.5, 0.5)), and let the fitting process gradually fill in all lattice points by using the coefficients of already fitted neighbours as priors for non-fitted points. Their approach produces smooth spectra that may satisfy spectral range restrictions with negligible error. Jung provides a technique for wide gamut spectral uplifting by introducing new parameters for fluorescence.
[0055] A way to constrain an uplifting process to deliver specific spectral shapes could be an improvement over these approaches and allow for reproducing user-defined spectra.
Moment-based Spectral Representation
[0056] A trivial way to store spectral information is via regular sampling. While being easy to implement and handle, this approach is not very efficient, insofar as the optimal number of samples for a given task is not always easy to determine: when faced with uncertainty, the standard approach is to use more samples than strictly necessary, to make sure no accuracy is lost.
[0057] Using a lower-dimensional linear function space, such as the Fourier series, may provide reasonable round-trip accuracy but may not have a physical counterpart, as the reconstruction does not obey the [0, 1] range constraint needed for physically plausible reflectance spectra. In addition to linear function spaces, non-linear approaches to spectral representations have also been proposed. Such representations are, however, incompatible with linear pre-filtering of textures.
Constrained Spectral Uplifting
[0058] Our uplifting model can be based on a pre-computed RGB coefficient cube that provides mappings of RGB values to their corresponding spectral representations. We can use moment-based spectral representations for our purposes.
Obtaining Moment-Based Coefficients
[0059] As the shapes of spectra are aperiodic, storing them via Fourier coefficients requires their conversion to a periodic signal for which the Fourier coefficients can then be computed. Although the mapping of the wavelength range to the signal can be performed linearly, mirroring of the signal may help eliminate artifacts at the boundaries. Warping and focusing reconstruction accuracy on the area around 550 nm may help, but it may be that only mirroring is performed, in part due to more accurate round-trips. The moment-based coefficients stored for a periodic signal g(φ) are then computed as in Equation 1, where b_j(φ) is the Fourier basis.
c_j = (1/2π) ∫_{−π}^{π} g(φ) b_j(φ) dφ, with b_j(φ) = e^{−ijφ}, j = 0, …, m (Eqn. 1)
[0060] The coefficients c_j are referred to as trigonometric moments. Note that using m trigonometric moments for spectral representation implies that c = m + 1 coefficients are actually stored. The additional coefficient is the zeroth moment, c_0.
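As an illustration, trigonometric moments of an evenly mirrored signal can be computed by numerical integration. The midpoint rule and the sampling convention below are our own simplifications for the sketch, not the exact procedure of the described system:

```python
import cmath

def trigonometric_moments(signal, m):
    """Compute trigonometric moments c_0..c_m of a mirrored periodic signal.

    `signal` samples the reflectance over phase [0, pi]; even mirroring
    extends it to [-pi, pi], so e^{-ij phi} + e^{ij phi} = 2 cos(j phi)
    and the integral reduces to a cosine transform over [0, pi].
    A simple midpoint rule stands in for the integral of Eqn. 1.
    """
    n = len(signal)
    moments = []
    for j in range(m + 1):
        total = 0.0 + 0.0j
        for s in range(n):
            phi = cmath.pi * (s + 0.5) / n
            total += signal[s] * 2 * cmath.cos(j * phi) * (cmath.pi / n)
        moments.append(total / (2 * cmath.pi))
    return moments

# The zeroth moment of a constant reflectance equals that constant.
c = trigonometric_moments([0.5] * 256, 4)
assert abs(c[0] - 0.5) < 1e-9
```

Note that asking for m moments returns m + 1 values, matching the c = m + 1 coefficient count described above.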
Reconstruction from Moment-Based Coefficients
[0061] One process of spectral reconstruction from an input set of trigonometric moments is based on the theory of moments, specifically on Maximum Entropy Spectral Estimate (MESE). However, as the MESE is not bounded, it cannot be directly used for the reconstruction of reflectance spectra, which must satisfy the [0, 1] range constraint.
Therefore, bounded MESE has been introduced, which utilises a duality between bounded and unbounded moment problems formulated in terms of the Herglotz transform. Based on this duality, trigonometric moments can be converted to their corresponding unbounded exponential moments, so that the bounded problem represented by the trigonometric moments has a solution if and only if the dual unbounded problem represented by the exponential moments has a solution.
[0062] Although this method has been shown to perform rather accurately for even a low number of moments (e.g., 5), reconstruction of complex reflectance spectra with sharp edges often requires a substantially higher coefficient count. By computing the Delta E round-trip error (specifically, the CIE 1976 Delta E) under multiple illuminants for a large database of spectra, such as greater than 12,000 entries from multiple color atlases, we empirically determined that for typical reflectance spectra the error stabilizes at around m = 20, i.e., c = 21.
[0063] Naturally, we wish for the reconstruction to be as precise as possible; however, we want to prevent unnecessarily high coefficient counts due to both memory consumption and, as discussed later, time performance. Therefore, we opt for using a variable number of coefficients for each spectrum. This number is computed with a heuristically-based iterative method: starting from c = 4, we check whether the coefficient count is sufficient, and, if not, we increase it and move on to the next iteration, repeating this process up to c = 21. The adequacy of the representation is determined by its round-trip error under an error-prone illuminant (specifically, FL11).
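The iterative coefficient-count selection can be sketched as follows; `round_trip_delta_e` is a hypothetical stand-in for evaluating the Delta E round-trip error of a c-coefficient representation under FL11:

```python
def choose_coefficient_count(round_trip_delta_e, threshold=1.0,
                             c_min=4, c_max=21):
    """Increase the coefficient count until the round-trip error is acceptable.

    `round_trip_delta_e(c)` is a caller-supplied function (a stand-in for
    the Delta E check under FL11) evaluating a c-coefficient representation.
    The count is capped at c_max even if the error never drops below the
    threshold.
    """
    c = c_min
    while c < c_max and round_trip_delta_e(c) >= threshold:
        c += 1
    return c

# Illustrative error model: the error halves with every extra coefficient.
errors = {c: 8.0 / 2 ** (c - 4) for c in range(4, 22)}
assert choose_coefficient_count(lambda c: errors[c]) == 8
```

A spectrum whose error is immediately below the threshold keeps the minimal count of four coefficients; a pathological one saturates at twenty-one, mirroring the c = 4 to c = 21 range described above.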
[0064] On average, moment-based representations satisfying our model require sixteen coefficients. For most of the constraints, such precision is hardly necessary, and could be decreased if we were to focus on memory utilisation. However, if a goal is to properly assess the accuracy of the uplifting process, slight memory overhead is not a problem.
Fitting a Moment-Based Uplift Coefficient Cube
[0065] After obtaining the trigonometric moment representations of all user-provided input spectra, we assign them to their nearest lattice points within the RGB coefficient cube. We refer herein to this process as seeding the cube, as it provides initial starting points for the fitting process.
[0066] Determining the optimal resolution of the coefficient cube can be a heuristic-driven process: for the sigmoid-based fitting process, cubes with 32³ lattice points deliver satisfactory results. The same seems to apply to the moment-based coefficient cubes, with the additional concern that a denser lattice may be needed if a very large number, or a densely spaced set, of spectral exemplars were to be used as input: our technique only works if there is at most a single exemplar spectrum per cube voxel.
[0067] Typically, the RGB values of the user-provided spectra do not lie exactly on the lattice points, but rather fall inside one of the cube voxels. Therefore, in order to ensure proper reconstruction upon uplifting, we assign the coefficients to all eight lattice points that are at the corners of the voxel, i.e., voxel corners. Additionally, to support larger sets of user-provided spectra, e.g., whole color atlases, we extend the cube by allowing multiple moment representations per lattice point. We distinguish them by constraint-specific IDs, which are then utilised during the uplifting process for the purposes of identifying the original seed of the voxel.
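The seeding of voxel corners with ID-tagged moment representations can be sketched as follows; the dictionary-based lattice layout and all names are illustrative assumptions:

```python
# Sketch of the seeding step: a user-provided spectrum is assigned, under a
# constraint-specific ID, to all eight corner lattice points of the voxel
# its RGB value falls in.

def seed_lattice(lattice, rgb, moments, seed_id, n):
    # Lower corner of the voxel containing rgb in an n-per-axis cube.
    lo = [min(int(c * (n - 1)), n - 2) for c in rgb]
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                point = (lo[0] + i, lo[1] + j, lo[2] + k)
                # A lattice point may accumulate several moment
                # representations, distinguished by their seed IDs.
                lattice.setdefault(point, {})[seed_id] = moments

lattice = {}
seed_lattice(lattice, (0.51, 0.49, 0.50), [0.5, 0.1, 0.02], seed_id=7, n=32)
assert len(lattice) == 8
assert lattice[(15, 15, 15)][7] == [0.5, 0.1, 0.02]
```

Storing a per-point mapping from seed ID to coefficients is one way to realise the "multiple moment representations per lattice point" extension described above.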
[0068] By seeding the cube, we have placed initial coefficients at some of the lattice points which ultimately do not necessarily reconstruct curves that evaluate to the target RGB. Therefore, the coefficients must be modified so that the resulting color difference is as low as possible.
[0069] One approach to improving the coefficients is solving a Non-linear Least Squares problem. Non-linear Least Squares is an unconstrained minimization problem of the form of Equation 2, where x = (x_1, x_2, ...) is a parameter block that we are improving (i.e., our coefficients) and the f_i represent cost functions, which we want to minimize.
minimize_x (1/2) Σ_i f_i(x)² (Eqn. 2)
[0070] We can use the CERES solver for solving this problem. However, while the cost functions of the prior approaches focused only on minimizing the difference between the target and the current RGB values, we also add a cost function specifying the distance between the prior and the reconstructed spectral shape. This can preserve the input spectral shapes.
[0071] Cost functions may be defined as in Equation 3, where target_rgb is the RGB value of the lattice point, constraint represents a discretized reflectance spectrum of the input constraint, current_spectrum represents the spectrum the current coefficients x reconstruct, current_rgb is the RGB value of current_spectrum, and S is the number of samples used for the internal representation of reflectance curves.
f_0(x) = |target_rgb.R − current_rgb.R|
f_1(x) = |target_rgb.G − current_rgb.G|
f_2(x) = |target_rgb.B − current_rgb.B|
f_3(x) = (1/S) Σ_{i=1}^{S} |constraint_i − current_spectrum_i|
(Eqn. 3)
[0072] While the f_0(x), f_1(x), and f_2(x) residuals are identical to the previous approaches, an f_3(x) residual (referred to herein as the distance residual) can be added to preserve the input spectral shapes.
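A sketch of the Equation 3 residuals in code; the exact per-sample form of the distance residual f_3 is an assumption here (a mean absolute deviation over the S spectral samples):

```python
# Residuals of Eqn. 3: three per-channel RGB differences plus a distance
# residual that keeps the reconstructed curve near the input spectral shape.

def residuals(target_rgb, current_rgb, constraint, current_spectrum):
    f0 = abs(target_rgb[0] - current_rgb[0])
    f1 = abs(target_rgb[1] - current_rgb[1])
    f2 = abs(target_rgb[2] - current_rgb[2])
    s = len(constraint)  # S: number of spectral samples
    # Distance residual (assumed form): mean absolute per-sample deviation.
    f3 = sum(abs(a - b) for a, b in zip(constraint, current_spectrum)) / s
    return f0, f1, f2, f3

# A perfect RGB match still reports a nonzero shape distance.
r = residuals((0.2, 0.4, 0.6), (0.2, 0.4, 0.6), [0.5, 0.5], [0.4, 0.6])
assert r[:3] == (0.0, 0.0, 0.0)
assert abs(r[3] - 0.1) < 1e-12
```

A solver such as CERES would be handed these four residuals as separate cost functions; for unseeded lattice points, only the first three would be supplied, as described below.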
Reduced Coefficient Count in Unseeded Cube Parts
[0073] As the properties of the input spectra correspond to the initial fits of the lattice points, their number of coefficients is bound to be comparatively high. However, after the initial fitting round that deals with the RGB cube voxels that contain seeds, we do not need to maintain such high coefficient counts: this would be inefficient and memory consuming, in addition to propagating specific spectral shape features beyond the area of the RGB cube where they are actually wanted. Instead, once we leave the initial fitting regions that contain exemplar spectra we want to reproduce, we switch to lower-dimensional moment representations that intrinsically yield smooth spectra, not unlike the sigmoids of the original technique, and the remainder of the lattice points are fit with three coefficients only.
[0074] The loosened requirements on the spectral shapes of the unseeded lattice points (i.e., they only need to be smooth and within the same metameric family as their neighbors) allow for simpler processing and can eliminate the distance cost function. Computing only three RGB residuals has the added benefit of lower time complexity.
[0075] The conversion of a moment representation, m, of a seeded point to a fitting prior, c, for a non-seeded lattice point is performed by spectral reconstruction of m, and its subsequent storage with only three coefficients. Although this process, called coefficient recalculation, causes loss of spectral information, it preserves the rough outline of the curve, in effect performing low-pass filtering. This works to our benefit: it reduces the likelihood of significant color artefacts between the seeded and non-seeded points, while keeping the spectra smooth.
[0076] A problem may arise if the lattice point that is being recalculated (denoted P) for the purposes of fitting a non-seeded point (denoted Q) contains multiple moment representations from different metameric families: this is the case if several of its neighbour voxels have been seeded with different spectral shapes. To demonstrate the problem, let us assume two seeded neighbour voxels of P, namely A and B. Choosing to recalculate the representation corresponding to A would result in visible color artefacts between Q and the voxel seeded with B. As it is our intention to keep the color transitions within all voxel pairs smooth, we interpolate the spectra reconstructed from the moment representations instead.
Interpolation of Metameric Spectra
[0077] In the following, we show that the linear combination of two spectra that are metameric under a given light source results in another metameric spectrum. To our best knowledge, this insight, while not particularly mathematically complex, has not been explicitly stated in graphics literature before: in our technique, we use this observation to interpolate between metameric spectra stored (in the form of moment representations) at lattice points that contain multiple coefficient representations.
[0078] Let us assume the spectral power distributions of two metamers saved at a lattice point, P1(λ) and P2(λ), that both satisfy the conditions of Equation 4, where r(λ), g(λ), and b(λ) are the RGB color matching functions.
∫ P1(λ) r(λ) dλ = ∫ P2(λ) r(λ) dλ
∫ P1(λ) g(λ) dλ = ∫ P2(λ) g(λ) dλ (Eqn. 4)
∫ P1(λ) b(λ) dλ = ∫ P2(λ) b(λ) dλ
[0079] Let us express the R component of the RGB value resulting from the linear combination of P1(λ) and P2(λ) as in Equation 5, where a + b = 1.
R = ∫ (a P1(λ) + b P2(λ)) r(λ) dλ (Eqn. 5)
[0080] By rewriting the expression of Equation 5 and utilising the equality from Equation 4, we get Equation 6, which simplifies to Equation 7.
R = a ∫ P1(λ) r(λ) dλ + b ∫ P2(λ) r(λ) dλ = (a + b) ∫ P1(λ) r(λ) dλ (Eqn. 6)
R = ∫ P1(λ) r(λ) dλ (Eqn. 7)
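The algebra of Equations 4 through 7 can also be checked numerically for the R channel; the box-shaped matching function and the two example spectra below are illustrative assumptions, not data from the described system:

```python
# Numerical check of the metamer argument (Eqns. 4-7) for the R channel.

def integrate(f, lo=400.0, hi=700.0, n=3000):
    """Midpoint-rule integral of f over the wavelength range [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

r_bar = lambda lam: 1.0 if 550.0 <= lam <= 700.0 else 0.0   # toy CMF
p1 = lambda lam: 0.5                                         # flat spectrum
p2 = lambda lam: 0.5 + 0.001 * (lam - 625.0)                 # tilted metamer

R1 = integrate(lambda lam: p1(lam) * r_bar(lam))
R2 = integrate(lambda lam: p2(lam) * r_bar(lam))
assert abs(R1 - R2) < 1e-6       # Eqn. 4: P1 and P2 agree in R

a, b = 0.3, 0.7                  # linear combination with a + b = 1
mix = lambda lam: a * p1(lam) + b * p2(lam)
R_mix = integrate(lambda lam: mix(lam) * r_bar(lam))
assert abs(R_mix - R1) < 1e-6    # Eqn. 7: the combination matches
```

The tilted spectrum integrates to the same R value because its deviation from the flat spectrum is antisymmetric about the centre of the matching function's support; any convex combination of the two then reproduces that value, as Equation 7 states.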
[0081] Similar steps can be done for the G and B components of the resulting RGB value, thus showing that the resulting spectral distribution is also a metamer.
Using a Moment-Based Coefficient Cube in a Renderer
[0082] Discretisation of the RGB space in terms of a cube-like structure poses the problem of how to uplift RGB query values for which no direct mapping to a moment representation exists (i.e., which do not directly lie on a lattice point). In such a case, it is reasonable to employ a weighted trilinear interpolation of the data stored at the eight voxel corners. As our technique uses variable coefficient counts, interpolating coefficients within a voxel is generally not an option. In such cases, interpolations can be done with the reconstructed spectra.
[0083] Due to the potential presence of multiple moment representations per lattice point, reconstruction of spectra at such lattice points is not straightforward. If a voxel has been seeded during the construction of the uplifting model, we force exclusive use of the metameric family of the original spectral seed, in order to achieve our goal of matching the input spectra. In all other voxels, for each of its eight lattice points, we reconstruct spectra of all moment representations that have accumulated there, and interpolate between them in equal ratios. As described herein, this is permissible, and also yields a metamer for the RGB coordinates of the lattice point. This "hybrid" metamer is then used as input for the eight-corner trilinear interpolation that yields the actual result spectrum for the RGB query value. The reason for this strategy is the same as for the coefficient recalculation, i.e., to provide a smooth transition between metameric families.
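A minimal sketch of this query-time interpolation, assuming spectra are already reconstructed as equally sized sample lists; the helper names and the nested-list corner layout are illustrative:

```python
# Equal-ratio averaging of co-located metamers, followed by trilinear
# blending of the eight corner spectra of the query voxel.

def average_spectra(spectra):
    """Hybrid metamer: average several spectra at one lattice point."""
    return [sum(vals) / len(spectra) for vals in zip(*spectra)]

def trilinear(corner_spectra, f):
    """Blend corner_spectra[i][j][k] (spectrum at corner (i, j, k)) with
    trilinear weights; f = (fx, fy, fz) is the fractional query position."""
    out = None
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((f[0] if i else 1 - f[0]) *
                     (f[1] if j else 1 - f[1]) *
                     (f[2] if k else 1 - f[2]))
                s = corner_spectra[i][j][k]
                out = ([w * v for v in s] if out is None
                       else [o + w * v for o, v in zip(out, s)])
    return out

# Two metameric representations at one lattice point average per band.
avg = average_spectra([[0.2, 0.6], [0.4, 0.2]])
assert all(abs(a - b) < 1e-12 for a, b in zip(avg, [0.3, 0.4]))

# Identical corner spectra reproduce themselves under trilinear blending.
cs = [[[[1.0], [1.0]], [[1.0], [1.0]]], [[[1.0], [1.0]], [[1.0], [1.0]]]]
assert abs(trilinear(cs, (0.3, 0.6, 0.9))[0] - 1.0) < 1e-12
```

In a seeded voxel, `average_spectra` would be bypassed in favour of the single representation matching the seed's ID, as described in the next paragraph.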
[0084] Whether the voxel has been seeded or not can be determined by means of the unique IDs assigned to the moment representations during the seeding process. If the voxel corners share a common ID, the representations corresponding to this ID can be used in order to properly reconstruct the seeds.
Experimental Results
[0085] In the following, we evaluate the accuracy of our technique and compare its results to the sigmoid-based uplift as defined by Jakob and Hanika. We start by assessing the quality of our implementation in terms of both accuracy upon uplifting constraints and the colorimetric properties of the uplifting system as a whole. We then provide measurements of both memory utilisation and time performance. All the data of color atlases and illuminants used in our experiments is provided by ART.
Accuracy of Constrained Uplifting
[0086] We tested the accuracy of our proposed constrained uplifting approach on the Munsell Book of Color (MBOC), which we utilise as a constraint set for a 32³-sized coefficient cube for the sRGB color space. Of the 1598 entries in the MBOC, 1396 are in the sRGB gamut: so our experiments only included those. If a larger RGB space (such as, e.g., Adobe RGB) were used as input space, the full number of atlas entries could be used: working with sRGB was not due to any restrictions in our proposed technique, but only done to stay within the standard RGB space of graphics. Furthermore, in a 32³-sized coefficient cube, 82 of the atlas entries cannot be used as seeds due to the limitation that only one seed can be present in each voxel. Simply increasing the cube resolution would allow the 82 missing entries to be used: again, using a 32³-sized coefficient cube was not due to any intrinsic restrictions of our technique, but just to save pre-computation time, and to stay with a nice power of two as cube dimension. The choice of 32, or some other power of two, may be done to simplify memory accesses, but other values, higher or lower than 32, may be used, and those may not be powers of two.
[0087] We compared the difference between the uplifted spectra and the original ones via the standard CIE Delta E color difference metric. In the following, we do not quote the results for the original fitting illuminant CIE D65, as accuracy is excellent in that case anyway (negligible Delta E throughout). The interesting case is how well color appearance matches under CIE FL11, which we used as an example of a spiky fluorescent illuminant that is qualitatively similar (but not identical) to the fluorescent lamp used in the xRite Judge QC viewing booth (see figure 0).
[0088] Even under this illuminant with a low color rendering index (read: with a high propensity to bring out metameric failures in materials), the average round-trip error is just a Delta E of 0.21. As a Delta E below 1 is generally imperceptible to a standard observer, this value is negligible and can be regarded as highly satisfactory. Of the 1314 entries, only 22 were found to return a Delta E > 1, and all of these were RGB values extremely close to (0, 0, 0).
[0089] FIG. 6 illustrates problematic cases of constrained uplifting: reflectance spectra of darker colors. The spectral curve labelled round-trip is a direct reconstruction of the spectrum from the moments used to seed the fitting process. The dashed curve labelled uplift is the result of querying the final coefficient cube, including interpolation within the voxel the RGB value lies in. Ideally, all three curves are identical: separating "round-trip" and "uplift" shows whether any error is due to the moment-based representation, or the interpolation in the coefficient cube. While the round-trip from the moment representation used for the purposes of seeding is reasonable (in the areas where the round-trip plot is barely visible, it mimics the original spectrum), the uplifted spectrum may demonstrate visible deviation.
[0090] In an example process for determining round-trip error as illustrated in system 600 of FIG. 6, a round-trip spectral representation, such as representation 602(A) or 602(B), may be converted by a color vector generator 604 into a color vector 606. Color vector 606 can then be spectrally uplifted by spectral uplifting converter 608 using the moment-based uplift coefficients in a lattice storage 610, and the resulting output spectral representation compared with the input spectral representation.
[0091] FIG. 7 illustrates accuracy of constrained uplifting on examples of input spectra that correspond to saturated RGB colors. These types of uplifted spectra follow the spectra reconstructed from the original coefficients quite precisely, and also match the input spectra quite well.
Uplift Consistency Across RGB Space
[0092] In order to assess how our technique uplifts the entire RGB gamut (and not just the regions around the seeds), we created multiple coefficient cubes that were seeded with different color atlases. This included an atlas with a single starting constraint at RGB = (0.5, 0.5, 0.5) (i.e., fitted with the same sole starting constraint as the sigmoid-based approach by Jakob and Hanika).
[0093] We first compared their performance in terms of color reconstruction with regard to uplifting a gradient texture. We selected a gradient with saturated colors in the red-yellow- green region, as that is where differences are most perceivable. While the distinctions between individual uplifts under FL11 are barely perceivable by the human eye, the difference images demonstrate that there are some variations - mainly around the locations of seed points, which is precisely what is intended by constraining the uplift process in these locations. None of the gradient textures exhibit any visible discontinuities, though, which indicates that our interpolation approach works properly in the presence of multiple metameric families of reflectance spectra.
[0094] Our approach can properly uplift large regions of the RGB gamut simultaneously, without showing artefacts under varying illuminations. We provide multiple renderings of an uplifted rainbow texture that covers most of the RGB gamut. We uplift this texture for various constraint sets, and under various illuminants. None of these renderings exhibits significant artefacts, even though a subset of all voxels was seeded, and the remainder was filled in with smoother spectra.
Performance
[0095] In this section, we evaluate the performance of our method in terms of both memory and execution time.
[0096] The memory used for storing our cube depends on its resolution (i.e., the number of lattice points), which is, in turn, dependent on both the size of our constraint set and on the position of its spectra within the RGB cube. The Macbeth Color Checker (MCC), which contains only 24 entries that are spaced quite far apart from each other in RGB space (with one of them falling outside of sRGB), requires as little as a 13³-sized cube. Due to the close proximity of some seeds, the 1396 sRGB entries of the Munsell Book of Color would require as much as 340 lattice points per axis for all seeds to fall into a unique voxel. Additionally, due to the rigid nature of an evenly spaced voxel grid, using a higher cube dimension does not necessarily imply more points that can be successfully seeded. Due to voxel edges being in different positions for different cube dimensions, increasing cube size may even have an adverse effect: for example, while a cube of size 90³ is sufficient for the RAL Design atlas, in a 300³-sized cube, 1 point remains unfitted due to a voxel collision.
[0097] To store the coefficients of cube entries, three floating point values may be used for each non-seeded point, and, on average, sixteen floating point values per constraint. For the 340³-sized cube required for the proper coverage of the Munsell Book of Color, this would yield a size of over 450.35 MB. Although using fewer coefficients for storing constraints is possible, it would not noticeably improve the size of the cube: even if we were to use three coefficients for all coefficient representations within the cube, the overall size would still be over 449.8 MB. That is a negligible improvement, as the overall size can be excessive; after all, a seeding of the whole Munsell Book of Color requires only 1396 voxels, which sums up to a maximum of 8 times 1396 lattice points.
Additionally, while most of the regions of the cube are barely utilised, there exist some that have all of their voxels fitted, which may result in a lack of smooth color transitions within these regions.
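The sizes quoted in paragraph [0097] can be reproduced with a short calculation; the assumptions (4-byte floats, "MB" read as mebibytes, and thirteen extra floats on each of the eight corners of every seeded voxel) are ours:

```python
# Reproducing the memory figures for the 340^3 coefficient cube.

FLOAT = 4                              # assumed bytes per float
points = 340 ** 3                      # lattice points of a 340^3 cube
base = points * 3 * FLOAT              # three coefficients everywhere
seeded = 1396 * 8 * (16 - 3) * FLOAT   # extra 13 floats on 8 corners/seed

assert round(base / 2 ** 20, 1) == 449.8           # three-coefficient floor
assert round((base + seeded) / 2 ** 20, 2) == 450.35  # with seeded corners
```

The calculation makes the conclusion concrete: the seeds account for well under a megabyte, so almost the entire footprint is the dense grid of non-seeded points, which is why reducing per-constraint coefficient counts barely helps.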
[0098] We therefore conclude that, for the purposes of constrained spectral uplifting for large sets of user-supplied target spectra (like, e.g., entire color atlases), a coefficient cube with evenly spaced lattice points can be sub-optimal, in terms of both its memory requirements and its resulting colorimetric properties. A dynamic structure, like, e.g., a kD-tree, which can split RGB space into variably-sized voxels according to the number of constraints, may be used instead. However, the approach presented herein works fine for target spectra sets of up to several dozen, or perhaps even a few hundred, data points. This is typically sufficient for usage in visual effects scenarios, where only a few key assets (like, e.g., the main colors of the costume of a lead character) are measured on set, in order to later constrain the spectral uplift of virtual doubles of this character.
[0099] FIG. 8 is a table of experimental timing results. Since the uplifting model can be created prior to the rendering process, we divide the evaluation of the execution time into two parts: the cube fitting process, and the rendering speed when utilising our cube for uplifting purposes. We tested the execution time of cube fitting on multiple sets of constraints in the form of color atlases. All the experiments were performed on an Intel Core i7-8750H CPU (12 logical cores), and the size of each cube is 32³. FIG. 8 shows fitting times of a 32³-sized coefficient cube for multiple color atlases. This cube resolution may be insufficient for the utilisation of all constraints in a given atlas, so the data shows the number of seeds that were placed on lattice points.
[0100] Due to the higher coefficient count and the strict requirements placed upon the shapes of the reconstructed spectral curves, the fitting of seeded lattice points can take a lot longer than the fitting of non-seeded points (on average, a seeded point takes 2.4 seconds to fit, in comparison to the 0.03 seconds for one that is not seeded). However, as the cube fitting process is multi-threaded, it can benefit from multiple starting points evenly positioned across the RGB cube.
[0101] A method capable of constraining the spectral uplifting process with an arbitrary set of target spectra is provided and this can be implemented in computer software and hardware. By utilising a trigonometric moment-based approach for spectral representation, the RGB values of the target spectra are accurately uplifted to their original spectral shapes, while the rest of the RGB gamut uplifts to smooth spectra. This results in smooth transitions between the various metameric families that originate from the constraining process.
[0102] FIG. 9 illustrates an example visual content generation system 900 as may be used to generate imagery in the form of still images and/or video sequences of images. Visual content generation system 900 may generate imagery of live action scenes, computer generated scenes, or a combination thereof. In a practical system, users are provided with tools that allow them to specify, at high levels and low levels where necessary, what is to go into that imagery. For example, a user may be an animation artist and may use visual content generation system 900 to capture interaction between two human actors performing live on a sound stage and replace one of the human actors with a computer-generated anthropomorphic non-human being that behaves in ways that mimic the replaced human actor’s movements and mannerisms, and then add in a third computer-generated character and background scene elements that are computer-generated, all in order to tell a desired story or generate desired imagery.
[0103] Still images that are output by visual content generation system 900 may be represented in computer memory as pixel arrays, such as a two-dimensional array of pixel color values, each associated with a pixel having a position in a two-dimensional image array. Pixel color values may be represented by three or more (or fewer) color values per pixel, such as a red value, a green value, and a blue value (e.g., in RGB format). Dimensions of such a two-dimensional array of pixel color values may correspond to a preferred and/or standard display scheme, such as 1920-pixel columns by 1280-pixel rows or 4096-pixel columns by 2160-pixel rows, or some other resolution. Images may or may not be stored in a certain structured format, but either way, a desired image may be represented as a two-dimensional array of pixel color values. In another variation, images are represented by a pair of stereo images for three-dimensional presentations and in other variations, an image output, or a portion thereof, may represent three-dimensional imagery instead of just two-dimensional views. Alternatively, pixel values may be data structures and a pixel value can be associated with a pixel and can be a scalar value, a vector, or another data structure associated with a corresponding pixel. That pixel value may include color values, or not, and may include depth values, alpha values, weight values, object identifiers or other pixel value components.
[0104] A stored video sequence may include a plurality of images such as the still images described above, but where each image of the plurality of images has a place in a timing sequence and the stored video sequence is arranged so that when each image is displayed in order, at a time indicated by the timing sequence, the display presents what appears to be moving and/or changing imagery.
In one representation, each image of the plurality of images is a video frame having a specified frame number that corresponds to an amount of time that would elapse from when a video sequence begins playing until that specified frame is displayed. A frame rate may be used to describe how many frames of the stored video sequence are displayed per unit time. Example video sequences may include 24 frames per second (24 FPS), 50 FPS, 140 FPS, or other frame rates. Frames may be interlaced or otherwise presented for display, but for clarity of description, in some examples, it is assumed that a video frame has one specified display time, but other variations may be contemplated.
[0105] One method of creating a video sequence is to simply use a video camera to record a live action scene, i.e., events that physically occur and can be recorded by a video camera.
The events being recorded can be events to be interpreted as viewed (such as seeing two human actors talk to each other) and/or can include events to be interpreted differently due to clever camera operations (such as moving actors about a stage to make one appear larger than the other despite the actors actually being of similar build, or using miniature objects with other miniature objects so as to be interpreted as a scene containing life-sized objects).
[0106] Creating video sequences for story-telling or other purposes often calls for scenes that cannot be created with live actors, such as a talking tree, an anthropomorphic object, space battles, and the like. Such video sequences may be generated computationally rather than capturing light from live scenes. In some instances, an entirety of a video sequence may be generated computationally, as in the case of a computer-animated feature film. In some video sequences, it is desirable to have some computer-generated imagery and some live action, perhaps with some careful merging of the two.
[0107] While computer-generated imagery may be creatable by manually specifying each color value for each pixel in each frame, this is likely too tedious to be practical. As a result, a creator uses various tools to specify the imagery at a higher level. As an example, an artist may specify the positions in a scene space, such as a three-dimensional coordinate system, of objects and/or lighting, as well as a camera viewpoint, and a camera view plane. From that, a rendering engine could take all of those as inputs, and compute each of the pixel color values in each of the frames. In another example, an artist specifies position and movement of an articulated object having some specified texture rather than specifying the color of each pixel representing that articulated object in each frame.
[0108] In a specific example, a rendering engine performs ray tracing wherein a pixel color value is determined by computing which objects lie along a ray traced in the scene space from the camera viewpoint through a point or portion of the camera view plane that corresponds to that pixel. For example, a camera view plane may be represented as a rectangle having a position in the scene space that is divided into a grid corresponding to the pixels of the ultimate image to be generated, and if a ray defined by the camera viewpoint in the scene space and a given pixel in that grid first intersects a solid, opaque, blue object, that given pixel is assigned the color blue. Of course, for modern computer-generated imagery, determining pixel colors - and thereby generating imagery - can be more complicated, as there are lighting issues, reflections, interpolations, and other considerations.
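A minimal sketch of this ray-tracing step, with a single solid, opaque sphere standing in for the scene (all names, and the quadratic hit test, are illustrative assumptions rather than the specification's method):

```python
import math

def first_hit_color(camera, pixel_point, sphere_center, sphere_radius,
                    sphere_color, background):
    """Assign a pixel the color of the first opaque object along the ray
    from the camera viewpoint through the view-plane point that
    corresponds to that pixel; return the background color on a miss."""
    # Ray direction (unnormalized is fine for a hit/miss test).
    d = tuple(pixel_point[i] - camera[i] for i in range(3))
    o = tuple(camera[i] - sphere_center[i] for i in range(3))
    # |o + t*d|^2 = r^2 expands to the quadratic a*t^2 + b*t + c = 0.
    a = sum(di * di for di in d)
    b = 2.0 * sum(oi * di for oi, di in zip(o, d))
    c = sum(oi * oi for oi in o) - sphere_radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return background  # the ray misses the object entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest intersection
    return sphere_color if t > 0.0 else background

# A blue sphere directly ahead of the camera: that pixel is assigned blue.
blue = (0, 0, 255)
print(first_hit_color((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, blue, (0, 0, 0)))
```

As the paragraph notes, production renderers go far beyond this first-hit test, adding lighting, reflections, and interpolation.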
[0109] As illustrated in FIG. 9, a live action capture system 902 captures a live scene that plays out on a stage 904. Live action capture system 902 is described herein in greater detail, but may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
[0110] In a specific live action capture system, cameras 906(1) and 906(2) capture the scene, while in some systems, there may be other sensor(s) 908 that capture information from the live scene (e.g., infrared cameras, infrared sensors, motion capture (“mo-cap”) detectors, etc.). On stage 904, there may be human actors, animal actors, inanimate objects, background objects, and possibly an object such as a green screen 910 that is designed to be captured in a live scene recording in such a way that it is easily overlaid with computer generated imagery. Stage 904 may also contain objects that serve as fiducials, such as fiducials 912(1)-(3), that may be used post-capture to determine where an object was during capture. A live action scene may be illuminated by one or more lights, such as an overhead light 914.
[0111] During or following the capture of a live action scene, live action capture system 902 may output live action footage to a live action footage storage 920. A live action processing system 922 may process live action footage to generate data about that live action footage and store that data into a live action metadata storage 924. Live action processing system 922 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Live action processing system 922 may process live action footage to determine boundaries of objects in a frame or multiple frames, determine locations of objects in a live action scene, determine where a camera was relative to some action, determine distances between moving objects and fiducials, etc. Where elements have sensors attached to them or are otherwise detected, the metadata may include the location, color, and intensity of overhead light 914, as that may be useful in post-processing to match computer-generated lighting on objects that are computer generated and overlaid on the live action footage. Live action processing system 922 may operate autonomously, perhaps based on predetermined program instructions, to generate and output the live action metadata upon receiving and inputting the live action footage. The live action footage can be camera-captured data as well as data from other sensors.
[0112] An animation creation system 930 is another part of visual content generation system 900. Animation creation system 930 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Animation creation system 930 may be used by animation artists, managers, and others to specify details, perhaps programmatically and/or interactively, of imagery to be generated. From user input and data from a database or other data source, indicated as a data store 932, animation creation system 930 may generate and output data representing objects (e.g., a horse, a human, a ball, a teapot, a cloud, a light source, a texture, etc.) to an object storage 934, generate and output data representing a scene into a scene description storage 936, and/or generate and output data representing animation sequences to an animation sequence storage 938.
[0113] Scene data may indicate locations of objects and other visual elements, values of their parameters, lighting, camera location, camera view plane, and other details that a rendering engine 950 may use to render CGI imagery. For example, scene data may include the locations of several articulated characters, background objects, lighting, etc. specified in a two-dimensional space, three-dimensional space, or other dimensional space (such as a 2.5-dimensional space, three-quarter dimensions, pseudo-3D spaces, etc.) along with locations of a camera viewpoint and view plane from which to render imagery. For example, scene data may indicate that there is to be a red, fuzzy, talking dog in the right half of a video and a stationary tree in the left half of the video, all illuminated by a bright point light source that is above and behind the camera viewpoint. In some cases, the camera viewpoint is not explicit, but can be determined from a viewing frustum. In the case of imagery that is to be rendered to a rectangular view, the frustum would be a truncated pyramid. Other shapes for a rendered view are possible and the camera view plane could be different for different shapes.
[0114] Animation creation system 930 may be interactive, allowing a user to read in animation sequences, scene descriptions, object details, etc. and edit those, possibly returning them to storage to update or replace existing data. As an example, an operator may read in objects from object storage into a baking processor 942 that would transform those objects into simpler forms and return those to object storage 934 as new or different objects. For example, an operator may read in an object that has dozens of specified parameters (movable joints, color options, textures, etc.), select some values for those parameters and then save a baked object that is a simplified object with now fixed values for those parameters.
[0115] Rather than requiring user specification of each detail of a scene, data from data store 932 may be used to drive object presentation. For example, if an artist is creating an animation of a spaceship passing over the surface of the Earth, instead of manually drawing or specifying a coastline, the artist may specify that animation creation system 930 is to read data from data store 932 in a file containing coordinates of Earth coastlines and generate background elements of a scene using that coastline data.
[0116] Animation sequence data may be in the form of time series of data for control points of an object that has attributes that are controllable. For example, an object may be a humanoid character with limbs and joints that are movable in manners similar to typical human movements. An artist can specify an animation sequence at a high level, such as “the left hand moves from location (X1, Y1, Z1) to (X2, Y2, Z2) over time T1 to T2”, at a lower level (e.g., “move the elbow joint 2.5 degrees per frame”) or even at a very high level (e.g., “character A should move, consistent with the laws of physics that are given for this scene, from point P1 to point P2 along a specified path”).
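The high-level specification “the left hand moves from location (X1, Y1, Z1) to (X2, Y2, Z2) over time T1 to T2” could be evaluated per frame by linear interpolation, sketched below (names are assumptions for illustration; a production system would likely use easing curves or physics rather than straight-line motion):

```python
def lerp_position(p1, p2, t1, t2, t):
    """Position of a control point at time t, moving linearly from
    p1 at time t1 to p2 at time t2 (t assumed within [t1, t2])."""
    u = (t - t1) / (t2 - t1)  # normalized parameter: 0 at t1, 1 at t2
    return tuple(a + u * (b - a) for a, b in zip(p1, p2))

# Halfway through the move, the hand is at the midpoint of the path.
print(lerp_position((0, 0, 0), (2, 4, 6), 0.0, 2.0, 1.0))
```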
[0117] Animation sequences in an animated scene may be specified by what happens in a live action scene. An animation driver generator 944 may read in live action metadata, such as data representing movements and positions of body parts of a live actor during a live action scene. Animation driver generator 944 may generate corresponding animation parameters to be stored in animation sequence storage 938 for use in animating a CGI object. This can be useful where a live action scene of a human actor is captured while wearing mo-cap fiducials (e.g., high-contrast markers outside actor clothing, high-visibility paint on actor skin, face, etc.) and the movement of those fiducials is determined by live action processing system 922. Animation driver generator 944 may convert that movement data into specifications of how joints of an articulated CGI character are to move over time.
[0118] A rendering engine 950 can read in animation sequences, scene descriptions, and object details, as well as rendering engine control inputs, such as a resolution selection and a set of rendering parameters. Resolution selection may be useful for an operator to control a trade-off between speed of rendering and clarity of detail, as speed may be more important than clarity for a movie maker to test some interaction or direction, while clarity may be more important than speed for a movie maker to generate data that will be used for final prints of feature films to be distributed. Rendering engine 950 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown.
[0119] Visual content generation system 900 can also include a merging system 960 that merges live footage with animated content. The live footage may be obtained and input by reading from live action footage storage 920 to obtain live action footage, by reading from live action metadata storage 924 to obtain details such as presumed segmentation in captured images segmenting objects in a live action scene from their background (perhaps aided by the fact that green screen 910 was part of the live action scene), and by obtaining CGI imagery from rendering engine 950.
[0120] A merging system 960 may also read data from rulesets for merging/combining storage 962. A very simple example of a rule in a ruleset may be “obtain a full image including a two-dimensional pixel array from live footage, obtain a full image including a two-dimensional pixel array from rendering engine 950, and output an image where each pixel is a corresponding pixel from rendering engine 950 when the corresponding pixel in the live footage is a specific color of green, otherwise output a pixel value from the corresponding pixel in the live footage.”
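The example rule above can be sketched per pixel as follows (the function name and the exact key color are illustrative assumptions, not values from the specification):

```python
def merge_pixel(live_pixel, rendered_pixel, key_color=(0, 255, 0)):
    """Where the live-footage pixel is the specific key green, output the
    pixel from the rendering engine; otherwise output the pixel value
    from the corresponding pixel in the live footage."""
    return rendered_pixel if live_pixel == key_color else live_pixel

# One green-screen pixel and one non-green pixel from the live footage.
live = [(0, 255, 0), (10, 20, 30)]
rendered = [(200, 0, 0), (0, 0, 200)]
merged = [merge_pixel(lp, rp) for lp, rp in zip(live, rendered)]
print(merged)  # the first pixel comes from the renderer, the second from live footage
```

Real mergers apply softer keying (tolerance around the key color, edge blending), but the per-pixel selection structure is the same.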
[0121] Merging system 960 may include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Merging system 960 may operate autonomously, following programming instructions, or may have a user interface or programmatic interface over which an operator can control a merging process. An operator may specify parameter values to use in a merging process and/or may specify specific tweaks to be made to an output of merging system 960, such as modifying boundaries of segmented objects, inserting blurs to smooth out imperfections, or adding other effects. Based on its inputs, merging system 960 can output an image to be stored in a static image storage 970 and/or a sequence of images in the form of video to be stored in an animated/combined video storage 972.
[0122] Thus, as described, visual content generation system 900 can be used to generate video that combines live action with computer-generated animation using various components and tools, some of which are described in more detail herein. While visual content generation system 900 may be useful for such combinations, with suitable settings, it can be used for outputting entirely live action footage or entirely CGI sequences. The code may also be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network.
[0123] The techniques described herein can be implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Special-purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
[0124] A carrier medium may carry image data or other data having details generated using the methods described herein. The carrier medium can comprise any medium suitable for carrying the image data or other data, including a storage medium, e.g., solid-state memory, an optical disk or a magnetic disk, or a transient medium, e.g., a signal carrying the image data such as a signal transmitted over a network, a digital signal, a radio frequency signal, an acoustic signal, an optical signal or an electrical signal.
[0125] FIG. 10 is a block diagram that illustrates a computer system 1000 upon which the computer systems of the systems described herein and/or visual content generation system 900 (see FIG. 9) may be implemented. Computer system 1000 includes a bus 1002 or other communication mechanism for communicating information, and a processor 1004 coupled with bus 1002 for processing information. Processor 1004 may be, for example, a general- purpose microprocessor.
[0126] Computer system 1000 also includes a main memory 1006, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 1002 for storing information and instructions to be executed by processor 1004. Main memory 1006 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004. Such instructions, when stored in non-transitory storage media accessible to processor 1004, render computer system 1000 into a special-purpose machine that is customized to perform the operations specified in the instructions.
[0127] Computer system 1000 further includes a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004. A storage device 1010, such as a magnetic disk or optical disk, is provided and coupled to bus 1002 for storing information and instructions.
[0128] Computer system 1000 may be coupled via bus 1002 to a display 1012, such as a computer monitor, for displaying information to a computer user. An input device 1014, including alphanumeric and other keys, is coupled to bus 1002 for communicating information and command selections to processor 1004. Another type of user input device is a cursor control 1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
[0129] Computer system 1000 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1000 to be a special-purpose machine. The techniques herein are performed by computer system 1000 in response to processor 1004 executing one or more sequences of one or more instructions contained in main memory 1006. Such instructions may be read into main memory 1006 from another storage medium, such as storage device 1010. Execution of the sequences of instructions contained in main memory 1006 causes processor 1004 to perform the process steps described herein. Alternatively, hard-wired circuitry may be used in place of or in combination with software instructions.
[0130] The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may include non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1010. Volatile media includes dynamic memory, such as main memory 1006. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.

[0131] Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that include bus 1002. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Any type of medium that can carry the computer/processor implementable instructions can be termed a carrier medium and this encompasses a storage medium and a transient medium, such as a transmission medium or signal.
[0132] Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 1004 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network connection. A modem or network interface local to computer system 1000 can receive the data. Bus 1002 carries the data to main memory 1006, from which processor 1004 retrieves and executes the instructions. The instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor 1004.
[0133] Computer system 1000 also includes a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022. For example, communication interface 1018 may be a network card, a modem, a cable modem, or a satellite modem to provide a data communication connection to a corresponding type of telephone line or communications line. Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0134] Network link 1020 typically provides data communication through one or more networks to other data devices. For example, network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026. ISP 1026 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet” 1028. Local network 1022 and Internet 1028 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from computer system 1000, are example forms of transmission media.
[0135] Computer system 1000 can send messages and receive data, including program code, through the network(s), network link 1020, and communication interface 1018. In the Internet example, a server 1030 may transmit a requested code for an application program through the Internet 1028, ISP 1026, local network 1022, and communication interface 1018. The received code may be executed by processor 1004 as it is received, and/or stored in storage device 1010, or other non-volatile storage for later execution.
[0136] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The code may also be carried by any computer-readable carrier medium, such as a storage medium or a transient medium or signal, e.g. a signal transmitted over a communications network.
[0137] Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present.
[0138] The use of examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.

[0139] In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
[0140] Further embodiments can be envisioned to one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above- disclosed invention can be advantageously made. The example arrangements of components are shown for purposes of illustration and combinations, additions, re-arrangements, and the like are contemplated in alternative embodiments of the present invention. Thus, while the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible.
[0141] For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims and that the invention is intended to cover all modifications and equivalents within the scope of the following claims.
[0142] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
[0143] In this specification where reference has been made to patent specifications, other external documents, or other sources of information, this is generally for the purpose of providing a context for discussing the features of the invention. Unless specifically stated otherwise, reference to such external documents is not to be construed as an admission that such documents, or such sources of information, in any jurisdiction, are prior art, or form part of the common general knowledge in the art.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for converting color vector data to spectrum data for use in image processing, comprising:
under the control of one or more computer systems configured with executable instructions:
obtaining a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing;
obtaining a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation;
allocating a lattice storage for a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector, the first color vector representing a first color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations;
computing a first corresponding moment coefficient array, wherein the first corresponding moment coefficient array corresponds to the first reference spectral representation;
determining a set of nearby lattice points of the moment-based uplift coefficient lattice that are within a predetermined color space distance from the first color location;
computing moment coefficient arrays for nearby lattice points;
for each lattice point of at least some of the nearby lattice points, storing an interpolated moment coefficient array in the lattice storage in association with the lattice point; and
saving the lattice storage in a computer-readable format usable for converting from an input color vector value to a corresponding input spectral representation.
2. The computer-implemented method of claim 1, wherein the set of nearby lattice points of the moment-based uplift coefficient lattice that are within the predetermined color space distance from the first color location are lattice points that bound a voxel of the color space wherein the voxel encloses the first color location.
3. The computer-implemented method of claim 1 or 2, further comprising interpolating additional moment coefficient arrays for additional lattice points based on previously computed moment coefficient arrays for previously processed lattice points.
4. The computer-implemented method of claim 3, wherein a first array size, representing a first number of coefficients in the first corresponding moment coefficient array corresponding to the first reference spectral representation, is greater than a second array size representing a second number of coefficients in one of the additional moment coefficient arrays.
5. The computer-implemented method of any one of claims 1 - 4, further comprising:
obtaining the input color vector value;
searching, in the lattice storage, for a corresponding lattice location, wherein the corresponding lattice location corresponds to the input color vector value;
obtaining, for the corresponding lattice location, a corresponding moment coefficient array;
computing, from the corresponding moment coefficient array, a corresponding spectral representation; and
outputting the corresponding spectral representation in response to the input color vector value.
6. The computer-implemented method of claim 5, further comprising: if the corresponding lattice location is other than a lattice point, computing the corresponding moment coefficient array by interpolation from moment coefficient arrays of nearby lattice points in the moment-based uplift coefficient lattice that are within a second predetermined color space distance from the corresponding lattice location.
7. The computer-implemented method of any one of claims 1 - 5, wherein a number of lattice points in the moment-based uplift coefficient lattice is fewer than a number of possible color vectors.
8. The computer-implemented method of claim 7, wherein the possible color vectors are vectors in an RGB color space.
9. The computer-implemented method of any one of claims 1 - 8, wherein moment coefficient arrays comprise a plurality of coefficients of moments and at least two moment coefficient arrays comprise different numbers of coefficients.
10. The computer-implemented method of claim 9, wherein the numbers of the coefficients of the moment coefficient arrays are determined based on round-trip error values.
11. The computer-implemented method of claim 10, wherein the numbers of the coefficients of the moment coefficient arrays determined based on round-trip error values are determined iteratively, comprising:
setting an initial coefficient count for a spectral representation;
determining a sufficiency of a coefficient count for the spectral representation;
if the coefficient count is insufficient, increasing the coefficient count; and
repeating determination of the sufficiency of the coefficient count until a sufficient coefficient count is reached or a predetermined maximum coefficient count is reached.
12. A computer system for generating a moment-based uplift coefficient lattice array, the computer system comprising:
a) at least one processor;
b) lattice storage for storing a moment-based uplift coefficient lattice, wherein the moment-based uplift coefficient lattice comprises a plurality of color locations in the color space, each color location representable by an associated color vector representing a color location, and wherein the lattice storage is usable for storing moment coefficient arrays for lattice locations; and
c) a computer-readable medium storing instructions, which when executed by the at least one processor, causes the computer system to:
obtain a first reference spectral representation, wherein the first reference spectral representation comprises data representing an illumination spectrum usable in the image processing or a reflectance spectrum of an object to be processed in the image processing;
obtain a first corresponding reference color vector, wherein the first corresponding reference color vector is a first color vector in a color space and the first color vector corresponds to the first reference spectral representation and the first color vector represents a first color location;
compute a first corresponding moment coefficient array, wherein the first corresponding moment coefficient array corresponds to the first reference spectral representation;
determine a set of nearby lattice points of the moment-based uplift coefficient lattice that are within a predetermined color space distance from the first color location;
compute moment coefficient arrays for nearby lattice points; and
save the lattice storage in a computer-readable format usable for converting from an input color vector value to a corresponding input spectral representation.
13. The computer system of claim 12, wherein the set of nearby lattice points of the moment-based uplift coefficient lattice that are within the predetermined color space distance from the first color location are lattice points that bound a voxel of the color space wherein the voxel encloses the first color location.
14. The computer system of claim 12 or 13, wherein the instructions further cause the computer system to interpolate additional moment coefficient arrays for additional lattice points based on previously computed moment coefficient arrays for previously processed lattice points.
15. The computer system of claim 14, wherein a first array size, representing a first number of coefficients in the first corresponding moment coefficient array corresponding to the first reference spectral representation, is greater than a second array size representing a second number of coefficients in one of the additional moment coefficient arrays.
16. The computer system of any one of claims 12 - 15, wherein a number of lattice points in the moment-based uplift coefficient lattice array is fewer than a number of possible color vectors and wherein the possible color vectors are vectors in an RGB color space.
17. The computer system of any one of claims 12 - 16, wherein moment coefficient arrays comprise a plurality of coefficients of moments and at least two moment coefficient arrays comprise different numbers of coefficients.
18. The computer system of claim 17, wherein the numbers of the coefficients of the moment coefficient arrays are determined based on round-trip error values.
19. The computer system of claim 18, wherein the numbers of the coefficients of the moment coefficient arrays determined based on round-trip error values are determined iteratively, and wherein the instructions further cause the computer system to:
set an initial coefficient count for a spectral representation;
determine a sufficiency of a coefficient count for the spectral representation;
if the coefficient count is insufficient, increase the coefficient count; and
repeat determination of the sufficiency of the coefficient count until a sufficient coefficient count is reached or a predetermined maximum coefficient count is reached.
21. At least one computer-readable medium carrying instructions, which when executed by at least one processor of a computer system, cause the computer system to carry out the method of any one of claims 1-11.
22. A computer system comprising:
at least one processor; and
a memory storing instructions, which when executed by the at least one processor, cause the at least one processor to implement the method of any one of claims 1-11.
23. At least one computer-readable medium carrying image data that includes color values generated according to the method of any one of claims 1-11.
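The claims above specify two concrete mechanisms: selecting the lattice points that bound the color-space voxel enclosing a target color (claims 12-13), and iteratively growing a moment-coefficient count until a round-trip error test passes or a maximum is reached (claims 11 and 19). The sketch below illustrates that claimed control flow only; it is not the patented implementation. The function names (`round_trip_error`, `choose_coefficient_count`, `bounding_voxel_corners`), the tolerance and count defaults, and the use of a plain truncated Fourier reconstruction in place of the bounded-MESE machinery of Peters et al. are all assumptions made for illustration.

```python
import numpy as np

def round_trip_error(spectrum, num_coeffs):
    """Reconstruction error after projecting `spectrum` onto its first
    `num_coeffs` trigonometric moments (Fourier coefficients) and back.
    A plain truncated Fourier series stands in here for the bounded-MESE
    reconstruction used in moment-based spectral rendering."""
    phase = np.linspace(-np.pi, np.pi, len(spectrum))
    dphi = phase[1] - phase[0]
    moments = [(spectrum * np.exp(-1j * k * phase)).sum() * dphi / (2 * np.pi)
               for k in range(num_coeffs)]
    # For a real signal the negative-index moments are conjugates, so
    # the k >= 1 terms are doubled in the reconstruction.
    recon = np.real(moments[0] + 2 * sum(
        m * np.exp(1j * k * phase) for k, m in enumerate(moments[1:], 1)))
    return float(np.max(np.abs(recon - spectrum)))

def choose_coefficient_count(spectrum, tolerance=1e-2, initial=4, maximum=16):
    """Claims 11/19: start from an initial count, grow it while the
    round-trip error is insufficient, stop at a predetermined maximum."""
    count = initial
    while count < maximum and round_trip_error(spectrum, count) > tolerance:
        count += 1
    return count

def bounding_voxel_corners(color, lattice_resolution):
    """Claim 13: the 'nearby lattice points' are the eight corners of the
    lattice voxel enclosing `color` (an RGB triple in [0, 1])."""
    scaled = np.clip(np.asarray(color, dtype=float) * (lattice_resolution - 1),
                     0.0, lattice_resolution - 1 - 1e-9)
    base = np.floor(scaled).astype(int)
    return [tuple(base + np.array(offset)) for offset in np.ndindex(2, 2, 2)]
```

Per claims 15 and 17, different lattice entries may end up with different coefficient counts: `choose_coefficient_count` is simply run per spectrum, so a smooth reflectance settles at the initial count while a spiky one grows toward the maximum.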
PCT/NZ2021/050156 2021-06-28 2021-08-30 Spectral uplifting converter using moment-based mapping WO2023277702A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163215908P 2021-06-28 2021-06-28
US202117361212A 2021-06-28 2021-06-28
US63/215,908 2021-06-28
US17/361,212 2021-06-28

Publications (1)

Publication Number Publication Date
WO2023277702A1 true WO2023277702A1 (en) 2023-01-05

Family

ID=78086023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2021/050156 WO2023277702A1 (en) 2021-06-28 2021-08-30 Spectral uplifting converter using moment-based mapping

Country Status (1)

Country Link
WO (1) WO2023277702A1 (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CGG COMPUTER GRAPHICS GROUP - CHARLES UNIVERSITY: "Moment-based Constrained Spectral Uplifting", 7 July 2021 (2021-07-07), XP055893130, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=01VJaIK3vHc> [retrieved on 20220217] *
PETERS CHRISTOPH ET AL: "Spectral Rendering with the Bounded MESE and sRGB Data", 1 January 2019 (2019-01-01), XP055893085, Retrieved from the Internet <URL:https://momentsingraphics.de/Media/MAM2019/Peters2019-SpectralRenderingMAM.pdf> [retrieved on 20220217], DOI: 10.2312/mam.20191304 *
PETERS CHRISTOPH ET AL: "Using moments to represent bounded signals for spectral rendering", ACM TRANSACTIONS ON GRAPHICS, vol. 38, no. 4, 31 August 2019 (2019-08-31), US, pages 1 - 14, XP055893086, ISSN: 0730-0301, Retrieved from the Internet <URL:https://cg.ivd.kit.edu/publications/2019/compact_spectra/Peters2019-CompactSpectra.pdf> [retrieved on 20220217], DOI: 10.1145/3306346.3322964 *
TÓDOVÁ LUCIA ET AL: "Moment-based Constrained Spectral Uplifting", EUROGRAPHICS SYMPOSIUM ON RENDERING (DL-ONLY TRACK), 29 June 2021 (2021-06-29), XP055893070, Retrieved from the Internet <URL:https://diglib.eg.org/bitstream/handle/10.2312/sr20211304/215-224.pdf?sequence=1&isAllowed=y> [retrieved on 20220217], DOI: 10.2312/sr.20211304 *

Similar Documents

Publication Publication Date Title
CN114119849B (en) Three-dimensional scene rendering method, device and storage medium
US11721071B2 (en) Methods and systems for producing content in multiple reality environments
WO2022103276A1 (en) Method for processing image data to provide for soft shadow effects using shadow depth information
US20220375152A1 (en) Method for Efficiently Computing and Specifying Level Sets for Use in Computer Simulations, Computer Graphics and Other Purposes
US11803998B2 (en) Method for computation of local densities for virtual fibers
US11600041B2 (en) Computing illumination of an elongated shape having a noncircular cross section
WO2023277702A1 (en) Spectral uplifting converter using moment-based mapping
US11170533B1 (en) Method for compressing image data having depth information
US11887274B2 (en) Method for interpolating pixel data from image data having depth information
US11380048B2 (en) Method and system for determining a spectral representation of a color
CA3116076C (en) Method and system for rendering
US20230260206A1 (en) Computing illumination of an elongated shape having a noncircular cross section
US20230196649A1 (en) Deforming points in space using a curve deformer
EP4176415A1 (en) Method for computation of local densities for virtual fibers
KR20190017294A (en) Method and apparatus for editing three dimensional shading data
CN113781618A (en) Method and device for lightening three-dimensional model, electronic equipment and storage medium
Peers et al. Free-form acquisition of shape and appearance

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21789887

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE