GB2623528A - Image generation system and method


Info

Publication number
GB2623528A
Authority
GB
United Kingdom
Prior art keywords
objects
complexity
representations
virtual environment
representation
Prior art date
Legal status
Pending
Application number
GB2215359.7A
Other versions
GB202215359D0 (en)
Inventor
Green Lawrence
Maw Ross
Current Assignee
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date
Application filed by Sony Interactive Entertainment Inc filed Critical Sony Interactive Entertainment Inc
2022-10-18 Priority to GB2215359.7A
2022-11-30 Publication of GB202215359D0
2024-04-24 Publication of GB2623528A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A system and method for generating images of a virtual environment are provided. The system comprises an object identification unit to identify one or more objects 510-550 in the virtual environment that are to appear in an image to be rendered. An evaluation unit identifies which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment. A representation obtaining unit selects an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object. An image rendering unit renders an image of the virtual environment using the selected alternative representations. The alternative representations may be associated with a simplified shape, simplified texture, reduced colour variation, reduced motion or reduced number of visual effects.

Description

IMAGE GENERATION SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
Field of the invention
This disclosure relates to an image generation system and method.
Description of the Prior Art
The "background" description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly or impliedly admitted as prior art against the present invention.
Over time there have been significant increases in the processing power available to devices, with more powerful devices becoming increasingly common for consumer use. With this increasing processing power has come an increasing complexity of content that is able to be rendered for presentation to a viewer of such content; this can include both video content and interactive content such as computer games. This complexity may be due to the ability of devices to support the rendering of more on-screen objects at a time, as well as the ability to support more detailed models.
While this can increase the richness of a user experience, and lead to the creation of more immersive and/or realistic-looking content, for some users this increase in complexity can be rather problematic, as it may be challenging to keep track of all of the on-screen elements when a significant amount of information is being displayed simultaneously.
In addition to this, content which is designed for higher-end devices (such as fully-fledged games consoles) may not be appropriate for use with a lower-end device (such as a portable games console or mobile phone) due to the higher level of complexity of the displayed images.
In view of these considerations, it may be advantageous in some cases to allow the complexity of rendered images to be reduced so as to lower the burden of those images upon one or both of a user's attention and a device's processing.
It is in the context of the above discussion that the present disclosure arises.
SUMMARY OF THE INVENTION
This disclosure is defined by claim 1.
Further respective aspects and features of the disclosure are defined in the appended claims.
It is to be understood that both the foregoing general description of the invention and the following detailed description are exemplary, but are not restrictive, of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figure 1 schematically illustrates an exemplary entertainment system;
Figure 2 schematically illustrates an image display process;
Figure 3 schematically illustrates an object representation generation process;
Figure 4 schematically illustrates an object representation selection process;
Figure 5 schematically illustrates an exemplary scene;
Figure 6 schematically illustrates a system for implementing embodiments of the present disclosure; and
Figure 7 schematically illustrates a method for implementing embodiments of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, embodiments of the present disclosure are described.
Referring to Figure 1, an example of an entertainment system 10 is a computer or console such as the Sony® PlayStation 5® (PS5).
The entertainment system 10 comprises a central processor 20. This may be a single or multi core processor, for example comprising eight cores as in the PS5. The entertainment system also comprises a graphical processing unit or GPU 30. The GPU can be physically separate to the CPU, or integrated with the CPU as a system on a chip (SoC) as in the PS5.
The entertainment device also comprises RAM 40, and may either have separate RAM for each of the CPU and GPU, or shared RAM as in the PS5. The or each RAM can be physically separate, or integrated as part of an SoC as in the PS5. Further storage is provided by a disk 50, either as an external or internal hard drive, or as an external solid state drive, or an internal solid state drive as in the PS5.
The entertainment device may transmit or receive data via one or more data ports 60, such as a USB port, Ethernet® port, Wi-Fi® port, Bluetooth® port or similar, as appropriate. It may also optionally receive data via an optical drive 70.
Interaction with the system is typically provided using one or more handheld controllers 80, such as the DualSense® controller in the case of the PS5.
Audio/visual outputs from the entertainment device are typically provided through one or more A/V ports 90, or through one or more of the wired or wireless data ports 60.
Where components are not integrated, they may be connected as appropriate either by a dedicated data link or via a bus 100.
An example of a device for displaying images output by the entertainment system is a head mounted display 'HMD' 802, worn by a user 800.
Embodiments of the present disclosure are directed towards arrangements and methods for simplifying objects present in a scene so as to reduce the overall complexity of the scene. The display of a simplified scene is considered to be advantageous both in improving a user's ability to interpret the scene, through reducing visual noise or the like, whilst also reducing a processing burden upon a device which generates images of the scene for display.
Figure 2 schematically illustrates an example of a method in accordance with embodiments of the present disclosure. The discussion of Figure 2 is provided so as to give a high-level understanding of the process, with more specific details of the steps being provided below (such as with reference to Figures 3 and 4, for example). In some implementations, it is considered that each of the steps is performed by the same device; however, distributed implementations are also envisaged such that the steps (or at least portions of the steps) are implemented by two or more devices.
A step 200 comprises generating one or more representations of an object, such that two or more versions or representations of the object are available for selection (that is, the object and one or more generated representations). This may be performed upon initialisation of the content, for example, or may be performed as a part of the content creation process. In the context of computer games, these may be interpreted as being performed during the launching or installation of the game or as a part of the game development process (such that the representations are able to be distributed as a part of the game files).
The generated representations of the object are simplified representations of that object so as to reduce the visual complexity of the object when displayed. This may be achieved in any suitable manner; examples of simplifications include lowering a polygon count, reducing the resolution of a texture associated with the object, or smoothing surfaces of the object. Such modifications result in a simplified representation in that the amount of information contained in the representation is reduced; for instance, corners may be removed and/or textures may have a reduced amount of detail.
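As a rough illustration of two such simplifications, the Python sketch below downscales an object's texture and crudely reduces its face count. The Representation container and the naive face-dropping strategy are assumptions made for this example rather than anything specified by the disclosure; a production pipeline would more likely use quadric-error decimation or similar.

```python
# Minimal sketch of two complexity-reducing modifications; the
# Representation type and the decimation approach are illustrative
# assumptions, not taken from the disclosure.
from dataclasses import dataclass

from PIL import Image


@dataclass
class Representation:
    vertices: list[tuple[float, float, float]]
    faces: list[tuple[int, int, int]]  # vertex-index triples
    texture: Image.Image


def reduce_texture_resolution(rep: Representation, factor: int = 2) -> Representation:
    """Downscale the texture so that surface detail is simplified."""
    w, h = rep.texture.size
    smaller = rep.texture.resize((max(1, w // factor), max(1, h // factor)))
    return Representation(rep.vertices, rep.faces, smaller)


def drop_every_nth_face(rep: Representation, n: int = 4) -> Representation:
    """Crude polygon-count reduction: discard one face in every n.
    Real decimation (e.g. quadric error metrics) would preserve the
    silhouette far better; this only illustrates the idea."""
    kept = [f for i, f in enumerate(rep.faces) if i % n != 0]
    return Representation(rep.vertices, kept, rep.texture)
```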
A step 210 comprises selecting from amongst the representations for an object when rendering a scene comprising that object. This selection can be made at any suitable time during the content display process; in some cases, representations may be selected on a per-scene basis, while in others a per-frame basis or any other measure of time may be more appropriate. Such a process may be performed for any number of objects within a scene; the selection process is only limited by the number of objects for which alternative representations exist. In some examples it is considered that a number of instances of the same object may be present in a scene (for instance, a row of trees); in such cases a representation may be selected for each instance of the object separately, or a representation may be selected for each of the instances as a group.
This selection may be based upon information contained in one or more files representing an object (such as a flag, indicating the availability of alternative meshes/textures, which is found in a primary mesh/texture), or based upon information provided in metadata associated with the object, scene, content, or any other aspect of the content.
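A minimal sketch of such a per-object selection follows, assuming metadata with keys such as "has_alternatives" and a stored complexity score per representation; the metadata layout is an assumption for illustration, as the disclosure only requires that this information be available in some form.

```python
# Illustrative per-object selection for step 210; the metadata layout
# is an assumption for this sketch.
def select_representation(obj_metadata: dict, desired_complexity: float):
    """Pick the stored representation whose complexity is closest to,
    but not above, the desired level; fall back to the original."""
    if not obj_metadata.get("has_alternatives"):
        return obj_metadata["original"]
    candidates = [rep for rep in obj_metadata["representations"]
                  if rep["complexity"] <= desired_complexity]
    if not candidates:
        return obj_metadata["original"]
    return max(candidates, key=lambda rep: rep["complexity"])
```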
A step 220 comprises generating and displaying images comprising the selected representations in step 210. Any suitable rendering process may be utilised as a part of this generation and display step.
As discussed above, methods in accordance with Figure 2 may be considered advantageous in that a scene may be generated which has a lower overall complexity relative to a typical rendering of that scene. This is due to the per-object approach to rendering, rather than selecting display parameters on a per-image basis (such as modifying the resolution).
While in some cases the method of Figure 2 may be implemented during playback of the content, in other cases one or more steps (or parts of the respective steps) may be performed prior to the content playback; for instance, during a content creation process. For example, the complexity of an object may be evaluated during the content creation process such that this can be referenced at a later time. Similarly, an evaluation of a representative image may be performed ahead of time, with the results being applied more generally (such as to an entire scene to which the object belongs, or the content as a whole) during the content reproduction process.
Figure 3 schematically illustrates a process by which a number of representations of an object may be generated. This process is considered exemplary, in that variations and/or alternatives to the steps disclosed may be performed in different implementations. Such a process may be performed automatically (for instance, during install or launch of a game), or may be performed in response to a user request or particular conditions being fulfilled (such as a user with a profile indicating a preference for reduced complexity launching a game, or the game being launched on a device with reduced processing power).
A step 300 comprises obtaining an object for which representations are to be generated. This object may be obtained from game asset files, for instance, or any other stored content (stored locally or remotely). Objects may be considered to be suitable candidates for such processing due to being of particularly high visual complexity (and as such contributing significantly to the complexity of a displayed image/image to be displayed), and/or due to being able to be simplified without leading to a significant impact upon the overall impression of the scene. In other words, objects may be considered in dependence upon their complexity and/or their suitability for simplification.
Such an object may be selected in any suitable manner; in some implementations objects may have an associated flag (for instance, set by a content developer) which indicates that representations should be generated. Alternatively, or in addition, objects may be identified for the representation generation process based upon an analysis of the object itself to determine its visual complexity; similarly, a visual scene analysis during gameplay may be performed (either during the user's gameplay, or during a beta testing process, for instance) so as to determine which objects contribute to the scene complexity. It is also considered that users themselves may be able to provide feedback so as to be able to identify problematic objects or characteristics of objects to assist with the selection process.
A step 310 comprises identifying one or more modifications to be applied to the obtained object so as to reduce its complexity. These modifications may be selected in dependence upon any of a number of different considerations of the object, scene being rendered, user, and/or device used to render or display the generated images. These considerations can be used to determine which modifications are expected to be successful in reducing the complexity of the object (for instance, simplifying an overall shape would not be expected to solve issues with a complex surface texture), as well as which modifications can be applied without altering the object too significantly. Here, a significant alteration could be considered to be any alteration which modifies the overall character of the object -such as a large variation in the shape (such that a square becomes a circle, for instance) or texture (such that a wooden object no longer appears to be made of wood due to a simplification of the grain, for instance).
Examples of modifications include modifications to the object itself, such as simplifying the outline of the object, reducing the complexity of the texture associated with the object, modifying the colours associated with the object, and/or varying the size of the object.
Alternatively, or in addition, the modifications relate to visual effects associated with the model. These can include a level of reflectivity, for instance, which would usually only be realised when rendering the object. The reduction in the reflectivity of an object may be advantageous in that it causes the surface of the object to be much simpler, as it will not be obscured by lighting effects or reflected views of other objects in the scene. In some cases, the complexity of a scene can be reduced by highlighting an object; this may be particularly true of objects that are often camouflaged or otherwise difficult to see in an environment.
Examples of considerations of an object that may be used to select one or more modifications include the importance of maintaining the shape or texture, or the importance of the object itself in the scene (such as a measure of how likely it is that a user will view or interact with the object). For example, it may be less important to maintain the shape of a tree, as a simplified representation of a tree would still be identifiable. In contrast, maintaining the shape of an object such as a coin may be more important, as the shape may be needed to identify which denomination of coin it is.
An example of a consideration of a scene being rendered that may be used to select one or more modifications includes a measure of how complex the scene is overall; this context may be useful in determining whether a reduction in the complexity is required. For instance, the threshold for determining whether an object should be simplified may be set in dependence upon the overall scene complexity -this may be implemented as a 'complexity budget', for example, in which the complexity of the scene is evaluated and compared to an overall threshold. Other considerations, such as how static or dynamic the scene is, may also be considered -a static scene may be less burdensome upon a user than a dynamic scene having the same complexity, for instance.
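One way to read the 'complexity budget' idea is as a greedy loop that swaps the most complex objects for simpler representations until the scene total falls under the budget. The sketch below assumes per-object complexity scores and a swap helper, neither of which is prescribed by the disclosure.

```python
# Hypothetical 'complexity budget' enforcement: simplify the most
# complex objects first until the summed scene complexity fits.
def enforce_complexity_budget(objects, budget, swap_for_simpler):
    """objects: list of (object_id, complexity) pairs.
    swap_for_simpler(object_id) performs the swap and returns the
    object's reduced complexity score."""
    ranked = sorted(objects, key=lambda oc: oc[1], reverse=True)
    total = sum(c for _, c in ranked)
    simplified = []
    for obj_id, complexity in ranked:
        if total <= budget:
            break
        total += swap_for_simpler(obj_id) - complexity
        simplified.append(obj_id)
    return simplified, total
```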
Examples of considerations of a user that may be used to select one or more modifications include a level of complexity that the user is comfortable with, preferences relating to particular modifications (such as a preference to maintain shapes over textures, for instance), and/or indications of particular effects or characteristics that can cause issues for a user (so that these effects or characteristics can be preferentially addressed with the selected modifications).
Examples of considerations of a device used to render or display the images that may be used to select one or more modifications include an evaluation of the display resolution, processing power, and/or other display parameters such as colour gamut and contrast. These can be indicative of how well-suited a device is to generating complex images and displaying them -for instance, a low-resolution display may not be adequate for displaying certain levels of complexity, and as such the complexity of an object could be reduced without significant impact upon the overall impression of the generated image.
A step 320 comprises generating a plurality of representations of the selected object having modifications applied -each representation may be generated using different modifications and/or different degrees of modification such that each representation is unique. Of course, some of these representations may appear to be similar in some cases -this may be particularly true in the case in which the selected modifications and degrees of modification are similar, or combine to produce a similar overall effect. The number of representations may be selected freely -in some cases, a small number (such as three, corresponding to low, medium, and high complexity) may be generated while in others a much greater number may be generated such as hundreds or thousands of representations.
An optional step 330 comprises selecting, from amongst the plurality of representations, one or more representations to be stored. In the case that this step is not performed, each of the representations generated in step 320 may be stored or a random (or pseudo-random) selection of representations to store may be identified.
The selection process may be performed so as to determine whether the overall character of the object is retained, for example. This may be achieved in a number of manners -each of which may be considered in isolation or in any combination. For example, a plurality of different selection processes may be performed in succession so as to gradually reduce the number of candidate representations to be selected from, or in combination so as to perform a selection in a reduced number of steps. A number of non-limiting examples of selection processes are discussed below.
A first example is that of determining how significantly the shape of the object has deviated from that of the original object; this can be performed by comparing the distance between a vertex in a generated representation and the nearest point in the original object. This can be performed for each vertex in the generated representation, or a representative sample of vertices, as an estimate of the overall variation in shape between the original object and the generated representation. This can be performed using meshes of the object and representation, for instance, or an outline of the object and generated representation in rendered images can be compared.
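A brute-force version of this vertex-deviation check is sketched below using NumPy; for brevity it measures the distance to the nearest original vertex rather than the nearest surface point, which is an approximation introduced for this example.

```python
# Average distance from each candidate vertex to the nearest vertex of
# the original mesh, as a rough shape-deviation score.
import numpy as np


def shape_deviation(original_vertices: np.ndarray,
                    candidate_vertices: np.ndarray) -> float:
    """Both inputs are (N, 3) arrays of vertex positions."""
    diffs = candidate_vertices[:, None, :] - original_vertices[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)  # (num_candidate, num_original)
    return float(dists.min(axis=1).mean())
```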
A second example is that of performing an object recognition process on generated representations. For instance, computer vision techniques may be used to identify an image of a generated representation. If the identification does not match that of the original object (or at least is not sufficiently close), then the representation may be discarded as it may be considered to have deviated too far from the original object. For instance, rather than requiring a complete match (such as identifying a particular breed of dog), it may be considered sufficient to be within the same object class (such as identifying that the representation is a dog without being able to identify the breed).
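The class-level check might look like the following sketch, where the classifier is an assumed callable returning a label and a confidence; any off-the-shelf recognition model could fill that role, as none is specified by the disclosure.

```python
# Keep a representation only if it is still recognised as the same
# object class with sufficient confidence; the classifier is assumed.
from typing import Callable, Tuple

Classifier = Callable[[object], Tuple[str, float]]


def passes_recognition_check(rendered_image,
                             original_label: str,
                             classify: Classifier,
                             min_confidence: float = 0.5) -> bool:
    label, confidence = classify(rendered_image)
    return label == original_label and confidence >= min_confidence
```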
A third example is that of utilising user feedback upon representations to determine which should be retained and which should be discarded. For instance, representations may be scored by users based upon their relative complexity (or another aspect of their appearance) as well as how well they represent the original object. Based upon this user feedback, an automated process may be implemented in which representations are able to be rated based upon a model of expected user feedback; this can be implemented using a machine learning model, for instance.
Alternatively, or in addition, the selection process may be performed so as to determine a representative selection of the representations. An example of this could be the selection of ten different representations from a generated set of one hundred representations, with the ten representations each corresponding to a particular level of complexity. In this manner, the number of stored representations can be reduced without a concern that each of the stored representations relates to the same (or similar) level of complexity.
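Such a spread can be obtained by ranking candidates by complexity score and keeping evenly spaced entries, as in this sketch; the inputs are assumed to be (representation, complexity score) pairs.

```python
# Keep a fixed number of representations spread evenly across the
# observed complexity range so stored variants do not cluster.
def select_spread(reps_with_scores, keep: int = 10):
    ranked = sorted(reps_with_scores, key=lambda rs: rs[1])
    if len(ranked) <= keep:
        return [rep for rep, _ in ranked]
    step = (len(ranked) - 1) / (keep - 1)
    indices = sorted({round(i * step) for i in range(keep)})
    return [ranked[i][0] for i in indices]
```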
Figure 4 schematically illustrates a method for selecting representations for display in accordance with step 210 of Figure 2. Such a method is considered exemplary, and as such non-limiting, in that a number of variations may be considered in dependence upon the particular implementation. For instance, the order of the steps described below may be varied (such as the optional step 420 being performed during the content development process), and additional/alternative steps may be considered as appropriate.
A step 400 comprises a determination of the desired level of complexity for the rendered image. This may be based upon a user preference, for example, which may be indicated in a user profile or provided as an input as part of a content or device initiation process. In some embodiments, a complexity level may be calculated based upon information regarding a user's level of concentration or interest, for instance, or other factors such as a time of day (which may be reflective of tiredness) which may contribute to a reduced ability of the user to process complex images. As noted above, such a determination may also (or instead) be based upon the hardware used to render and/or display the images.
A step 410 comprises identifying the objects to be displayed in the scene. This may be based upon information about the viewpoint within the virtual environment and information about the arrangement of objects within the virtual environment, for instance, or through the generation of a placeholder image (that is, an image which corresponds to that which is to be displayed but with an arbitrary level of complexity such as each object being represented using the lowest level of complexity).
As well as identifying which objects are present in the scene, this step may also comprise identifying information about the objects such as the number of alternative representations, the identity of the objects, and/or their significance within the scene or content (such as whether the object is expected to be interacted with by a user).
An optional step 420 comprises determining the contribution of the objects to the scene. This may be performed so as to determine the visual impact of the object (such as whether the object has a particularly striking appearance, or occupies a particularly large area of the displayed image); alternatively, or in addition, this step may include the determination of the complexity of the object relative to the scene and/or other objects within the scene. In other words, this step may be performed so as to enable an estimate of the impact of any modifications to the object on the image as a whole. This can enable particular objects to be prioritised when identifying targets for simplification so as to provide an increased amount of simplification relative to the number of modifications.
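One plausible scoring for this step, which is an assumption for illustration rather than a formula given in the disclosure, multiplies an object's screen coverage by its complexity relative to the scene average, so that large, unusually complex objects are prioritised:

```python
# Hypothetical contribution score for step 420: screen coverage times
# complexity relative to the scene mean.
def contribution_score(screen_area_fraction: float,
                       object_complexity: float,
                       scene_mean_complexity: float) -> float:
    relative = object_complexity / max(scene_mean_complexity, 1e-9)
    return screen_area_fraction * relative
```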
A step 430 comprises selecting representations for one or more objects in the scene in dependence upon the determinations of step 410 and the optional step 420. In this manner, both the objects for which representations are to be selected and the selection of the representations for those objects may be determined. The selection of representations may be performed in any suitable manner for a given implementation; this selection may be content-specific, for instance, or more specific such as the selection process being tailored to a particular scene or even frame of the content.
In some implementations, it may be considered advantageous to select objects for which lower-complexity representations are to be used in dependence upon their deviation from the average level of complexity for the scene. This means that those objects which are particularly complex relative to the rest of the environment (such as a highly-textured object against a blank background) can be preferentially replaced with lower-complexity representations. Such an approach may be advantageous in that the average level of complexity for an image can be reduced whilst mitigating the risk of oversimplifying already low-complexity objects.
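A sketch of this deviation-from-average selection follows, with the margin factor being an illustrative assumption:

```python
# Select only objects whose complexity exceeds the scene mean by a
# margin, avoiding over-simplification of already simple objects.
def objects_above_average(complexities: dict[str, float],
                          margin: float = 1.25) -> list[str]:
    """complexities maps object_id -> complexity score."""
    mean = sum(complexities.values()) / len(complexities)
    return [obj for obj, c in complexities.items() if c > mean * margin]
```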
Similarly, it may be considered advantageous to select objects for which lower-complexity representations are to be used in dependence upon their relevance to the scene. For instance, quest-specific objects may be omitted from this process in favour of reducing the complexity of background objects -in this manner, an in-game boss or item may have its complexity maintained while the complexity of the trees in the background is reduced.
Alternatively, or in addition, in the case that representations are associated with a particular level of complexity, a threshold complexity value may be set and any objects having a higher complexity value are replaced with representations having a complexity value that is below the threshold.
While in some cases step 420 may be performed in a simplified fashion by referring to characteristics of the object and information about its display in an image, in other cases an iterative process may be used to determine the contribution of the object to the scene. While such a process may not be considered suitable for use during playback of the content, it could be performed in advance of the rendering process (such as during the content development process). The outcomes of this process can then be stored alongside the content (such as a part of the game data) for application to particular scenes, objects, and/or the content as a whole.
An example of such an iterative process is that of rendering a number of different versions of a scene using different representations of the objects and determining the changes in the overall complexity as a result of these modifications to a scene. These modifications may be implemented individually, or a number of modifications may be performed at once (that is, a single object may be switched when generating a different version of the scene, or multiple objects may be switched for each new version of the scene). Based upon the correlation between modifications and changes in complexity, the impact of particular modifications may be determined.
One example of such a process is that of using object recognition processes on a rendered image; these may be machine learning models, for instance. Given that the number of objects in the scene can be determined from the rendering process (for instance, by counting the number of meshes), this provides a ground truth against which the results of the object recognition process can be measured. In these examples, the success of the object recognition process in identifying the correct number of objects in the scene can be considered indicative of the scene complexity; this is because it can be expected that a simple scene provides fewer opportunities for such a process to make mistakes. As such, the error rate can be expected to correlate with scene complexity and therefore the impact of a modification can be measured by whether the object recognition process becomes more or less accurate for the newly-generated image comprising the modification (or modifications).
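In sketch form, with the detector as an assumed callable and the renderer's mesh count as the ground truth:

```python
# Miscount rate of an assumed object detector, read as a proxy for
# scene complexity (a higher error rate suggests a busier image).
def recognition_error_rate(rendered_image,
                           true_object_count: int,
                           detect_objects) -> float:
    """detect_objects returns a list of detections for the image."""
    detected = len(detect_objects(rendered_image))
    return abs(detected - true_object_count) / max(true_object_count, 1)
```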
Alternatively, or in addition, processing may be performed on an image to generate a measure of the scene complexity. An example of such processing is an image compression method; an image that is able to be compressed more effectively is considered to have a lower complexity, as many compression methods exploit the existence of visual similarities between different areas. In this way, the compression of different versions of the scene may be compared as an estimate of their relative complexity. Other processes may also be performed, such as processes which evaluate the distribution of colours and contrast values throughout an image as a proxy for image complexity.
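The compression-based estimate is straightforward to realise; the sketch below (one possible concrete choice, using lossless PNG encoding via Pillow) returns the compressed size as a fraction of the raw RGB size, with higher values suggesting a more complex image.

```python
# Compression ratio as a complexity proxy: busy images compress worse.
import io

from PIL import Image


def compression_complexity(image: Image.Image) -> float:
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="PNG")
    raw_size = image.width * image.height * 3  # uncompressed RGB bytes
    return buffer.tell() / raw_size
```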
Another alternative or additional process is one in which objects are omitted altogether in different versions of the scene; each of these versions may be compared with the original version (with all original objects present) and/or other versions to determine a degree of visual similarity. This degree of visual similarity may be determined using a machine vision based process, for example, or user feedback. Those objects which are found to be correlated with low differences to the degree of visual similarity can be assumed to be relatively low-impact upon the scene, such that these objects may be preferentially selected for replacement with a lower-complexity representation.
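As a stand-in for the machine-vision similarity measure mentioned above (which the disclosure leaves open), a plain per-pixel mean squared error between the full render and the render with one object omitted already gives a usable impact signal:

```python
# Per-pixel MSE between the full scene and the scene with one object
# omitted; low values mark low-impact objects that are safe to
# replace with simpler representations.
import numpy as np


def ablation_impact(full_scene: np.ndarray,
                    scene_without_obj: np.ndarray) -> float:
    """Inputs are (H, W, 3) uint8 image arrays of equal shape."""
    a = full_scene.astype(np.float64)
    b = scene_without_obj.astype(np.float64)
    return float(np.mean((a - b) ** 2))
```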
In each of these examples, or alternatives that may be implemented in accordance with these examples, it is considered that by generating a number of images of a scene and comparing them it is possible to derive information about the effect of individual modifications upon the content. In this manner, it is possible to determine appropriate modifications to be made on a per-object basis as the impact of such a modification is able to be more accurately predicted.
Figure 5 schematically illustrates an exemplary scene 500 to be displayed on a display device, with the scene being a candidate for a complexity reduction process in accordance with the processes described above. The scene comprises a number of elements, each of which may have a different level of complexity and importance within the scene, and a differing level of visual impact upon the user viewing the scene. As such, it may be considered advantageous that the display of the elements within the scene may be implemented in a non-uniform manner when seeking to reduce the overall complexity of the scene.
The scene 500 in Figure 5 includes an advertisement board 510, a tree 520, a shop 530 with corresponding signage 540, and a plant 550. As noted above, these are considered to be a selection of exemplary elements that could appear in a scene representing a range of different levels of importance, complexity, and visual impact; as such, the specific representations of these elements should not be considered limiting, and it should be appreciated that the general principles discussed here are applicable to any other virtual objects or elements as appropriate.
When viewing such a scene, it is considered that the main sources of visual complexity would likely be associated with the tree 520 and the signage 540; these objects are the most likely to comprise a complex texture (lots of leaves on the tree, for instance) or visual effect (a reflective or otherwise lit-up signage). It can therefore be considered that modifications of these objects would lead to the most significant reductions in overall scene complexity. Given the fact that the tree 520 is likely just background for the scene, a modification to this can be quite significant without overly harming the user experience; the original object can therefore be substituted with a low-complexity representation. A modification of the tree could be an increase in the leaf size or reduction of leaf numbers to reduce the complexity, for instance, or a replacement with a simplified tree. Alternatively, or in addition, the motion of the tree due to wind or the like may be omitted to make the scene more static over a predetermined number of image frames.
However, the signage 540 may be considered important as an indicator of the nature of the shop 530, and as such it is considered that this may be likely to be interacted with (viewed) by a user. In view of this, it may be considered that the complexity of this object should be maintained, and if the overall scene complexity is still too high then other objects may be reduced in complexity to compensate for maintaining the appearance of the signage 540. In this case, the plant 550 may be replaced with a lower-complexity representation (such as a simpler pot, or a flower with a simpler shape) despite the fact that it would likely have a much smaller impact on the overall complexity of the scene.
The advertisement board 510 is an example of an object which may be omitted from such modifications or prioritised in dependence upon the specifications of the user or content developer. For instance, a developer may specify (as part of their contract with a third party) that advertisements must always be displayed in their original form; equally, a user may be able to indicate that they wish to prioritise these objects for a complexity reduction due to their lack of interest. It is therefore apparent that factors other than the complexity and impact upon the scene may be considered when determining modifications to a scene.
Figure 6 schematically illustrates a system for generating images of a virtual environment for display, the system comprising an object identification unit 600, an evaluation unit 610, an optional representation generation unit 620, a representation obtaining unit 630, and an image rendering unit 640. While shown here as being collocated, it should be appreciated that these functional units may be implemented by processing hardware (such as CPUs or GPUs) located at any number of processing devices.
The object identification unit 600 is configured to identify one or more objects in the virtual environment that are to appear in an image to be rendered. For instance, the object identification unit 600 may be configured to identify objects in dependence upon a defined viewpoint and view direction within the virtual environment; alternatively, a sample image of a virtual environment may be used as an input for the object identification process.
The evaluation unit 610 is configured to identify which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment. This may be performed in accordance with any of the methods discussed with reference to steps 410 and 420 of Figure 4, for example, and/or step 300 of Figure 3.
In particular, it is noted that the evaluation unit 610 may be configured to identify objects having an above-threshold visual complexity for modification. This threshold may be determined based upon user inputs indicating a maximum level of complexity for objects or an overall image, for example. Alternatively, or in addition, the evaluation unit 610 may be configured to identify objects having an above-average visual complexity for modification, wherein the average is determined for each of the objects to appear in the image to be rendered or for the image as a whole. In some embodiments, the evaluation unit 610 is configured to determine objects to be modified in dependence upon a user indication of a desired overall image complexity; this overall image complexity can be used to define a 'complexity budget' or the like so as to inform the amount of complexity reduction that is required (and in turn, how many objects may need to be simplified and to what degree).
The optional representation generation unit 620 is configured to generate one or more representations of each of a plurality of objects, the plurality of objects including the identified objects. This may be performed in accordance with the discussion relating to Figure 3, for example. The alternative representations may be associated with one or more of a simplified shape, simplified texture, reduced colour variation, reduced motion, and/or reduced number of visual effects (such as reflectivity or brightness). Other factors, such as a contrast variation and edge complexity (a measure of how simple the outline of the object is), may also be considered.
In some embodiments, the representation unit 620 is configured to generate a plurality of representations each corresponding to a predefined level of complexity -for instance, three representations corresponding to low/medium/high complexity, with the original object corresponding to 'maximum' complexity. Alternatively, the representation unit 620 may be configured to generate a plurality of representations (such as ten, a hundred, or a thousand) from which a selection of representations is stored, such that each of the stored representations corresponds to a predefined level of complexity. In such embodiments, the number of levels of complexity for which representations are stored can of course be selected freely.
In some embodiments the representation unit 620 may be configured to perform an object recognition process on the generated representations and to discard any representations that are not recognised as being the same object as the object for which the representation is generated. In other words, representations are able to be discarded if they deviate from the original object so far as to no longer be reliably recognised as the same object; of course, this need not be a binary recognised/not-recognised determination, as it can be considered in reference to a confidence value generated as part of an object recognition process.
The representation obtaining unit 630 is configured to select an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object.
The image rendering unit 640 is configured to render an image of the virtual environment using the selected alternative representations. The image rendering unit 640 may be further configured to render a plurality of successive images of the virtual environment using the selected alternative representations. The plurality of successive images may correspond to all images rendered of the same virtual environment and/or the same objects, for instance -in other words, a determination of how the representations should be selected is not limited to a single image but instead can be used to inform the rendering of images for a predetermined duration of time, a predetermined number of frames, a particular selection of objects, a particular scene or game level, and/or the content as a whole.
The arrangement of Figure 6 is an example of a processor (for example, a GPU, TPU, and/or CPU located in a games console or any other computing device) that is operable to generate images of a virtual environment for display, and in particular is operable to: identify one or more objects in the virtual environment that are to appear in an image to be rendered; identify which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment; select an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object; and render an image of the virtual environment using the selected alternative representations.
Figure 7 schematically illustrates a method for generating images of a virtual environment for display.
While a specific order of the steps is shown, this is for illustration purposes only and it should be understood that the steps may be performed in any suitable order. For instance, the optional step 720 may be performed for a number of objects prior to step 700.
A step 700 comprises identifying one or more objects in the virtual environment that are to appear in an image to be rendered.
A step 710 comprises identifying which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment.
An optional step 720 comprises generating one or more representations of each of a plurality of objects, the plurality of objects including the identified objects.
A step 730 comprises selecting an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object.
A step 740 comprises rendering an image of the virtual environment using the selected alternative representations.
The techniques described above may be implemented in hardware, software or combinations of the two. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.
Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.

Claims (15)

  1. A system for generating images of a virtual environment for display, the system comprising: an object identification unit configured to identify one or more objects in the virtual environment that are to appear in an image to be rendered; an evaluation unit configured to identify which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment; a representation obtaining unit configured to select an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object; and an image rendering unit configured to render an image of the virtual environment using the selected alternative representations.
  2. A system according to claim 1, wherein the object identification unit is configured to identify objects in dependence upon a defined viewpoint and view direction within the virtual environment.
  3. A system according to any preceding claim, wherein the evaluation unit is configured to identify objects having an above-threshold visual complexity for modification.
  4. A system according to any preceding claim, wherein the evaluation unit is configured to identify objects having an above-average visual complexity for modification, wherein the average is determined for each of the objects to appear in the image to be rendered.
  5. A system according to any preceding claim, wherein the alternative representations are associated with one or more of a simplified shape, simplified texture, reduced colour variation, reduced motion, and/or reduced number of visual effects.
  6. A system according to any preceding claim, wherein the image rendering unit is configured to render a plurality of successive images of the virtual environment using the selected alternative representations.
  7. A system according to claim 6, wherein the plurality of successive images corresponds to all images rendered of the same virtual environment and/or the same objects.
  8. A system according to any preceding claim, comprising a representation generation unit configured to generate one or more representations of each of a plurality of objects, the plurality of objects including the identified objects.
  9. A system according to claim 8, wherein the representation unit is configured to generate a plurality of representations each corresponding to a predefined level of complexity.
  10. A system according to claim 8, wherein the representation unit is configured to generate a plurality of representations from which a selection of representations is stored, each of the stored representations corresponding to a predefined level of complexity.
  11. A system according to any of claims 8-10, wherein the representation unit is configured to perform an object recognition process on the generated representations and to discard any representations that are not recognised as being the same object as the object for which the representation is generated.
  12. A system according to any preceding claim, wherein the evaluation unit is configured to determine objects to be modified in dependence upon a user indication of a desired overall image complexity.
  13. A method for generating images of a virtual environment for display, the method comprising: identifying one or more objects in the virtual environment that are to appear in an image to be rendered; identifying which objects in the virtual environment are to be modified to reduce a complexity of images of the virtual environment; selecting an alternative representation of one or more of the identified objects, wherein the alternative representation has a lower complexity than the corresponding identified object; and rendering an image of the virtual environment using the selected alternative representations.
  14. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 13.
  15. A non-transitory machine-readable storage medium which stores computer software according to claim 14.

Priority Applications (1)

GB2215359.7A, filed 2022-10-18: Image generation system and method

Publications (2)

GB202215359D0 (en), published 2022-11-30
GB2623528A (en), published 2024-04-24

Family ID: 84818379

