GB2609197A - Virtual environment generation system and method - Google Patents

Info

Publication number
GB2609197A
Authority
GB
United Kingdom
Prior art keywords
map
virtual environment
generating unit
features
portions
Prior art date
Legal status
Pending
Application number
GB2110467.4A
Other versions
GB202110467D0 (en)
Inventor
Armstrong Calum
Current Assignee
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc
Priority to GB2110467.4A
Publication of GB202110467D0
Publication of GB2609197A

Classifications

    • A63F13/67 Generating or modifying game content adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/65 Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2207/20081 Training; Learning


Abstract

A system for generating a virtual environment comprises: a feature determining unit operable to determine one or more features to be associated with a virtual environment; a map generating unit operable to generate one or more portions of a map of the virtual environment in dependence upon at least one of the determined features, the map being generated so as to include the determined features; and a virtual environment generating unit operable to generate a virtual environment corresponding to at least a portion of the map, wherein the map generating unit is operable to utilise a trained machine learning model to generate the map portions.

Description

VIRTUAL ENVIRONMENT GENERATION SYSTEM AND METHOD
Field of Invention
The present invention relates to a system and method for generating a virtual environment.
Background
The creation and use of virtual environments has become increasingly popular in recent years, with the scope of the created virtual environments expanding considerably. Video games are one area where virtual environments are frequently used, with such virtual environments having become larger, more immersive, and/or more complex over time.
However, the creation of a virtual environment is a time-consuming and expensive process, often requiring a team of experienced designers. Further to this, the distribution of these virtual environments can lead to a significant data overhead, as the file sizes can be rather large; this is particularly true in more modern games or virtual environments in which a user expects a larger map and/or increased visual quality. While compression techniques can help to mitigate this problem, with large numbers of maps it can still be problematic.
Additionally, users often desire to create their own customised content in a pre-existing video game, such as customised challenges and levels. Furthermore, users often desire the ability to create their own virtual environments for their customised content. However, in order to create content of a comparable quality to that generated by the developers of a game, a significant burden is placed upon the user; designing such content may require a significant amount of skill and time, and this can limit the ability of a user to fully realise their content.
Some developers rely on procedural generation to generate a virtual environment. This can be advantageous in that a new map can be generated based upon a number of predefined rules or relationships between different elements, and thereby the number of possible maps can be increased without adding a significant overhead in terms of data distribution or developer time.
However, whilst a vast number of procedurally generated virtual environments are possible, their generation is frequently either too heavily constrained, resulting in a lack of variety, or too loosely constrained, resulting in jarring or discontinuous virtual environments that may also be unsuitable for the content for which they are generated.
One particular problem associated with procedural generation is the condition that may be known as 'procedural oatmeal', in which a large number of maps (or other content) can be generated which are all different but lack distinctiveness. The name derives from an analogy in which procedural generation is used to generate bowls of oatmeal: while each bowl may have a different arrangement of oats, the person eating the oatmeal will not be able to tell the difference. In the context of a virtual environment, this could lead to a situation in which a user is provided with a large number of different environments that each produce the same overall impression during gameplay.
Therefore while procedural generation has been identified as a solution to a number of problems associated with earlier arrangements for generating virtual environments, there are still outstanding problems that make the adoption of procedural generation undesirable in many circumstances.
It is in this context that the present disclosure arises.
Summary of the Invention
In a first aspect, a system for generating a virtual environment is provided in claim 1.
In another aspect, a method for generating a virtual environment is provided in claim 13.
Further respective aspects and features of the invention are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which:
- Figure 1 schematically illustrates an example virtual environment generation system;
- Figure 2 schematically illustrates an environment generation unit;
- Figures 3A, 3B and 3C each schematically illustrate an example generation of a map; and
- Figure 4 schematically illustrates a flowchart of a virtual environment generation method.
Description of the Embodiments
In the following description, a number of specific details are presented in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to a person skilled in the art that these specific details need not be employed to practise the present invention. Conversely, specific details known to the person skilled in the art are omitted for the purposes of clarity where appropriate.
With the significant time and high cost required to generate a virtual environment, it is desirable to provide a virtual environment generation system and method that may advantageously enable a map to be generated in dependence upon, and including, one or more features, and a virtual environment to be generated corresponding to the generated map. In the following discussion the terms 'map' and 'virtual environment' have in places been used interchangeably; in such cases, the term 'virtual environment' can be interpreted as referring to a virtual environment that would be generated in dependence upon the corresponding map.
Figure 1 schematically illustrates a system for generating and using a virtual environment in accordance with one or more embodiments of the present disclosure. This system comprises an input unit 100, an environment generation unit 110, and a display unit 120. The system of Figure 1 is configured to receive inputs and use these to generate and output maps and/or virtual environments generated in accordance with a generated map.
The input unit 100 is configured to receive one or more inputs from a user (such as the player of a game, or a developer of content comprising one or more virtual environments). The input unit may comprise one or more control devices (such as a gamepad or a keyboard and mouse), and the inputs may be provided to a different device from the one that performs the environment generation process. Alternatively, or in addition, inputs may be generated without a user input in dependence upon one or more predetermined conditions that can be used to specify the values of the inputs. The inputs are used to constrain the map generation process; examples of inputs are discussed below, but they can include specifying one or more objects or features to be included in a virtual map and/or environment.
The environment generation unit 110 is configured to generate one or more virtual environments (or representations of virtual environments) in dependence upon at least the inputs provided to the input unit 100. These representations of virtual environments can take any suitable format; one example is that of a map that comprises sufficient detail so as to enable the generation of an environment corresponding to that map. In some embodiments, the environment itself may be generated by providing this map to one or more further processes that are used to generate environmental details (for instance, smaller-scale features such as individual tree placement in a forest) in dependence upon map features.
The display unit 120 is operable to display one or more of the virtual environments generated by the environment generation unit 110. This display may include one or more rendering processes to generate images for display, or it may simply display images that are output by a device associated with the environment generation unit 110 or an intermediate device that executes a game (or other application) in dependence upon the output from the environment generation unit 110.
The system of Figure 1 may be implemented using any suitable combination of processing devices and/or other components. For instance, the input unit 100 may be associated with a first processing device (such as a personal computer used by a developer) while the environment generation unit 110 is implemented by a games console that is used to generate images of gameplay (comprising the generated environment or environments) for a player. In such an example, the display unit 120 may be used to generate and/or display the images of the gameplay; this may be a television or the like.
In some embodiments, one or more of the processing functions described above may be implemented using one or more server-based processing systems. This may be particularly advantageous in cloud gaming arrangements, for instance. In such cases, a server may be configured to receive inputs from a user to control the environment generation (which is performed by the server), with the environment being rendered as part of the gameplay that can then be streamed to the user's client device. Alternatively, each of the processing functions may be implemented by a single device and an associated display (such as a games console and television) or by a single device comprising each of the elements (such as a handheld games console).
In some embodiments, the images to be displayed to a user via the display unit 120 may be generated by an intermediate device that does not perform the environment generation; for instance, the environment generation unit 110 may be located at a server and the user may download the generated environment (or a representation of a generated environment) to a games console that is then operable to render the virtual environment as a part of a game.
Accordingly, turning now to Figure 2, a system 200 for generating a virtual environment is provided; this system 200 may correspond to the environment generation unit 110 of Figure 1, for example, although in some embodiments the processing performed by the system 200 may be distributed between different processing devices. The system 200 comprises a feature determining unit 210, a map generating unit 220, and a virtual environment generating unit 230.
The feature determining unit 210 is operable to determine one or more features to be associated with a virtual environment. These features may be determined based upon user inputs, for example, or upon one or more conditions or criteria that can be used to infer features that are to be associated with a virtual environment.
The map generating unit 220 is operable to generate one or more portions of a map of the virtual environment in dependence upon at least one of the determined features, the map being generated so as to include the determined features. The determined features are those that are determined for association with the virtual environment by the feature determining unit 210; the phrase 'determined features' is used throughout this disclosure to refer to features that are to be included in the generated map. Generation of the map may include selecting one or more additional features to be included in the map, as well as determining general characteristics of the map such as size or density of particular features.
The virtual environment generating unit 230 is operable to generate a virtual environment corresponding to at least a portion of the map, wherein the map generating unit 220 is operable to utilise a trained machine learning model to generate the map portions. In some instances, the virtual environment generating unit 230 may execute a rendering process associated with a video game; alternatively, the virtual environment generating unit 230 may generate a virtual environment in a format that enables its use in another application.
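By way of a non-limiting illustration, the division of responsibilities between the three units described above may be sketched as follows. All class names, method names, and the stub "model" are assumptions for illustration only, not an actual implementation; in practice the map generating unit would invoke a trained machine learning model rather than a trivial stub.

```python
# Illustrative sketch of the three units of system 200; all names and
# behaviours here are assumptions, not taken from the patent itself.

class FeatureDeterminingUnit:
    def determine_features(self, user_inputs):
        # Here, simply treat each user input as a feature to include.
        return list(user_inputs)

class MapGeneratingUnit:
    def __init__(self, model):
        self.model = model  # stand-in for a trained ML model

    def generate_map(self, features):
        # The model places each determined feature on the map; here
        # every feature is just given the position the stub returns.
        return {f: self.model(f) for f in features}

class VirtualEnvironmentGeneratingUnit:
    def generate_environment(self, map_portion):
        # A real implementation would render assets at each location.
        return [f"render {feature} at {pos}"
                for feature, pos in map_portion.items()]

# Usage with a stub "model" that assigns every feature the origin.
stub_model = lambda feature: (0, 0)
features = FeatureDeterminingUnit().determine_features(["mountain", "castle"])
game_map = MapGeneratingUnit(stub_model).generate_map(features)
env = VirtualEnvironmentGeneratingUnit().generate_environment(game_map)
print(env)
```

The pipeline shape (features, then map, then environment) mirrors the data flow between units 210, 220 and 230 described above.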
The arrangement of Figure 2 is an example of a processor (for example, a GPU and/or CPU located in a games console or any other computing device) that is operable to generate a virtual environment, and in particular is operable to: determine one or more features to be associated with a virtual environment; generate, utilising a trained machine learning model, one or more portions of a map of the virtual environment in dependence upon at least one of the determined features, the map being generated so as to include the determined features; and generate a virtual environment corresponding to at least a portion of the map.
Optionally, as indicated by the dashed outline in Figure 2, the system 200 may comprise an output unit 240 that is configured to output the one or more portions of the map of the virtual environment and/or the virtual environment. For example, the output unit 240 may output to a non-transitory storage medium, a display device, a client device, a transmission device, or any other suitable output device.
The determined features may be, for example, one or more significant features, which in some embodiments may be specified by a user as features to be included in the map generated by the map generating unit 220. A significant feature may be any feature that can be regarded as being distinctive within an environment; in other words, a significant feature may be a feature that differentiates a particular virtual environment from other virtual environments according to a suitable measure.
For instance, the distinctiveness or significance of a feature can be determined in dependence upon a visual impression upon a user, interactivity for a user, impact upon the virtual environment, the uniqueness of the feature within a particular context (such as the rarity of an object within a game), and/or one or more physical characteristics of the feature (such as whether the feature has a size that greatly exceeds that of other features within the environment).
Examples of determined features include: one or more locations for one or more "boss" type enemies, as these are often a defining feature of a virtual environment (such as a particular level in a game); specific terrain features such as a mountain, cave or canyon, which can be used to differentiate between different virtual environments; a specific type of terrain, referred to as a "biome", such as an ocean, a desert or a jungle, that is used to characterise the entire environment (and is therefore significant in terms of its impact upon the virtual environment); a set of buildings or virtual areas, which can be significant sources of interaction for a player in a game; or any other suitable feature.
While described above in the context of particular features or elements, the determined features can instead be provided in the form of a map segment that identifies the location of one or more features or elements. The map generation process can then generate further areas of map to surround (or otherwise incorporate) this map segment, and/or use it as an input to generate map portions that include one or more of the features and/or elements without including the map segment itself.
Alternatively, or in addition, the one or more determined features may be determined by the feature determining unit 210 based on one or more criteria provided to the feature determining unit 210. These criteria can be used to constrain the determination process so as to result in the selection of more suitable features for a given application; for instance, by increasing immersion (where the criteria relate to the environment, for instance) or by improving the relevance of the final content (where the criteria relate to activity within the generated map, for instance). For example, the criteria may be an amount of time that the virtual environment is to be used for, which may limit the size or complexity of the virtual environment, or a set of tasks that a user may complete in a virtual environment. The tasks may require specific features to be included in a map generated by the map generating unit 220.
For example, consider a set of tasks that comprises the following individual tasks: "gather cacti"; "talk to a monarch"; "sail a ship"; and "defeat a dragon". These tasks require the virtual environment to contain at least the following virtual assets to enable completion of the tasks: cacti to be gathered; a monarch to talk to; a ship to sail; a body of water to sail the ship on; and a dragon to be defeated. Therefore it is apparent that the provision of criteria (such as quests) can be used to identify features to be associated with a virtual environment.
Each of these virtual assets may have one or more associations with one or more other features or characteristics that may be included in a map generated by the map generating unit 220. There are multiple options for how the associations may be determined. For example, one or more of the associations may be predefined by a game developer. Alternatively, or in addition, one or more of the associations may be derived by the machine learning model when the machine learning model is trained.
For example, cacti may be associated with a desert environment as this is where they are often found, and it can in some cases be beneficial for a user's sense of immersion if there is a realistic link between flora/fauna and a selected biome. Similarly, a monarch may be associated with a palace, castle, or marching army, thereby representing both structures and non-player characters (NPCs) as features that can be associated with an asset. Both a ship and a body of water may be associated with a lake, reservoir, or ocean, for instance. A dragon may be associated with a mountain, a dragon's lair, or a cave as a common habitat, indicating that the presence of a particular enemy can also introduce constraints upon the generated map.
Therefore, the feature determining unit 210 may determine, based on the associations between the one or more features that may be included in a map generated by the map generating unit 220 and the virtual assets required by the virtual environment to enable the user to complete the set of tasks, that a map generated by the map generating unit requires the following set of features to enable a user to complete this set of tasks in the virtual environment: a desert to enable cacti to be gathered; a palace where the monarch may reside; an ocean to enable a ship to be sailed; and a dragon's lair where a dragon may be found.
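By way of a non-limiting illustration, the derivation of required map features from a set of tasks via asset associations, as in the worked example above, may be sketched as follows. The association tables and function names are illustrative assumptions; as noted above, such associations may equally be developer-defined or learned by a trained model.

```python
# Illustrative sketch: deriving required map features from tasks.
# Both lookup tables are assumptions based on the worked example.

TASK_ASSETS = {
    "gather cacti": ["cacti"],
    "talk to a monarch": ["monarch"],
    "sail a ship": ["ship", "body of water"],
    "defeat a dragon": ["dragon"],
}

ASSET_FEATURES = {
    "cacti": "desert",
    "monarch": "palace",
    "ship": "ocean",
    "body of water": "ocean",
    "dragon": "dragon's lair",
}

def determine_features(tasks):
    # Collect the map feature associated with each required asset,
    # de-duplicating (e.g. ship and body of water both need an ocean).
    features = []
    for task in tasks:
        for asset in TASK_ASSETS.get(task, []):
            feature = ASSET_FEATURES[asset]
            if feature not in features:
                features.append(feature)
    return features

tasks = ["gather cacti", "talk to a monarch", "sail a ship", "defeat a dragon"]
print(determine_features(tasks))
```

Running this yields the feature set described above: a desert, a palace, an ocean, and a dragon's lair.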
In some embodiments of the present disclosure, a position of each of the determined features may be specified, by a user for example, before the generation of the map. In this case, the map generating unit 220 generates the map so as to include each of the determined features at its respective specified position. In some cases, a tolerance on the location may be provided so as to enable the map generation process to change the location of a determined feature where appropriate; for instance, if different features are too close together to enable a realistic map to be generated, then one or more of the features could be moved to overcome this.
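By way of a non-limiting illustration, respecting user-specified positions within a tolerance may be sketched as follows. The function name, the minimum-separation rule, and the strategy of nudging a feature along one axis are all assumptions; a real map generation process could relocate features in any suitable manner.

```python
# Illustrative sketch: placing features at requested positions, nudging
# them apart when they are too close, within an allowed tolerance.

import math

def adjust_positions(features, min_separation=2.0, tolerance=5.0):
    """features: dict of name -> (x, y) requested position."""
    placed = {}
    for name, (x, y) in features.items():
        nx = x
        # Nudge right until far enough from everything already placed,
        # but never move further than the allowed tolerance.
        while any(math.dist((nx, y), p) < min_separation
                  for p in placed.values()):
            if abs(nx - x) + 1.0 > tolerance:
                break  # give up; keep the closest allowed position
            nx += 1.0
        placed[name] = (nx, y)
    return placed

# The village is requested only 1 unit from the castle, so it is moved.
result = adjust_positions({"castle": (0.0, 0.0), "village": (1.0, 0.0)})
print(result)
```

Here the village ends up at (2.0, 0.0), the nearest position satisfying the separation constraint while staying within the tolerance of its requested location.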
In some embodiments of the present disclosure, a generative adversarial network (GAN) may be utilised, in which a generative network is trained to generate map portions (from one or more determined features) that produce a desired result when they are evaluated by a discriminative network that is trained to evaluate the suitability of one or more maps. The discriminative network may be trained using a database of maps, where each map may, for example, be classified as "suitable" or "not suitable". Other classifications for each map in the database may also, or alternatively, be used, such as a suitability score, where each map is assigned a numerical value that represents its suitability.
A suitability score (or rating) may be defined in a number of suitable ways, as appropriate for a particular implementation. In other words, the suitability of a map may vary in dependence upon a particular application for that map; this can be the genre of a game for which the map is to be used or the scenario in which the map is to be used within a game (such as a boss fight or an open-world exploration segment of a game).
The suitability itself can be determined in dependence upon a number of features of the map that relate to characteristics that are or are not desirable for a particular implementation. For instance, a particular environment may be desired in which there are no big cities (such as in a game set in the past); in such a case, a suitability score may comprise an assessment of the density and/or scale of cities or towns present on the map. Similarly, a virtual environment may be desired that has particular geographic features, in which case the suitability score can be determined in dependence upon whether those features are present; for a game set in the Himalayas, for example, a map devoid of mountains would have a low suitability score.
Furthermore, the classification of the suitability of each map in the database may be variable in dependence upon the type of map that the GAN is to be trained for. For example, some maps may be classified as "suitable" when a GAN is trained to generate maps intended for a platforming video game, whilst being classified as "unsuitable" when a GAN is trained to generate maps intended for an open world role playing game (RPG). This is because each of these genres has different requirements for a map due to differing types of gameplay; an open world game may require a larger map, for instance, or a greater number of areas for interacting with non-player characters.
Of course, the suitability score could be derived based upon consideration of a number of different parameters rather than only a single parameter (such as the presence of a geographical feature). In the example of using a GAN, the discriminator may be trained without any explicit knowledge of which features are considered to be indicative of particular suitability for a given application; in such a case, an existing dataset may be provided that indicates which maps would be suitable and which would be unsuitable. From this, the discriminator could be trained to derive which parameters are indicative of suitability and therefore determine the suitability of input maps from the generator.
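By way of a non-limiting illustration, the shape of the adversarial arrangement described above may be sketched as follows. The "networks" here are trivial stand-ins so that the sketch runs without an ML framework; in practice both the generator and the discriminator would be trained neural networks, with the loss backpropagated through each.

```python
# Schematic stand-in for a GAN training step for map generation.
# Neither function is a real network; this shows only the data flow.

import random

def generator(features, noise):
    # Stand-in: produce a "map" as a list of (feature, position) pairs.
    return [(f, noise + i) for i, f in enumerate(features)]

def discriminator(candidate_map, weights):
    # Stand-in: a suitability score in [0, 1]; a real discriminator
    # would be trained on maps labelled suitable / not suitable.
    return min(1.0, weights * len(candidate_map) / 10.0)

def train_step(features, weights):
    noise = random.random()
    candidate = generator(features, noise)
    score = discriminator(candidate, weights)
    # The generator's loss falls as the discriminator is "fooled";
    # real training would update both networks' parameters here.
    generator_loss = 1.0 - score
    return candidate, generator_loss

candidate, loss = train_step(["desert", "palace", "ocean"], weights=2.0)
print(len(candidate), round(loss, 2))
```

The key point illustrated is that the generator is optimised against the discriminator's suitability judgement rather than against any explicit rule about which map features are desirable.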
Alternatively, or in addition, a reinforcement learning model may be used in which a reward is determined based upon one or more characteristics of the generated map. These characteristics can be selected in any suitable way; for instance, to represent realism (such as a measure of whether the map could pass for a real-world location), non-uniformity (ensuring that the map isn't too homogeneous), playability (a measure of how well the user can interact with the map and complete quests), and/or navigability (for instance, whether there are impassable obstacles that prevent a user from accessing parts of the map).
For example, in a training phase of the reinforcement learning model, a reward function may provide rewards when a generated map has similar characteristics compared to one or more comparison maps, where the comparison maps are preselected prior to training the reinforcement learning model. For example, one of the characteristics to be compared may be the density of features on a map, which may be an average density of features across the entire map. Optionally, a higher resolution approach to the density of features on a map may be used, such as comparing a heat-map of the density of features on a map.
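By way of a non-limiting illustration, a reward based on comparing feature-density heat-maps, as described above, may be sketched as follows. The grid size, map extent, and reward formula are illustrative assumptions.

```python
# Illustrative sketch: reward a generated map for having a similar
# feature-density heat-map to a preselected comparison map.

def density_heatmap(features, grid=4, extent=16):
    """features: list of (x, y) positions on a square map of the given
    extent; returns a grid x grid matrix of feature counts per cell."""
    cell = extent / grid
    heat = [[0] * grid for _ in range(grid)]
    for x, y in features:
        heat[min(int(y // cell), grid - 1)][min(int(x // cell), grid - 1)] += 1
    return heat

def reward(generated, comparison, grid=4):
    a, b = density_heatmap(generated, grid), density_heatmap(comparison, grid)
    # Reward is highest when the two density distributions match.
    diff = sum(abs(a[i][j] - b[i][j]) for i in range(grid) for j in range(grid))
    return 1.0 / (1.0 + diff)

comparison_map = [(1, 1), (2, 2), (13, 13)]
similar_map = [(0, 0), (3, 3), (14, 14)]     # same cells, different spots
dissimilar_map = [(15, 1), (15, 2), (15, 3)]  # clustered elsewhere
print(reward(similar_map, comparison_map))
print(reward(dissimilar_map, comparison_map))
```

Comparing heat-maps rather than exact positions rewards a similar overall distribution of features without requiring the generated map to copy the comparison map, which fits the non-uniformity goal mentioned above.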
In some embodiments of the present disclosure, the machine learning model may be trained using one or more maps of real and/or virtual environments as inputs. For example, the machine learning model may be trained using current or historical maps of real world cities and towns, maps used in video games, maps used in fantasy books, and/or any other suitable real or fictional maps. In some embodiments, a model may be trained specifically upon fantasy world maps (for example) as this may lead to improved map generation for corresponding applications (that is, the generation of maps for further fantasy-themed environments). In other words, the source of maps for training a model may be selected in accordance with the type of map that is intended to be generated.
The training of models may be performed with any desired level of granularity in respect of the intended usage. For example, in some cases a model may be trained that is intended to be used for a specific game, while in other cases the model may be trained so as to instead be used to generate maps for a series of games or an entire genre. Similarly, a model could be trained for a more specific purpose, so as to only generate maps for indoor environments, or a particular biome within a game or the like.
In the case that a model is intended to be used for more than a single game, it is considered that it may be challenging to identify assets to be rendered in a virtual environment corresponding to the map. This is because each game will have its own set of assets, which may comprise different elements and the like. In such cases, the generated map may instead use descriptors to identify different map areas or features rather than identifying specific assets. For instance, rather than specifying the location of a particular tree model, the map may specify that a tree is to be rendered at a particular location. Upon rendering, the game engine can then identify a particular tree model in accordance with this.
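By way of a non-limiting illustration, a descriptor-based map entry and its resolution by two different games may be sketched as follows. All descriptor names, asset names, and table layouts are illustrative assumptions.

```python
# Illustrative sketch: the map names feature types ("descriptors")
# rather than specific assets; each game engine resolves a descriptor
# to one of its own models at render time.

portable_map = [
    {"descriptor": "tree", "position": (10, 4)},
    {"descriptor": "castle", "position": (50, 50)},
]

# Each game supplies its own descriptor -> asset lookup table.
FANTASY_GAME_ASSETS = {"tree": "oak_model_03", "castle": "keep_model_01"}
SCIFI_GAME_ASSETS = {"tree": "alien_flora_07", "castle": "fortress_dome_02"}

def resolve(game_assets, entry):
    # Map the generic descriptor to this game's concrete asset.
    return (game_assets[entry["descriptor"]], entry["position"])

print([resolve(FANTASY_GAME_ASSETS, e) for e in portable_map])
print([resolve(SCIFI_GAME_ASSETS, e) for e in portable_map])
```

The same generated map thus renders with entirely different assets in each game, which is what allows a single trained model to serve multiple titles.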
Optionally, each of the one or more maps that may be used as inputs may be associated with one or more tags. These tags can be used to characterise the input data set so as to increase the efficiency and/or effectiveness of the training process. These tags can be used to describe features of the maps such as a categorisation of the map itself (such as indoor/outdoor), the source of the map, one or more significant features of the map (such as 'contains rivers'), and/or one or more uses of the map (such as 'boss level in an action game'). These tags can in some cases be used to identify one or more characteristics that may not be derivable from the map itself, or would not necessarily be easy to derive.
For example, the one or more tags may describe the area type of a map such as "interior", "urban", "rural", "mountainous", or any other suitable map area type or biome. Alternatively, or in addition, one or more tags may describe specific features or objects that are present on a map such as "trees", "forests", "buildings", "roads", "rivers", "castle", or any other suitable map feature. Further optionally, the one or more tags that describe features that are present on a map may also include position information, which describes where the one or more features that correspond to each tag are present on the map. This may assist in further characterising the map, as the relative positions of elements within the map may be of significance in a given context.
Other tags may describe the source of a map such as "real world", "fantasy", "video games", "films", or any other suitable source of a map. Some tags may describe additional information that may be included in a map such as a terrain elevation map or a resource deposit map. The above examples of one or more tags that may be associated with each of the one or more other maps that may be used as inputs are not exhaustive and any other suitable tags may be used to categorise the other maps.
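As an illustrative sketch of how such a tagged input data set might be structured (the record layout and field names below are assumptions for illustration, not a format specified by the disclosure), a single training example could combine the map data with its tags, including position information for feature tags:

```python
# One tagged training example: map data plus the categorisation tags
# described above (area type, source, features with positions, uses).
training_example = {
    "map_id": "map_0042",
    "heightmap": [[0.1, 0.3], [0.2, 0.9]],  # toy 2x2 elevation grid
    "tags": {
        "area_type": ["rural", "mountainous"],
        "source": "fantasy",
        "features": [
            {"name": "river", "positions": [(0, 1), (1, 1)]},
            {"name": "castle", "positions": [(1, 0)]},
        ],
        "uses": ["boss level in an action game"],
    },
}

def maps_with_tag(dataset, feature_name):
    """Select training maps whose feature tags include a given feature."""
    return [m["map_id"] for m in dataset
            if feature_name in (f["name"] for f in m["tags"]["features"])]
```

A filter such as `maps_with_tag` could then be used to assemble a more focused training subset (for example, only maps that contain rivers).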
While the above discussion has focused upon the use of a single trained model for generating a map, in some embodiments it may be considered advantageous to use a plurality of trained models for generating a map. For instance, a first model may be used to generate a map comprising large-scale features (such as woodland, cities, mountains, and oceans), while a second model may be used to populate areas of that generated map. For example, the second model could be trained to specifically generate cities (such as by arranging roads and buildings in a particular manner) and this can be used to add detail to the map generated by the first model which specifies only that a city exists. This can be advantageous in that the use of two separate models can enable a more specific training for providing specific features, which can improve the generated maps. Of course, more than two models could be used in a particular implementation; for instance, a third model could be used to generate a particular arrangement of trees and paths in the woodland.
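The two-stage arrangement above can be sketched as follows. The stand-in functions here are hypothetical placeholders for the two trained models (the disclosure does not specify their interfaces); the point of the sketch is only the control flow, where cells that the coarse model marks as a city are handed to the city-specific model for detailing:

```python
def coarse_model(features):
    """Stand-in for the first trained model: lays out large-scale
    features only, on a small grid (hypothetical output format)."""
    grid = [["plain"] * 4 for _ in range(4)]
    grid[1][2] = "city"  # the model decides that a city exists here
    return grid

def city_model(seed):
    """Stand-in for the second, city-specific trained model."""
    return {"roads": ["main_street"], "buildings": ["inn", "stable"], "seed": seed}

def generate_map(features):
    """Coarse pass first, then detail each city cell with the second model."""
    grid = coarse_model(features)
    detail = {}
    for y, row in enumerate(grid):
        for x, cell in enumerate(row):
            if cell == "city":
                detail[(x, y)] = city_model(seed=x * 31 + y)
    return grid, detail
```

A third, equally specialised model (e.g. for woodland layout) would slot into the same loop in the same way.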
Alternatively, or in addition, trained models can be used in conjunction with other methods for generating a complete map. For example, a first model may specify that a city exists and this city may be generated for the map by importing an existing model (such as if the same model, such as a capital city in a virtual world, is intended to be used for all players of a game) or a city generation process (such as a process that generates a city map using a random seed). The use of existing models can be advantageous in ensuring that a particular level of uniformity of gameplay (such as having the same quest environments or NPC interactions) is experienced by players of a game, for instance.
In some embodiments of the present disclosure, the machine learning model may be trained using one or more other virtual environments (rather than maps) as an input. For example, the machine learning model may be trained using locations of one or more of: objects; regions; areas; biomes; paths; or any other suitable features in a respective virtual environment. The one or more virtual environments may be virtual environments used in video games for example. This may be distinct from the use of maps as an input as it can enable the identification of particular features directly, and in some cases on a smaller scale (for instance, a virtual environment would contain individual trees while a corresponding map may only identify that a forest exists).
Rather than being limited to a map generation process in which determined features are input to generate a map, it is considered that alternative inputs may also be provided. It is also considered that further characteristics of a map and/or a virtual environment corresponding to a map can be derived from inputs.
For instance, in some embodiments of the present disclosure, the map generating unit 220 may generate the one or more portions of the map in dependence upon one or more input images in addition to the determined features. In some cases, this process may comprise identifying one or more features or objects within the image and using these features as the determined features discussed previously. This is an example of a case in which the inputs are not explicitly the features themselves, but can also comprise representations of those features. Further examples include text-based (or other) descriptions of features (such as the name of a particular feature or element) or a desired environment (such as a particular biome).
In some embodiments the input images (or other inputs) may be used to determine characteristics of the map (or a corresponding virtual environment) other than the location of features. For instance, an image may be a suitable input for specifying a preferred colour palette; this can increase a probability that colours in the preferred colour palette are used over other colours by the map generating unit 220. The preferred colour palette can be indicated using metadata associated with the map, for example, or can be used to directly influence the placement of features or the like. For instance, if a preferred colour palette includes lots of greens then a more forest-rich map may be generated; similarly, a more orange preferred colour palette can lead to an increased number of deserts in the map.
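A minimal sketch of how a palette could bias feature placement is given below. The heuristic itself (counting dominant-hue pixels and shifting biome weights linearly) is an assumption for illustration; the disclosure leaves the exact mechanism open:

```python
def biome_weights_from_palette(pixels):
    """Bias biome probabilities toward the dominant hue of an input image.

    `pixels` is an iterable of (r, g, b) tuples; greener palettes raise the
    forest weight and more orange palettes raise the desert weight
    (hypothetical linear weighting for illustration).
    """
    greens = sum(1 for r, g, b in pixels if g > r and g > b)
    oranges = sum(1 for r, g, b in pixels if r > g > b)
    total = max(len(pixels), 1)
    return {
        "forest": 0.3 + 0.4 * greens / total,
        "desert": 0.2 + 0.4 * oranges / total,
    }
```

The resulting weights could then be used by the map generating unit when sampling which biome to place in each region.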
As another example, the input images may provide a theme to be used in the generation of the map, such as the style of architecture used for buildings generated in one or more regions of the map. This information can be implemented in a similar manner to the colour palette information already described.
Similarly, in some embodiments of the present disclosure, the map generating unit 220 may identify one or more visual characteristics for association with each of one or more map portions or parts thereof. Visual characteristics associated with a map are those characteristics that influence the display of the features that are present on the map; it is considered that the same map can have a number of distinctive variations in appearance by varying the visual characteristics. In some cases, the visual characteristics may comprise one or more suitable textures, a colour palette, and/or a representative image.
Whilst specific examples for the type of machine learning model utilised by the map generating unit 220 have been given above, the machine learning model is not limited to these examples and any other appropriate type of machine learning model may be used.
Turning now to figures 3A, 3B and 3C, these figures each illustrate an example of a map generated by a map generating unit 220 operable to generate one or more portions of a map of the virtual environment in dependence upon at least one determined feature, the map being generated so as to include the determined features, and operable to utilise a trained machine learning model to generate the map portions.
In figure 3A, and similarly for figure 3B and figure 3C in turn, the box on the left shows the one or more determined features, whilst the box on the right shows a map of a virtual environment where one or more portions of the map may be generated by the map generating unit 220, utilising a trained machine learning model, in dependence upon at least one of the determined features. Each of figures 3A, 3B and 3C will now be described in turn.
In the example of figure 3A, there are two determined features: a mountain 310 and an ocean 315.
These determined features may be provided in dependence upon the quests defined for a player; for instance, if a player is required to steal gold from a dragon and then escape on a boat then it may be inferred that a mountain is required (as the setting for a dragon's lair) as is an ocean (as required for the provision of a boat). While shown to be at particular locations in the image on the left of Figure 3A, it is also possible that the determined features are specified without a location and that the map generation process determines a suitable location for these features.
In dependence upon these determined features, the map generating unit 220 utilises a trained machine learning model to generate one or more of the portions of a map of a virtual environment, which is shown in the box on the right-hand side of figure 3A. This map includes the determined features (the mountain 310 and ocean 315), and, in this example, the position of the determined features is specified before the generation of the map.
The generated map portions also include four settlements, which have been illustrated as circles in figure 3A. The number and locations of these settlements may be determined by the map generating unit 220; the number of settlements and their locations may be influenced by the determined features in that mountain settlements are less common than those near the ocean or other waterways, for example. Of course, the specific distribution of the settlements is determined by a trained model that may be trained so as to consider a number of different characteristics that can be used to constrain the placement. The map generating unit may also generate roads to connect the settlements, which have been illustrated as the dotted lines in figure 3A, as it is useful for travel between the settlements to be possible. Other examples of map portions generated by the map generating unit 220 are hills, which are illustrated to the left of the mountain 310 in this example, and a forest, which is shown in the bottom right-hand corner of the map shown in figure 3A.
The final map portion that is illustrated in this example is a river. The river is shown by the solid lines that start by the mountain 310 and the hills and end at the ocean. The river in this example is formed by three tributaries, which start in the mountain 310 and the hills, and the river mouth and delta are formed at the edge of the ocean 315.
During training, the machine learning model may identify, from an input data set, a relationship between rivers and other map features. For example, the relationship may be that rivers are likely to start in map portions with high elevation, such as the mountain 310, and flow to areas of lower elevation whilst ending at a large body of water, such as the ocean 315. Therefore, the map generating unit may determine the placement of the river in dependence upon the position of the mountain 310, the ocean 315 and the identified relationship. Such a relationship may also be pre-defined, rather than learned through training, as an example of a constraint that ensures that realistic rivers (or other features) can be generated. This may be particularly appropriate in the case of constraints that may not be easy to derive based upon maps, such as the distance for which a river can be straight being no more than ten times its width.
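The two river constraints mentioned above (downhill flow toward a body of water, and a bound on straight sections) can be sketched as a simple validity check that a map generator might apply to candidate river paths. The function and data layout below are illustrative assumptions, not part of the disclosed system:

```python
def river_is_plausible(path, elevation, straight_run, width):
    """Check a candidate river path against two pre-defined constraints:
    elevation must be non-increasing along the path (water flows downhill),
    and no straight section may exceed ten times the river's width.

    `path` is a list of map-cell identifiers; `elevation` maps each
    identifier to a height value; `straight_run` is the length of the
    longest straight section of the candidate path.
    """
    downhill = all(elevation[a] >= elevation[b] for a, b in zip(path, path[1:]))
    return downhill and straight_run <= 10 * width
```

A generator could reject (or regenerate) any river that fails this check, regardless of whether the placement itself came from a trained model.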
In some embodiments of the present disclosure, the map generating unit 220 may determine one or more parameters for the river portion of the map. One or more of these parameters may indicate the motion of water in the river within the virtual environment. For example, the parameter may indicate that the water flows from the mountain 310 to the ocean 315.
One or more additional inputs may also be used to influence the map generation; for instance, in addition to the determined inputs 310 and 315, a player quest may be considered to generate further features of the map. For example, the forest may be generated in response to a player having a quest to obtain wood without the forest being necessarily specified as a determined input. This is because it may be considered that the location of a forest in the map is less significant than the determined features (mountain and ocean), for example, or that the forest may be easier to place in a map due to having a smaller impact on the surrounding area.
The river flow parameter described above provides an example where the map generating unit 220 may determine one or more parameters for a portion of the map that modify the motion of one or more dynamic elements within the virtual environment corresponding to that portion of the map.
In the example of figure 3B, there is one determined feature: a location of a boss-type enemy (boss location) 320. In dependence upon the boss location 320, which may be specified by a user, the map generating unit 220 utilises a trained machine learning model to generate one or more of the portions of a map of a virtual environment, which is shown in the box on the right of figure 3B. This map includes the determined feature (the boss location 320).
In this example, the map generating unit 220 also generates one or more rooms of various sizes, including a room that contains the boss location 320. A specific example of the map generating unit 220 generating a room in dependence upon the determined feature (the location of the boss type enemy 320) is provided by a room that contains a treasure chest 325, which may contain items or currency. The map generating unit may place this room in a position that requires a user to pass through the boss location 320 in order to access the room with the treasure chest 325.
The locations and sizes of each of the rooms can be determined freely by the model, with the only necessary constraint being that a path must exist between a user's start point and the boss location (and/or that each of the rooms is accessible by a user). Further constraints may also be considered, or parameters may be learned during a training process, that limit the size of rooms or the like in response to different inputs. For instance, it may be considered that rooms which are too small are impractical for fighting and as such the training data set (or a minimum room size parameter) may be selected so as to provide a minimum room size. The map generating unit 220 also generates a corridor portion that includes a trap 330, which is indicated by the shaded portion. The map generating unit 220 may also generate one or more enemy locations 335, which are indicated by the lightning bolts.
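The two constraints described above (a minimum room size and reachability of the boss room from the start point) can be sketched as a validity check over a candidate layout, for example using a breadth-first search over the room graph. The graph representation and the specific minimum area are illustrative assumptions, not part of the disclosed system:

```python
from collections import deque

MIN_ROOM_AREA = 9  # hypothetical minimum: very small rooms are impractical for fighting

def layout_is_valid(rooms, corridors, start, boss):
    """Accept a dungeon layout only if every room meets the minimum size
    and the boss room is reachable from the start via corridors.

    `rooms` maps room names to (width, height); `corridors` is a list of
    (room_a, room_b) connections.
    """
    if any(w * h < MIN_ROOM_AREA for (w, h) in rooms.values()):
        return False
    # Build an undirected adjacency from the corridor list.
    adjacency = {room: set() for room in rooms}
    for a, b in corridors:
        adjacency[a].add(b)
        adjacency[b].add(a)
    # Breadth-first search from the start point toward the boss room.
    seen, queue = {start}, deque([start])
    while queue:
        room = queue.popleft()
        if room == boss:
            return True
        for nxt in adjacency[room] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False
```

A map generator could regenerate any candidate layout that fails this check, whether the constraint is imposed explicitly or reflected in the training data.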
As noted above, as a part of the map generation process the location of a number of enemies (as indicated by the icons 335) may be determined. However, in some embodiments it is considered that this placement of enemies is instead determined by a particular game upon rendering of the virtual environment or input of the map. Rather than indicating enemies specifically, it is also considered that the map generation process may instead define spawn points for enemies (such as a particular location or room) and the enemies are generated as desired for the game setting.
When considering the location and/or number of enemies that are to be provided, it may be appropriate to factor in the difficulty of the game. This is because different difficulties may be associated with different enemy distributions. While in many cases this is something that may be determined by the game engine itself, in some embodiments a map may be tailored for a particular difficulty. In addition to this, the difficulty distribution throughout the map may also be considered; for instance, the difficulty may be desired to be higher nearer to a boss, and as such more (or harder) groups of enemies may be generated in accordance with this.
In some cases the map generation process may include the generation of physical challenges to be navigated by a user; examples include a room comprising a number of platforms for a user to jump between. The existence of such rooms may be defined as a predetermined input, for example, or may be influenced by the training of the model.
Additionally, the map generating unit 220 may generate the one or more rooms so that the rooms closer to the boss location 320 increase in difficulty in comparison to rooms that are further away from the boss location. The increase in difficulty may be provided by, for example, increasing the number or level of enemies present in each room, or the frequency and level of traps. This provides further examples of the map generating unit 220 generating one or more portions of a map of a virtual environment in dependence upon at least one determined feature. In addition to this, the example discussed with reference to Figure 3B can be indicative of game-specific generation of a map in that the relationships between enemy locations and the like can be determined in a game-specific manner.
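A difficulty gradient of this kind can be sketched as a simple function of a room's distance from the boss location. The linear falloff and the specific level bounds below are illustrative assumptions; the disclosure leaves the exact scaling open:

```python
def enemy_level(distance_to_boss, base_level=1, max_level=10):
    """Return an enemy level for a room, scaled so that rooms nearer the
    boss location are harder (hypothetical linear falloff, clamped to a
    minimum base level)."""
    return max(base_level, max_level - distance_to_boss)
```

The same shape of function could equally govern the number of enemies per room or the frequency of traps, as described above.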
Furthermore, the locations of the treasure chest 325, the trap 330 and the one or more enemies 335 provide examples where the map generating unit may determine the locations of one or more non-terrain elements within the generated map portions, wherein the one or more non-terrain elements may comprise one or more non-playable characters, quests, and/or interactive elements.
In the example of figure 3C, there is one determined feature: a castle 340. However, in this example the location of the castle has not been specified prior to the generation of the map. In dependence upon the castle 340, the map generating unit 220 utilises a trained machine learning model to generate one or more of the portions of a map of a virtual environment, which is shown in the box on the right of figure 3C. This map includes the determined feature (the castle 340). The machine learning model associated with this map generation is an example of a model that is trained for generating a specific type of environment; that is, the model here is trained to generate a city, and as such a corresponding training data set (comprising maps of cities of a desired style) would be used to generate this model.
One of the portions of the map, generated by the map generating unit 220, is the forest in the top left-hand corner. In the example of figure 3C, other portions of the map, generated by the map generating portion, include locations of: roads or paths, which are illustrated by the dotted lines; one or more residential buildings; one or more non-player characters (NPCs) 345; an inn 350; a stable 355; and a quest giving NPC 360, which a user may interact with in the virtual environment to receive one or more quests or tasks.
It is further noted that the city generation shown in Figure 3C could be used to provide one of the settlements indicated in Figure 3A; this is an example of an embodiment in which two (or more) trained models can be used in association to generate a map. In other words, the map generated in Figure 3A can specify the location of a settlement without generating the settlement itself, while the model that is used to generate the city in Figure 3C could be used to generate that settlement. This may result in an improved city generation for the map in Figure 3A, as the use of a more specific model for generating cities would be expected to generate better results than a more general model.
In some embodiments of the present disclosure, the virtual environment generating unit 230 may generate a virtual environment corresponding to at least a portion of the map by, for example, generating terrain for the virtual environment corresponding to a terrain indicated by at least that portion of the map. The virtual environment generating unit may then insert one or more virtual assets into the virtual environment; one or more of the inserted virtual assets may correspond to one or more features indicated on the map, such as buildings, trees or NPCs.
The virtual assets that are discussed here include one or more models and/or textures that are used when rendering a virtual environment corresponding to a generated map. These may be accessible during the map generation process, or may only be identified (for instance, by a label or tag indicating an object that is located at a specific map position) so that the correct asset (or an appropriate asset, in the case that multiple assets correspond to the same label) can be used at the time of rendering the corresponding virtual environment. It is therefore considered that the virtual assets may be those provided in game data or are otherwise specific to a particular application.
Optionally, one or more other inserted virtual assets may not be directly shown on the map; instead, the map may indicate that one or more regions of the virtual environment include such assets. For example, the individual trees of a forest may not be shown on the map, but their presence may be indicated by a forest region of the map. This may advantageously enable the map generating unit 220 to be trained with a focus on large-scale features, such as the location of a forest, instead of smaller-scale features, such as the placement of individual trees.
In some embodiments of the present disclosure, the map generating unit may assign one or more labels to each of one or more portions of the map. The labels may indicate at least a biome to be associated with a respective portion of the map. In these embodiments, the virtual environment generating unit may determine suitable features within the environment in response to those labels.
In the case of figure 3A for example, the map generating unit may assign a label to the bottom right-hand portion of the map that indicates that the portion is a forest biome. The virtual environment generating unit may then determine that trees are a suitable feature within this portion of the map in response to the forest biome label. The virtual environment generating unit may then insert one or more trees in the area of the virtual environment corresponding to the forest region of the map.
In some examples, it is also considered that such an approach may lead to variations in the gameplay by a user and not just the appearance. This is because the arrangement of trees may result in both an aesthetic change and a navigational change -different paths may be defined through a forest in dependence upon the location. This can therefore lead to a gameplay change for a user of a first virtual environment relative to the gameplay of a user of a second virtual environment generated from the same map.
One advantage of this feature, for example, is that it may enable a plurality of different, albeit similar, virtual environments to be generated based on a single map of a virtual environment. This therefore leads to an increase in the number of virtual environments that are able to be generated using the same base data, thereby increasing the efficiency of the map generation as only a single map need be generated to provide a number of players with unique experiences.
As another example (not illustrated), the map generating unit may assign a label to a portion of a map that indicates that the portion is a desert biome. The virtual environment generating unit 230 may then determine that cacti and sand dunes are suitable features within this portion of the map in response to the desert biome label. This determination by the virtual environment generating unit 230 may be based on one or more associations, as described elsewhere herein, between features and biomes for example. The virtual environment generating unit may then insert one or more cacti and one or more sand dunes in the area of the virtual environment corresponding to the desert region of the map.
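This label-driven population of a region, including the variation between environments generated from the same map, can be sketched as follows. The biome-to-feature table and the scatter heuristic are illustrative assumptions; the disclosure only requires that some association between labels and suitable features exists:

```python
import random

# Hypothetical association between biome labels and suitable features.
BIOME_FEATURES = {
    "forest": ["tree"],
    "desert": ["cactus", "sand_dune"],
}

def populate_region(biome, area_cells, density=0.2, rng=None):
    """Scatter biome-appropriate features over a labelled map region.

    Different random generators (or seeds) give different, but similar,
    environments from the same map, which can also alter gameplay
    (e.g. the navigable paths through a forest).
    """
    rng = rng or random.Random(0)
    placed = []
    for cell in area_cells:
        if rng.random() < density:
            placed.append((rng.choice(BIOME_FEATURES[biome]), cell))
    return placed
```

Running this twice with the same seed reproduces one environment exactly, while a different seed yields a distinct environment from the same underlying map.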
In some embodiments of the present disclosure, the map generating unit may identify one or more audio characteristics for associating with each of one or more map portions or parts thereof. The identification may be based on one or more associations as described elsewhere herein. In some cases, the audio characteristics may comprise one or more audio files, descriptors of audio, or modifiers for output audio associated with the corresponding map portions.
In the case of figure 3A for example, the map generating unit may identify an audio file for the ocean portion 315 that comprises ocean sounds and sounds of wildlife commonly found near an ocean such as seagulls or seals, and an audio file for the mountain portion 310 that comprises sounds of howling wind, wolves and mountain lions. Appropriate audio files may be identified based upon a lookup table or the like, with particular elements within a map being associated with a number of different sounds. Alternatively, the audio generation may be performed during the rendering of the virtual environment, with a game engine or the like identifying appropriate sounds based upon which virtual assets are used in the rendering process.
In the case of figure 3B for example, the map generating unit may identify a modifier for output audio for each room, which may be based on the size and contents of the room. For example, the map generating unit may identify a modifier that increases the reverb for audio in the large room in the bottom right-hand corner, and a modifier that decreases the reverb for audio in the smaller corridor portion that contains the trap 330. In some embodiments, a particular feature of a map (such as a particular terrain) may be associated with audio modifiers based upon known relationships; for instance, a mountain may be associated with rocky surfaces which will result in footsteps that sound different to those which would be heard when walking through woodland.
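A minimal sketch of such audio associations is given below: a lookup table mapping map features to audio files, and a room-size-to-reverb modifier. Both the table contents and the linear area-to-reverb scaling are illustrative assumptions, not part of the disclosed system:

```python
# Hypothetical lookup table associating map features with audio files.
AUDIO_TABLE = {
    "ocean": ["waves.ogg", "seagulls.ogg"],
    "mountain": ["wind_howl.ogg", "wolves.ogg"],
}

def reverb_for_room(width, height):
    """Map room floor area to a reverb amount in [0, 1]: larger rooms
    receive more reverb (hypothetical linear scaling, capped at 1.0)."""
    area = width * height
    return min(1.0, area / 100.0)
```

A game engine could equally derive these associations at render time from the virtual assets used, as noted above.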
In the case of figure 3C for example, the map generating unit may identify one or more descriptors of audio. For example, the map generating unit may identify a descriptor of audio for the location of the stable 355 as "horses neighing" and for the location of the inn 350 as "conversation and music".
In some embodiments of the present disclosure, the virtual environment may be for use in a video game. In these embodiments, the map generating unit 220 may generate the one or more portions of the map in response to an initialisation of the video game or a user input to the video game.
This may advantageously enable a reduction in the file size of a video game using a system 200 for generating a virtual environment in accordance with the present disclosure compared to a video game not using the system 200 for generating a virtual environment. This is because it would only be necessary to provide the map, rather than the virtual environment, which will have a much smaller file size. This may therefore reduce the required amount of data that needs to be downloaded from a server or provided on a disc, for example.
For example, a client device (such as a games console) comprising the system 200 (that is, a processor for implementing the functionality of the environment generation unit 110 of Figure 1) may download a video game (comprising a trained model for generating a map) without requiring virtual environment data or virtual environment map data to be downloaded, as the map generating unit 220 may generate one or more portions of a map of a virtual environment in response to an initialisation of the video game or a user input to the video game. The virtual environment generating unit 230 may then generate a virtual environment corresponding to at least a portion of the map.
However, in some cases, a remote server may comprise the map generating unit 220, which may generate one or more portions of a map of a virtual environment. The server may then transmit the one or more portions of the map of the virtual environment to a client device, such as a games console. The client device in such an example would comprise the virtual environment generating unit 230. Upon receipt of the map from the server, the virtual environment generating unit 230 may generate a virtual environment corresponding to at least a portion of the map received from the server.
This is therefore an example of an implementation in which the functionality of the environment generation unit 110 of Figure 1 is provided across multiple processing devices.
Whilst the example directly above may require a client device to download virtual environment map data, file sizes may still be advantageously reduced as the client device would still not need to download virtual environment data (such as meshes and textures corresponding to features within a virtual environment).
Figure 4 illustrates a method for generating a virtual environment provided by the present disclosure. The method comprises: determining 410 one or more features to be associated with a map of the virtual environment; generating 420 one or more portions of the map in dependence upon at least one of the determined features, the map being generated so as to include the determined features; and generating 430 a virtual environment corresponding to at least a portion of the map. The step of generating 420 one or more portions of the map utilises a trained machine learning model to generate the map portions.
In some embodiments of the present disclosure, a computer program is provided comprising computer executable instructions adapted to cause a computer system to perform any of the methods described elsewhere herein.
It will be appreciated that the above methods may be carried out on conventional hardware (such as system 200) suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware.
Thus the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor implementable instructions stored on a non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, solid state disk, PROM, RAM, flash memory or any combination of these or other storage media, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable to use in adapting the conventional equivalent device. Separately, such a computer program may be transmitted via data signals on a network such as an Ethernet, a wireless network, the Internet, or any combination of these or other networks.
The foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.

Claims (15)

  1. CLAIMS1. A system for generating a virtual environment, the system comprising: a feature determining unit operable to determine one or more features to be associated with a virtual environment; a map generating unit operable to generate one or more portions of a map of the virtual environment in dependence upon at least one of the determined features, the map being generated so as to include the determined features; and a virtual environment generating unit operable to generate a virtual environment corresponding to at least a portion of the map, wherein the map generating unit is operable to utilise a trained machine learning model to generate the map portions.
  2. 2. A system according to claim 1, wherein the machine learning model is trained using one or more other virtual environments as an input.
  3. 3. A system according to any preceding claim, wherein the machine learning model is trained using one or more other maps of real and/or virtual environments as inputs.
  4. 4. A system according to any preceding claim, wherein the map generating unit is operable to determine the locations of one or more non-terrain elements within the generated map portions, wherein the one or more non-terrain elements comprise one or more non-playable characters, quests, and/or interactive elements.
  5. 5. A system according to any preceding claim, wherein the map generating unit is operable to determine one or more parameters for a portion of the map that modify the motion of one or more dynamic elements within the virtual environment corresponding to that portion of the map
  6. 6. A system according to any preceding claim, wherein the map generating unit is operable to identify one or more visual characteristics for association with each of one or more map portions or parts thereof.
  7. 7. A system according to claim 6, wherein the visual characteristics comprise one or more suitable textures, a colour palette, and/or a representative image.
  8. A system according to any preceding claim, wherein the map generating unit is operable to identify one or more audio characteristics for association with each of one or more map portions or parts thereof.
  9. A system according to claim 8, wherein the audio characteristics comprise one or more audio files, descriptors of audio, or modifiers for output audio associated with the corresponding map portions.
  10. A system according to any preceding claim, wherein the map generating unit is operable to generate the one or more portions of the map in dependence upon one or more input images in addition to the determined features.
  11. A system according to any preceding claim, wherein the virtual environment is for use in a video game, and the map generating unit is operable to generate the one or more portions of the map in response to an initialisation of the video game or a user input to the video game.
  12. A system according to any preceding claim, wherein the map generating unit is operable to assign one or more labels to each of one or more portions of the map, the labels indicating at least a biome to be associated with a respective portion of the map, and wherein the virtual environment generating unit is operable to determine suitable features within the environment in response to those labels.
  13. A method for generating a virtual environment, the method comprising: determining one or more features to be associated with a map of the virtual environment; generating one or more portions of the map in dependence upon at least one of the determined features, the map being generated so as to include the determined features; and generating a virtual environment corresponding to at least a portion of the map, wherein the step of generating one or more portions of the map utilises a trained machine learning model to generate the map portions.
  14. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 13.
  15. A non-transitory machine-readable storage medium which stores computer software according to claim 14.
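For illustration, the pipeline of claims 1 and 12 (feature determining unit, map generating unit, virtual environment generating unit, with biome labels guiding feature selection) can be sketched as follows. This is a minimal sketch only, not the claimed implementation: all function and data names are hypothetical, and the trained machine learning model of claim 1 is replaced by a deterministic stub.

```python
# Hypothetical sketch of the claimed pipeline. The "trained machine learning
# model" is replaced here by a deterministic stub for demonstration purposes.
from dataclasses import dataclass, field


@dataclass
class MapPortion:
    features: list                                   # determined features included in this portion
    labels: list = field(default_factory=list)       # e.g. biome labels (claim 12)


def feature_determining_unit(requested):
    """Determine one or more features to be associated with the environment."""
    return [f for f in requested if f]               # trivially keep non-empty features


def map_generating_unit(features, model):
    """Generate map portions in dependence upon the determined features,
    labelling each portion via the (stubbed) trained model."""
    return [MapPortion(features=features, labels=model(features))]


def virtual_environment_generating_unit(portions):
    """Generate a virtual environment for the map portions, selecting
    suitable environment features in response to each portion's labels."""
    biome_assets = {"forest": ["tree", "fern"], "desert": ["cactus", "dune"]}
    environment = []
    for portion in portions:
        for label in portion.labels:
            environment.extend(biome_assets.get(label, []))
    return environment


def stub_model(features):
    # Stand-in for the trained ML model: maps input features to biome labels.
    return ["forest"] if "river" in features else ["desert"]


features = feature_determining_unit(["river", "mountain"])
portions = map_generating_unit(features, stub_model)
environment = virtual_environment_generating_unit(portions)
print(environment)  # ['tree', 'fern']
```

In a real system the stub would be a generative model conditioned on the determined features, and the environment generator would instantiate geometry and assets rather than a list of names.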
GB2110467.4A 2021-07-21 2021-07-21 Virtual environment generation system and method Pending GB2609197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2110467.4A GB2609197A (en) 2021-07-21 2021-07-21 Virtual environment generation system and method

Publications (2)

Publication Number Publication Date
GB202110467D0 (en) 2021-09-01
GB2609197A (en) 2023-02-01

Family

ID=77443334

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2110467.4A Pending GB2609197A (en) 2021-07-21 2021-07-21 Virtual environment generation system and method

Country Status (1)

Country Link
GB (1) GB2609197A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210178267A1 (en) * 2019-12-11 2021-06-17 PUBG Amsterdam BV Machine learned virtual gaming environment

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ADAM SUMMERVILLE ET AL: "Procedural Content Generation via Machine Learning (PCGML)", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 2 February 2017 (2017-02-02), XP080746389, DOI: 10.1109/TG.2018.2846639 *
ANTONIOS LIAPIS ET AL: "Sentient World: Human-Based Procedural Cartography", 3 April 2013, EVOLUTIONARY AND BIOLOGICALLY INSPIRED MUSIC, SOUND, ART AND DESIGN, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 180 - 191, ISBN: 978-3-642-36954-4, XP047026175 *
IVANOV GEORGI GEORGI IVANOV0563@GMAIL COM ET AL: "An Explorative Design Process for Game Map Generation Based on Satellite Images and Playability Factors", INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF DIGITAL GAMES, ACMPUB27, NEW YORK, NY, USA, 15 September 2020 (2020-09-15), pages 1 - 4, XP058474874, ISBN: 978-1-4503-8807-8, DOI: 10.1145/3402942.3402997 *
JIALIN LIU ET AL: "Deep Learning for Procedural Content Generation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 9 October 2020 (2020-10-09), XP081782426, DOI: 10.1007/S00521-020-05383-8 *
KIM SUZI ET AL: "CityCraft: 3D virtual city creation from a single image", VISUAL COMPUTER, SPRINGER, BERLIN, DE, vol. 36, no. 5, 20 May 2019 (2019-05-20), pages 911 - 924, XP037091878, ISSN: 0178-2789, [retrieved on 20190520], DOI: 10.1007/S00371-019-01701-X *
PING KUANG ET AL: "Conditional Convolutional Generative Adversarial Networks Based Interactive Procedural Game Map Generation", 25 February 2020 (2020-02-25), XP009534852, ISBN: 978-3-030-39445-5, Retrieved from the Internet <URL:https://link.springer.com/chapter/10.1007/978-3-030-39445-5_30> *
SNODGRASS SAM ET AL: "Learning to Generate Video Game Maps Using Markov Models", IEEE TRANSACTIONS ON COMPUTATIONAL INTELLIGENCE AND AI IN GAMES, vol. 9, no. 4, 1 December 2017 (2017-12-01), USA, pages 410 - 422, XP055909439, ISSN: 1943-068X, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&arnumber=7728021&ref=aHR0cHM6Ly9pZWVleHBsb3JlLmllZWUub3JnL2RvY3VtZW50Lzc3MjgwMjE=> DOI: 10.1109/TCIAIG.2016.2623560 *
YOO BYUNGHO ET AL: "Changing video game graphic styles using neural algorithms", 2016 IEEE CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND GAMES (CIG), IEEE, 20 September 2016 (2016-09-20), pages 1 - 2, XP033067612, DOI: 10.1109/CIG.2016.7860390 *

Also Published As

Publication number Publication date
GB202110467D0 (en) 2021-09-01

Similar Documents

Publication Publication Date Title
JP7320672B2 (en) Artificial Intelligence (AI) controlled camera perspective generator and AI broadcaster
CN109144610B (en) Audio playing method and device, electronic device and computer readable storage medium
Raffe et al. A survey of procedural terrain generation techniques using evolutionary algorithms
JP7224715B2 (en) Artificial Intelligence (AI) controlled camera perspective generator and AI broadcaster
US9182978B2 (en) Application configuration using binary large objects
KR20100110711A (en) Graphical representation of gaming experience
Nitsche et al. Designing procedural game spaces: A case study
CN111888763B (en) Method and device for generating obstacle in game scene
CN112642148A (en) Game scene generation method and device and computer equipment
CN111330273B (en) Virtual tower generation method and device, computer equipment and storage medium
GB2609197A (en) Virtual environment generation system and method
Norton et al. Monsters of Darwin: A strategic game based on artificial intelligence and genetic algorithms
Li Towards Factor-oriented understanding of video game genres using exploratory factor analysis on steam Game Tags
Latif et al. A critical evaluation of procedural content generation approaches for Digital Twins
US8565906B1 (en) Audio processing in a social environment
US20230146564A1 (en) System and method for positioning objects within an environment
Thawonmas et al. Identification of player types in massively multiplayer online games
Galdieri et al. Users’ evaluation of procedurally generated game levels
CN111939565A (en) Virtual scene display method, system, device, equipment and storage medium
GB2612775A (en) System and method for generating assets
KR102589453B1 (en) System and method for simulating a game environment
KR101538968B1 (en) on-line game server
JP7356610B1 (en) Information processing methods, programs, information processing systems
Liao et al. Witch Roundtable: Investigating PCG for Player Experience