CN117689849A - Game scene image generation method and device and electronic equipment


Info

Publication number
CN117689849A
Authority
CN
China
Prior art keywords
building
image
scene
information
scene image
Prior art date
Legal status
Pending
Application number
CN202311533516.8A
Other languages
Chinese (zh)
Inventor
郭晓强
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311533516.8A
Publication of CN117689849A


Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T19/00 - Manipulating 3D models or images for computer graphics
                    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
                • G06T15/00 - 3D [Three Dimensional] image rendering
                    • G06T15/04 - Texture mapping
                • G06T2210/00 - Indexing scheme for image generation or computer graphics
                    • G06T2210/04 - Architectural design, interior design
                    • G06T2210/61 - Scene description
                • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
                    • G06T2219/20 - Indexing scheme for editing of 3D models
                        • G06T2219/2012 - Colour editing, changing, or manipulating; Use of colour codes
                        • G06T2219/2021 - Shape modification
                        • G06T2219/2024 - Style variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a game scene image generation method and device, an electronic device, and a computer-readable storage medium. The method comprises: determining reference scene images that respectively match each piece of sub-requirement information in the scene requirement information, where each piece of sub-requirement information is the requirement information corresponding to one region of the game scene image to be generated; acquiring description information corresponding to the scene requirement information, where the description information describes the environment corresponding to the game scene image to be generated; generating a scene image set according to the description information and the reference scene images; and selecting a plurality of images from the scene image set and stitching them to obtain a game scene image reflecting the scene requirement information. With the scheme provided by the application, game scene images satisfying the requirements can be generated efficiently, so that when a game scene built from the generated game scene images is applied to a virtual game, the player's sense of immersion is effectively enhanced.

Description

Game scene image generation method and device and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating a game scene image, an electronic device, and a computer readable storage medium.
Background
In virtual games, it is often necessary to generate a game scene with a specific style in order to enhance the player's sense of immersion. For example, to create a relaxed and pleasant atmosphere in a virtual game, a clean and bright game scene is usually needed to give the player an immersive feeling. When the generated game scene meets the requirements, the player's sense of immersion is effectively enhanced.
Conventionally, in order to generate a desired game scene, a game scene image is usually drawn manually as needed, and the desired game scene is then built from the drawn image. In practice, however, the demand for game scenes is large, and manual drawing consumes considerable labor at very low efficiency. How to efficiently generate game scene images that satisfy the requirements has therefore become extremely important.
Disclosure of Invention
The application provides a game scene image generation method, a game scene image generation device, electronic equipment and a computer readable storage medium, which can efficiently generate game scene images meeting requirements. The specific scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for generating a game scene image, where the method includes:
Determining reference scene images which are respectively matched with all sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
acquiring description information corresponding to the scene demand information, wherein the description information is used for describing an environment style corresponding to a game scene image to be generated;
generating a scene image set according to the description information and the reference scene image;
and selecting a plurality of images from the scene image set to splice to obtain the game scene image reflecting the scene demand information.
In a second aspect, an embodiment of the present application provides a generating device for a game scene image, where the device includes:
the determining unit is used for determining reference scene images which are respectively matched with all the sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
the acquisition unit is used for acquiring description information corresponding to the scene demand information, wherein the description information is used for describing an environment style corresponding to a game scene image to be generated;
a generating unit, configured to generate a scene image set according to the description information and the reference scene image;
And the splicing unit is used for selecting a plurality of images from the scene image set to splice so as to obtain a game scene image reflecting the scene demand information.
In a third aspect, the present application further provides an electronic device, including:
a processor; and
a memory for storing a data processing program, wherein after the electronic device is powered on, the processor executes the program to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a data processing program, the program being executed by a processor to perform a method as described in the first aspect.
Compared with the prior art, the application has the following advantages:
The method for generating a game scene image provided by the application first determines reference scene images that respectively match each piece of sub-requirement information in the scene requirement information, where each piece of sub-requirement information is the requirement information corresponding to one region of the game scene image to be generated; second, it acquires description information corresponding to the scene requirement information, where the description information describes the environment style corresponding to the game scene image to be generated; then, it generates a scene image set according to the description information and the reference scene images; finally, it selects a plurality of images from the scene image set and stitches them to obtain a game scene image reflecting the scene requirement information. Since the reference scene images match the individual pieces of sub-requirement information, and the description information describes the environment style corresponding to the game scene image to be generated, the scene image set generated from them contains images that respectively match each sub-requirement and whose environment style matches the required one. Therefore, a plurality of images can be selected from the set and stitched to obtain a game scene image that satisfies every piece of sub-requirement information and has the required environment style; because the obtained image satisfies all the sub-requirement information corresponding to the scene requirement information, it reflects the scene requirement information.
Therefore, the game scene image generation method can efficiently generate the required game scene images, so that when a game scene built according to the generated game scene images is applied to a virtual game, the player's sense of immersion is effectively enhanced.
Drawings
FIG. 1 is a flowchart of a method for generating a game scene image provided by an embodiment of the present application;
FIG. 2 is a flowchart of generating a game scene image through a preset image generation model in the game scene image generation method provided in the embodiment of the present application;
FIG. 3 is an exemplary diagram of a reference scene image in a method for generating a game scene image provided by an embodiment of the present application;
FIG. 4 is an exemplary diagram of an example of a scene image in a scene image set generated in a game scene image generation method provided by an embodiment of the present application;
FIG. 5 is an exemplary diagram of another example of a scene image in a scene image set generated in a method for generating a game scene image provided in an embodiment of the present application;
FIG. 6-a is an exemplary diagram of one example of a building image used for refinement of building components in a method of generating a game scene image provided by embodiments of the present application;
FIG. 6-b is an exemplary diagram of another example of a building image used for refinement of building components in a method of generating a game scene image provided by an embodiment of the present application;
FIG. 6-c is an exemplary diagram of another example of a building image used for refinement of building components in a method of generating a game scene image provided by an embodiment of the present application;
FIG. 7 is an exemplary diagram of an example of an image of a three-dimensional structural model built for refinement of building components in a game scene image generation method provided by an embodiment of the present application;
FIG. 8 is an exemplary diagram showing an example of a refined image of a virtual building generated in the game scene image generation method according to the embodiment of the present application;
FIG. 9 is a block diagram showing an example of a game scene image generating apparatus according to the embodiment of the present application;
FIG. 10 is a block diagram illustrating an example of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. This application is, however, susceptible of embodiment in many other ways than those herein described and similar generalizations can be made by those skilled in the art without departing from the spirit of the application and the application is therefore not limited to the specific embodiments disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, specification, and drawings herein are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The data so used may be interchanged where appropriate to facilitate the embodiments of the present application described herein, and may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before describing embodiments of the present application in detail, the related art is first further described.
With the rapid development of the game industry, the game scenes of virtual games are becoming richer and richer. To enhance the player's sense of immersion, game scenes are required to have specific styles so that players feel as if they were actually in the scene. This places new requirements on the scene images corresponding to game scenes.
The required game scene image can be generated in the related art by the following two ways:
mode one: the developer manually makes the game scene image according to the requirement.
Mode two: the desired game scene image is generated by an artificial intelligence model.
However, in the first mode, manually creating game scene images requires considerable labor cost and is extremely inefficient. In the second mode, the quantity and variety of resources in existing galleries are limited, and the training data available to the artificial intelligence model is often insufficient, so the model cannot properly learn architectural logic when generating game scene images, and the generated images do not meet development requirements.
For example, when a game scene image in the traditional Chinese (guofeng) style needs to be generated: first, because guofeng resources in conventional galleries are scarce, the gallery scale cannot cover the various building types and corresponding environments required in virtual games, so the artificial intelligence model cannot generate game scene images of the required type. Second, compared with modern or European buildings, guofeng buildings have distinctive dougong (bracket-set) and mortise-tenon structures and their own construction rules; from guofeng cultural environments to finely detailed guofeng building models, the resources in existing galleries cannot meet development requirements, so the model cannot generate game scene images with this distinctive architectural style. In addition, in actual development, high-quality game scene images place high requirements on the structure and modeling of the virtual buildings they contain, whereas images generated by an artificial intelligence model are generally random and cannot meet customized development requirements.
Therefore, how to efficiently generate a game scene image that satisfies the demand becomes extremely important.
For the above reasons, in order to efficiently generate a game scene image meeting the requirements, the first embodiment of the present application provides a method for generating a game scene image, where the method is applied to an electronic device, and the electronic device may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, a server, a terminal device, or other electronic devices capable of generating a game scene image, and the embodiment of the present application is not particularly limited.
The game scene image generation method provided in the embodiments of the present application is described below with reference to fig. 1 to 8.
As shown in fig. 1, the method for generating a game scene image provided by the present application includes the following steps S101 to S104.
Step S101: determining reference scene images which are respectively matched with all sub-requirement information in the scene requirement information; the sub-requirement information is requirement information corresponding to each region in the game scene image to be generated.
The scene requirement information in step S101 may indicate the scene style of the game scene image through customized sub-requirement information, where each piece of sub-requirement information is the requirement information corresponding to one region of the game scene image to be generated; for example, the upper-left corner of the image to be generated is a two-storey Miao-village-style building, the middle is an open and bright area, and the right side is an ancient garden in which two children are playing.
In a specific embodiment, each piece of sub-requirement information may include environmental atmosphere information of the game scene image and building information of a virtual building included in the game scene image. The scene requirement information composed of the sub-requirement information may be used to indicate a scene style of the game scene image to be generated.
As shown in table 1, an example table of scene requirement information in the method for generating a game scene image according to the embodiment of the present application is shown.
Table 1.
In the present application, developers can provide sub-requirement information according to development requirements to obtain the scene requirement information; after the sub-requirement information of the game scene image to be generated is obtained, reference scene images matching each piece of sub-requirement information can be acquired.
Because the scene requirement information may contain sub-requirement information related to the environment, to buildings, and to virtual characters, and an existing gallery generally contains no single reference scene image that fully satisfies the scene requirement information, acquiring reference scene images matching each piece of sub-requirement information is to be understood as follows: for each piece of sub-requirement information in the scene requirement information, a reference scene image matching that piece of sub-requirement information is acquired.
In a specific embodiment, for each piece of sub-requirement information of the game scene image to be generated, images that satisfy that piece of sub-requirement information can be screened out from an existing gallery. For example, for a noisy scene, reference scene images with noisy scenes can be screened out; for a guofeng ancient garden, corresponding reference scene images can be screened out; and for dark red lighting, reference scene images with dark red light can be screened out. The number of reference scene images acquired for each sub-requirement is one or more ("a plurality" in this application means two or more); in practical applications, the number is usually more than one.
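The screening step described above can be illustrated with a minimal sketch. The patent does not specify a matching algorithm; the tag-overlap scoring, the `match_score` and `screen_gallery` names, and the gallery entries below are purely hypothetical stand-ins.

```python
# Illustrative sketch: screen a gallery for reference images matching one
# piece of sub-requirement information. Tag overlap stands in for whatever
# image-requirement matching the real system uses (an assumption).

def match_score(image_tags, requirement_tags):
    """Fraction of requirement tags that the image's tags cover."""
    if not requirement_tags:
        return 0.0
    required = set(requirement_tags)
    return len(set(image_tags) & required) / len(required)

def screen_gallery(gallery, sub_requirement, threshold=0.5):
    """Return gallery entries covering enough of the sub-requirement."""
    return [img for img in gallery
            if match_score(img["tags"], sub_requirement["tags"]) >= threshold]

gallery = [
    {"name": "noisy_market.png",   "tags": ["noisy", "crowd", "street"]},
    {"name": "guofeng_garden.png", "tags": ["guofeng", "garden", "ancient"]},
    {"name": "dark_red_hall.png",  "tags": ["dark", "red-light", "indoor"]},
]
garden_requirement = {"region": "right", "tags": ["guofeng", "garden"]}
matches = screen_gallery(gallery, garden_requirement)
# matches contains only the guofeng garden image
```

In practice the score would more plausibly come from an image-text similarity model than from literal tag overlap; the sketch only shows the per-sub-requirement screening structure.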
It will be appreciated that the reference scene image obtained in the present application may be matched with one or more other sub-requirement information while being matched with corresponding sub-requirement information.
In the present application, the game scene image to be generated may be a partial game scene image corresponding to the virtual game, or may be all game scene images corresponding to the virtual game. Specifically, when the game scene image to be generated is a partial game scene image, the game scene image may be a game scene image corresponding to a certain game stage in the virtual game. Game scene images generally refer to images of various environments and backgrounds appearing in virtual games, including indoor and outdoor information such as buildings, terrains, weather, lights, etc., and game scenes provide a virtual space for virtual characters so that the virtual characters can perform various game behaviors in the game scenes.
It can be understood that, because the environment provided by the game scene image has a certain atmosphere and the virtual buildings it provides have a certain style, after a game scene built according to the game scene image is applied to the virtual game, it provides a virtual space in which virtual characters can carry out game behaviors, while giving the players participating in the virtual game a rich visual experience.
In practical applications, a virtual game is an application program developed according to game application requirements, and the types of virtual games may include, but are not limited to, at least one of the following: two-dimensional (2D) game applications, three-dimensional (3D) game applications, Virtual Reality (VR) game applications, Augmented Reality (AR) game applications, and Mixed Reality (MR) game applications. In the embodiments of the application, the virtual game may be any application program such as a multiplayer online battle arena game (MOBA) or a massively multiplayer online role-playing game (MMORPG).
A virtual character is a persona with personalized features created in literary and artistic works, and is usually a character that does not exist in reality, including fictional characters in at least one of television dramas, movies, animations, and games. A virtual character may be a player character operated by a player or a non-player character (NPC) not operated by a user; a non-player character can guide the player through the game and interact with the player-operated virtual character.
In the embodiments of the application, a player can use a terminal device to operate a virtual character located in the game scene of the virtual game to perform game behaviors, including at least one of: adjusting body posture, crawling, walking, running, jumping, driving, picking up, shooting, attacking, throwing, moving, and defending.
Step S102: and acquiring description information corresponding to the scene demand information, wherein the description information is used for describing the environment style corresponding to the game scene image to be generated.
This step is used to obtain more specific and more detailed description information according to the scene requirement information of the game scene image to be generated.
In the application, in the case that the game scene image to be generated is an image containing no building, the description information can be used for describing the environment style corresponding to the game scene image to be generated; in the case where the game scene image to be generated is an image containing a building, the description information may be used to describe, in addition to the environment style corresponding to the game scene image to be generated, the building style corresponding to the virtual building in the game scene image to be generated.
In practical applications, the scene requirement information usually consists of very brief, generalized, and condensed sub-requirement information. Therefore, to make the generated game scene image both satisfy the scene style indicated by each piece of sub-requirement information and have a high-quality, rich, and detailed visual effect, the scene requirement information of the game scene image to be generated can be expanded to obtain its description information.
Specifically, the description information can be obtained by expanding each piece of sub-requirement information. It can be understood that, in addition to making the scene picture satisfy each piece of sub-requirement information, the description information may also describe the picture quality or the screen proportion of the rendered scene picture. The description information may thus include information describing the architectural style of the virtual buildings in the game scene image, information describing the environment corresponding to the game scene image, and information describing the picture quality of the scene picture or the screen proportion of the rendered scene picture.
The picture-quality information can be expressed by resolution; for example, "8k" in a description indicates that a high-definition picture with a horizontal resolution of approximately 8000 pixels is to be generated. The screen-proportion information may be a setting passed to the renderer; for example, "ar 16:9" (where "ar" denotes the aspect ratio) means that the picture proportion in the renderer is set to 16:9. A 16:9 proportion can provide a wider field of view, allowing the user to better appreciate the rendered picture, and can also present a clearer, finer image so that the user can better view its details.
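Purely as an illustration, expanding sub-requirement information into description information that carries quality and proportion directives might be sketched as follows. The "8k" and "--ar 16:9" text syntax follows the convention of common text-to-image front ends and is an assumption, as is the `build_description` helper itself.

```python
# Hypothetical sketch: expand sub-requirement texts into one description
# string carrying picture-quality ("8k") and aspect-ratio ("--ar 16:9")
# directives. The prompt syntax is assumed, not taken from the patent.

def build_description(sub_requirements, quality="8k", aspect_ratio="16:9"):
    parts = [req["text"] for req in sub_requirements]
    parts.append(quality)                   # picture-quality directive
    parts.append(f"--ar {aspect_ratio}")    # screen-proportion directive
    return ", ".join(parts)

description = build_description([
    {"region": "left",   "text": "two-storey Miao-style stilt building"},
    {"region": "middle", "text": "open and bright plaza"},
    {"region": "right",  "text": "ancient guofeng garden"},
])
# description ends with "8k, --ar 16:9"
```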
In practical application, multiple different description information can be determined for one scene requirement information, and multiple different description information corresponding to one scene requirement information can be distinguished in terms of descriptor richness.
As shown in table 2, an example table of description information generated for one scene requirement information in the game scene image generation method provided in the embodiment of the present application is shown.
Table 2.
In this way, the description information of the scene picture of the game scene image to be generated and the reference scene images matching each piece of sub-requirement information are obtained, providing a basis for generating a game scene image that reflects the scene requirement information.
Step S103: and generating a scene image set according to the description information and the reference scene image.
The method is used for generating a scene image set with similar environment and atmosphere so as to generate a game scene image which accords with scene requirement information according to the scene image set.
It can be understood that the reference scene images each satisfy a piece of sub-requirement information in the scene requirement information, while the description information includes the architectural style of the virtual buildings in the game scene image, the environment corresponding to the game scene image, and the picture quality or the screen proportion of the scene picture; therefore, a scene image set matching the description information can be obtained on the basis of the reference scene images.
In an alternative embodiment, an image conforming to the description information can be generated from the description information; then the sub-image features of the reference scene image that satisfy the corresponding sub-requirement are extracted, and the generated image is fused with the extracted features to obtain a scene image.
In another alternative embodiment, the reference scene image may be taken as a base image, and the reference scene image may be adjusted according to the description information in a targeted manner to obtain the scene image.
In practical applications, corresponding scene images can be generated from several reference scene images satisfying different pieces of sub-requirement information together with the description information, and combined into a scene image set. Multiple scene images may be derived from at least one reference scene image and the description information; for example, four scene images may be derived from reference scene image 1, reference scene image 2, and description information 1.
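The combination described above can be sketched minimally. The patent does not say how candidates are produced, so `generate_variants` below is a labeled placeholder; a real system would presumably call an image generator conditioned on the reference image and the description text.

```python
# Sketch of step S103 under the assumption that each (reference image,
# description) pair yields several candidate variants. generate_variants
# is a stand-in that labels candidates instead of producing pixels.
from itertools import product

def generate_variants(reference, description, n=2):
    """Placeholder for a generator call; returns n labeled variants."""
    return [f"{reference}|{description}|v{i}" for i in range(n)]

def build_scene_image_set(references, descriptions, n=2):
    image_set = []
    for ref, desc in product(references, descriptions):
        image_set.extend(generate_variants(ref, desc, n))
    return image_set

scene_set = build_scene_image_set(
    ["ref1.png", "ref2.png"], ["description 1"], n=2)
# 2 references x 1 description x 2 variants = 4 candidate scene images
```

This matches the worked example in the text: two reference images plus one piece of description information yielding four scene images.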
It will be appreciated that since the reference scene image is a reference scene image that meets the corresponding sub-requirement, each scene image in the generated set of scene images is an image that meets the corresponding sub-requirement.
For example, image 1 in the scene image set satisfies a noisy environment, image 2 satisfies a dark room in a high place, and image 3 satisfies a guofeng ancient building, and so on.
Although each individual image in the scene image set may not fully satisfy the scene requirement information, the set contains images satisfying each piece of sub-requirement information, so a game scene image satisfying the scene requirement information can be generated from the images in the set.
Step S104: and selecting a plurality of images from the scene image set to splice to obtain the game scene image reflecting the scene demand information.
After the scene image set is generated, a plurality of images are selected from it according to the scene requirement information and then stitched to obtain a game scene image reflecting the scene requirement information. Because the set contains scene images satisfying individual pieces of sub-requirement information, in step S104 those images can be deliberately selected from the set and stitched, so that the resulting game scene image satisfies the scene requirement information and achieves the concept composition it requires.
In a possible implementation manner, for each piece of sub-requirement information, a scene image with the highest matching degree with the piece of sub-requirement information can be selected from the scene image set, and the scene images selected for each piece of sub-requirement information are spliced to obtain the game scene image.
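For illustration only, the per-sub-requirement selection described above can be sketched in Python as a simple argmax over match scores. The `match_score` callable, the image identifiers, and the precomputed scores below are hypothetical stand-ins for a real matching model and are not part of the application.

```python
# Sketch: for each piece of sub-requirement information, pick the scene
# image with the highest matching degree, then collect the picks for
# stitching. match_score is a hypothetical scoring function.

def select_images(scene_images, sub_requirements, match_score):
    selected = []
    for req in sub_requirements:
        best = max(scene_images, key=lambda img: match_score(img, req))
        selected.append(best)
    return selected

# Toy demonstration with precomputed scores standing in for a real model.
scores = {("img1", "noisy environment"): 0.9, ("img1", "red lantern"): 0.2,
          ("img2", "noisy environment"): 0.3, ("img2", "red lantern"): 0.8}
picked = select_images(["img1", "img2"],
                       ["noisy environment", "red lantern"],
                       lambda i, r: scores[(i, r)])
```

The same image may be picked for several sub-requirements; the stitching step later decides which region of each pick is actually used.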
The game scene image generation method described above proceeds as follows. First, reference scene images respectively matching each piece of sub-requirement information in the scene requirement information are determined; the sub-requirement information is the requirement information corresponding to each region in the game scene image to be generated. Second, description information corresponding to the scene requirement information is acquired, which describes the environment style of the game scene image to be generated. Then, a scene image set is generated from the description information and the reference scene images. Finally, a plurality of images are selected from the scene image set and stitched to obtain the game scene image reflecting the scene requirement information. Because the reference scene images match each piece of sub-requirement information, and the description information describes the required environment style, the generated scene image set includes images that both match the individual sub-requirements and conform to the environment style of the game scene image to be generated. Therefore, a plurality of images can be selected from the set and stitched into a game scene image that satisfies all the sub-requirement information and the required environment style, i.e., a game scene image that reflects the scene requirement information.
Therefore, the game scene image generation method provided by the application can efficiently generate the required game scene image, so that when a game scene built from the generated image is applied to a virtual game, the player's sense of immersion can be effectively enhanced.
Alternatively, step S104 may be implemented by:
determining a base image from the scene image set according to key information and composition information required by the game scene image to be generated;
determining a first image of a building component meeting at least one of the sub-requirement information from images of the set of scene images other than the base image;
and splicing the first image with the base image to obtain a game scene image reflecting the scene demand information.
In the present application, the key information and composition information required for the game scene image to be generated may be input into a picture screening model, and the picture screening model determines a base image from the scene image set accordingly. Alternatively, the base image corresponding to the desired key and composition may be selected manually from the scene image set.
Alternatively, from the plurality of images in the generated scene image set, a picture whose matching degree with the scene requirement information is higher than a specified threshold may be selected, or the picture that satisfies the largest amount of sub-requirement information may be selected as the base image.
The base image determined in the present application is an image for showing key information and composition information of a game scene image to be generated, that is, the base image may be used to decide the key and composition of the game scene image.
In the present application, composition is understood as the organization of the image picture, such as the arrangement and combination of lines, shapes, spaces and textures, and the treatment of the main subject and accompanying elements. The key refers to basic characteristics of the picture such as color, tone, atmosphere and emotion, and influences the user's overall impression of the picture.
In the application, a base image can be determined from the images contained in the scene image set, and a first image of a building component meeting at least one piece of sub-requirement information can be determined from the remaining images in the set. It will be appreciated that the first image may comprise a plurality of images, for example an image satisfying a red lantern and an image satisfying dim indoor lighting.
In practical applications, for each piece of sub-requirement information, at least one first image satisfying it may be determined from the scene image set. When only one first image is determined for each piece of sub-requirement information, each first image is stitched with the base image to obtain the game scene image. When multiple first images are determined for a piece of sub-requirement information, a selection strategy can be applied that requires the matching degree between the stitched game scene image and the scene requirement information to exceed a preset threshold: one first image is selected from the candidates for each piece of sub-requirement information, and the selected first images are stitched with the base image to obtain the game scene image.
In this way, the first images and the base image can be spliced on the basis of the basic tone and the composition indicated by the base image, and the game scene image is obtained.
It can be understood that, since each first image is an image of a building component conforming to at least one piece of sub-requirement information, the game scene image formed by stitching the first images with the base image inherits the key information and composition information indicated by the base image while satisfying each piece of sub-requirement information. The obtained game scene image therefore has the key and composition indicated by the base image and reflects the scene requirement information.
According to the above technical means, the base image used to carry the key information and composition information of the game scene image to be generated is determined from the scene image set, so that each first image can be stitched onto the base image under that key and composition. In this way, the key and composition of the generated game scene image are effectively controlled: when generating the scene image, the key information and composition information of the individual first images need not be considered, since stitching takes place on the basis indicated by the base image. This reduces the difficulty of generating the game scene image and improves its generation efficiency.
Optionally, the building layouts of the images in the scene image set correspond to each other, which can be understood as the building layouts having a high degree of similarity; for example, in each image of the scene image set, the left side is an attic building, the right side is a bustling tavern, and the middle is an open area.
The step of stitching the first image with the base image to obtain a game scene image reflecting the scene requirement information may specifically include the following steps:
And replacing a local image area corresponding to the building component meeting the sub-requirement information in the first image to an area corresponding to the base image, and generating a game scene image reflecting the scene requirement information.
In the application, when the game scene image is generated from the base image and the first image, since the first image is an image of a building component meeting at least one piece of sub-requirement information in the scene requirement information, the local image area corresponding to that building component can be selected from the first image and replaced into the corresponding area of the base image, thereby generating the game scene image reflecting the scene requirement information.
Because the building layout corresponds between the images in the scene image set, each local image area is located in the first image at the same position as the corresponding area to be replaced in the base image, so the game scene image reflecting the scene requirement information can be generated quickly by image stitching.
For example, if the room component in the lower left corner of first image a meets the sub-requirement information about the room component in the scene requirement information, the local image area corresponding to that room component can be replaced into the lower left corner of the base image; and if the cage component at the top of first image b conforms to the sub-requirement information about the cage component, the local image area corresponding to it can be replaced into the top of the base image. In this way, the base image, after the local image areas corresponding to the building components have been replaced in, satisfies the scene requirement information.
In practical application, the local image corresponding to the sub-component meeting the scene requirement information in each first image can be obtained through matting or an image processing tool, and each local image is replaced into the corresponding area of the base image, thereby obtaining the game scene image.
In particular implementations, the game scene image may be generated using an image processing library, such as OpenCV or the Python Imaging Library (PIL). OpenCV is a computer vision library that provides a series of functions and algorithms for processing images and video; PIL is a Python library for processing images. The steps for generating a game scene image using such an image processing library are as follows:
The first step: read each first image and the base image to be stitched.
The second step: select the local image areas to be stitched in each first image and in the base image through functions provided by the image processing library.
The third step: stitch the selected local image areas through functions provided by the image processing library to obtain the game scene image.
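The three steps above can be sketched with Pillow (the maintained fork of PIL). The images are synthesized in memory here, and the region coordinates and colors are illustrative assumptions rather than values from the application.

```python
from PIL import Image

# Step 1: read (here: synthesize) the base image and a first image.
base = Image.new("RGB", (100, 100), (200, 200, 200))   # grey base image
first = Image.new("RGB", (100, 100), (255, 0, 0))      # red "first image"

# Step 2: select the local image area in the first image that contains
# the building component (coordinates are illustrative).
region = first.crop((0, 60, 40, 100))                  # lower-left corner

# Step 3: paste the area into the corresponding area of the base image;
# this works because the building layouts of the two images correspond.
base.paste(region, (0, 60))
```

Because the layout corresponds, the paste position in the base image equals the crop position in the first image, so no registration step is needed.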
It can be appreciated that, because the environment and atmosphere of the images in the scene image set are highly similar, the images maintain high similarity in key and composition, so the seams of the stitched game scene image normally transition naturally.
Of course, when the stitched game scene image needs adjustment, the game scene image generation method provided by the application can also adjust at least one of brightness, contrast, sharpness and the like of the stitched game scene image, making the adjusted image more attractive and refined. Because the images are highly similar in key and composition, such adjustment does not require excessive manpower or time.
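The brightness, contrast and sharpness adjustments mentioned above can be sketched with Pillow's `ImageEnhance` module; the enhancement factors are illustrative assumptions (values above 1.0 increase the corresponding property).

```python
from PIL import Image, ImageEnhance

# A stand-in for the stitched game scene image.
stitched = Image.new("RGB", (64, 64), (100, 100, 100))

# Chain the three adjustments named in the text; factors are illustrative.
adjusted = ImageEnhance.Brightness(stitched).enhance(1.5)   # brightness
adjusted = ImageEnhance.Contrast(adjusted).enhance(1.1)     # contrast
adjusted = ImageEnhance.Sharpness(adjusted).enhance(2.0)    # sharpness
```

Each enhancer returns a new image, so the original stitched image is left untouched and several factor combinations can be previewed cheaply.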
Optionally, the "obtaining the description information corresponding to the scene requirement information" in step S102 may be implemented as follows: determining an environment style descriptor and a building style descriptor in the scene requirement information, and determining the environment style descriptor and the building style descriptor as the description information of the game scene image to be generated.
In a specific embodiment, environment style analysis and building style analysis can be performed on the scene requirement information through an artificial intelligence model or manually, so as to obtain environment style descriptors describing the environment style and building style descriptors describing the building style, which are used as the description information of the game scene image to be generated.
When the building style is analyzed, the building type conforming to the building style can be determined, and when the environment style is analyzed, the corresponding atmosphere, illumination and other information can be determined. In this way, the composition thought of the scene demand information can be determined by analyzing the environment style and analyzing the building style, and the scene demand information is further expanded according to the determined composition thought, so that the environment style descriptor describing the environment style and the building style descriptor describing the building style are obtained.
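As a minimal keyword-matching sketch of splitting scene requirement text into the two descriptor kinds (in practice an AI model or a human analyst performs this, as noted above), the vocabularies and function below are illustrative assumptions only.

```python
# Hypothetical vocabularies for the two aspects of the description.
ENV_TERMS = {"noisy", "dark", "bustling", "quiet", "foggy"}
BUILDING_TERMS = {"attic", "tavern", "pagoda", "temple", "high-rise"}

def determine_descriptors(scene_requirement):
    # Tokenize the requirement text and bucket words by vocabulary.
    words = scene_requirement.lower().replace(",", " ").split()
    env = [w for w in words if w in ENV_TERMS]
    building = [w for w in words if w in BUILDING_TERMS]
    # Both descriptor lists together form the description information.
    return {"environment_style": env, "building_style": building}

desc = determine_descriptors("noisy bustling street with an attic and a tavern")
```

A real system would expand, not just filter, the requirement text; this sketch only shows the two-way split into environment and building descriptors.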
As shown in table 3, an example table of analysis of environmental style and analysis of architectural style for the scene demand information 1 illustrated in table 1 in the game scene image generation method provided in the embodiment of the present application is shown.
Table 3.
In this way, the scene demand information can be expanded in two aspects through the building style and the environment style, so that richer picture type description information is obtained, and a high-quality game scene image can be generated based on the richer picture type description information and the reference scene image.
Optionally, in the case where the description information is used to describe an environment style corresponding to the game scene image to be generated and a building style possessed by the virtual building in the game scene image, step S103 may be implemented by:
Determining characteristic data conforming to the sub-requirement information in the reference scene image;
carrying out feature fusion on the style feature data of the environment style and the style feature data of the building style described by the description information and the feature data to obtain fused scene feature data;
a set of scene images is generated comprising a plurality of scene images based on the scene feature data.
In a specific embodiment, feature data corresponding to the sub-requirement information may be determined from the reference scene image, where the feature data may be data in a vector form or may be a character string, and the feature data is used to generate a game scene image corresponding to the sub-requirement information.
In this way, when the environment information and the building style information included in the description information are feature-fused with the feature data, the scene feature data satisfying the environment information and the building style information can be generated on the basis of retaining the feature data. Based on this, a scene image set including a plurality of scene images can be generated based on the generated scene feature data, and the generated plurality of scene images also satisfy the environment information and the building style information included in the description information on the basis of retaining the feature data.
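The fusion step can be sketched as a weighted combination of feature vectors. The vectors, weights, and the idea of simple additive fusion are illustrative assumptions; a real system would fuse learned embeddings inside the generation model.

```python
def fuse_features(reference_feat, env_style_feat, building_style_feat,
                  w_ref=0.5, w_env=0.25, w_bld=0.25):
    # Element-wise weighted sum: the reference features are preserved with
    # the largest weight while environment-style and building-style
    # features are mixed in.
    return [w_ref * r + w_env * e + w_bld * b
            for r, e, b in zip(reference_feat, env_style_feat,
                               building_style_feat)]

fused = fuse_features([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])
```

The fused vector would then condition the generation of the scene image set, so each generated image retains the reference features while reflecting both styles.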
Alternatively, step S103 may be implemented by: and taking at least one image of the description information and the reference scene image as input data, and inputting a preset image generation model so that the image generation model outputs the scene image set.
In the present application, a scene image set can be generated from text data (such as the description information) and image data (such as the reference scene images) by using a preset image generation model, which may be Midjourney (an AI picture-generation tool based on online big data, abbreviated MJ), Stable Diffusion (an AI picture-generation tool based on a local model library, abbreviated SD), or another image generation model capable of generating images from text data and image data.
Midjourney is an AI drawing tool operated by robot commands: a user only needs to input text, and the tool produces corresponding high-quality images through a powerful AI algorithm. Stable Diffusion is a text-to-image AI model tuned by text prompts; at run time it carries out a "diffusion" process that gradually improves the image from a noisy state until it is completely free of noise and approaches the provided text description.
In a specific embodiment, one or more images of the reference scene images may be input to a preset image generation model, and description information may also be input to the image generation model, so that the image generation model may generate, according to the input description information and at least one reference scene image, a corresponding image that corresponds to the picture described by the description information and corresponds to the style of the input reference scene image.
In practical application, the image generation model can output at least one image based on the input description information and at least one reference scene image, and the images it outputs can be used as images in the scene image set. For example, when the combination of reference scene image a, reference scene image b and the description information is input into the image generation model and 4 corresponding images are generated, each of the 4 generated images can be used as an image in the scene image set.
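How the input data for the image generation model might be assembled can be sketched as follows. The prompt layout loosely mirrors the image-reference-plus-text convention of tools like Midjourney, but every name and format here is an illustrative assumption, not the actual tool interface.

```python
def build_generation_input(reference_images, description, n_outputs=4):
    # Combine at least one reference scene image with the description
    # information into a single request for the image generation model.
    prompt = " ".join(reference_images) + " " + ", ".join(description)
    return {"prompt": prompt, "n": n_outputs}

request = build_generation_input(
    ["ref_a.png", "ref_b.png"],
    ["high-rise building", "red lantern", "central open area"])
```

Varying `reference_images` and `description` across calls yields the different input combinations described in the text, and each call contributes its outputs to the scene image set.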
In the application, in order to generate enough images so that the most suitable image resources can be selected from a sufficiently rich pool when the game scene image is generated from the scene image set, the reference scene images and the description information can be adjusted in a targeted manner multiple times, so that the preset image generation model produces different output data from input data composed of different reference scene images and different description information.
In this way, on the basis of the powerful generation capability of the preset image generation model, the model's computation is constrained by the description information and at least one reference scene image. Inputting both into the model effectively avoids uncontrollable and random generation, and efficiently produces a sufficient number of high-quality images with similar environments, each matching at least one piece of sub-requirement information in the scene requirement information. Game scene images satisfying the scene requirement information can then be generated efficiently from this larger pool of environmentally similar images, further improving the generation efficiency of the game scene image.
As shown in fig. 2, a flowchart of generating a game scene image through a preset image generation model in the game scene image generation method according to the embodiment of the present application is shown. The reference scene image 10 and the description information 11 are input into a preset image generation model 12, and a scene image set 13 may be output.
As shown in fig. 3, which is an example diagram of a reference scene image in the game scene image generation method provided by the embodiment of the present application: when the reference scene image illustrated in fig. 3 is input into the preset image generation model together with the description information "high-rise building, red lantern, central open area", the scene images illustrated in fig. 4 and fig. 5 can be obtained.
Fig. 4 is an exemplary diagram of one scene image in the scene image set generated in the game scene image generation method provided by the embodiment of the present application, and fig. 5 is an exemplary diagram of another. As can be seen from fig. 4 and fig. 5, the generated scene images conform to the description information as a whole and retain partial features of the reference scene image; the two scene images are highly similar in overall key and composition, with differences in details, such as different shapes of lanterns and different designs of floors.
In addition, in order to make the corresponding virtual building in the generated game scene image be a high-quality virtual building with high fidelity, the game scene image generation method provided by the embodiment of the application may further include the following steps:
aiming at a virtual building in a game scene image, building a three-dimensional structure model corresponding to the virtual building;
acquiring a building image with the same building type as the virtual building;
and generating the virtual building according to the three-dimensional structure model and the building image.
Since a refined building in a game scene image is more attractive, a game scene built from such an image can improve the player's experience during game activities; building refinement in game scene images is therefore extremely important.
The virtual building in the present application may be any building in the game scene image, for example, a building such as a tall building, a temple, a tower bridge, or the like. It should be noted that, in the game scene image, building refinement may be understood as a detailed description and design of details of a building, and the building refinement may help a developer create a more realistic and attractive game scene. Building refinements may include, in particular, the depiction and design of the shape, structure, material, texture, lighting, etc. of the building, and in addition, building refinements may include the design of details of the interior of the building, such as furniture placed, decorations used, lights, etc.
In the present application, for a virtual building in the game scene image, a three-dimensional structure model corresponding to the virtual building is first built. The three-dimensional structure model can be understood as a 3D model without material or texture, composed only of black, white and grey, which indicates the structural information of the refined virtual building. Second, a corresponding building image can be acquired according to the building type of the virtual building; the acquired building image belongs to the same type as the virtual building, and can be used for component refinement of the three-dimensional structure model carrying the structural information, so as to generate a refined virtual building conforming to the building type.
In practice, building types may be classified according to different criteria. The following are common building types:
classifying according to regional culture: buildings can be classified into chinese buildings, european buildings, african buildings, american buildings, etc. The Chinese building is a building with Chinese style, such as Tang Song building, ming Qing building, etc. European building refers to a building having a European style, such as a baroque building, a classical style building, a Gotty building, etc. African buildings refer to buildings having African style, such as mosaic buildings, thatch houses, and the like. The american building means a building having a american style such as an aztec building, an indian building, etc.
Classifying according to building style: buildings can be classified into classical buildings, modern buildings, decorative-art buildings, eclectic buildings, etc. Classical buildings refer to buildings with a classical style, such as ancient Greek buildings, ancient Roman buildings, etc. Modern buildings refer to buildings with a modern style, such as abstract buildings, streamlined buildings, etc. Decorative-art buildings refer to buildings with a decorative-art style, such as Art Nouveau buildings, Art Deco buildings, etc. Eclectic buildings refer to buildings that fuse multiple styles together, such as Gothic Revival buildings, Renaissance buildings, etc.
Classified according to the function used: buildings can be classified into residential buildings, public buildings, industrial buildings, agricultural buildings, and the like. The residential building refers to a building such as a house, apartment, dormitory, etc. for people to live, and work. Public buildings refer to buildings for people to conduct various social activities, such as administrative office buildings, schools, hospitals, markets, and the like. Industrial buildings refer to buildings used for industrial production, processing, etc., such as factories, warehouses, laboratories, etc. Agricultural buildings refer to buildings used for agricultural production, processing, etc., such as farms, greenhouses, farms, etc.
Classification by scale: buildings can be classified into mass buildings, large buildings, etc. Mass buildings refer to buildings that are large in scale and number, such as large commercial complexes, large residential areas, etc. Large buildings refer to buildings that are larger in scale but smaller in number, such as large commercial buildings, large residential buildings, etc.
Classifying according to building forms: the building can be classified into a high-rise building, a multi-storey building, a low-rise building, etc. High-rise buildings refer to buildings with heights exceeding 100 meters, such as skyscrapers, super high-rise houses, and the like. Multi-story buildings refer to buildings having a height between 10 meters and 100 meters, such as multi-story homes, multi-story commercial buildings, etc. The low-rise building refers to a building with a height of less than 10 meters, such as a flat house, a low-rise house, and the like.
In the present application, the building type is mainly obtained by classifying according to regional culture, and the building image may be understood as an existing image of a building belonging to the same building type as the virtual building; the building image may be an image of a representative building. For example, if the virtual building is of the ancient temple type, an image of the White Horse Temple may be used as the building image corresponding to the virtual building. For another example, if the virtual building is of the Tang-style building type, images of the Daming Palace and/or the Huaqing Palace may be used as the building images corresponding to the virtual building.
In particular embodiments, the acquired architectural image may include at least one image. When the acquired building image includes a plurality of images, the plurality of images may be images of different angles corresponding to the same building or may be images of different buildings. When the plurality of images are a plurality of images corresponding to different buildings, the different buildings belong to the same building type.
In the application, the acquired building image may be an image of a building that actually exists in the real world, so that the refined virtual building generated from it shares the building style of the real building; when applied to a game scene, this improves the fidelity of the game and enhances the player's sense of immersion.
Component refinement of the three-dimensional structure model according to the building image can be understood as component refinement of building components such as doors and windows in the three-dimensional structure model according to the building image, so that the thinned virtual building meets specific structural information and accords with corresponding building styles.
In addition, the three-dimensional structure model corresponding to the virtual building in the application can be a three-dimensional structure model with light and shadow information, the light and shadow information can simulate a stereoscopic visual effect well, and concave-convex structures corresponding to the three-dimensional structure model can be effectively distinguished according to the light and shadow information.
Optionally, the step of generating the virtual building from the three-dimensional structure model and the building image may be specifically implemented by:
extracting building characteristic data from the building image;
and adjusting the three-dimensional structure model according to the building characteristic data to generate the virtual building.
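The two steps above can be sketched with a toy dict-based model: the extracted building feature data is applied to every component of the untextured structure model. Real pipelines operate on mesh and texture assets; all names and the dict representation here are illustrative assumptions.

```python
def apply_building_features(structure_model, building_features):
    # Assign the extracted material/texture to every component of the
    # untextured ("white") three-dimensional structure model.
    refined = {}
    for component, geometry in structure_model.items():
        refined[component] = {
            "geometry": geometry,
            "material": building_features["material"],
            "texture": building_features["texture"],
        }
    return refined

# Illustrative white model and feature data extracted from a building image.
white_model = {"roof": "corner-tipped", "column": "cylinder"}
features = {"material": "wood", "texture": "carved-lattice"}
virtual_building = apply_building_features(white_model, features)
```

The geometry of each component is left unchanged, matching the requirement that the refined building keep the structure of the three-dimensional structure model.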
The building characteristic data in the application can represent the building characteristics of the building type to which the building image belongs, and the corresponding building type can be determined through the building characteristic data. The following are the building characteristics that each building type of building may have:
Building characteristics of Chinese ancient architecture: the building features of ancient Chinese buildings generally include four corner-tipped roofs, carved ornamental eave corners, columns or square posts in buildings, tall hallways, spacious courtyards, carved ornamental walls, exquisite windows, wooden buildings, and the like.
Building characteristics of European building: building features of European buildings typically include arched windows, cylinders, tall building bodies, complex decorative lines, carved decorative elements, large-size fireplaces, and the like.
Building characteristics of American construction: building features of American type buildings typically include flat rooftops, square columns, large windows, compact molding, large area terraces, large-size fireplaces, etc.
Building characteristics of Japanese building: the architectural features of japanese construction typically include inclined roofs, rectangular posts, small windows, simple molding, and the like.
Because the building type of the building image is the same as that of the virtual building, the building feature data extracted from the building image is exactly the building feature data required by the virtual building. Based on the building feature data and the three-dimensional structure model built for the virtual building, a virtual building can be generated that has the building style indicated by the building feature data and a structure consistent with the three-dimensional structure model, i.e., the refined virtual building.
Therefore, the generated thinned virtual building meets the customized structure, has the building style of the belonging building type, and greatly meets the development requirement.
In an alternative embodiment, the step of "extracting building feature data from the building image" may include the steps of:
and extracting material data and texture data from the building image, and determining the material data and the texture data as the building characteristic data.
Material refers to surface characteristics of an object such as texture, hardness and roughness, and is an important component of the object's surface appearance. Materials can be classified into natural materials, including wood, stone, metal and glass, and artificial materials, including plastic, rubber, leather and cloth. Texture refers to surface characteristics of an object such as grain and pattern. Textures can likewise be classified into natural textures, including the natural grain of wood, stone, metal and glass, and artificial textures, including the grain and patterns of plastic, rubber, leather and cloth. Together, material and texture can simulate the real appearance of an object, making the object model more vivid and lifelike.
Material mainly describes characteristics such as the texture, hardness and roughness of an object's surface, while texture mainly describes characteristics such as its grain and patterns. Material leans toward a property, such as a wood property or a marble property, whereas texture is typically a map, such as a wood texture map or a marble texture map. In general, texture is static and surface-level: it does not change with the external environment (such as illumination). Material is dynamic and intrinsic: it responds to changes in the external environment, thereby providing a more realistic feel.
In this application, the material data and texture data can be extracted from the building image, and the extracted material data and texture data are applied to the three-dimensional structure model to generate the refined virtual building.
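As a concrete illustration, the extract-and-apply step above can be sketched as follows. This is a minimal, hypothetical stand-in: the embodiment does not specify an extraction algorithm, so a mean colour is used as a crude "material" and a centre crop as the "texture", and all function names are illustrative.

```python
def extract_building_features(image):
    """Return (material_data, texture_data) from a building image.

    image: 2D grid of (r, g, b) tuples.
    material_data: average surface colour, a crude stand-in for material.
    texture_data: a small tile cropped from the image centre, reusable
    as a texture map on the three-dimensional structure model.
    """
    h, w = len(image), len(image[0])
    total = [0, 0, 0]
    for row in image:
        for (r, g, b) in row:
            total[0] += r; total[1] += g; total[2] += b
    material = tuple(c // (h * w) for c in total)   # mean RGB
    cy, cx = h // 2, w // 2
    texture = [row[cx - 1:cx + 1] for row in image[cy - 1:cy + 1]]  # 2x2 tile
    return material, texture

def apply_features(model_faces, material, texture):
    """Assign the extracted material and texture to every face of the model."""
    return [{"face": f, "material": material, "texture": texture}
            for f in model_faces]
```

In a real pipeline the extractor would be a learned network and the application step would set engine material/texture slots, but the data flow (image to feature data to model surfaces) is the one described above.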
In this way, the virtual building can be given the material and texture of a building of the same type, so that it conforms to the corresponding building type. For example, when the virtual building is of the ancient-temple type, a virtual building with the same material and texture as the White Horse Temple can be generated from images of the White Horse Temple. When such a virtual building is applied to the game scene image, the player sees a more realistic ancient-temple-type virtual building, which improves the player's immersion and game experience.
Optionally, because the three-dimensional structure model is a full 360-degree three-dimensional model, a full 360-degree refined virtual building can be generated from the three-dimensional structure model and a building image of a building of the same type as the virtual building. When a building image of the refined virtual building is needed, it can be obtained by rotating the refined virtual building and taking a screenshot.
It can be understood that a building image of the refined virtual building at any viewing angle can be obtained in this application. The viewing angle is the angle from which the virtual building is observed; the visual view presented differs at different viewing angles. In practical applications, the viewing angles may include, but are not limited to: a top view of the virtual building from above, a front view from directly in front, a view from directly to the left, a view from directly to the right, and the like. In the embodiments of this application, the first viewing angle may be any viewing angle from which the virtual building is observed.
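The rotate-and-screenshot capture described above can be sketched as follows; `render_screenshot` is a hypothetical placeholder for the engine's real capture call, and the angle table is an assumed mapping from named viewing angles to rotations.

```python
# Named viewing angles as (yaw, pitch) rotations of the building (assumed).
VIEWS = {"front": (0, 0), "left": (90, 0), "right": (-90, 0), "top": (0, 90)}

def render_screenshot(building, yaw, pitch):
    # Placeholder: a real engine would rotate the model and rasterise it here.
    return {"building": building, "yaw": yaw, "pitch": pitch}

def capture_views(building, views=VIEWS):
    """Rotate the refined virtual building to each named angle and capture
    a building image (screenshot) of it at that viewing angle."""
    return {name: render_screenshot(building, yaw, pitch)
            for name, (yaw, pitch) in views.items()}
```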
Optionally, the step of "adjusting the three-dimensional structure model according to the building feature data, generating the virtual building" may include the steps of:
Obtaining a visual angle image corresponding to the three-dimensional structure model;
and adjusting the visual angle image according to the building characteristic data to generate an image of the virtual building under the corresponding visual angle.
Because the three-dimensional structure model is a three-dimensional structure, directly using it together with a building image of the same building type to generate the refined virtual building usually incurs high performance consumption. Since what this application ultimately generates is an image of the refined virtual building, to reduce performance consumption the viewing-angle image corresponding to the three-dimensional structure model can first be acquired, and then that viewing-angle image adjusted according to the building characteristic data, avoiding the cost of adjusting the entire three-dimensional structure model.
In this application, a refined virtual building corresponding to a certain viewing angle can be generated from the viewing-angle image of the three-dimensional structure model at that angle and the building image. For example, from the front-view image of the three-dimensional structure model and the building image, a virtual building can be generated that has the structure of that front-view image and the building features of the building image; likewise, from the top-view image and the building image, a virtual building can be generated that has the structure of the top-view image and the building features of the building image.
In a specific embodiment, the viewing-angle image of the three-dimensional structure model at the viewing angle required for the virtual building in the game scene image can be obtained, so that an image of the refined virtual building at that viewing angle can be generated based on the viewing-angle image and the building image. In this way, the image of the virtual building presented in the game scene image has the specified structure and the same building type as the same-type building.
Thus, from the three-dimensional structure model and the building image corresponding to each viewing angle, the image of the virtual building at each viewing angle can be refined.
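As a minimal sketch of the performance-saving shortcut described above, one can adjust a single 2D viewing-angle image with the building characteristic data (here reduced to a single material colour) instead of reworking the whole three-dimensional model; the blending rule is purely illustrative.

```python
def adjust_view_image(view_image, material_rgb):
    """Tint every pixel of a 2D viewing-angle image toward the material
    colour extracted from the building image, producing the image of the
    virtual building at that viewing angle."""
    out = []
    for row in view_image:
        out.append([tuple((p + m) // 2 for p, m in zip(px, material_rgb))
                    for px in row])
    return out
```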
Alternatively, the present application may refine a virtual building by means of an artificial intelligence model, so the step of "generating the virtual building from the three-dimensional structure model and the building image" may be achieved by:
inputting the building image into a building generation model to obtain an output image, and adjusting model parameters of the building generation model based on the output image and the building image to obtain the trained building generation model;
and inputting the model image corresponding to the three-dimensional structure model into the trained building generation model, so that the trained building generation model outputs the image of the virtual building.
It may be understood that the building generation model in this application may be a separate model trained for each building type, each capable of generating buildings of that type, or a single model trained to generate buildings of every type; this application does not specifically limit this.
For example, for the ancient-temple type, a building generation model for generating ancient-temple-type buildings can be trained; for the Tang-dynasty building type, a building generation model for generating Tang-dynasty-type buildings can be trained. Alternatively, a single building generation model capable of generating multiple types of buildings, including the ancient-temple and Tang-dynasty building types, can be trained.
The following description takes the per-type case, in which a building generation model capable of generating buildings of one building type is trained for each building type, and illustrates it by training a building generation model for generating Miao-village-style buildings:
First, a large number of sample building images can be selected. The sample building images are the building images obtained in this application that have the same building type as the virtual building; here they are images of various Miao-village buildings and may include different Miao-village buildings. In this way, a dedicated building generation model can be trained for the building type of the virtual building to be generated, using building images of that type.
Second, the large number of sample building images can be divided into a training set, a validation set and a test set. The training set is used to train the parameters of the building generation model; the validation set is used to validate the model's performance during training and to adjust its hyperparameters according to the validation results; the test set is used to evaluate the generalization performance of the trained model, which can be understood as its accuracy on new samples. It should be noted that, for a given model, the training set, validation set and test set must not intersect. In model training, the validation set and test set are important tools for evaluating model performance. They differ in that the validation set is the sample set used during training to tune hyperparameters and validate performance, while the test set is the sample set used to evaluate generalization on unknown data. Using a validation set and a test set helps prevent overfitting and thereby improves the generalization performance of the model.
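The three-way sample split described above can be sketched as follows; the 8:1:1 ratio and the function name are assumptions, since the embodiment does not fix them. The disjointness requirement is checked explicitly.

```python
import random

def split_samples(sample_ids, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Shuffle the sample building images and split them into disjoint
    training, validation and test sets."""
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train, val = ids[:n_train], ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    # The three sets must have no intersection, as required above.
    assert not (set(train) & set(val)) and not (set(train) & set(test)) \
        and not (set(val) & set(test))
    return train, val, test
```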
Then the building generation model is trained on the training set, its performance is validated on the validation set and the hyperparameters are adjusted accordingly, and its generalization is checked on the test set.
The specific training process on the training set is as follows: input the training set into the building generation model to obtain output images, and adjust the model parameters of the building generation model according to the difference between the output images and the building type of the input training samples, obtaining the trained building generation model.
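The parameter-adjustment loop just described can be illustrated with a deliberately tiny stand-in. A real building generation model is a deep generative network, but the control flow (compare the output with the input sample, nudge the parameters to reduce the difference) is the same; here the entire "model" is one scalar and each "image" is one number.

```python
def train_building_generator(targets, lr=0.1, epochs=200):
    """Toy training loop: repeatedly compare the model's output against
    each training sample and adjust the parameter to shrink the error."""
    param = 0.0                      # stand-in for all model parameters
    for _ in range(epochs):
        for target in targets:
            output = param           # "generated image" (one value here)
            error = output - target  # difference vs. the training sample
            param -= lr * error      # adjust model parameters
    return param
```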
It can be appreciated that, when training the building generation model in this application, building images corresponding to each viewing angle can be used to train a building generation model capable of generating images of the virtual building at each corresponding viewing angle.
After the building generation model is trained, the viewing-angle image of the three-dimensional structure model at a certain viewing angle can be input into the trained model for the specific building type, and the trained model generates, according to the structure of that viewing-angle image, an image of the refined virtual building that has that structure at that viewing angle and belongs to the specific building type.
Therefore, compared with conventional manual building refinement, the building generation model can efficiently generate an image of a refined virtual building that has a customized structure and conforms to the modeling of the required building type, which improves building refinement efficiency and reduces labor and time costs.
Optionally, the refinement for the virtual building in the method for generating a game scene image provided by the embodiment of the present application may further include the following steps:
and carrying out component adjustment on the virtual building to obtain the adjusted virtual building, so that the realism of each component corresponding to the adjusted virtual building is within a preset range.
In this application, the refined virtual building can be component-adjusted by the building generation model, or manually, so that each building component of the virtual building can be adjusted in a targeted way. Building components are the various parts of a building and generally include: walls, floor slabs, beams, columns, stairs, balconies, roofs, doors, windows, and the like.
Among them, the wall is an integral part of the building and can be used to separate rooms and support the building's weight. Floor slabs are the floors of a building and can bear its weight. Beams are the horizontal members that help support the building's weight, and columns are the vertical members that do the same. Stairs connect different floors of a building; balconies expand the building's space; the roof protects the building from rain and sunlight; and doors and windows provide ventilation, lighting and access.
In a specific embodiment, the building components of the refined virtual building are adjusted so that the realism of each adjusted building component falls within a preset range. The realism of a building component can be judged by a building discriminator, which learns characteristics such as the appearance, structure, material and texture of existing buildings and uses these characteristics to judge realism.
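A sketch of the discriminator-driven adjustment described above, under the assumption that "realism within a preset range" means the discriminator's score must reach a threshold; the scoring rule and field names are placeholders for a trained discriminator network.

```python
REALISM_RANGE = (0.8, 1.0)   # assumed preset range for component realism

def discriminator_score(component):
    # Stand-in scoring rule; a trained building discriminator would judge
    # appearance, structure, material and texture here.
    return min(1.0, component.get("detail", 0.0))

def adjust_until_realistic(component, lo=REALISM_RANGE[0], max_rounds=20):
    """Re-adjust a building component until its realism score falls
    inside the preset range (or a round limit is hit)."""
    for _ in range(max_rounds):
        if discriminator_score(component) >= lo:
            break
        component["detail"] = component.get("detail", 0.0) + 0.1
    return component
```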
Specifically, the virtual building may be a national-style ancient building, and the step of "performing component adjustment on the virtual building" may include at least one of the following:
adjusting the curvature of the wooden column component in the virtual building so that the wooden column component has curvature;
adjusting the color of each component in the virtual building so that the color of each component conforms to the colors of national-style ancient buildings;
adjusting the roof component in the virtual building so that the roof component presents an aged appearance.
In practice, the components may include, but are not limited to, at least one of a wooden column component, a roof component and a wall component. Because a national-style ancient building in the real world presents a certain aged quality (its wooden columns usually have some curvature rather than being perfectly straight, and some of its roof tiles may be broken or missing), when the virtual building is a national-style ancient building, the refined virtual building can be adjusted as follows so that its visual effect in the game scene image closely matches that of a real national-style ancient building:
Specifically, where the components include a wooden column component, the wooden column component of the refined virtual building can be adjusted to give it curvature. The refined virtual building as a whole is given an aged treatment: it need not be perfectly level, wooden boards may curve and undulate, and the timber need not be perfectly straight. The colors of the components are adjusted so that they are not overly bright and conform to the colors of national-style ancient buildings. Where the components include a roof component, the roof component can be adjusted to present a degree of aging and a weathered, wind-and-rain-worn appearance.
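The three adjustments above can be sketched as follows; every field name and threshold is illustrative rather than taken from the embodiment.

```python
def adjust_components(building):
    """Apply national-style ancient-building adjustments to a list of
    components, each a dict with at least 'kind' and 'color' fields."""
    for comp in building:
        if comp["kind"] == "wood_column":
            # Wooden columns get some curvature rather than staying straight.
            comp["curvature"] = max(comp.get("curvature", 0.0), 0.02)
        # Mute over-bright colours toward ancient-building tones by
        # clamping each channel.
        comp["color"] = tuple(min(c, 200) for c in comp["color"])
        if comp["kind"] == "roof":
            comp["weathering"] = 0.5   # aged, wind-and-rain-worn look
    return building
```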
Therefore, the adjusted virtual building is more realistic and natural, and when a game scene constructed from the game scene image generated with the adjusted virtual building is applied to a virtual game, the user's immersion and game experience can be further improved.
It can be understood that the components of every building in the game scene image can be refined in the manner described above, yielding a game scene image in which each building is refined, which improves the fineness and quality of the game scene image.
The following describes building-component refinement in the game-scene-image generation method provided in the embodiments of this application with reference to Figs. 6-a, 6-b, 6-c, 7, 8 and 9:
Figs. 6-a, 6-b and 6-c are exemplary diagrams of the building images used for building-component refinement in the game-scene-image generation method provided in an embodiment of this application. As can be seen, Figs. 6-a, 6-b and 6-c are building images of the same building type at different viewing angles.
Fig. 7 is an exemplary diagram of an image of the three-dimensional structure model built for building-component refinement in the game-scene-image generation method provided in an embodiment of this application. A refined virtual building as shown in Fig. 8 can be generated from the building images illustrated in Figs. 6-a, 6-b and 6-c and the three-dimensional structure model image illustrated in Fig. 7.
Fig. 8 is an exemplary diagram of an image of the refined virtual building generated in the game-scene-image generation method according to an embodiment of this application. As can be seen, the viewing angle of the refined virtual building image in Fig. 8 is the same as that of the three-dimensional structure model image in Fig. 7; the image in Fig. 8 has the structure of the three-dimensional structure model in Fig. 7 and the building characteristics of the building images illustrated in Figs. 6-a, 6-b and 6-c.
Corresponding to the method for generating a game scene image provided in the first embodiment of the present application, the second embodiment of the present application further provides a device for generating a game scene image, as shown in fig. 9, where the device 900 for generating a game scene image includes:
a determining unit 901, configured to determine reference scene images that are respectively matched with each piece of sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
an obtaining unit 902, configured to obtain description information corresponding to the scene requirement information, where the description information is used to describe an environment style corresponding to a game scene image to be generated;
a generating unit 903, configured to generate a scene image set according to the description information and the reference scene image;
and the stitching unit 904 is configured to select a plurality of images from the scene image set to stitch, so as to obtain a game scene image reflecting the scene requirement information.
Optionally, the stitching unit 904 is configured to: determining a base image from the scene image set according to the key information and composition information required by the game scene image to be generated, wherein the base image is used for displaying the key information and composition information of the game scene image to be generated; determining a first image of a building component meeting at least one of the sub-requirement information from images of the set of scene images other than the base image; and splicing the first image with the base image to obtain a game scene image reflecting the scene demand information.
Optionally, building layouts between images in the set of scene images correspond; the splicing unit 904 is specifically configured to: and replacing a local image area corresponding to the building component meeting the sub-requirement information in the first image to an area corresponding to the base image, and generating a game scene image reflecting the scene requirement information.
Optionally, the generating unit 903 is configured to: determining characteristic data conforming to the sub-requirement information in the reference scene image; carrying out feature fusion on the style feature data of the environment style and the style feature data of the building style described by the description information and the feature data to obtain fused scene feature data; a set of scene images is generated comprising a plurality of scene images based on the scene feature data.
Optionally, the acquiring unit 902 is configured to: determining an environmental style descriptor and a building style descriptor in the scene demand information; the ambient style descriptor and the architectural style descriptor are determined as the description information of the game scene image to be generated.
The game scene image generating apparatus 900 further includes a building refinement unit for: aiming at a virtual building in a game scene image, acquiring a three-dimensional structure model corresponding to the virtual building; acquiring a building image with the same building type as the virtual building; and generating the virtual building according to the three-dimensional structure model and the building image.
Optionally, the building refinement unit is specifically configured to: extracting building characteristic data from the building image; and adjusting the three-dimensional structure model according to the building characteristic data to generate the virtual building.
Optionally, the building refinement unit is further specifically configured to: obtaining a visual angle image corresponding to the three-dimensional structure model; and adjusting the visual angle image according to the building characteristic data to generate an image of the virtual building under the corresponding visual angle.
Optionally, the building refinement unit is further specifically configured to: inputting the building image into a building generation model to obtain an output image, and adjusting model parameters of the building generation model based on the output image and the building image to obtain the trained building generation model; and inputting the model image corresponding to the three-dimensional structure model into the trained building generation model, so that the trained building generation model outputs the image of the virtual building.
Optionally, the building refinement unit is further specifically configured to: and extracting material data and texture data from the building image, and determining the material data and the texture data as the building characteristic data.
Optionally, the building refinement unit is further specifically configured to: and carrying out component adjustment on the virtual building to obtain the adjusted virtual building, so that the realism of each component corresponding to the adjusted virtual building is within a preset range.
Optionally, the virtual building is a national-style ancient building; the building refinement unit is further specifically configured to: adjust the curvature of the wooden column component in the virtual building so that the wooden column component has curvature; adjust the color of each component in the virtual building so that the color of each component conforms to the colors of national-style ancient buildings; and adjust the roof component in the virtual building so that the roof component presents an aged appearance.
Corresponding to the method for generating a game scene image provided in the first embodiment of the present application, the third embodiment of the present application further provides an electronic device for generating a game scene image. As shown in fig. 10, the electronic device 1000 includes: a processor 1001; and a memory 1002 for storing a program of the game-scene-image generation method. After the device is powered on and the processor runs the program, the following steps are performed:
Determining reference scene images which are respectively matched with all sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
acquiring description information corresponding to the scene demand information, wherein the description information is used for describing an environment style corresponding to a game scene image to be generated;
generating a scene image set according to the description information and the reference scene image;
and selecting a plurality of images from the scene image set to splice to obtain the game scene image reflecting the scene demand information.
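The four steps the device performs can be sketched end to end; every helper and data shape below is a hypothetical placeholder for the corresponding stage (reference matching, description extraction, image-set generation, and stitching).

```python
def generate_game_scene_image(scene_requirements, library):
    """Minimal end-to-end sketch of the four-step method."""
    # 1. Match a reference scene image to each piece of sub-requirement
    #    information (here a simple lookup in an image library).
    refs = {sub: library[sub] for sub in scene_requirements["subs"]}
    # 2. Obtain the description information (environment style).
    description = scene_requirements["style"]
    # 3. Generate a scene image set from the description and references
    #    (strings stand in for generated images).
    image_set = [f"{description}:{r}" for r in refs.values()]
    # 4. Select several images from the set and stitch them into the
    #    game scene image reflecting the scene requirement information.
    return " | ".join(sorted(image_set))
```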
In correspondence with the method for generating a game scene image provided in the first embodiment of the present application, a fourth embodiment of the present application provides a computer-readable storage medium storing a program of the method for generating a game scene image, the program being executed by a processor to perform the steps of:
determining reference scene images which are respectively matched with all sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
acquiring description information corresponding to the scene demand information, wherein the description information is used for describing an environment style corresponding to a game scene image to be generated;
Generating a scene image set according to the description information and the reference scene image;
and selecting a plurality of images from the scene image set to splice to obtain the game scene image reflecting the scene demand information.
It should be noted that, for the detailed descriptions of the apparatus, the electronic device, and the computer readable storage medium provided in the second embodiment, the third embodiment, and the fourth embodiment of the present application, reference may be made to the related descriptions of the first embodiment of the present application, and no further description is given here.
While the preferred embodiment has been described, it is not intended to limit the invention thereto, and any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present invention, so that the scope of the present invention shall be defined by the claims of the present application.
In one typical configuration, the node devices in the blockchain include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage media, or any other non-transmission media that can be used to store information accessible by a computing device. Computer readable media, as defined herein, do not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (15)

1. A method of generating a game scene image, the method comprising:
determining reference scene images which are respectively matched with all sub-requirement information in the scene requirement information; the sub-demand information is demand information corresponding to each region in the game scene image to be generated;
acquiring description information corresponding to the scene demand information, wherein the description information is used for describing an environment style corresponding to a game scene image to be generated;
generating a scene image set according to the description information and the reference scene image;
and selecting a plurality of images from the scene image set to splice to obtain the game scene image reflecting the scene demand information.
2. The method of claim 1, wherein selecting a plurality of images from the scene image set and stitching them to obtain a game scene image reflecting the scene requirement information comprises:
determining a base image from the scene image set according to key information and composition information required by the game scene image to be generated;
determining, from the images in the scene image set other than the base image, a first image containing a building component that meets at least one piece of the sub-requirement information; and
stitching the first image with the base image to obtain a game scene image reflecting the scene requirement information.
3. The method of claim 2, wherein the building layout is consistent across the images in the scene image set; and
wherein stitching the first image with the base image to obtain a game scene image reflecting the scene requirement information comprises:
replacing the corresponding area of the base image with the local image area, in the first image, of the building component that meets the sub-requirement information, to generate a game scene image reflecting the scene requirement information.
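Outside the claim language, the local-area replacement of claim 3 can be sketched as a mask-based pixel transplant. The function name, array layout, and mask are illustrative assumptions, not part of the claim; because the images in the scene image set share one building layout, aligned H×W×C arrays and a single boolean mask suffice.

```python
import numpy as np

def stitch_by_region(base_image, first_image, region_mask):
    # Both images come from the same scene image set, so they share one
    # building layout and one shape; the mask marks the local image area
    # of the building component to transplant into the base image.
    if base_image.shape != first_image.shape:
        raise ValueError("scene-set images must share one layout")
    result = base_image.copy()
    result[region_mask] = first_image[region_mask]
    return result

# Toy 4x4 RGB images: copy a 2x2 component area from `first` into `base`.
base = np.zeros((4, 4, 3), dtype=np.uint8)
first = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
stitched = stitch_by_region(base, first, mask)
```

The boolean mask broadcasts over the channel axis, so only the marked building-component pixels are replaced while the base image itself is left untouched.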
4. The method of claim 1, wherein the description information is further used to describe a building style of a virtual building in the game scene image to be generated, and wherein generating a scene image set according to the description information and the reference scene images comprises:
determining feature data in the reference scene images that conforms to the sub-requirement information;
fusing the style feature data of the environment style and of the building style described by the description information with the determined feature data to obtain fused scene feature data; and
generating, based on the scene feature data, a scene image set comprising a plurality of scene images.
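The claim does not fix a fusion operator; a weighted sum followed by renormalisation is one minimal instance. The function name, weights, and toy feature vectors below are assumptions for illustration only.

```python
import numpy as np

def fuse_features(region_feat, env_style_feat, building_style_feat,
                  weights=(0.5, 0.25, 0.25)):
    # Weighted sum of the feature data that conforms to the
    # sub-requirement information with the two style feature vectors,
    # followed by L2 renormalisation of the fused scene feature data.
    w_r, w_e, w_b = weights
    fused = w_r * region_feat + w_e * env_style_feat + w_b * building_style_feat
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

region = np.array([1.0, 0.0, 0.0, 0.0])      # reference-scene feature data
env = np.array([0.0, 1.0, 0.0, 0.0])         # environment-style feature data
building = np.array([0.0, 0.0, 1.0, 0.0])    # building-style feature data
scene_feat = fuse_features(region, env, building)
```

In practice the fused vector would condition an image generator; here it only shows that the reference-scene features dominate under the chosen weights.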
5. The method of claim 1, wherein acquiring the description information corresponding to the scene requirement information comprises:
determining an environment style descriptor and a building style descriptor in the scene requirement information; and
determining the environment style descriptor and the building style descriptor as the description information of the game scene image to be generated.
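The claim only requires that both kinds of descriptor be identified; one trivially simple realisation is vocabulary lookup over tokenised requirement text. The vocabularies and function below are entirely hypothetical.

```python
# Hypothetical style vocabularies; the claim does not define how
# descriptors are recognised, only that both kinds are extracted.
ENV_STYLES = {"snowy", "desert", "rainforest", "seaside"}
BUILDING_STYLES = {"tang-dynasty", "gothic", "steampunk"}

def extract_description_info(requirement_tokens):
    # Partition requirement tokens into environment style descriptors
    # and building style descriptors; together they form the
    # description information of the scene image to be generated.
    env = [t for t in requirement_tokens if t in ENV_STYLES]
    bld = [t for t in requirement_tokens if t in BUILDING_STYLES]
    return {"environment_style": env, "building_style": bld}

info = extract_description_info(["snowy", "village", "tang-dynasty"])
```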
6. The method of any one of claims 1 to 5, further comprising:
for a virtual building in the game scene image, acquiring a three-dimensional structure model corresponding to the virtual building;
acquiring a building image of the same building type as the virtual building; and
generating the virtual building according to the three-dimensional structure model and the building image.
7. The method of claim 6, wherein generating the virtual building according to the three-dimensional structure model and the building image comprises:
extracting building feature data from the building image; and
adjusting the three-dimensional structure model according to the building feature data to generate the virtual building.
8. The method of claim 7, wherein adjusting the three-dimensional structure model according to the building feature data to generate the virtual building comprises:
acquiring a view-angle image corresponding to the three-dimensional structure model; and
adjusting the view-angle image according to the building feature data to generate an image of the virtual building at the corresponding view angle.
9. The method of claim 6, wherein generating the virtual building according to the three-dimensional structure model and the building image comprises:
inputting the building image into a building generation model to obtain an output image, and adjusting model parameters of the building generation model based on the output image and the building image to obtain a trained building generation model; and
inputting a model image corresponding to the three-dimensional structure model into the trained building generation model, so that the trained building generation model outputs an image of the virtual building.
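The two-stage flow of claim 9 (adjust parameters from the output/input difference, then run the model image through the trained model) can be illustrated with a toy linear stand-in for the building generation model; a real implementation would use a generative network, and every name and number here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: "train" a toy linear building generation model so that its
# output for the building image reproduces that image, adjusting the
# parameters W from the difference between output and input.
building_image = rng.random(16)          # stand-in for a real photograph
W = np.zeros((16, 16))                   # toy model parameters
for _ in range(500):
    output_image = W @ building_image
    grad = np.outer(output_image - building_image, building_image)
    W -= 0.1 * grad                      # gradient step on the squared error

# Stage 2: feed the model image of the 3D structure through the
# trained model to obtain the virtual-building image.
model_image = rng.random(16)             # rendering of the 3D structure model
virtual_building_image = W @ model_image
```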
10. The method of claim 7, wherein extracting building feature data from the building image comprises:
extracting material data and texture data from the building image, and determining the material data and the texture data as the building feature data.
11. The method of claim 6, further comprising:
performing component adjustment on the virtual building to obtain an adjusted virtual building, so that the degree of realism of each component of the adjusted virtual building falls within a preset range.
12. The method of claim 11, wherein the virtual building is a Chinese-style ancient building, and the component adjustment of the virtual building comprises at least one of the following:
adjusting the curvature of a wooden pillar component in the virtual building so that the wooden pillar component exhibits a bend;
adjusting the color of each component in the virtual building so that the color of each component conforms to the colors of Chinese-style ancient architecture; and
adjusting a roof component in the virtual building so that the roof component presents an aged appearance.
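One plausible way to give a straight wooden pillar the slight bow of aged timber is a sine-profile displacement of its vertices, keeping both ends fixed; the vertex layout, axis convention, and amplitude below are assumptions, not taken from the claim.

```python
import numpy as np

def bend_pillar(vertices, max_offset=0.05):
    # Displace each vertex along x with a sine profile over the pillar's
    # height (y), so the straight column acquires a gentle bow while its
    # two ends stay fixed.
    y = vertices[:, 1]
    t = (y - y.min()) / (y.max() - y.min())
    bent = vertices.copy()
    bent[:, 0] += max_offset * np.sin(np.pi * t)
    return bent

# A straight pillar sampled at 5 heights along the y axis.
pillar = np.array([[0.0, h, 0.0] for h in np.linspace(0.0, 2.0, 5)])
bent = bend_pillar(pillar)
```

The sine profile peaks at mid-height, so the maximum lateral offset equals `max_offset` there and vanishes at both ends.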
13. A game scene image generation apparatus, the apparatus comprising:
a determining unit, configured to determine reference scene images respectively matching each piece of sub-requirement information in scene requirement information, wherein the sub-requirement information is the requirement information corresponding to each region in a game scene image to be generated;
an acquiring unit, configured to acquire description information corresponding to the scene requirement information, wherein the description information is used to describe an environment style corresponding to the game scene image to be generated;
a generating unit, configured to generate a scene image set according to the description information and the reference scene images; and
a stitching unit, configured to select a plurality of images from the scene image set and stitch them to obtain a game scene image reflecting the scene requirement information.
14. An electronic device, comprising:
a processor; and
a memory for storing a data processing program, wherein, after the electronic device is powered on, the processor runs the program to perform the method of any one of claims 1-12.
15. A computer-readable storage medium storing a data processing program which, when run by a processor, performs the method of any one of claims 1-12.
CN202311533516.8A 2023-11-16 2023-11-16 Game scene image generation method and device and electronic equipment Pending CN117689849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311533516.8A CN117689849A (en) 2023-11-16 2023-11-16 Game scene image generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN117689849A true CN117689849A (en) 2024-03-12

Family

ID=90136145



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination