CN117717785A - Game scene component generation method, device, storage medium and electronic equipment

Game scene component generation method, device, storage medium and electronic equipment

Info

Publication number: CN117717785A
Application number: CN202311770775.2A
Authority: CN (China)
Prior art keywords: scene, scene component, information, component, control
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 初小宇
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a game scene component generation method, a game scene component generation device, a storage medium and an electronic device, and relates to the technical field of computers and games. The method comprises the following steps: in response to a trigger condition being met in a game program, determining information of a first scene component combination according to first prompt information, the first scene component combination being a scene component combination to be generated corresponding to the first prompt information; in the case of displaying a game editing scene, providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination, the first scene component control being configured to generate a corresponding scene component in the game editing scene in response to and according to a first trigger operation; and in response to a second trigger operation on the second scene component control, generating the first scene component combination in the game editing scene according to the information of the first scene component combination. The method and the device improve the generation efficiency of game scene components and reduce resource overhead.

Description

Game scene component generation method, device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer and game technologies, and in particular, to a game scene component generating method, a game scene component generating device, a computer readable storage medium, and an electronic device.
Background
A game scene is typically made up of a number of scene components (or models), which are the people or objects in the scene. Modeling and editing these people or objects is one of the main tasks in building a game scene.
At present, modeling and editing in a game scene depend largely on manual operations by game developers. For example, the generation of a single scene component generally involves manually editing its appearance, adjusting its size and position, rendering colors and textures, and so on, before the required scene component is finally produced. Such an approach incurs high labor and time costs and is inefficient.
Disclosure of Invention
The present disclosure provides a game scene component generating method, a game scene component generating device, a computer-readable storage medium, and an electronic device, so as to solve the problem of low editing efficiency of a game scene component at least to some extent.
According to a first aspect of the present disclosure, there is provided a game scene component generating method, the method comprising: in response to a trigger condition being met in a game program, determining information of a first scene component combination according to first prompt information; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components; in the case of displaying a game editing scene, providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination; the first scene component control is configured to generate a corresponding scene component in the game editing scene in response to and according to a first trigger operation; and in response to a second trigger operation on the second scene component control, generating the first scene component combination in the game editing scene according to the information of the first scene component combination.
According to a second aspect of the present disclosure, there is provided a game scene component generating apparatus, the apparatus comprising: a scene component combination information determining module, configured to determine information of a first scene component combination according to first prompt information in response to a trigger condition being met in a game program; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components; a scene component control providing module, configured to provide a plurality of first scene component controls and provide a second scene component control corresponding to the first scene component combination in the case of displaying a game editing scene; the first scene component control is configured to generate a corresponding scene component in the game editing scene in response to and according to a first trigger operation; and a scene component combination generation processing module, configured to generate the first scene component combination in the game editing scene according to the information of the first scene component combination in response to a second trigger operation on the second scene component control.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the game scene component generating method of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the game scene component generating method of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
in one aspect, a scheme for quickly and efficiently generating game scene components is provided: the information of the first scene component combination can be determined based on the first prompt information and a corresponding second scene component control is provided, so that the user can quickly generate the first scene component combination in the game editing scene by performing the second trigger operation on the second scene component control, without a large amount of manual editing work, which reduces labor and time costs and improves editing efficiency. In another aspect, new scene components are generated by combining scene components that already exist in the game program, which breaks through the limitation of the existing scene component library, allows a large number of diverse first scene component combinations to be generated, and enriches the game content. In yet another aspect, combining scene components does not require executing a complete modeling process, which helps control the resource size of the generated first scene component combination, thereby reducing the overhead of resources such as computing power and storage as well as the implementation cost of the scheme, so that the method can be applied to lightweight scenarios such as mobile games.
Drawings
Fig. 1 shows a system architecture diagram in the present exemplary embodiment;
FIG. 2 illustrates a flow chart of a game scene component generation method in the present exemplary embodiment;
FIG. 3 shows a schematic diagram of a game editing scene and a first scene component control in the present exemplary embodiment;
fig. 4 shows a schematic diagram of a setting interface of a game editing scene in the present exemplary embodiment;
FIG. 5 shows a schematic diagram of a second scenario component control in the present exemplary embodiment;
FIG. 6 illustrates a sub-flowchart of a game scene component generation method in the present exemplary embodiment;
fig. 7 shows a schematic diagram of search results corresponding to the scene component control search information in the present exemplary embodiment;
fig. 8 shows a schematic diagram of a generation list in the present exemplary embodiment;
fig. 9 shows a schematic configuration diagram of a game scene component generating device in the present exemplary embodiment;
fig. 10 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Exemplary embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings.
The drawings are schematic illustrations of the present disclosure and are not necessarily drawn to scale. Some of the block diagrams shown in the figures may be functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, or in hardware modules or integrated circuits, or in networks, processors or microcontrollers. Embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein. The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough description of embodiments of the present disclosure. However, it will be recognized by one skilled in the art that one or more of the specific details may be omitted, or other methods, components, devices, steps, etc. may be used instead of one or more of the specific details in implementing the aspects of the present disclosure.
Modeling and editing of game scene components is mainly done through manual operations by game developers, which requires high labor and time costs. With the development of artificial intelligence technology, schemes that perform game modeling with artificial intelligence models have emerged in the related art, for example, inputting descriptive words into a model that can automatically execute instructions of DCC (Digital Content Creation) software to create a model of a game scene component. However, implementing these schemes requires a large amount of computing power and storage resources; for example, the polygon count and resource size of the created scene components cannot be controlled, which makes the implementation cost of such schemes too high and makes them difficult to apply in lightweight scenarios such as mobile games.
In view of one or more of the problems described above, exemplary embodiments of the present disclosure provide a game scene component generation method.
Fig. 1 shows a system architecture diagram of the operating environment of the present exemplary embodiment. The system architecture may include a terminal device 110 and a server 120. The terminal device may be a mobile phone, a tablet computer, a personal computer, a smart wearable device, a game console, or the like; it has a display function and is capable of displaying a graphical user interface, which may include an interface of an operating system or an interface of an application program. A game program, such as a client program of an online game, is installed on the terminal device 110. When the terminal device 110 runs the game program, a game editing scene or the like may be displayed in the graphical user interface, and the user can perform editing operations. The server 120 generally refers to the background system that provides game services in the present exemplary embodiment, and may be a single server or a cluster of multiple servers. A game server program is deployed on the server 120 for performing server-side game data processing. A connection may be formed between the terminal device 110 and the server 120 through a wired or wireless communication link for data transmission. The game scene component generating method in the present exemplary embodiment may be executed by either or both of the terminal device 110 and the server 120.
In one embodiment, the game scene component generation method may be implemented and executed based on a cloud interaction system. The cloud interaction system may be the system architecture described above. Various cloud applications can run on the cloud interaction system, for example cloud games. Taking cloud games as an example, a cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the control and interaction logic in the game are completed on a cloud game server (such as the server 120), while the cloud game client (such as the terminal device 110) is used for receiving and sending data and presenting the game picture. For example, the cloud game client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, while the device that performs the information processing is the cloud game server. When playing a game or editing a game scene, the user operates the cloud game client to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as the game picture, and returns the data to the cloud game client through the network; finally, the cloud game client decodes the data and outputs the game picture.
In one embodiment, the game scene component generation method may be implemented in a stand-alone game. In this case no server needs to be deployed; the terminal device 110 installs the complete game program and executes the game scene component generating method by itself.
In one embodiment, referring to fig. 2, the game scene component generating method may include the following steps S210 to S230:
step S210, determining information of a first scene component combination according to first prompt information in response to the trigger condition being met in the game program; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components;
step S220, providing a plurality of first scene component controls and providing second scene component controls corresponding to the first scene component combinations under the condition of displaying the game editing scene; the first scene component control is configured to respond to and generate a corresponding scene component in the game editing scene according to the first trigger operation;
step S230, responding to a second triggering operation of the second scene component control, and generating a first scene component combination in the game editing scene according to the information of the first scene component combination.
In the method shown in fig. 2, in one aspect, a scheme for quickly and efficiently generating game scene components is provided: the information of the first scene component combination can be determined based on the first prompt information and a corresponding second scene component control is provided, so that the user can quickly generate the first scene component combination in the game editing scene by performing the second trigger operation on the second scene component control, without a large amount of manual editing work, which reduces labor and time costs and improves editing efficiency. In another aspect, new scene components are generated by combining scene components that already exist in the game program, which breaks through the limitation of the existing scene component library, allows a large number of diverse first scene component combinations to be generated, and enriches the game content. In yet another aspect, combining scene components does not require executing a complete modeling process, which helps control the resource size of the generated first scene component combination, thereby reducing the overhead of resources such as computing power and storage as well as the implementation cost of the scheme, so that the method can be applied to lightweight scenarios such as mobile games.
Each step in fig. 2 is described in detail below.
Referring to fig. 2, in step S210, in response to a trigger condition being satisfied in a game program, information of a first scene component combination is determined according to first prompt information; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is made up of one or more scene components.
The present exemplary embodiment provides a player-oriented game editing function. The game program of step S210 may be a game master program, and in the case of running the game master program, the user may select a specific option to enter into an editor or editing interface, using an editing function. Alternatively, the game program of step S210 may be an editing program that is matched with the game main program, which may be independently run outside the game main program, and in the case of running the editing program, the user may use the editing function.
The scene component is a person or object (or part of a person or object) in the game scene, such as a building, mechanism, vehicle, or piece of furniture in the game scene. The scene component may be an object generated, used, or edited during the game editing stage and configured to generate a corresponding virtual model during the game running stage. In a game, a scene component and its corresponding virtual model typically have the same appearance and can therefore be regarded as the same object; they are not particularly distinguished herein. In the program, the scene component and the virtual model may be stored as different types of program objects. The virtual model is configured with one of the following presentation effects: a visible presentation effect or an invisible presentation effect. That is, during the game running stage, the virtual model corresponding to a scene component may be a visible object with a specific appearance or an invisible object (for example, a "birth point", i.e., the area where a player character initially appears in the game, is a scene component that has a size and shape but is invisible in the game). In one embodiment, a scene component may refer to a generated or fully edited scene component provided in the game program, such as a scene component previously edited by a game developer or a scene component edited and saved by a player. The user does not need to edit from scratch; these scene components can be used directly in the editing interface.
In the present exemplary embodiment, a plurality of scene components may be combined into a scene component combination. For example, a block component may be combined onto the circular face of a cylinder component to form a roller-shaped mechanism component. In this way, complex and diverse assemblies of people or objects can be formed in the game. For ease of distinction, a single scene component may be referred to as a base scene component, which is not separable and may be regarded as the smallest constituent element in the game editing scene. Unless otherwise specified, references herein to a scene component may refer to a base scene component or a scene component combination.
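Purely for illustration, the relationship between base scene components and a scene component combination could be represented as in the sketch below; the class and field names are assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BaseSceneComponent:
    """Smallest, indivisible element in the game editing scene."""
    name: str
    shape: str

@dataclass
class SceneComponentCombination:
    """One or more scene components combined into a new, reusable component."""
    name: str
    parts: List[BaseSceneComponent] = field(default_factory=list)

# A block combined onto the circular face of a cylinder, forming a roller-shaped mechanism.
roller = SceneComponentCombination(
    "roller mechanism",
    [BaseSceneComponent("cylinder body", "cylinder"), BaseSceneComponent("block", "box")],
)
```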
The game program provides a function of generating a scene component from prompt information (a prompt). The prompt information is natural-language-based information, such as text information or voice information. For ease of description, this generation function may be referred to as a generator, such as an AI (artificial intelligence) generator. The trigger condition is a condition that triggers use of the generator. The first prompt information is the prompt information used when the trigger condition is met. It will be appreciated that if the generator is used in other cases, the generation process may be performed according to other prompt information; for example, as described below in the context of scene component control search information, the generation process may be performed according to second prompt information. The first prompt information and the second prompt information are obtained in different ways: the first prompt information may be determined automatically by the game program, and the second prompt information may be determined according to scene component control search information input by the user.
In the present exemplary embodiment, a new scene component is generated by combining existing scene components according to the hint information. That is, the scene components to be generated are scene component combinations. The scene component combination generated according to the first prompt message is referred to herein as a first scene component combination, and the scene component combination generated according to the second prompt message is referred to herein as a second scene component combination. The first scene component combination and the second scene component combination are composed of one or more scene components, and the scene components can be scene components existing in a game program. For ease of distinction, the scene components used to make up the first scene component combination are referred to as first target scene components, and the scene components used to make up the second scene component combination are referred to as second target scene components.
An exemplary description of how to obtain the first prompt information is provided below.
In one embodiment, before determining the information of the first scene component combination according to the first prompt information, the game scene component generating method may further include the following step:
and randomly generating first prompt information according to a preset rule.
The preset rule is a constraint on the randomly generated information, so that the randomly generated first prompt information describes a person or object rather than random information unrelated to scene components. For example, the preset rule may restrict the part of speech of the keywords in the first prompt information, restrict the number of words, and so on. Following the preset rule, better-formed random information can be generated to serve as the first prompt information.
In an embodiment, the randomly generating the first prompt message according to the preset rule may include the following steps:
randomly determining a target object type prompt word from a plurality of object type prompt words;
determining a target feature dimension from feature dimensions of a target object type corresponding to the target object type prompt word, and randomly determining a target feature prompt word from the feature prompt words of the target feature dimension;
and generating first prompt information according to the target object type prompt words and the target feature prompt words.
The object type refers to the type to which a scene component generated through prompt information belongs, and may include object types such as table, chair, sofa, cabinet, and bed. An object type prompt word is a word describing an object type, such as the name of an object type: "table", "chair", "sofa", "cabinet", "bed", and so on. For example, if scene components under the 5 object types table, chair, sofa, cabinet, and bed can be generated, 5 object type prompt words can be provided, and the game program randomly selects a target object type prompt word from them, which is equivalent to randomly determining that the object type of the scene component to be generated is the target object type.
In one embodiment, the object types may have multiple hierarchies, such as including a primary object type and a secondary object type, where one or more secondary object types may be included under the primary object type. For example, the primary object type chairs include secondary object types such as sun chairs, benches, chinese chairs, and the like, and the secondary object types are object types more subdivided under the primary object types. Based on this, the target object type hint word for each level may be determined in turn from a low level to a high level. Taking two levels as an example, the above-mentioned randomly determining the target object type prompt word from the plurality of object type prompt words may include the following steps:
randomly determining target primary object type prompt words from a plurality of primary object type prompt words;
if the target primary object type corresponding to the target primary object type prompt word comprises a plurality of secondary object types, randomly determining the target secondary object type prompt word from the secondary object type prompt words corresponding to the plurality of secondary object types.
For example, the plurality of primary object type prompt words include "table", "chair", "sofa", "cabinet", "bed", from which "chair" is randomly determined as the target primary object type prompt word, i.e., the object type of the scene component to be generated is determined to be this target primary object type. The target secondary object type prompt word may then be randomly determined from the secondary object type prompt words corresponding to the plurality of secondary object types under the target primary object type, such as "sun chair", "couch", "leisure chair", "rocking chair", for example by randomly selecting "sun chair". Two target object type prompt words are thus obtained: "chair" and "sun chair".
If the target primary object type corresponding to the target primary object type prompt word includes only one secondary object type, the secondary object type prompt word corresponding to that secondary object type is taken as the target secondary object type prompt word.
It should be understood that, as exemplified above by object types of two hierarchies, for the case where the object types are three or more hierarchies, the target object type hint words may be randomly determined in each hierarchy until the target object type hint word of the highest hierarchy is obtained. For example, if the target secondary object type corresponding to the target secondary object type prompt word includes a plurality of tertiary object types, the target tertiary object type prompt word is further randomly determined from the tertiary object type prompt words corresponding to the plurality of tertiary object types. Based on a similar manner, a target four-level object type prompt word, a target five-level object type prompt word and the like can be obtained.
A scene component under the target object type (which may be the highest-level object type if there are multiple levels of object types) may have one or more feature dimensions; a feature dimension refers to a feature or attribute of the scene component in some respect. For example, under the target object type "sun chair", the feature dimensions may include: whether it has a backrest, whether it has armrests, whether it is foldable, how many legs it has, and so on. One or more target feature dimensions may be determined from the feature dimensions under the target object type; the target feature dimensions may be selected randomly, or determined in a certain order (for example, by feature saliency: if whether there is a backrest is the most salient feature, that feature dimension is preferentially taken as the target feature dimension). Each feature dimension has one or more feature prompt words; for example, the feature dimension "whether it has a backrest" has the feature prompt words "with backrest" and "without backrest", and the feature dimension "how many legs it has" has the feature prompt words "3 legs", "4 legs", and so on. One of the feature prompt words of the target feature dimension may be randomly selected as the target feature prompt word.
The target object type prompt word and the target feature prompt word are combined, for example spliced with separators such as commas, and a start character, an end character, and the like may be added, to form the first prompt information. If target object type prompt words of multiple levels have been determined, the target object type prompt word of each level may be used in the first prompt information, or only the target object type prompt word of the highest level may be used.
In an embodiment, the generating the first prompt message according to the preset rule may further include the following steps:
and randomly determining target color prompt words from the plurality of color prompt words, and adding the target color prompt words into the first prompt information.
The color prompt words can be color names such as yellow, bright yellow, light yellow and creamy yellow, and color style words such as bright, full and elegant. One of the color prompt words can be randomly selected as a target color prompt word and added to the first prompt information, namely the first prompt information comprises description information of the color.
The method for generating the first prompt information by randomly determining the target object type prompt word, the target feature prompt word and the target color prompt word can ensure the mutual association adaptation of the keywords in the first prompt information, and is beneficial to the subsequent generation of high-quality first scene component combinations.
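As a concrete illustration of the random selection described above, the sketch below builds a first prompt from hypothetical prompt-word tables; the table contents and function name are assumptions, not the actual word lists used by the game program.

```python
import random

OBJECT_TYPES = {                 # primary object type prompt word -> secondary prompt words
    "chair": ["sun chair", "couch", "leisure chair", "rocking chair"],
    "table": ["tea table", "round table"],
}
FEATURE_DIMENSIONS = {           # feature dimension -> feature prompt words
    "backrest": ["with backrest", "without backrest"],
    "legs": ["3 legs", "4 legs"],
    "foldable": ["foldable", "not foldable"],
}
COLOR_WORDS = ["bright yellow", "light yellow", "creamy yellow", "elegant"]

def random_first_prompt() -> str:
    primary = random.choice(list(OBJECT_TYPES))              # target primary object type prompt word
    secondary = random.choice(OBJECT_TYPES[primary])         # target secondary object type prompt word
    dimension = random.choice(list(FEATURE_DIMENSIONS))      # target feature dimension
    feature = random.choice(FEATURE_DIMENSIONS[dimension])   # target feature prompt word
    color = random.choice(COLOR_WORDS)                       # target color prompt word
    # Splice with comma separators; a start/end character could also be added here.
    return ", ".join([primary, secondary, feature, color])

print(random_first_prompt())     # e.g. "chair, sun chair, with backrest, bright yellow"
```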
In one embodiment, prompt information that has produced good generation results may be selected, randomly or otherwise, from the prompt information input by other users and used as the first prompt information. For example, if the scene component combination generated from prompt information previously input by another user has been used by many users, that prompt information may be used as the first prompt information.
The first prompt information has been described above. The following describes the cases in which the trigger condition is satisfied.
In one embodiment, the determining the information of the first scene component combination according to the first prompt information in response to the trigger condition being met in the game program may include the following steps:
in response to entering the game editing scene from outside the game editing scene in the game program, determining information of the first scene component combination according to the first prompt information.
Using the editing function, the user can edit a game scene: the user may create a new game scene and edit it, or edit an existing game scene. A game editing scene is a game scene presented in an editing state. The game editing scene includes one or more scene components, which may be built in (for example, the game program may provide preset game scenes in several different styles; the user may select a preset game scene to edit, and its scene components are initially built in) or may be generated by the user through editing. A scene component is a person or object (or part of a person or object) in the game editing scene.
In the game editing scene, the user can edit the environment of the game scene, such as the background, weather, and lighting, and can edit scene components, for example adding or deleting a scene component or setting its attributes. In the case of displaying the game editing scene, a plurality of first scene component controls may be provided for generating corresponding scene components in the game editing scene in response to and according to a first trigger operation by the user. A first scene component control is a control for generating a scene component that already exists in the game program, and is different from the second scene component control and the third scene component control described below. The first scene component controls may be divided into different component types, and these component types may follow the same classification scheme or standard as the object types, or a different one. FIG. 3 illustrates a schematic diagram of a game editing scene with first scene component controls. Referring to FIG. 3, the first scene component controls may be provided in the form of a list (including a one-level or multi-level list; a two-level list is shown in FIG. 3) in which one or more levels of component types are shown. For example, "structure", "article", "environment", "mechanism", "living being", "combination" are primary component types, and each primary component type includes one or more secondary component types, such as "article" including the secondary component types "furniture", "interior decoration", "lamp", and so on. One or more first scene component controls are included under each secondary component type; for example, the secondary component type "furniture" includes the first scene component controls "log tea table", "round short table", "wooden table", "European table", and the like. The user may perform a first trigger operation, such as a click operation or a drag operation, on a first scene component control to generate the corresponding scene component in the game editing scene. For example, in FIG. 3, the user dragging the "round short table" first scene component control to a location in the game editing scene may trigger the generation of a round short table scene component at that location. The form, appearance, and so on of the first scene component control are not limited; the name of the scene component and an icon of the scene component may be displayed in the first scene component control.
In one embodiment, the first scene component control is configured with a corresponding scene component template that includes pre-edited scene component parameters. The scene component parameters may include one or more of shape, color, texture, transparency, material, size, orientation, and may include other kinds of parameters. When a user edits a scene component in a game editing scene, a scene component parameter needs to be set for the scene component. If the scene component is generated in the game editing scene through the first scene component control, the scene component parameters in the scene component template corresponding to the first scene component control can be called, and the scene component is rapidly generated according to the parameters. It can be understood that the first scene component control is a template scene component edited in advance, the scene component parameters are configured in the form of a template, and the scene component parameters are provided for the user in the form of the first scene component control, and when the user uses the first scene component control, the scene component template is applied. Thereby improving editing efficiency. Of course, after the scene component is generated, changes may be allowed to its scene component parameters, such as changing its size through a zoom operation, changing its direction through a rotate operation, changing its color, transparency, etc. through a color setting operation.
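The scene component template could be modelled roughly as follows; field names and default values are assumptions, and `spawn_from_control` stands in for whatever the editor actually does when a first scene component control is used.

```python
from dataclasses import dataclass, asdict
from typing import Dict, Tuple

@dataclass
class SceneComponentTemplate:
    """Pre-edited scene component parameters bound to a first scene component control."""
    name: str
    shape: str
    color: Tuple[int, int, int]
    texture: str
    transparency: float = 1.0
    material: str = "wood"
    size: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    orientation: Tuple[float, float, float] = (0.0, 0.0, 0.0)

def spawn_from_control(template: SceneComponentTemplate,
                       position: Tuple[float, float, float]) -> Dict:
    """Generate a scene component at 'position' by applying the control's template;
    the returned parameters can later be changed (scaled, rotated, recolored)."""
    return {"position": position, **asdict(template)}

round_table = SceneComponentTemplate("round short table", "cylinder", (150, 100, 60), "oak")
print(spawn_from_control(round_table, (12.0, 0.0, 3.5)))
```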
The game program may come with first scene component controls built in; for example, a game developer pre-edits scene component templates and configures the corresponding first scene component controls for players to use. In addition, the player may also be allowed to configure first scene component controls. For example, a player may model a scene component in the game editing scene or another editing interface and save its scene component parameters to form a scene component template, which is then configured as a first scene component control.
In one embodiment, the user may be allowed to edit and update the first scene component control, for example, the scene component template corresponding to the first scene component control may be modified, the scene component parameters therein may be modified, and the scene component template corresponding to the first scene component control may be updated after the scene component parameters are saved. And when the scene component is generated in the game editing scene through the first scene component control, the scene component parameters in the updated scene component template can be called to generate the scene component.
In one embodiment, for a scene component combination, a corresponding first scene component control may also be configured. For example, a user may combine the block components onto the circular surface of the cylinder component, compose an organization component in the form of a scroll wheel, and may configure a new first scenario component control based on scenario component parameters of the organization component. A subsequent user may generate an organ component in the game editing scene through the first scene component control. Therefore, the one-key generation scene component combination is realized, and the method is very convenient.
In one embodiment, a virtual camera may be provided in the game editing scene. The virtual camera is a shooting tool simulating a real camera in a game program, and can be used for shooting to form pictures in a game editing scene. The virtual camera can be arranged at any position in the game editing scene, and can shoot the scene at any view angle, namely, the virtual camera can have any pose, and the pose can be fixed or dynamically changed. In addition, any number of virtual cameras can be arranged in the game editing scene, and different virtual cameras can shoot different game scene pictures. For example, two virtual cameras are arranged in the game editing scene, one of the two virtual cameras is used for shooting a main picture in the game editing scene, the other virtual camera is used for shooting an auxiliary picture in the game editing scene, and when the main picture displays the front surface of a certain scene component, the auxiliary picture can display the side surface or the back surface of the scene component, and the like, so that a user can see more comprehensive information.
The game editing scene may present two different perspectives, namely an observation perspective and a game perspective, and the user may set which perspective to use in the game editing scene. For example, FIG. 4 illustrates a setting interface of the game editing scene, in which an option control for the perspective is provided, and the user can select the observation perspective or the game perspective. The observation perspective means that the game editing scene is observed from a third-person perspective; under the observation perspective, the user may not manipulate a game character in the game editing scene, but directly manipulate the virtual camera to move the viewpoint (the virtual camera itself is not displayed). The game perspective refers to observing the game editing scene from a first-person perspective; under the game perspective, the user can control a game character in the game editing scene, and the game character can be bound to a virtual camera, i.e., the positional relationship between the game character and the virtual camera is fixed. For example, the game character may be located at the focus of the virtual camera, and when the user controls the game character to move, the virtual camera moves synchronously, thereby moving the viewpoint. Of course, under the observation perspective an invisible game character may also be set in the game editing scene, which is equivalent to hiding the game character of the game perspective, and the user moves the virtual camera by moving this game character. Under either perspective, a virtual joystick, ascend and descend controls, and the like can be provided in the game editing scene, and the user can move the virtual camera or the game character by operating these controls.
In addition, other setting controls, such as a "grid display" setting control for setting a grid display mode in the game editing scene, a "lens speed" setting control for setting a moving speed of the virtual camera or a moving speed of a virtual object controlled by a player in the game running scene, and the like, may be provided in the setting interface of the game editing scene. The present disclosure is not particularly limited.
Entering the game editing scene from outside the game editing scene refers to entering the game editing scene from a scene or interface that is not related to editing the game map, for example entering the game editing scene from a game lobby or from a character information interface. In one embodiment, the game program includes a plurality of functional modules, at least one of which is an editor for running and providing the game editing scene; entering the game editing scene from outside may then refer to entering the editor from another functional module in the game program. In one embodiment, if the user jumps from the game editing scene to another interface (such as the game lobby) without closing it, the game editing scene may keep running in the background for a period of time and then be closed automatically; returning to the game editing scene before it is automatically closed does not count as entering the game editing scene from outside the game editing scene.
When entering the game editing scene from outside the game editing scene in the game program, the information of the first scene component combination can be determined according to the first prompt information, and entering the game editing scene is equivalent to meeting the trigger condition, and the second scene component control corresponding to the first scene component combination can be provided subsequently, so that a user can quickly edit and generate the first scene component combination in the game editing scene.
In one embodiment, in the case of providing the second scene component control corresponding to the first scene component combination, the game scene component generating method may further include the following steps:
the second scene component control is maintained unchanged during the current game editing scene run.
The current running period of the game editing scene refers to the process from entering the game editing scene from outside it until the game editing scene is exited or closed; during this process, the second scene component control may be kept unchanged. In other words, when the game editing scene is entered from outside for the first time, the information of the first scene component combination is determined according to the first prompt information and the corresponding second scene component control is generated and provided; as long as the game editing scene is not closed, the second scene component control may continue to be provided, that is, the generation process of steps S210 and S220 need not be performed again.
In one embodiment, each time the game editing scene is entered from outside, randomly generated first prompt information can be obtained and the information of a first scene component combination determined from it, so that a corresponding second scene component control is generated for the user. Here, each entry into the game editing scene from outside means entering the game editing scene after it was previously closed. That is, each time the game editing scene is re-entered, a random first scene component combination can be provided, increasing the diversity of game editing on the basis of the scene components already existing in the game program.
In one embodiment, the information of the first scene component combination may be determined according to the first hint information in response to an operation of opening a scene component control interface in the game editing scene. The scene component control interface is an interface for accommodating the first scene component control and the second scene component control, and can be a sub-interface in a game editing scene. The operation of opening the scene component control interface may be an operation of sliding from an edge of the game editing scene to pull out the scene component control interface, or an operation of clicking a control of opening the scene component control interface, or the like. The operation of opening the scene component control interface in the game editing scene is equivalent to meeting the triggering condition, and the information of the first scene component combination is determined according to the first prompt information. For example, after entering a game editing scene, when the scene component control interface is opened for the first time, randomly generated first prompt information can be obtained or the first prompt information can be determined in other modes, the information of the first scene component combination is determined according to the first prompt information, and then a second scene component control is generated, so that the second scene component control can be included in the opened scene component control interface for a user to use.
In one embodiment, after entering the game editing scene, the information of the first scene component combination may be determined according to the first prompt information in response to the game editing scene being in an idle state. The game editing scene is in an idle state, which means that the calculated data amount in the game editing scene is lower than a certain threshold value, or the interactive data amount between the terminal equipment and the server is lower than a certain threshold value, under such a condition, the generation process is started, the information of the first scene component combination is determined according to the first prompt information, the influence of the generation process on other editing operations of a user can be avoided, and the running fluency of the editing function is ensured.
The information of the first scene component combination includes, but is not limited to, one or more of the following: the details of the first scene component combination, such as name, identification (ID or key), preview, generation or editing time, resource occupation value (parameter for quantifying resources such as storage, calculation power and the like occupied by scene components, game scenes and the like); information of first target scene components constituting the first scene component combination, such as component types of the first target scene components (i.e., what type of component the first target scene component is), morphology information (meaning attribute information affecting morphology appearance, such as size, color, texture, etc.), relative pose (i.e., pose relative to other first target scene components).
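A plausible, purely illustrative shape for this information is shown below; the key names are assumptions rather than a schema defined by the disclosure.

```python
# Illustrative structure of the information of a first scene component combination.
first_combination_info = {
    "name": "AI chair 1",
    "id": "combo_0001",                        # identification (ID or key)
    "preview": "preview/combo_0001.png",
    "created_at": "2023-12-21T10:00:00",
    "resource_cost": 37,                       # resource occupation value
    "components": [                            # first target scene components
        {"type": "cylinder", "size": (0.4, 0.4, 0.45), "color": "light yellow",
         "relative_pose": {"position": (0.0, 0.0, 0.0), "rotation": (0, 0, 0)}},
        {"type": "board", "size": (0.4, 0.05, 0.6), "color": "light yellow",
         "relative_pose": {"position": (0.0, 0.25, -0.2), "rotation": (15, 0, 0)}},
    ],
}
```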
In an embodiment, the determining the information of the first scene component combination according to the first prompt information may include the following steps:
the first prompt information is sent to a server, and the server processes the first prompt information by utilizing a pre-trained component generation model to obtain information of a first scene component combination;
and receiving information of the first scene component combination returned by the server.
The component generation model may be any type of machine learning model, such as a large language model (Large Language Model, LLM), among others. The server can analyze the first prompt information through the component generation model, output the information of the first scene component combination and return the information to the terminal equipment.
In one embodiment, the component generating model may be deployed on the terminal side, and the terminal device may locally run the component generating model to process the first prompt information to obtain information of the first scene component combination.
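A minimal sketch of the terminal-to-server exchange is given below, assuming an HTTP/JSON interface; the URL, payload format, and response shape are assumptions, since the disclosure does not specify the transport.

```python
import json
import urllib.request

def request_combination_info(first_prompt: str,
                             url: str = "https://game-server.example/generate") -> dict:
    """Send the first prompt information to the server; the server processes it with the
    component generation model (e.g., an LLM) and returns the combination information."""
    payload = json.dumps({"prompt": first_prompt}).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())   # information of the first scene component combination
```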
With continued reference to FIG. 2, in step S220, in the case of displaying the game editing scene, a plurality of first scene component controls are provided, and a second scene component control corresponding to the first scene component combination is provided; the first scene component control is configured to generate a corresponding scene component in the game editing scene in response to and according to the first trigger operation.
Some first scene component controls are shown in FIG. 3. When the user performs the first trigger operation on a first scene component control, a corresponding scene component is generated in the game editing scene; this scene component may be a scene component that already exists in the game program. In addition, a second scene component control corresponding to the first scene component combination can be generated and provided for the user to edit and use.
In one embodiment, the first scene component control is divided into different component types, such as may be the component types shown in FIG. 3. The providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination may include the following steps:
providing a first type selection control corresponding to each component type, and providing a second type selection control;
responding to a first selection operation of the first type selection control, determining a target first type selection control according to the first selection operation, and displaying a first scene component control under a component type corresponding to the target first type selection control;
in response to a second selection operation of the second type of selection control, a second scene component control corresponding to the first scene component combination is displayed.
The first type selection control is used to select a component type of the first scene component controls. As shown in FIG. 3, the first scene component controls are provided in the form of a list and are arranged under different component types; the buttons for the component types "structure", "article", "environment", "mechanism", "living being", "combination", "furniture", "interior decoration", "lamp" are first type selection controls. The user's operation of clicking to select a first type selection control is the first selection operation, and the selected first type selection control is the target first type selection control. For example, when the user selects the "furniture" first type selection control through the first selection operation, the first scene component controls under the component type furniture are displayed.
The second scene component controls are treated as a component type of their own, and the corresponding selection control is the second type selection control. As shown in FIG. 5, the "AI" (artificial intelligence, indicating that the first scene component combination is a scene component generated by artificial intelligence technology) button is the second type selection control, and the user's operation of clicking it is the second selection operation. For example, when the user selects the second type selection control through the second selection operation, the second scene component controls corresponding to the first scene component combinations, i.e., the "AI chair 1" control, the "AI chair 2" control, etc. in FIG. 5, are displayed. In this way, the first scene component controls and the second scene component controls can be displayed separately, making it easy for the user to distinguish and use them.
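The split between the two kinds of selection controls can be sketched as follows; the control labels are taken from the figures above, while the function itself is an illustrative assumption.

```python
FIRST_TYPE_CONTROLS = {                       # component type -> first scene component controls
    "furniture": ["log tea table", "round short table", "wooden table", "European table"],
    "lamp": ["floor lamp", "wall lamp"],
}
SECOND_TYPE_CONTROLS = ["AI chair 1", "AI chair 2"]   # controls shown under the "AI" tab

def controls_for_selection(selected_type: str):
    if selected_type == "AI":                 # second selection operation
        return SECOND_TYPE_CONTROLS
    return FIRST_TYPE_CONTROLS[selected_type] # first selection operation -> target first type control

print(controls_for_selection("furniture"))
print(controls_for_selection("AI"))
```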
In an embodiment, the determining the information of the first scene component combination according to the first prompt information may include the following steps:
determining M first component generation schemes according to the first prompt information; each first component generation scheme includes information of a corresponding first scene component combination; the different first component generation schemes are used for generating different first scene component combinations; m is a positive integer not less than 2.
Correspondingly, the providing the second scene component control corresponding to the first scene component combination may include the following steps:
m second scene component controls corresponding to the M first scene component combinations are provided.
The first component generation scheme is a scheme for generating a first scene component combination, and the second component generation scheme below is a scheme for generating a second scene component combination. When the server (or the terminal device) processes the first prompt information using the component generation model, M output results can be obtained, each output result being one first component generation scheme. For example, if the first prompt information is "a sun chair, with a backrest, foldable", M first component generation schemes can be determined from it, corresponding to M different first scene component combinations, i.e., M different sun chairs. Each first component generation scheme includes the information of the corresponding first scene component combination. The value of M may be determined empirically or according to the specific situation; for example, M may be set to 5 in consideration of user demand, the computing power of the terminal device and the server, and so on. Given M first component generation schemes, M second scene component controls may be generated and provided, each corresponding to the first scene component combination in one first component generation scheme. The user may subsequently generate different first scene component combinations using different second scene component controls. This provides the user with multiple alternatives, making it easier to meet the user's needs.
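Conceptually, providing M controls from M schemes amounts to something like the following sketch; `generate_schemes` stands in for the component generation model and is an assumed callable, not an API of the disclosure.

```python
from typing import Callable, Dict, List

def build_second_controls(first_prompt: str,
                          generate_schemes: Callable[[str, int], List[Dict]],
                          m: int = 5) -> List[Dict]:
    """Obtain M first component generation schemes for one prompt and create one
    second scene component control per scheme."""
    schemes = generate_schemes(first_prompt, m)            # M output results of the model
    return [{"label": f"AI chair {i + 1}", "combination_info": scheme}
            for i, scheme in enumerate(schemes)]
```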
With continued reference to FIG. 2, in step S230, in response to a second trigger operation for the second scene component control, a first scene component combination is generated in the game editing scene from the information of the first scene component combination.
Wherein the second triggering operation is an operation of generating the first scene component combination using the second scene component control, such as an operation of dragging the second scene component control into the game editing scene. When the user performs a second triggering operation on the second scene component control, the first scene component combination can be triggered to be generated in the game editing scene according to the information of the first scene component combination.
In one embodiment, the information of the first scene component combination includes information of a first target scene component, the first target scene component being a scene component for composing the first scene component combination. The generating the first scene component combination in the game editing scene according to the information of the first scene component combination may include the following steps:
generating a first target scene component in the game editing scene according to the information of the first target scene component to form a first scene component combination.
Wherein generating the first target scene components may include loading the program objects of the first target scene components (e.g., the game resources, parameters, code, and the like of the first target scene components), rendering the first target scene components, and so on. After all the first target scene components are generated, they are arranged and combined according to their positional relationship, and the generated first target scene components are taken as a whole to form the first scene component combination.
In one embodiment, the generating the first target scene component in the game editing scene according to the information of the first target scene component to form the first scene component combination may include the following steps:
determining a generation order for the different first target scene components, wherein the generation order of at least some of the first target scene components is different from that of the other first target scene components;
based on the information of the first target scene components, generating the first target scene components in the game editing scene according to the generation sequence of the first target scene components so as to form a first scene component combination.
The generation orders of all the first target scene components are not identical; that is, the first target scene components follow a certain generation order and are not all generated at the same time. The generation order may be determined randomly for the different first target scene components according to the processing capability of the game program. For example, when the game program generates the first target scene components, F generation threads can be run in parallel, so that F first target scene components can be generated at the same time; the first target scene components can therefore be divided into groups of F, with the components in each group assigned the same generation order, or divided into F groups, with the components within each group assigned generation orders in the sequence 1, 2, 3, …. For another example, the generation order may be determined according to the relative poses of the different first target scene components. Specifically, one first target scene component serves as the pose reference, for example with both its position coordinates and its angle coordinates being 0, and its generation order is set to 1 (i.e., it is generated first); the generation orders of the other first target scene components are then determined in sequence as 2, 3, 4, … along a certain direction (e.g., from left to right, from top to bottom), and the closer a first target scene component is to the reference component with generation order 1, the earlier its generation order can be.
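The pose-based ordering described above can be sketched as follows; the coordinate convention, tie-breaking rule, and data layout are assumptions chosen for illustration only.

```python
# Minimal sketch (illustrative assumption): assigning a generation order to
# target scene components by relative pose. The component at pose (0, 0) is the
# reference and gets order 1; the others are ordered by distance to the
# reference, with a left-to-right / top-to-bottom tiebreak (y assumed upward).
import math

def assign_generation_order(components):
    """components: list of dicts with a 'relative_pose' = (x, y, angle) entry.
    Returns the same dicts sorted, with a 1-based 'generation_order' added."""
    def sort_key(comp):
        x, y, _angle = comp["relative_pose"]
        distance = math.hypot(x, y)          # closer to the reference -> earlier
        return (distance, x, -y)             # tiebreak: left-to-right, top-to-bottom
    ordered = sorted(components, key=sort_key)
    for order, comp in enumerate(ordered, start=1):
        comp["generation_order"] = order
    return ordered

parts = [
    {"name": "seat",  "relative_pose": (0.0, 0.0, 0.0)},   # pose reference
    {"name": "back",  "relative_pose": (0.0, 0.5, 0.0)},
    {"name": "leg_l", "relative_pose": (-0.3, -0.4, 0.0)},
    {"name": "leg_r", "relative_pose": (0.3, -0.4, 0.0)},
]
for comp in assign_generation_order(parts):
    print(comp["generation_order"], comp["name"])
```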
Under the condition that the generation sequence of the first target scene components is determined, each first target scene component can be gradually generated according to the generation sequence, so that the generation process is more orderly, and the game program is prevented from loading excessive data at one time.
In one embodiment, the generating the first target scene components in the game editing scene based on the information of the first target scene components and according to the generation order of the first target scene components to form the first scene component combination may include the following steps: sequentially loading the information of each first target scene component according to the generation order of the first target scene components, where the information may include the component type, form information, and relative pose; after the information of a first target scene component is successfully loaded, generating a scene component of the corresponding component type according to the component type, setting the form of the scene component according to the form information, and then placing the scene component at the corresponding position and adjusting its angle according to the relative pose; and after all the first target scene components are generated, forming the first scene component combination.
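A minimal sketch of this step-by-step loop is given below, assuming hypothetical stand-ins for the engine-side calls (creating a component by type, applying its form, placing it by relative pose).

```python
# Minimal sketch (illustrative only) of the sequential generation loop described
# above: load each component's info in generation order, create a scene
# component of the given type, apply its form information, then place and
# rotate it by its relative pose. SceneComponent is a stand-in, not an engine API.
from dataclasses import dataclass, field

@dataclass
class SceneComponent:
    component_type: str
    form: dict = field(default_factory=dict)
    position: tuple = (0.0, 0.0, 0.0)
    angle: float = 0.0

def generate_combination(component_infos):
    """component_infos: list of dicts already sorted by generation order, each
    holding 'component_type', 'form', and 'relative_pose' = (x, y, z, angle)."""
    combination = []
    for info in component_infos:                       # sequential, not all at once
        comp = SceneComponent(info["component_type"])  # create by component type
        comp.form = dict(info["form"])                 # set its form
        x, y, z, angle = info["relative_pose"]
        comp.position, comp.angle = (x, y, z), angle   # place and rotate
        combination.append(comp)
    return combination                                 # the scene component combination
```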
In one embodiment, the generating the first target scene components in the game editing scene based on the information of the first target scene components and according to the generation order of the first target scene components may include the following step: generating the first target scene components in the game editing scene in the generation order of the first target scene components, and dynamically displaying the generation process within the screen of the game editing scene. That is, the generation of the first target scene components may be dynamically displayed within the screen of the game editing scene. For example, a screen in which no first target scene component has yet been generated is displayed initially; the generated first target scene components are then displayed gradually; finally, when all the first target scene components have been generated, the complete first scene component combination is displayed. In this way, the user can watch the entire generation process rather than waiting idly, has a stronger perception of the generation process, and enjoys a better user experience.
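One simple way to obtain such a progressive display is to hand control back to the UI loop after each component is generated, as in the following sketch; the callback names are hypothetical and a real editor would integrate this with its own update loop.

```python
# Minimal sketch (illustrative assumption): generating components one per
# "frame" so the editing screen shows the combination building up instead of
# appearing all at once. `yield` hands control back to the caller after each
# component, which is one easy way to get the progressive display described above.
def generate_progressively(component_infos, spawn, redraw):
    """spawn(info) creates one component in the scene; redraw() refreshes the screen."""
    generated = []
    redraw()                          # initial frame: nothing generated yet
    for info in component_infos:      # follow the generation order
        generated.append(spawn(info))
        redraw()                      # show the partially built combination
        yield generated               # give the UI loop a chance to run
    redraw()                          # final frame: the complete combination

# Usage sketch with trivial stand-ins:
frames = generate_progressively(
    [{"name": "seat"}, {"name": "back"}],
    spawn=lambda info: info["name"],
    redraw=lambda: None,
)
for partial in frames:
    print("generated so far:", partial)
```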
In one embodiment, referring to fig. 6, the game scene component generating method may further include the following steps S610 to S640:
step S610, scene component control search information for searching for an existing scene component control is acquired.
The existing scene component controls include first scene component controls and/or second scene component controls, where the second scene component controls may be second scene component controls that have been generated in the game editing scene, such as second scene component controls generated according to the first prompt information. Because the existing scene component controls may be numerous and span many component types, a search function can be provided to the user, and the search information input by the user is the scene component control search information. In one embodiment, a scene component control search control may be provided while the game editing scene is displayed, and the scene component control search information input through the scene component control search control may be obtained. The scene component control search control is used for searching the existing scene component controls by inputting keywords, and may be a text input control, a voice input control, or the like. The user can easily input the scene component control search information using the scene component control search control.
Step S620, based on the scene component control search information, searching for existing scene component controls matching the scene component control search information, and displaying them in the search results corresponding to the scene component control search information.
The search may be performed among the existing scene component controls in the editor, such as the scene component controls built into the editor and the scene component controls generated by the player through editing (e.g., scene component combination controls). A fuzzy search may be adopted, in which a result reaching a certain degree of relevance to the scene component control search information is regarded as a match.
For example, referring to FIG. 7, a user enters "football" in a scene component control search control 701 to search information for scene component controls. All existing scene component controls can be searched for scene component controls that match "football", e.g., three first scene component controls 702 of "football", "basketball", "balloon" are found and displayed in the search results.
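A possible shape of this fuzzy matching step is sketched below, using difflib similarity scores purely as an illustration; the actual relevance measure and threshold are implementation choices not specified here.

```python
# Minimal sketch (illustrative only) of the fuzzy matching step: an existing
# scene component control is treated as a match when its name is sufficiently
# similar to the scene component control search information. difflib is just
# one easy way to score relevance.
from difflib import SequenceMatcher

def fuzzy_search(query, control_names, threshold=0.5):
    """Return the control names whose similarity to `query` reaches `threshold`."""
    def score(name):
        return SequenceMatcher(None, query.lower(), name.lower()).ratio()
    matches = [(name, score(name)) for name in control_names]
    return [name for name, s in sorted(matches, key=lambda m: -m[1]) if s >= threshold]

existing_controls = ["football", "basketball", "balloon", "goal post", "bench"]
print(fuzzy_search("football", existing_controls))
```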
Step S630, determining second prompt information based on the scene component control search information, determining information of a second scene component combination according to the second prompt information, generating a third scene component control corresponding to the second scene component combination, and displaying the third scene component control in the search results corresponding to the scene component control search information; the second scene component combination is a scene component combination to be generated corresponding to the second prompt information; the second scene component combination is made up of one or more scene components.
The scene component control search information may be used directly as the second prompt information, or it may be supplemented or modified, for example by adding feature prompt words, color prompt words, and the like to the scene component control search information, so as to obtain the second prompt information.
When the second prompt information is determined, the generation process may be performed using the generator. The terminal device may send the second prompt information to the server; the server processes the second prompt information using the component generation model to obtain the information of the second scene component combination and returns it to the terminal device. The terminal device then generates the third scene component control corresponding to the second scene component combination and displays it in the search results.
For example, referring to fig. 7, the user may input the scene component control search information as "football", and may determine the information of the second scene component combination by using the information of "football" as the second prompt information, and generate the corresponding third scene component control 703, that is, generate a new football scene component control, and display the new football scene component control in the search result.
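The terminal/server exchange described in the preceding paragraphs might look roughly like the following sketch; the payload format and function boundaries are assumptions, and the network call is replaced by a direct function call for brevity.

```python
# Minimal sketch (illustrative assumption) of the terminal/server flow: the
# terminal sends the second prompt information to the server, the server runs
# its component generation model, and the terminal turns the returned
# combination information into a third scene component control. The JSON
# payload shape is made up, not a real protocol.
import json

def server_generate(second_prompt: str) -> str:
    """Server side: pretend to run the component generation model and return
    the second scene component combination information as JSON."""
    combination_info = {
        "label": f"AI {second_prompt} 1",
        "components": [{"component_type": second_prompt, "relative_pose": [0, 0, 0, 0]}],
    }
    return json.dumps(combination_info)

def terminal_request(second_prompt: str) -> dict:
    """Terminal side: send the prompt, receive the info, build the control."""
    payload = server_generate(second_prompt)          # stands in for a network call
    combination_info = json.loads(payload)
    return {
        "control_label": combination_info["label"],   # shown in the search results
        "combination_info": combination_info,         # used on the third trigger operation
    }

print(terminal_request("football"))
```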
Step S640, responding to a third triggering operation of a third scene component control, and generating a second scene component combination in the game editing scene according to the information of the second scene component combination.
Wherein the third triggering operation is an operation of generating a second scene component combination using the third scene component control, such as an operation of dragging the third scene component control into a game editing scene. When the user performs a third triggering operation on the third scene component control, the second scene component combination can be triggered to be generated in the game editing scene according to the information of the second scene component combination.
Based on the method of FIG. 6, when a user searches for scene component controls, search results beyond the existing scene component controls can be provided, so that the user can edit and generate more diverse scene components in the game editing scene.
In one embodiment, the information of the second scene component combination includes information of a second target scene component, the second target scene component being a scene component for composing the second scene component combination. The generating the second scene component combination in the game editing scene according to the information of the second scene component combination may include the steps of:
and generating a second target scene component in the game editing scene according to the information of the second target scene component so as to form a second scene component combination.
Wherein generating the second target scene components may include loading the program objects of the second target scene components, rendering the second target scene components, and so on. After all the second target scene components are generated, they are arranged and combined according to their positional relationship, and the generated second target scene components are taken as a whole to form the second scene component combination.
In one embodiment, the generating the second target scene component in the game editing scene according to the information of the second target scene component to form the second scene component combination may include the following steps:
determining a generation order for the different second target scene components, wherein the generation order of at least some of the second target scene components is different from that of the other second target scene components;
and generating second target scene components in the game editing scene according to the generation sequence of the second target scene components based on the information of the second target scene components so as to form a second scene component combination.
The generation orders of all the second target scene components are not identical; that is, the second target scene components follow a certain generation order and are not all generated at the same time. The generation order may be determined randomly for the different second target scene components according to the processing capability of the game program. For example, when the game program generates the second target scene components, F generation threads can be run in parallel, so that F second target scene components can be generated at the same time; the second target scene components can therefore be divided into groups of F, with the components in each group assigned the same generation order, or divided into F groups, with the components within each group assigned generation orders in the sequence 1, 2, 3, …. For another example, the generation order may be determined according to the relative poses of the different second target scene components. Specifically, one second target scene component serves as the pose reference, for example with both its position coordinates and its angle coordinates being 0, and its generation order is set to 1 (i.e., it is generated first); the generation orders of the other second target scene components are then determined in sequence as 2, 3, 4, … along a certain direction (e.g., from left to right, from top to bottom), and the closer a second target scene component is to the reference component with generation order 1, the earlier its generation order can be.
Under the condition that the generation sequence of the second target scene components is determined, each second target scene component can be gradually generated according to the generation sequence, so that the generation process is more orderly, and the game program is prevented from loading excessive data at one time.
In an embodiment, the generating the second target scene components in the game editing scene based on the information of the second target scene components and according to the generation order of the second target scene components to form the second scene component combination may include the following steps: sequentially loading the information of each second target scene component according to the generation order of the second target scene components, where the information may include the component type, form information, and relative pose; after the information of a second target scene component is successfully loaded, generating a scene component of the corresponding component type according to the component type, setting the form of the scene component according to the form information, and then placing the scene component at the corresponding position and adjusting its angle according to the relative pose; and after all the second target scene components are generated, forming the second scene component combination.
In one embodiment, the generating the second target scene components in the game editing scene based on the information of the second target scene components and according to the generation order of the second target scene components may include the following step: generating the second target scene components in the game editing scene in the generation order of the second target scene components, and dynamically displaying the generation process within the screen of the game editing scene. That is, the generation of the second target scene components may be dynamically displayed within the screen of the game editing scene. For example, a screen in which no second target scene component has yet been generated is displayed initially; the generated second target scene components are then displayed gradually; finally, when all the second target scene components have been generated, the complete second scene component combination is displayed. In this way, the user can watch the entire generation process rather than waiting idly, has a stronger perception of the generation process, and enjoys a better user experience.
In one embodiment, in the case of generating the third scene component control corresponding to the second scene component combination, the game scene component generating method may further include the steps of:
during the running of the current game editing scene, if new scene component control searching information is acquired, searching for an existing scene component control matched with the new scene component control searching information based on the new scene component control searching information, and displaying the existing scene component control in a searching result corresponding to the new scene component control searching information; determining new second prompt information based on the new scene component control search information, determining new second scene component combination information according to the new second prompt information, generating a third scene component control corresponding to the new second scene component combination, and displaying the third scene component control in a search result corresponding to the new scene component control search information; the third scene component control corresponding to the new second scene component combination is configured to generate the new second scene component combination in the game editing scene in response to and according to the third trigger operation.
If the user has already searched for scene component controls and then inputs new scene component control search information, steps S610 to S640 may be executed again based on the new scene component control search information. In particular, by executing step S630 again, a new third scene component control corresponding to a second scene component combination is generated, which may replace the original third scene component control. Alternatively, the original search results may be replaced by the search results corresponding to the new scene component control search information. In other words, one generation process can be performed with the generator each time scene component control search information is input, so that the user can use the generator to generate a desired scene component combination at any time while editing in the game editing scene.
In an embodiment, the determining the information of the second scene component combination according to the second prompt information may include the following steps:
determining N second component generation schemes according to the second prompt information; each second component generation scheme includes information of a corresponding second scene component combination; the different second component generation schemes are used to generate different second scene component combinations; n is a positive integer not less than 2.
Correspondingly, the generating the third scene component control corresponding to the second scene component combination and displaying it in the search results corresponding to the scene component control search information may include the following step:
and generating N third scene component controls corresponding to the N second scene component combinations and displaying the N third scene component controls in the search results corresponding to the scene component control search information.
Wherein the second component generation scheme is a scheme for generating a second scene component combination. When the server (or the terminal device) processes the second prompt information by using the component generation model, N output results can be obtained, each output result being one second component generation scheme. For example, if the second prompt information is "football", N second component generation schemes can be determined according to the second prompt information, corresponding to N different second scene component combinations, that is, N different footballs can be generated. Each second component generation scheme includes information of a corresponding second scene component combination. The value of N may be determined empirically or according to the specific situation; for example, the value of N may be set to 3 in consideration of user demand, the computing power of the terminal device and the server, and the like. In the case of N second component generation schemes, N third scene component controls may be generated and provided, each corresponding to the second scene component combination in one second component generation scheme. The user may subsequently generate different second scene component combinations by using different third scene component controls. This provides the user with multiple alternatives, making it easier to meet the user's needs.
In one embodiment, search results corresponding to the scene component control search information are presented through a search interface, and an existing scene component control and a third scene component control matched with the scene component control search information are arranged in the search interface. As shown with reference to fig. 7, the searched first scene component control 702 and third scene component control 703 are arranged in the search interface 704 in order, the first scene component control 702 is arranged in an upper position, and the third scene component control 703 is arranged in a lower position. Accordingly, the game scene component generating method may further include the steps of:
if the search interface does not display all the existing scene component controls and the third scene component controls matched with the scene component control search information, providing an interface navigation control;
and responding to the fourth triggering operation of the interface navigation control, and controlling the search interface to scroll to the position where the third scene component control is displayed.
Referring to FIG. 7, because the area currently displayed in the search interface 704 is insufficient to accommodate the entire search results, part of the search results cannot be viewed by the user; and because the third scene component controls 703 are typically arranged toward the bottom, the user is likely not to see the third scene component controls 703 and may even be unaware that the generator has produced any results. Therefore, an interface navigation control may be provided, such as the "click view AI generation results" control shown in FIG. 7.
The user's operation on the interface navigation control is the fourth triggering operation, such as clicking or long-pressing the control. In response to the fourth triggering operation, the search interface can be controlled to scroll so that the third scene component controls are displayed, for example by scrolling to the bottom of the whole interface or to the position of the first third scene component control. In this way, the user can quickly find and use a third scene component control to generate a second scene component combination in the game editing scene, which is very convenient.
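As a rough illustration of the navigation behavior, the sketch below scrolls a flat list of search results so that the first third scene component control becomes visible; the row-based scrolling model is an assumption standing in for whatever the real UI framework provides.

```python
# Minimal sketch (illustrative only): when the interface navigation control is
# triggered, scroll the search interface so the first third scene component
# control becomes visible. The "interface" here is just a flat list of result
# rows; a real UI framework would expose its own scroll API.
def on_navigation_control_triggered(results, viewport_rows, set_scroll_offset):
    """results: list of dicts with a 'kind' of 'existing' or 'third';
    viewport_rows: how many rows fit on screen;
    set_scroll_offset(row): scrolls the interface so `row` is the top row."""
    first_third = next(
        (row for row, item in enumerate(results) if item["kind"] == "third"), None
    )
    if first_third is None:
        return                                  # nothing generated, nothing to scroll to
    # Clamp so we do not scroll past the end of the list.
    target = min(first_third, max(0, len(results) - viewport_rows))
    set_scroll_offset(target)

results = [{"kind": "existing"}] * 6 + [{"kind": "third"}] * 3
on_navigation_control_triggered(results, viewport_rows=4,
                                set_scroll_offset=lambda row: print("scroll to row", row))
```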
In one embodiment, where a first scene component combination is generated in the game editing scene by a second scene component control or a second scene component combination is generated in the game editing scene by a third scene component control, the generated first scene component combination or second scene component combination may be added to the generation list.
Wherein the generation list is a list of the scene components generated by the generator. For example, FIG. 8 shows a schematic diagram of a generation list; the user can see information about the various scene components generated by the generator and can also operate on them. Adding the first scene component combination or the second scene component combination to the generation list is equivalent to saving it, and the saved information may include: details such as the name, ID, scene component prompt information, preview, time of generation or editing, and resource occupancy value (e.g., values such as 650 and 320 in the generation list shown in fig. 8); and the information of the first scene component combination or the second scene component combination, such as the information of the first target scene components or the second target scene components. Adding the first scene component combination or the second scene component combination to the generation list indicates that the user confirms acceptance of the generated combination. In the generation list shown in fig. 8, the first item is the first scene component combination or second scene component combination generated this time. The scene components in the list may be arranged by the time they were added to the generation list or by the time they were generated, with entries nearer the top having been added or generated more recently.
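The fields mentioned above suggest a generation-list entry roughly like the following; all field names are hypothetical and only echo the details listed in this paragraph.

```python
# Minimal sketch (illustrative assumption) of one generation-list entry and of
# adding a newly generated combination to the list, newest first.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class GenerationListEntry:
    entry_id: str
    name: str
    prompt_info: str                      # the scene component prompt information
    preview_path: str                     # path to a preview image
    created_at: datetime
    resource_occupancy: int               # e.g. 650, 320 as in the figure
    combination_info: List[dict] = field(default_factory=list)  # target scene components

generation_list: List[GenerationListEntry] = []

def add_to_generation_list(entry: GenerationListEntry) -> None:
    # Newest entries first, matching a list ordered by time of addition.
    generation_list.insert(0, entry)

add_to_generation_list(GenerationListEntry(
    entry_id="ai-chair-1", name="AI chair 1",
    prompt_info="a sun lounger, with a backrest, foldable",
    preview_path="previews/ai_chair_1.png",
    created_at=datetime.now(), resource_occupancy=650,
))
```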
Referring to fig. 8, the user may operate the right-most "…" control to trigger the display of detail information. For example, a detail interface may be displayed as a pop-up or overlay, showing the detail information of the selected scene component (e.g., the first scene component combination or the second scene component combination), such as its name, ID, scene component prompt information, preview, generation or editing time, resource occupancy value, and the like.
In one embodiment, a first scene component combination or a second scene component combination is generated in a game editing scene in response to a generation trigger operation for the first scene component combination or the second scene component combination in the generation list.
Wherein the generation trigger operation is an operation of generating the first scene component combination or the second scene component combination from the generation list into the game editing scene. Referring to fig. 8, a "generate to scene" control is provided in the generation list, and the user's trigger operation on this control, i.e., the generation trigger operation, may trigger the generation of the first scene component combination or the second scene component combination in the game editing scene. It should be appreciated that if a first scene component combination or second scene component combination has already been generated in the game editing scene, a new first scene component combination or second scene component combination may be generated again, so that multiple first scene component combinations or second scene component combinations may exist in the game editing scene.
The exemplary embodiment of the disclosure also provides a game scene component generating device. Referring to fig. 9, the game scene component generating apparatus 900 may include the following program modules:
a scene component combination information determining module 910 configured to determine information of a first scene component combination according to the first hint information in response to a trigger condition being satisfied in the game program; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components;
the scene component control providing module 920 is configured to provide a plurality of first scene component controls and provide second scene component controls corresponding to the first scene component combination in the case of displaying the game editing scene; the first scene component control is configured to respond to and generate a corresponding scene component in the game editing scene according to the first trigger operation;
the scene component combination generation processing module 930 is configured to generate a first scene component combination in the game editing scene according to the information of the first scene component combination in response to the second trigger operation on the second scene component control.
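Purely as an illustration of how these three modules could cooperate, the sketch below wires hypothetical stand-in classes together; the class and method names are not part of the disclosed apparatus.

```python
# Minimal sketch (illustrative only) of the three program modules listed above;
# the stand-in classes only mirror the module responsibilities, not a real API.
class SceneComponentCombinationInfoModule:            # cf. module 910
    def determine(self, first_prompt_info):
        # Would normally query the component generation model (locally or via a server).
        return {"prompt": first_prompt_info, "components": []}

class SceneComponentControlProvidingModule:           # cf. module 920
    def provide(self, combination_info):
        # Returns a second scene component control bound to the combination info.
        return {"control_label": "AI " + combination_info["prompt"],
                "combination_info": combination_info}

class SceneComponentCombinationGenerationModule:      # cf. module 930
    def generate(self, combination_info, editing_scene):
        # Creates the combination inside the (stand-in) editing scene.
        editing_scene.append(combination_info)
        return editing_scene

class GameSceneComponentGeneratingApparatus:          # cf. apparatus 900
    def __init__(self):
        self.info_module = SceneComponentCombinationInfoModule()
        self.control_module = SceneComponentControlProvidingModule()
        self.generation_module = SceneComponentCombinationGenerationModule()

    def on_trigger_condition(self, first_prompt_info):
        info = self.info_module.determine(first_prompt_info)
        return self.control_module.provide(info)       # the second scene component control

    def on_second_trigger_operation(self, control, editing_scene):
        return self.generation_module.generate(control["combination_info"], editing_scene)

apparatus = GameSceneComponentGeneratingApparatus()
control = apparatus.on_trigger_condition("a sun lounger, with a backrest, foldable")
scene = apparatus.on_second_trigger_operation(control, editing_scene=[])
print(control["control_label"], len(scene))
```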
In one embodiment, the scene component combination information determination module 910 is further configured to:
And before determining the information of the first scene component combination according to the first prompt information, randomly generating the first prompt information according to a preset rule.
In one embodiment, the generating the first prompt message according to the preset rule includes: randomly determining a target object type prompt word from a plurality of object type prompt words; determining a target feature dimension from feature dimensions of a target object type corresponding to the target object type prompt word, and randomly determining a target feature prompt word from the feature prompt words of the target feature dimension; and generating first prompt information according to the target object type prompt words and the target feature prompt words.
In one embodiment, randomly determining the target object type hint word from a plurality of object type hint words includes: randomly determining target primary object type prompt words from a plurality of primary object type prompt words; if the target primary object type corresponding to the target primary object type prompt word comprises a plurality of secondary object types, randomly determining the target secondary object type prompt word from the secondary object type prompt words corresponding to the plurality of secondary object types.
In one embodiment, the first prompt message is randomly generated according to a preset rule, and the method further includes: and randomly determining target color prompt words from the plurality of color prompt words, and adding the target color prompt words into the first prompt information.
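Taken together, the random prompt generation described in the preceding embodiments could be sketched as follows; the prompt-word tables are made-up placeholders and the composition format is only an assumption.

```python
# Minimal sketch (illustrative assumption) of randomly generating the first
# prompt information from preset prompt-word tables: pick a target object type
# prompt word, pick a feature dimension of that object type and one of its
# feature prompt words, optionally add a color prompt word, then join them.
import random

OBJECT_TYPE_PROMPTS = {
    "chair": {"backrest": ["with a backrest", "without a backrest"],
              "folding": ["foldable", "fixed frame"]},
    "ball":  {"size": ["small", "large"],
              "surface": ["smooth", "patterned"]},
}
COLOR_PROMPTS = ["red", "blue", "green", "white"]

def generate_first_prompt(rng=random):
    object_type = rng.choice(list(OBJECT_TYPE_PROMPTS))   # target object type prompt word
    dimensions = OBJECT_TYPE_PROMPTS[object_type]
    dimension = rng.choice(list(dimensions))              # target feature dimension
    feature = rng.choice(dimensions[dimension])           # target feature prompt word
    color = rng.choice(COLOR_PROMPTS)                     # target color prompt word
    return f"a {color} {object_type}, {feature}"

print(generate_first_prompt())
```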
In one embodiment, responsive to a trigger condition being met in a game program, determining information for a first scene component combination from first hint information includes: in response to entering the game editing scene from outside the game editing scene in the game program, determining information of the first scene component combination according to the first prompt information.
In one embodiment, the scene component control providing module 920 is further configured to: and under the condition that the second scene component control corresponding to the first scene component combination is provided, keeping the second scene component control unchanged during the running of the current game editing scene.
In one embodiment, the first scene component control is divided into different component types; providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination, wherein the method comprises the following steps: providing a first type selection control corresponding to the component type, and providing a second type selection control; responding to a first selection operation of the first type selection control, determining a target first type selection control according to the first selection operation, and displaying a first scene component control under a component type corresponding to the target first type selection control; in response to a second selection operation of the second type of selection control, a second scene component control corresponding to the first scene component combination is displayed.
In one embodiment, determining information of a first scene component combination according to the first prompt information includes: determining M first component generation schemes according to the first prompt information; each first component generation scheme includes information of a corresponding first scene component combination; the different first component generation schemes are used for generating different first scene component combinations; m is a positive integer not less than 2; providing a second scene component control corresponding to the first scene component combination, comprising: m second scene component controls corresponding to the M first scene component combinations are provided.
In one embodiment, determining information of a first scene component combination according to the first prompt information includes: the first prompt information is sent to a server, and the server processes the first prompt information by utilizing a pre-trained component generation model to obtain information of a first scene component combination; and receiving information of the first scene component combination returned by the server.
In one embodiment, the information of the first scene component combination includes information of a first target scene component, the first target scene component being a scene component for composing the first scene component combination; generating a first scene component combination in the game editing scene according to the information of the first scene component combination, wherein the first scene component combination comprises: generating a first target scene component in the game editing scene according to the information of the first target scene component to form a first scene component combination.
In one embodiment, the game scene component generating apparatus 900 may further comprise a search processing module configured to: acquire scene component control search information for searching for existing scene component controls; the existing scene component controls comprise first scene component controls and/or second scene component controls; based on the scene component control search information, search for existing scene component controls matching the scene component control search information, and display them in the search results corresponding to the scene component control search information; determine second prompt information based on the scene component control search information, determine information of a second scene component combination according to the second prompt information, generate a third scene component control corresponding to the second scene component combination, and display the third scene component control in the search results corresponding to the scene component control search information; the second scene component combination is a scene component combination to be generated corresponding to the second prompt information; the second scene component combination is composed of one or more scene components; and in response to a third trigger operation on the third scene component control, generate the second scene component combination in the game editing scene according to the information of the second scene component combination.
In one embodiment, the search processing module is further configured to: under the condition of generating a third scene component control corresponding to the second scene component combination, if new scene component control search information is acquired during the running of the current game editing scene, searching for an existing scene component control matched with the new scene component control search information based on the new scene component control search information, and displaying the existing scene component control in a search result corresponding to the new scene component control search information; determining new second prompt information based on the new scene component control search information, determining new second scene component combination information according to the new second prompt information, generating a third scene component control corresponding to the new second scene component combination, and displaying the third scene component control in a search result corresponding to the new scene component control search information; the third scene component control corresponding to the new second scene component combination is configured to generate the new second scene component combination in the game editing scene in response to and according to the third trigger operation.
In one embodiment, search results corresponding to the scene component control search information are displayed through a search interface, and an existing scene component control matched with the scene component control search information and a third scene component control are arranged in the search interface; the search processing module is further configured to: if the search interface does not display all the existing scene component controls and the third scene component controls matched with the scene component control search information, providing an interface navigation control; and responding to the fourth triggering operation of the interface navigation control, and controlling the search interface to scroll to the position where the third scene component control is displayed.
In one embodiment, determining information of the second scene component combination according to the second prompt information includes: determining N second component generation schemes according to the second prompt information; each second component generation scheme includes information of a corresponding second scene component combination; the different second component generation schemes are used to generate different second scene component combinations; N is a positive integer not less than 2; generating the third scene component control corresponding to the second scene component combination and displaying it in the search results corresponding to the scene component control search information includes: generating N third scene component controls corresponding to the N second scene component combinations and displaying the N third scene component controls in the search results corresponding to the scene component control search information.
In one embodiment, obtaining scene component control search information for searching for a first scene component control comprises: providing a scene component control search control under the condition of displaying the game editing scene, and acquiring the scene component control search information input based on the scene component control search control.
The specific details of each part in the above apparatus are already described in the method part embodiments, and the details not disclosed can refer to the embodiment content of the method part, so that the details are not repeated.
The exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product comprising program code; when the program product runs on an electronic device, the program code causes the electronic device (more specifically, a processor of the electronic device) to execute the steps according to the various exemplary embodiments of the present disclosure described above, for example the game scene component generating method in the present exemplary embodiment, which includes the following steps: in response to a trigger condition being met in the game program, determining information of a first scene component combination according to first prompt information; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components; providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination in the case of displaying the game editing scene; the first scene component control is configured to respond to a first trigger operation and generate a corresponding scene component in the game editing scene according to the first trigger operation; and in response to a second trigger operation on the second scene component control, generating the first scene component combination in the game editing scene according to the information of the first scene component combination.
In one embodiment, before determining the information of the first scene component combination according to the first hint information, the method further comprises: and randomly generating first prompt information according to a preset rule.
In one embodiment, the generating the first prompt message according to the preset rule includes: randomly determining a target object type prompt word from a plurality of object type prompt words; determining a target feature dimension from feature dimensions of a target object type corresponding to the target object type prompt word, and randomly determining a target feature prompt word from the feature prompt words of the target feature dimension; and generating first prompt information according to the target object type prompt words and the target feature prompt words.
In one embodiment, randomly determining the target object type hint word from a plurality of object type hint words includes: randomly determining target primary object type prompt words from a plurality of primary object type prompt words; if the target primary object type corresponding to the target primary object type prompt word comprises a plurality of secondary object types, randomly determining the target secondary object type prompt word from the secondary object type prompt words corresponding to the plurality of secondary object types.
In one embodiment, the first prompt message is randomly generated according to a preset rule, and the method further includes: and randomly determining target color prompt words from the plurality of color prompt words, and adding the target color prompt words into the first prompt information.
In one embodiment, responsive to a trigger condition being met in a game program, determining information for a first scene component combination from first hint information includes: in response to entering the game editing scene from outside the game editing scene in the game program, determining information of the first scene component combination according to the first prompt information.
In one embodiment, in a case where a second scene component control corresponding to the first scene component combination is provided, the method further includes: the second scene component control is maintained unchanged during the current game editing scene run.
In one embodiment, the first scene component control is divided into different component types; providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination, wherein the method comprises the following steps: providing a first type selection control corresponding to the component type, and providing a second type selection control; responding to a first selection operation of the first type selection control, determining a target first type selection control according to the first selection operation, and displaying a first scene component control under a component type corresponding to the target first type selection control; in response to a second selection operation of the second type of selection control, a second scene component control corresponding to the first scene component combination is displayed.
In one embodiment, determining information of a first scene component combination according to the first prompt information includes: determining M first component generation schemes according to the first prompt information; each first component generation scheme includes information of a corresponding first scene component combination; the different first component generation schemes are used for generating different first scene component combinations; m is a positive integer not less than 2; providing a second scene component control corresponding to the first scene component combination, comprising: m second scene component controls corresponding to the M first scene component combinations are provided.
In one embodiment, determining information of a first scene component combination according to the first prompt information includes: the first prompt information is sent to a server, and the server processes the first prompt information by utilizing a pre-trained component generation model to obtain information of a first scene component combination; and receiving information of the first scene component combination returned by the server.
In one embodiment, the information of the first scene component combination includes information of a first target scene component, the first target scene component being a scene component for composing the first scene component combination; generating a first scene component combination in the game editing scene according to the information of the first scene component combination, wherein the first scene component combination comprises: generating a first target scene component in the game editing scene according to the information of the first target scene component to form a first scene component combination.
In one embodiment, the method further comprises: acquiring scene component control search information for searching an existing scene component control; the existing scene component controls comprise a first scene component control and/or a second scene component control; based on the scene component control search information, searching for an existing scene component control matched with the scene component control search information, and displaying the existing scene component control in a search result corresponding to the scene component control search information; determining second prompt information based on the scene component control search information, determining information of a second scene component combination according to the second prompt information, generating a third scene component control corresponding to the second scene component combination, and displaying the third scene component control in search results corresponding to the scene component control search information; the second scene component combination is a scene component combination to be generated corresponding to the second prompt information; the second scene component combination is composed of one or more scene components; in response to a third trigger operation of the third scene component control, a second scene component combination is generated in the game editing scene according to the information of the second scene component combination.
In one embodiment, in a case of generating a third scene component control corresponding to the second scene component combination, the method further includes: during the running of the current game editing scene, if new scene component control searching information is acquired, searching for an existing scene component control matched with the new scene component control searching information based on the new scene component control searching information, and displaying the existing scene component control in a searching result corresponding to the new scene component control searching information; determining new second prompt information based on the new scene component control search information, determining new second scene component combination information according to the new second prompt information, generating a third scene component control corresponding to the new second scene component combination, and displaying the third scene component control in a search result corresponding to the new scene component control search information; the third scene component control corresponding to the new second scene component combination is configured to generate the new second scene component combination in the game editing scene in response to and according to the third trigger operation.
In one embodiment, search results corresponding to the scene component control search information are displayed through a search interface, and an existing scene component control matched with the scene component control search information and a third scene component control are arranged in the search interface; the method further comprises the steps of: if the search interface does not display all the existing scene component controls and the third scene component controls matched with the scene component control search information, providing an interface navigation control; and responding to the fourth triggering operation of the interface navigation control, and controlling the search interface to scroll to the position where the third scene component control is displayed.
In one embodiment, determining information of the second scene component combination according to the second prompt information includes: determining N second component generation schemes according to the second prompt information; each second component generation scheme includes information of a corresponding second scene component combination; the different second component generation schemes are used to generate different second scene component combinations; N is a positive integer not less than 2; generating the third scene component control corresponding to the second scene component combination and displaying it in the search results corresponding to the scene component control search information includes: generating N third scene component controls corresponding to the N second scene component combinations and displaying the N third scene component controls in the search results corresponding to the scene component control search information.
In one embodiment, obtaining scene component control search information for searching for a first scene component control comprises: providing a scene component control search control under the condition of displaying the game editing scene, and acquiring the scene component control search information input based on the scene component control search control.
According to the above method, on the one hand, a scheme for generating game scene components quickly and efficiently is provided: the information of the first scene component combination can be determined based on the first prompt information, and a corresponding second scene component control is provided, so that the user can quickly generate the first scene component combination in the game editing scene by performing the second trigger operation on the second scene component control. The user does not need to perform a large number of manual operations to edit and generate the scene components, which reduces labor and time costs and improves editing efficiency. On the other hand, new scene components are generated by combining the existing scene components in the game program, which can break through the limitations of the existing scene component library, generate a large number of diverse first scene component combinations, and enrich the game content. On yet another hand, the manner of combining scene components does not require a complete modeling process and helps control the resource size of the generated first scene component combination, thereby reducing the overhead of resources such as computing power and storage as well as the implementation cost of the scheme, so that the method can be applied to lightweight scenarios such as mobile games.
In an alternative embodiment, the program product may be implemented as a portable compact disc read only memory (CD-ROM) and comprises program code and may run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The exemplary embodiments of the present disclosure also provide an electronic device, such as may be the terminal device 110 or the server 120 described above. The electronic device may include a processor and a memory. The memory stores executable instructions of the processor, such as program code. The processor performs the method of the present exemplary embodiment by executing the executable instructions. The electronic device may further comprise a display for displaying the graphical user interface.
An electronic device is illustrated in the form of a general purpose computing device with reference to fig. 10. It should be understood that the electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 may include: processor 1010, memory 1020, bus 1030, I/O (input/output) interface 1040, network adapter 1050, and display 1060.
Memory 1020 may include volatile memory, such as a RAM 1021 and a cache unit 1022, and may also include nonvolatile memory, such as a ROM 1023. Memory 1020 may also include one or more program modules 1024; such program modules 1024 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment. For example, program modules 1024 may include the modules in the apparatus described above.
The processor 1010 may include one or more processing units. For example, the processor 1010 may include an AP (Application Processor), a modem processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband processor, and/or an NPU (Neural-network Processing Unit).
The processor 1010 is configured to execute the executable instructions stored in the memory 1020, for example, to perform the game scene component generation method of the present exemplary embodiment, which includes the following steps: in response to a trigger condition being met in a game program, determining information of a first scene component combination according to first prompt information, where the first scene component combination is a scene component combination to be generated corresponding to the first prompt information and is composed of one or more scene components; in a case where a game editing scene is displayed, providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination, where the first scene component control is configured to, in response to a first trigger operation, generate a corresponding scene component in the game editing scene; and in response to a second trigger operation of the second scene component control, generating the first scene component combination in the game editing scene according to the information of the first scene component combination.
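To make the flow above concrete, the following is a minimal Python sketch of the three steps. All names (SceneComponentCombo, the spawn stub, the sample component identifiers) are illustrative assumptions, not part of the disclosed implementation; the later sketches flesh out individual steps.

```python
from dataclasses import dataclass, field


@dataclass
class SceneComponentCombo:
    """Information of a scene component combination to be generated (assumed shape)."""
    component_ids: list[str] = field(default_factory=list)


def determine_combo_info(first_prompt: str) -> SceneComponentCombo:
    """Step 1: runs when the trigger condition is met; derives the first
    scene component combination from the first prompt information."""
    # In practice this could be delegated to a server-side model (see the later sketch).
    return SceneComponentCombo(component_ids=["rock_01", "pine_03", "lantern_02"])


def provide_controls(combo: SceneComponentCombo) -> dict:
    """Step 2: while the game editing scene is displayed, expose the ordinary
    first scene component controls plus a second control bound to the combination."""
    return {"first_controls": ["rock_01", "pine_03", "lantern_02", "house_01"],
            "second_control": combo}


def on_second_control_triggered(combo: SceneComponentCombo) -> None:
    """Step 3: the second trigger operation instantiates every component of the combination."""
    for cid in combo.component_ids:
        print(f"spawn existing component {cid} into the editing scene")  # engine call stub


# Example run
combo = determine_combo_info("snowy pine forest")
controls = provide_controls(combo)
on_second_control_triggered(controls["second_control"])
```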
In one embodiment, before determining the information of the first scene component combination according to the first prompt information, the method further includes: randomly generating the first prompt information according to a preset rule.
In one embodiment, randomly generating the first prompt information according to the preset rule includes: randomly determining a target object type prompt word from a plurality of object type prompt words; determining a target feature dimension from feature dimensions of the target object type corresponding to the target object type prompt word, and randomly determining a target feature prompt word from the feature prompt words of the target feature dimension; and generating the first prompt information according to the target object type prompt word and the target feature prompt word.
In one embodiment, randomly determining the target object type prompt word from the plurality of object type prompt words includes: randomly determining a target primary object type prompt word from a plurality of primary object type prompt words; and if the target primary object type corresponding to the target primary object type prompt word includes a plurality of secondary object types, randomly determining a target secondary object type prompt word from the secondary object type prompt words corresponding to the plurality of secondary object types.
In one embodiment, randomly generating the first prompt information according to the preset rule further includes: randomly determining a target color prompt word from a plurality of color prompt words, and adding the target color prompt word into the first prompt information.
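As an illustration of the random prompt-generation rule described in the embodiments above, the sketch below draws a primary type, an optional secondary type, one feature dimension, and a color. The vocabulary tables are invented placeholders, since the actual prompt-word lists are not given here.

```python
import random

# Hypothetical prompt-word tables (placeholders, not from the disclosure).
PRIMARY_TYPES = {
    "building": ["house", "tower"],   # secondary object types
    "plant": ["tree", "flower"],
    "creature": [],                   # no secondary types
}
FEATURE_DIMENSIONS = {
    "house": {"style": ["wooden", "stone"], "size": ["small", "large"]},
    "tree": {"shape": ["tall", "twisted"], "season": ["autumn", "snow-covered"]},
}
COLOR_PROMPT_WORDS = ["red", "blue", "moss green"]


def generate_first_prompt(add_color: bool = True) -> str:
    # Randomly pick a primary object type prompt word.
    primary = random.choice(list(PRIMARY_TYPES))
    # If that primary type has secondary types, randomly pick one of them as the target type.
    secondaries = PRIMARY_TYPES[primary]
    target_type = random.choice(secondaries) if secondaries else primary
    words = [target_type]
    # Pick a target feature dimension, then a feature prompt word within it.
    dims = FEATURE_DIMENSIONS.get(target_type, {})
    if dims:
        dim = random.choice(list(dims))
        words.append(random.choice(dims[dim]))
    # Optionally add a color prompt word.
    if add_color:
        words.append(random.choice(COLOR_PROMPT_WORDS))
    return " ".join(words)


print(generate_first_prompt())  # e.g. "house wooden moss green"
```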
In one embodiment, determining the information of the first scene component combination according to the first prompt information in response to the trigger condition being met in the game program includes: in response to entering the game editing scene from outside the game editing scene in the game program, determining the information of the first scene component combination according to the first prompt information.
In one embodiment, in a case where the second scene component control corresponding to the first scene component combination is provided, the method further includes: maintaining the second scene component control unchanged during the running of the current game editing scene.
In one embodiment, the first scene component controls are divided into different component types, and providing the plurality of first scene component controls and providing the second scene component control corresponding to the first scene component combination includes: providing first type selection controls corresponding to the component types, and providing a second type selection control; in response to a first selection operation of the first type selection controls, determining a target first type selection control according to the first selection operation, and displaying the first scene component controls under the component type corresponding to the target first type selection control; and in response to a second selection operation of the second type selection control, displaying the second scene component control corresponding to the first scene component combination.
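The tabbed behavior described in this embodiment might be organized as in the sketch below; the class, method names, and sample data are illustrative assumptions only.

```python
class ComponentPanel:
    """Switches between component-type tabs (first type selection controls)
    and the tab of generated combinations (second type selection control)."""

    def __init__(self, controls_by_type: dict[str, list[str]], combo_controls: list[str]):
        self.controls_by_type = controls_by_type  # first scene component controls, grouped by type
        self.combo_controls = combo_controls      # second scene component controls

    def on_first_type_selected(self, component_type: str) -> list[str]:
        # First selection operation: show first scene component controls of the chosen type.
        return self.controls_by_type.get(component_type, [])

    def on_second_type_selected(self) -> list[str]:
        # Second selection operation: show the controls bound to the generated combinations.
        return self.combo_controls


panel = ComponentPanel({"terrain": ["rock_01"], "plants": ["pine_03"]},
                       combo_controls=["combo_A", "combo_B"])
print(panel.on_first_type_selected("plants"))  # ['pine_03']
print(panel.on_second_type_selected())         # ['combo_A', 'combo_B']
```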
In one embodiment, determining the information of the first scene component combination according to the first prompt information includes: determining M first component generation schemes according to the first prompt information, where each first component generation scheme includes information of a corresponding first scene component combination, different first component generation schemes are used for generating different first scene component combinations, and M is a positive integer not less than 2; and providing the second scene component control corresponding to the first scene component combination includes: providing M second scene component controls corresponding to the M first scene component combinations.
In one embodiment, determining the information of the first scene component combination according to the first prompt information includes: sending the first prompt information to a server, where the server processes the first prompt information by using a pre-trained component generation model to obtain the information of the first scene component combination; and receiving the information of the first scene component combination returned by the server.
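For the server-side variant (and the M-scheme variant in the preceding embodiment), the request/response exchange might look like the sketch below. The endpoint URL, payload fields, and response shape are assumptions; the actual protocol and the pre-trained component generation model are not specified here.

```python
import requests  # third-party HTTP client, used here only for illustration

GENERATION_ENDPOINT = "https://example.invalid/api/scene-component/generate"  # hypothetical URL


def request_component_combinations(first_prompt: str, num_schemes: int = 2) -> list[dict]:
    """Send the first prompt information to the server; the server runs its
    pre-trained component generation model and returns M generation schemes,
    each carrying the information of one first scene component combination."""
    resp = requests.post(
        GENERATION_ENDPOINT,
        json={"prompt": first_prompt, "num_schemes": num_schemes},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"schemes": [{"components": [...], "transforms": [...]}, ...]}
    return resp.json()["schemes"]
```

One second scene component control could then be offered to the user for each returned scheme.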
In one embodiment, the information of the first scene component combination includes information of a first target scene component, and the first target scene component is a scene component for composing the first scene component combination; generating the first scene component combination in the game editing scene according to the information of the first scene component combination includes: generating the first target scene component in the game editing scene according to the information of the first target scene component to form the first scene component combination.
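The final assembly step can be pictured as iterating over the per-component information; the field names and the spawn stub below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TargetComponentInfo:
    component_id: str                              # which existing scene component to reuse
    position: tuple[float, float, float]
    rotation: tuple[float, float, float] = (0.0, 0.0, 0.0)
    scale: float = 1.0


def spawn_component(info: TargetComponentInfo) -> None:
    # Stand-in for the engine call that instantiates an existing scene component.
    print(f"spawn {info.component_id} at {info.position} scale {info.scale}")


def generate_combination(targets: list[TargetComponentInfo]) -> None:
    """Instantiate every first target scene component so that, together,
    they form the first scene component combination in the editing scene."""
    for info in targets:
        spawn_component(info)


generate_combination([
    TargetComponentInfo("pine_03", (0.0, 0.0, 0.0)),
    TargetComponentInfo("rock_01", (1.5, 0.0, -0.5), scale=0.8),
])
```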
In one embodiment, the method further includes: acquiring scene component control search information for searching an existing scene component control, where the existing scene component control includes a first scene component control and/or a second scene component control; based on the scene component control search information, searching for an existing scene component control matching the scene component control search information, and displaying the existing scene component control in a search result corresponding to the scene component control search information; determining second prompt information based on the scene component control search information, determining information of a second scene component combination according to the second prompt information, generating a third scene component control corresponding to the second scene component combination, and displaying the third scene component control in the search result corresponding to the scene component control search information, where the second scene component combination is a scene component combination to be generated corresponding to the second prompt information and is composed of one or more scene components; and in response to a third trigger operation of the third scene component control, generating the second scene component combination in the game editing scene according to the information of the second scene component combination.
In one embodiment, in a case where a third scene component control corresponding to the second scene component combination is generated, the method further includes: during the running of the current game editing scene, if new scene component control search information is acquired, searching for an existing scene component control matching the new scene component control search information based on the new scene component control search information, and displaying the existing scene component control in a search result corresponding to the new scene component control search information; and determining new second prompt information based on the new scene component control search information, determining information of a new second scene component combination according to the new second prompt information, generating a third scene component control corresponding to the new second scene component combination, and displaying the third scene component control in the search result corresponding to the new scene component control search information; the third scene component control corresponding to the new second scene component combination is configured to, in response to a third trigger operation, generate the new second scene component combination in the game editing scene.
In one embodiment, search results corresponding to the scene component control search information are displayed through a search interface, and the existing scene component control matching the scene component control search information and the third scene component control are arranged in the search interface; the method further includes: if the search interface does not display all of the existing scene component controls and third scene component controls matching the scene component control search information, providing an interface navigation control; and in response to a fourth trigger operation of the interface navigation control, controlling the search interface to scroll to the position where the third scene component control is displayed.
In one embodiment, determining the information of the second scene component combination according to the second prompt information includes: determining N second component generation schemes according to the second prompt information, where each second component generation scheme includes information of a corresponding second scene component combination, different second component generation schemes are used to generate different second scene component combinations, and N is a positive integer not less than 2; and generating the third scene component control corresponding to the second scene component combination and displaying it in the search result corresponding to the scene component control search information includes: generating N third scene component controls corresponding to the N second scene component combinations and displaying the N third scene component controls in the search result corresponding to the scene component control search information.
In one embodiment, acquiring the scene component control search information for searching the first scene component control includes: providing a scene component control search control in a case where the game editing scene is displayed, and acquiring the scene component control search information input based on the scene component control search control.
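Putting the search-related embodiments together, a simplified search handler might behave as below; the matching rule (token substring match) and the way the second prompt is derived from the search text are assumptions chosen only to make the sketch runnable.

```python
def handle_search(query: str,
                  existing_controls: list[str],
                  derive_combo_controls) -> dict:
    """Return both kinds of results for one piece of scene component control
    search information: matching existing controls, plus freshly generated
    third scene component controls for the derived second prompt."""
    # 1. Match existing first/second scene component controls (token substring match here).
    tokens = query.lower().split()
    matched = [c for c in existing_controls
               if any(tok in c.lower() for tok in tokens)]
    # 2. Treat the search text as the second prompt information and generate
    #    the second scene component combinations (delegated to a callback here).
    second_prompt = query
    third_controls = derive_combo_controls(second_prompt)
    return {"existing": matched, "generated": third_controls}


# Example: the callback stands in for the generation model / server round trip.
results = handle_search(
    "wooden house",
    existing_controls=["stone_house_01", "wooden_fence_02", "pine_03"],
    derive_combo_controls=lambda prompt: [f"combo:{prompt}#{i}" for i in range(2)],
)
print(results["existing"])   # ['stone_house_01', 'wooden_fence_02']
print(results["generated"])  # ['combo:wooden house#0', 'combo:wooden house#1']
```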
Based on the implementation of the above method by the electronic device 1000, on the one hand, a scheme for quickly and efficiently generating game scene components is provided: the information of the first scene component combination can be determined based on the first prompt information, and the corresponding second scene component control is provided, so that the user can quickly generate the first scene component combination in the game editing scene by performing the second trigger operation on the second scene component control, without a large amount of manual editing to generate the scene components, which reduces labor and time costs and improves editing efficiency. On the other hand, new scene content is generated by combining existing scene components in the game program, which breaks through the limitations of the existing scene component library, allows a large number of diversified first scene component combinations to be generated, and enriches the game content. In addition, combining scene components does not require a complete modeling process, which helps control the size of the generated resources of the first scene component combination, thereby reducing the overhead of computing power and storage as well as the implementation cost of the scheme, so that the method can be applied to lightweight scenarios such as mobile games.
The bus 1030 is used to enable connections between the various components of the electronic device 1000 and may include a data bus, an address bus, and a control bus.
The electronic device 1000 can communicate with one or more external devices 1100 (e.g., keyboard, mouse, external controller, etc.) through the I/O interface 1040.
Electronic device 1000 can communicate with one or more networks through network adapter 1050, e.g., network adapter 1050 can provide a mobile communication solution such as 3G/4G/5G, or a wireless communication solution such as wireless local area network, bluetooth, near field communication, etc. Network adapter 1050 can communicate with other modules of electronic device 1000 via bus 1030.
The electronic device 1000 may display a graphical user interface, such as displaying game editing scenes or the like, through the display 1060.
Although not shown in fig. 10, other hardware and/or software modules may also be provided in the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims. It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof.

Claims (19)

1. A method of generating a game scene component, the method comprising:
in response to a trigger condition being met in a game program, determining information of a first scene component combination according to first prompt information; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components;
in a case where a game editing scene is displayed, providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination; the first scene component control is configured to, in response to a first trigger operation, generate a corresponding scene component in the game editing scene;
and in response to a second trigger operation of the second scene component control, generating the first scene component combination in the game editing scene according to the information of the first scene component combination.
2. The method of claim 1, wherein before determining the information of the first scene component combination according to the first prompt information, the method further comprises:
randomly generating the first prompt information according to a preset rule.
3. The method of claim 2, wherein the randomly generating the first prompt information according to the preset rule comprises:
randomly determining a target object type prompt word from a plurality of object type prompt words;
determining a target feature dimension from feature dimensions of the target object type corresponding to the target object type prompt word, and randomly determining a target feature prompt word from the feature prompt words of the target feature dimension;
and generating the first prompt information according to the target object type prompt word and the target feature prompt word.
4. The method of claim 3, wherein the randomly determining the target object type prompt word from the plurality of object type prompt words comprises:
randomly determining a target primary object type prompt word from a plurality of primary object type prompt words;
if the target primary object type corresponding to the target primary object type prompt word comprises a plurality of secondary object types, randomly determining a target secondary object type prompt word from the secondary object type prompt words corresponding to the plurality of secondary object types.
5. The method of claim 3, wherein the randomly generating the first prompt information according to the preset rule further comprises:
randomly determining a target color prompt word from a plurality of color prompt words, and adding the target color prompt word into the first prompt information.
6. The method of claim 1, wherein determining the information of the first scene component combination according to the first prompt information in response to the trigger condition being met in the game program comprises:
in response to entering the game editing scene from outside the game editing scene in the game program, determining the information of the first scene component combination according to the first prompt information.
7. The method of claim 6, wherein in the case of providing a second scene component control corresponding to the first scene component combination, the method further comprises:
during the running of the current game editing scene, keeping the second scene component control unchanged.
8. The method of claim 1, wherein the first scene component controls are divided into different component types; and the providing a plurality of first scene component controls and providing a second scene component control corresponding to the first scene component combination comprises:
providing first type selection controls corresponding to the component types, and providing a second type selection control;
in response to a first selection operation of the first type selection controls, determining a target first type selection control according to the first selection operation, and displaying the first scene component controls under the component type corresponding to the target first type selection control;
and in response to a second selection operation of the second type selection control, displaying the second scene component control corresponding to the first scene component combination.
9. The method of claim 1, wherein determining the information of the first scene component combination according to the first prompt information comprises:
determining M first component generation schemes according to the first prompt information; each first component generation scheme includes information of a corresponding first scene component combination; different first component generation schemes are used for generating different first scene component combinations; M is a positive integer not less than 2;
and the providing the second scene component control corresponding to the first scene component combination comprises:
providing M second scene component controls corresponding to the M first scene component combinations.
10. The method of claim 1, wherein determining the information of the first scene component combination according to the first prompt information comprises:
sending the first prompt information to a server, wherein the server processes the first prompt information by using a pre-trained component generation model to obtain the information of the first scene component combination;
and receiving the information of the first scene component combination returned by the server.
11. The method of claim 1, wherein the information of the first scene component combination includes information of a first target scene component, the first target scene component being a scene component for composing the first scene component combination; the generating the first scene component combination in the game editing scene according to the information of the first scene component combination comprises the following steps:
and generating the first target scene component in the game editing scene according to the information of the first target scene component so as to form the first scene component combination.
12. The method according to claim 1, wherein the method further comprises:
acquiring scene component control searching information for searching an existing scene component control; the existing scene component controls comprise a first scene component control and/or a second scene component control;
searching for an existing scene component control matched with the scene component control searching information based on the scene component control searching information, and displaying the existing scene component control in a searching result corresponding to the scene component control searching information;
determining second prompt information based on the scene component control search information, determining information of a second scene component combination according to the second prompt information, generating a third scene component control corresponding to the second scene component combination, and displaying the third scene component control in a search result corresponding to the scene component control search information; the second scene component combination is a scene component combination to be generated corresponding to the second prompt information; the second scene component combination is composed of one or more scene components;
and in response to a third trigger operation of the third scene component control, generating the second scene component combination in the game editing scene according to the information of the second scene component combination.
13. The method of claim 12, wherein in the case of generating a third scene component control corresponding to the second scene component combination, the method further comprises:
during the running of the current game editing scene, if new scene component control searching information is acquired, searching for an existing scene component control matched with the new scene component control searching information based on the new scene component control searching information, and displaying the existing scene component control in a searching result corresponding to the new scene component control searching information; determining new second prompt information based on the new scene component control search information, determining new second scene component combination information according to the new second prompt information, generating a third scene component control corresponding to the new second scene component combination, and displaying the third scene component control in a search result corresponding to the new scene component control search information; the third scene component control corresponding to the new second scene component combination is configured to generate the new second scene component combination in the game editing scene in response to and according to a third trigger operation.
14. The method of claim 11, wherein search results corresponding to the scene component control search information are presented through a search interface, and an existing scene component control matching the scene component control search information and the third scene component control are arranged in the search interface; the method further comprises the steps of:
providing an interface navigation control if the search interface does not display all the existing scene component controls matched with the scene component control search information and the third scene component control;
and in response to a fourth triggering operation of the interface navigation control, controlling the search interface to scroll to a position for displaying the third scene component control.
15. The method of claim 11, wherein determining the information of the second scene component combination according to the second prompt information comprises:
determining N second component generation schemes according to the second prompt information; each second component generation scheme includes information of a corresponding second scene component combination; the different second component generation schemes are used to generate different second scene component combinations; n is a positive integer not less than 2;
The generating the third scene component control corresponding to the second scene component combination and displaying the third scene component control in the search result corresponding to the scene component control search information comprises the following steps:
and generating N third scene component controls corresponding to the N second scene component combinations and displaying the N third scene component controls in search results corresponding to the scene component control search information.
16. The method of claim 11, wherein the obtaining scene component control search information for searching the first scene component control comprises:
providing a scene component control search control in a case where the game editing scene is displayed, and acquiring the scene component control search information input based on the scene component control search control.
17. A game scene component generating apparatus, the apparatus comprising:
the scene component combination information determining module is configured to determine information of a first scene component combination according to the first prompt information in response to the trigger condition being met in the game program; the first scene component combination is a scene component combination to be generated corresponding to the first prompt information; the first scene component combination is composed of one or more scene components;
The scene component control providing module is configured to provide a plurality of first scene component controls and provide second scene component controls corresponding to the first scene component combinations under the condition that the game editing scene is displayed; the first scene component control is configured to respond to and generate a corresponding scene component in the game editing scene according to a first trigger operation;
and the scene component combination generation processing module is configured to respond to a second trigger operation of the second scene component control and generate the first scene component combination in the game editing scene according to the information of the first scene component combination.
18. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 16.
19. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 16 via execution of the executable instructions.
CN202311770775.2A 2023-12-20 2023-12-20 Game scene component method, device, storage medium and electronic equipment Pending CN117717785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311770775.2A CN117717785A (en) 2023-12-20 2023-12-20 Game scene component method, device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117717785A true CN117717785A (en) 2024-03-19

Family

ID=90208770

Country Status (1)

Country Link
CN (1) CN117717785A (en)

Legal Events

Date Code Title Description
PB01 Publication