CN117899460A - Interaction method and device in game and electronic equipment - Google Patents


Publication number
CN117899460A
CN117899460A (application CN202311687262.5A)
Authority
CN
China
Prior art keywords
game
game scene
target
scene map
model
Prior art date
Legal status (assumed, not a legal conclusion): Pending
Application number
CN202311687262.5A
Other languages
Chinese (zh)
Inventor
刘诗琪
刘姿佑
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311687262.5A priority Critical patent/CN117899460A/en
Publication of CN117899460A publication Critical patent/CN117899460A/en
Pending legal-status Critical Current


Abstract

The present disclosure provides an interaction method, an interaction apparatus, and an electronic device in a game. In response to a first trigger operation, a setting panel containing at least one game scene map identifier is displayed in a graphical user interface, where the game scene map corresponding to a game scene map identifier is used to provide a game running scene. In response to a second trigger operation, a target game scene map identifier is determined from the setting panel. A display model is then generated in the game scene under control; the display model contains information corresponding to the game scene map identified by the target game scene map identifier, and is a virtual model generated from the game scene map data corresponding to that identifier and from initial display model data. In this manner, a player can exhibit a game scene map of interest in the game scene through the display model so as to show support for that map, and can independently edit the content displayed in the model, thereby meeting the player's personalized needs for showing support.

Description

Interaction method and device in game and electronic equipment
Technical Field
The present disclosure relates to the technical field of game interaction design, and in particular to an interaction method, an interaction apparatus, and electronic equipment in a game.
Background
Players can share, or show support for, their favorite game content or virtual props within a game. In the related art, players share such things with friends or communities in the game through a sharing UI popup window or a link. However, these sharing and support modes are limited, and the shared content is homogeneous and lacks interest, so the player's sharing effect is poor.
Disclosure of Invention
The present disclosure aims to provide an interaction method, an interaction apparatus, and electronic equipment in a game, so that the presentation of a game map is more vivid and interesting, and a player can edit the map he or she wants to publicize at any time.
In a first aspect, the present disclosure provides a method of interaction in a game, the method comprising: providing a graphical user interface through a terminal device, wherein at least part of a game scene is displayed in the graphical user interface, the game scene comprising at least one first virtual object and a second virtual object controlled by the terminal device; in response to a first trigger operation, displaying a setting panel in the graphical user interface, wherein the setting panel comprises at least one game scene map identifier, and a game scene map corresponding to a game scene map identifier is used to provide a game running scene in which the first virtual object and/or the second virtual object play a game; in response to a second trigger operation, determining a target game scene map identifier from the at least one game scene map identifier; and controlling generation of a display model in the game scene, wherein the display model comprises information corresponding to the game scene map identified by the target game scene map identifier, and the display model is a virtual model generated from the game scene map data corresponding to the target game scene map identifier and from initial display model data.
In a second aspect, the present disclosure provides an interactive apparatus in a game, the apparatus comprising: an interface display module for providing a graphical user interface through a terminal device, wherein at least part of a game scene is displayed in the graphical user interface, the game scene comprising at least one first virtual object and a second virtual object controlled by the terminal device; a panel display module for displaying, in response to a first trigger operation, a setting panel in the graphical user interface, wherein the setting panel comprises at least one game scene map identifier, and a game scene map corresponding to a game scene map identifier is used to provide a game running scene in which the first virtual object and/or the second virtual object play a game; a map selection module for determining, in response to a second trigger operation, a target game scene map identifier from the at least one game scene map identifier; and a model display module for controlling generation of a display model in the game scene, wherein the display model comprises information corresponding to the game scene map identified by the target game scene map identifier, and is a virtual model generated from the game scene map data corresponding to the target game scene map identifier and from initial display model data.
In a third aspect, the present disclosure provides an electronic device comprising a processor and a memory storing machine executable instructions executable by the processor to implement the method of interaction in a game described above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method of interaction in a game described above.
The embodiment of the disclosure brings the following beneficial effects:
The present disclosure provides an interaction method, an interaction apparatus, and an electronic device in a game. A graphical user interface is provided through a terminal device, with at least part of a game scene displayed in it; the game scene comprises at least one first virtual object and a second virtual object controlled by the terminal device. First, in response to a first trigger operation, a setting panel is displayed in the graphical user interface; the setting panel comprises at least one game scene map identifier, and the game scene map corresponding to an identifier is used to provide a game running scene in which the first virtual object and/or the second virtual object play a game. Next, in response to a second trigger operation, a target game scene map identifier is determined from the at least one identifier. A display model is then generated in the game scene under control; it contains information corresponding to the game scene map identified by the target identifier, and is a virtual model generated from the game scene map data corresponding to that identifier and from initial display model data. In this manner, a player can exhibit a game scene map of interest in the game scene through the display model so as to show support for that map, and can independently edit the content displayed in the model, thereby meeting the player's personalized needs for showing support.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the techniques of the disclosure.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To more clearly illustrate the embodiments of the present disclosure or the prior art, the drawings required in the detailed description are briefly introduced below. It will be apparent that the drawings in the following description show some embodiments of the present disclosure, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an interactive method in a game provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a second virtual object hand-held display model according to an embodiment of the disclosure;
fig. 3 is a schematic display diagram of a setting panel according to an embodiment of the disclosure;
FIG. 4 is a schematic illustration of a display of an operational wheel provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of initial presentation model data provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an interactive device in a game according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
Players can share, or show support for, their favorite game content or virtual props within a game. In the related art, players share such things with friends or communities in the game through a sharing UI popup window or a link. However, sharing through a UI interface is commonplace and not intuitive enough; especially in games with rich scene interaction, the presentation after UI sharing is weak and lacks immersion. In both modes, the sharer has no functional support for editing or creative sharing, so the sharing form is homogeneous, the shared content seen by recipients is largely identical, and the lack of distinctiveness makes the player's sharing effect poor.
Based on the above problems, the embodiments of the present disclosure provide an interaction method, an apparatus and an electronic device in a game, where the technology may be applied to a game interaction scene, especially a scene of applying to a game scene map.
The interaction method in the game in one embodiment of the present disclosure may be executed on a local terminal device or a server. When the interaction method in the game runs on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications, such as cloud games, may run under the cloud interaction system. Taking a cloud game as an example, cloud gaming refers to a game mode based on cloud computing. In the running mode of a cloud game, the subject that runs the game program is separated from the subject that presents the game picture: storage and execution of the interaction method in the game are completed on a cloud game server, while the client device receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the cloud game server performing the information processing is in the cloud. During play, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the instruction, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally the client device decodes the data and outputs the game picture.
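The cloud-game round trip described above can be sketched as follows. This is an illustrative toy, not from the patent: the class and method names are assumptions, and "encoding and compressing the game picture" is mocked with `zlib` so the example stays self-contained.

```python
import zlib


class CloudGameServer:
    """Runs the game logic in the cloud and returns encoded frame data."""

    def __init__(self):
        self.position = 0

    def handle(self, instruction: str) -> bytes:
        # Run the game according to the operation instruction ...
        if instruction == "move_right":
            self.position += 1
        frame = f"frame:player_at_{self.position}"
        # ... then encode and compress the picture data before returning it.
        return zlib.compress(frame.encode())


class ClientDevice:
    """Sends operation instructions, then decodes and presents the frame."""

    def __init__(self, server: CloudGameServer):
        self.server = server

    def press(self, instruction: str) -> str:
        encoded = self.server.handle(instruction)
        # Decode the returned data and output the game picture.
        return zlib.decompress(encoded).decode()


client = ClientDevice(CloudGameServer())
print(client.press("move_right"))  # frame:player_at_1
```

The key point the sketch captures is the separation of concerns: the client only forwards instructions and decodes pictures, while all game state lives on the server.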
In an alternative embodiment, taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with the player through the graphical user interface, namely, conventionally downloading and installing the game program through the electronic device and running. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, may be rendered for display on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, the embodiment of the present disclosure provides an interaction method in a game, as shown in fig. 1, where the method includes the following specific steps:
Step S102, providing a graphical user interface through the terminal equipment, wherein at least part of game scenes are displayed in the graphical user interface, and the game scenes comprise: at least one first virtual object and a second virtual object controlled by the terminal device.
The terminal device may be the aforementioned local terminal device, or a client device in the aforementioned cloud interaction system; for example, a mobile phone, tablet computer, or personal computer. The terminal device enters the game by running the game program and displays a graphical user interface in which part of the game scene is shown; this may be an initial game scene or a game running scene (also referred to as a game play scene). A game play scene is the scene in which a virtual object experiences the game, i.e., the scene in which the virtual object plays a match; the initial game scene is the lobby displayed when the player first enters the game, and in an alternative embodiment the lobby is a virtual game scene rather than a game play scene. The game scene may contain a plurality of virtual objects, which can play in the game scene, interact with other virtual objects, compete against them, and so on.
Step S104, responding to the first triggering operation, displaying a setting panel in the graphical user interface, wherein the setting panel comprises at least one game scene map identifier, and a game scene map corresponding to the game scene map identifier is used for providing a game running scene so that the first virtual object and/or the second virtual object play a game in the game running scene.
The specific operation corresponding to the first trigger operation can be determined according to development requirements and player operation. For example, it may be an operation triggering a control displayed in the graphical user interface, or a trigger operation on a designated area of the graphical user interface. When the player performs the first trigger operation, a setting panel is displayed in the graphical user interface. The setting panel comprises one or more game scene map identifiers; each identifier is unique and indicates a game scene map in the game. How an identifier is displayed can be determined according to development requirements or player operation; for example, a game scene map identifier may be shown as a map cover, a map name, or a map author name.
In a specific implementation, the game scene map corresponding to the game scene map identifier displayed in the setting panel may be a game scene map played by the second virtual object, or may be a game scene map collected by the second virtual object, for example, the system may directly pull a map list in a map favorites corresponding to the second virtual object, and display the game scene map identifier in the map list in the setting panel.
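Pulling the map list from the second virtual object's map favorites and rendering one identifier per map, as described above, can be sketched like this. The function and field names (`cover`, `name`, `author`) are hypothetical; the patent does not prescribe a data layout.

```python
def build_setting_panel(map_favorites: list) -> list:
    """Return one display identifier per map in the favorites list.

    Per the description, an identifier may be shown as a map cover,
    a map name, or a map author name; here all three are combined.
    """
    panel = []
    for entry in map_favorites:
        panel.append(f"{entry['cover']} | {entry['name']} | {entry['author']}")
    return panel


favorites = [
    {"cover": "cover_01.png", "name": "Sky Arena", "author": "PlayerA"},
    {"cover": "cover_02.png", "name": "Lava Maze", "author": "PlayerB"},
]
print(build_setting_panel(favorites)[0])  # cover_01.png | Sky Arena | PlayerA
```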
In practical applications, the game scene map corresponding to a game scene map identifier displayed in the setting panel is usually obtained by a map creator autonomously editing the map as needed: the map creator edits a game editing scene in the game editor, and the game scene map is then generated from the edited game editing scene. The map creator may be the player controlling the first virtual object, the player controlling the second virtual object, or another player in the game.
Step S106, responding to the second triggering operation, and determining the target game scene map identification from at least one game scene map identification.
The specific operation corresponding to the second triggering operation may be determined according to the research and development requirement and the player operation, for example, the second triggering operation may be a clicking operation, a long-press operation, or a sliding operation of the player on a target game scene map identifier in the game scene map identifiers displayed in the setting panel; the player may click on the determination control displayed in the setting panel after clicking on the target game scene map identifier. The target game scene map identifier may be any one of the game scene map identifiers displayed in the setting panel.
Step S108, controlling generation of a display model in the game scene, wherein the display model comprises information corresponding to the game scene map identified by the target game scene map identifier, and is a virtual model generated from the game scene map data corresponding to that identifier and from initial display model data.
In a specific implementation, after the player determines the target game scene map identifier from the at least one game scene map identifier, the display model may be displayed in the game scene automatically, or after the player triggers a designated control in the graphical user interface. The display model is generated from the game scene map data corresponding to the target game scene map identifier and from initial display model data; the initial display model data creates a virtual model without any content, and the display model is obtained by embedding the game scene map data in that virtual model.
The display model shows information related to the game scene map corresponding to the target game scene map identifier. This information generally comprises the game scene map data of the target game scene map, which may include at least one of the following: the map name, map author, and map cover of the game scene map corresponding to the target game scene map identifier; a map video obtained by capturing the game scene map; and clips of matches played by virtual objects in the game running scene corresponding to the game scene map. Specifically, the map cover may be a panoramic picture of the game scene map, or a picture of part of its scene area; the map video may be a video related to the game scene map uploaded by the map author, or a video obtained by shooting the game scene map along a preset shooting track.
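The game scene map data enumerated above can be collected in a small data class. This is a hedged sketch: the field names are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class GameSceneMapData:
    """Data shown on the display model for one game scene map."""

    map_id: str                  # the target game scene map identifier
    map_name: str = ""
    map_author: str = ""
    map_cover: str = ""          # panorama, or a picture of part of the scene area
    map_video: str = ""          # author-uploaded, or captured on a preset track
    # Clips of matches played by virtual objects in the map's running scene.
    match_clips: list = field(default_factory=list)


data = GameSceneMapData(map_id="m-001", map_name="Sky Arena", map_author="PlayerA")
```

All fields except the identifier default to empty, mirroring the "at least one of the following data" wording.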
In practical applications, the display model generated in the game scene may be displayed at a designated position in the game scene (for example, standing at that position), or may be attached to the second virtual object so that the virtual object drives the display model to move; for example, the virtual object may hold the display model in its hands, carry it on its back, or bear it overhead. The designated position may be any position in the game scene, determined according to player settings or development requirements.
Through the above interaction method in a game, a player can exhibit a game scene map of interest in the game scene via the display model so as to show support for that map, and can independently edit the content displayed in the model, thereby meeting the player's personalized needs for showing support.
The following preferred embodiments are used to describe the manner in which a map author creates a map of a game scene.
Specifically, the game scene map is generated by the following steps 10-12:
step 10, in response to the first editing operation, creating at least one scene component in the game editing scene based on the first editing operation.
In a specific implementation, the game editing scene is provided by a game editor, and a player can edit the editable component according to requirements in the game editing scene, and the edited editable component is determined to be a scene component and is arranged in the game editing scene. In practical applications, when a player triggers a game editing instruction, a game editing scene may be displayed in a graphical user interface, and the game editing instruction may be determined according to game rules. For example, the editing instruction may be an operation to enter a game editor; the operation of selecting a certain scene map for editing may be performed.
The editable components are typically components in a component library; when a player triggers an editable component, a scene component corresponding to it is generated in the game editing scene. The component types and forms of the editable components in the library can be set according to development requirements. For example, the library may include several different types of editable components: structure-type, furniture-type, environment-type, mechanism-type, biological-type, combination-type components, and so on, each type itself containing a plurality of editable components. For example, the mechanism type further comprises a motion mechanism component, a functional mechanism component, a logic component, an object component, a floor component, a carrier component, and the like.
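A minimal sketch of the component library and of step 10 (instantiating an editable component as a scene component). The concrete component names under each type are assumptions; the patent only names the types.

```python
# Component library: each editable-component type groups several components;
# the mechanism type is further subdivided, as in the description.
COMPONENT_LIBRARY = {
    "structure": ["wall", "stair"],
    "furniture": ["table", "chair"],
    "environment": ["tree", "rock"],
    "mechanism": ["motion mechanism", "functional mechanism", "logic",
                  "object", "floor", "carrier"],
    "biological": ["bird", "fish"],
    "combination": ["house kit"],
}


def create_scene_component(component_type: str, name: str) -> dict:
    """Instantiate an editable component as a scene component in the editing scene."""
    if name not in COMPONENT_LIBRARY.get(component_type, []):
        raise ValueError(f"unknown component: {component_type}/{name}")
    # Component parameters start empty; step 11 fills them via the second
    # editing operation (e.g. through a parameter setting panel).
    return {"type": component_type, "name": name, "parameters": {}}


component = create_scene_component("mechanism", "logic")
```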
Step 11, in response to the second editing operation, determining component parameters of at least one scene component based on the second editing operation.
The specific operation corresponding to the second editing operation may be determined according to the development requirement, for example, the second editing operation may be an editing operation for a parameter in an editing window provided in a game editing scene, and the edited parameter is determined as a component parameter of the scene component; the second editing operation may be a triggering operation of a setting control displayed in the graphical user interface after a certain scene component in the game editing scene is selected, and after the setting control is triggered, a parameter setting panel corresponding to the scene component is displayed in the editing window, and the player may edit component parameters of the scene component in the parameter setting panel.
Step 12, responding to map generation operation, controlling component parameters based on scene components created in the game editing scene, generating a game scene map corresponding to the game editing scene, and transmitting the game scene map to a server; the server is configured to be in communication connection with the terminal device, the terminal device is configured with a game program, the terminal device is configured to obtain a game scene map from the server, and a corresponding game running scene is generated according to the game scene map through the game program.
In a specific implementation, the map generation operation may be determined according to development requirements; it may be a click or long-press on a preset control displayed in the graphical user interface. After the map author performs the map generation operation, a game scene map corresponding to the game editing scene is generated, and the game scene information corresponding to the map is stored at a preset location. The preset location may be a map file, which can store not only the game scene information but also other map information (including but not limited to a screenshot, the map name, a log, etc.). After storing the game scene information, the map file is uploaded to the server. Once server verification passes, the game scene map generated from the game scene information can be released into a preset map pool, so that terminal devices connected to the server can download the map, generate the corresponding game running scene from it through the game program, and then play in that scene. In this way a game scene map can be published from the game editor and experienced by other players, realizing a rapid UGC (User Generated Content) capability.
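Steps 10-12 end-to-end, from map file to map pool, can be sketched as below. This is a hedged illustration with assumed names; the server-side verification is a stub standing in for whatever checks a real server would run.

```python
def generate_map_file(scene_components: list, screenshot: str, map_name: str) -> dict:
    """Bundle the edited scene into a map file: scene info plus other map info."""
    return {
        "scene_info": scene_components,   # scene components with their parameters
        "screenshot": screenshot,
        "map_name": map_name,
        "log": [],
    }


class MapServer:
    """Receives uploaded map files and, after verification, publishes them."""

    def __init__(self):
        self.map_pool = []   # the preset map pool that terminal devices download from

    def upload(self, map_file: dict) -> bool:
        # Verification stub: reject an empty scene; a real server would run
        # content and integrity checks here.
        if not map_file["scene_info"]:
            return False
        self.map_pool.append(map_file)
        return True


server = MapServer()
ok = server.upload(generate_map_file([{"type": "floor"}], "shot.png", "Sky Arena"))
```

A client connected to `server` would then fetch an entry from `map_pool` and let the game program build the running scene from its `scene_info`.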
The following preferred embodiments are used to describe the manner in which a presentation model is displayed in a game scene.
Specifically, the process of controlling generation of a display model in the game scene may include: generating the display model based on the game scene map data corresponding to the target game scene map identifier and the initial display model data; and controlling attachment of the display model to a second virtual object in the game scene.
In a specific implementation, the initial display model data comprises a first model data layer and a second model data layer located above it. The first model data layer configures the model style of the virtual model; the second model data layer is used for embedding the game scene map data corresponding to the target game scene map identifier, so that the map data is displayed on the virtual model and the display model is obtained. The model style of the virtual model follows the shape of the first model data layer: if the first layer is rectangular, the virtual model is rectangular; if it is circular, the virtual model is circular. In practical applications, before the game scene map data is embedded, the virtual model can be understood as a whiteboard model.
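The two-layer structure above can be sketched as a pair of dictionaries: the first layer fixes the style (shape), the second carries the embedded map data. The structure and names are hypothetical illustrations of the description, not the patent's implementation.

```python
def generate_display_model(map_data: dict, shape: str = "rectangle") -> dict:
    """Build a display model from initial model data plus scene map data."""
    first_layer = {"shape": shape}   # first model data layer: configures the style
    second_layer = dict(map_data)    # second layer: embeds the scene map data
    return {"style": first_layer, "content": second_layer}


# Before any map data is embedded, the model is effectively a whiteboard.
whiteboard = generate_display_model({}, shape="circle")

# Embedding map data yields the display model shown in the game scene.
model = generate_display_model({"map_name": "Sky Arena", "map_author": "PlayerA"})
```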
After the display model is generated, the second virtual object appears in the game scene with the display model attached. The display model can be attached at any position of the second virtual object; for example, on its hand, its back, or the top of its head. Once attached, the display model moves with the second virtual object and shakes correspondingly as the object moves, simulating the way a real person carries an object while moving.
In an alternative embodiment, the process of controlling attachment of the display model to the second virtual object in the game scene may include: controlling the second virtual object to lift the display model with a designated action, where the designated action is either a default action or a target action determined based on an action setting operation.
In a specific implementation, after the display model is generated, a picture of the second virtual object lifting the display model with the designated action is displayed in the graphical user interface. Fig. 2 is a schematic diagram, provided in an embodiment of the disclosure, of the second virtual object holding up the display model: the character model holding up a rectangle is the second virtual object, the rectangle it holds represents the display model, and the other character models in Fig. 2 are first virtual objects. The designated action can be a default action, or a target action edited by the player through an action setting operation; the specific operation corresponding to the action setting operation, and the specific default action, can be determined according to development requirements or player operation.
In practical application, after the display model is generated, the second virtual object may immediately be controlled to lift it with the default action. Alternatively, the object may not lift the model until a target action has been set through the action setting operation, and then lift it with that target action. Or the object may first lift the model with the default action and, after the default action is adjusted to a target action through the action setting operation, switch to lifting the model with the target action.
In an alternative embodiment, the above target action is determined by: displaying, in response to the action setting operation, action identifiers corresponding to a plurality of preset actions in the graphical user interface, wherein the default action is one of the plurality of preset actions; and in response to a selection operation for the action identifiers corresponding to the plurality of preset actions, determining the preset action corresponding to the action identifier selected by the selection operation as the target action. The specific actions corresponding to the preset actions may be determined according to development requirements; for example, the preset actions may include the second virtual object lifting the display model level, lifting the display model over the head, or holding the display model in one hand, etc. The target action may be any one of the plurality of preset actions.
Each preset action corresponds to an action identifier, and the specific identifier corresponding to each action identifier may be determined according to development requirements. For example, the action identifier may be a text description or an action image, where the action image contains a virtual object performing the preset action, so that the player can clearly know through the action identifier which preset action each identifier corresponds to, which helps improve the player's experience.
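As a minimal sketch of the identifier-to-action mapping described above (all identifier strings, action descriptions, and function names here are illustrative assumptions, not part of the disclosure):

```python
# Each action identifier (a text label or an action-image id) maps to one
# preset action; the default action is itself one of the presets.
PRESET_ACTIONS = {
    "id_flat_lift": "lift the model level with both hands",
    "id_overhead": "lift the model over the head",
    "id_one_hand": "hold the model in one hand",
}

DEFAULT_ACTION_ID = "id_flat_lift"

def select_target_action(selected_identifier: str) -> str:
    """Return the preset action bound to the identifier the player selected."""
    if selected_identifier not in PRESET_ACTIONS:
        raise ValueError(f"unknown action identifier: {selected_identifier}")
    return PRESET_ACTIONS[selected_identifier]

# The specified action is the default until the player picks a target action.
specified = PRESET_ACTIONS[DEFAULT_ACTION_ID]
specified = select_target_action("id_overhead")
```

The target action may be any of the presets; the selection operation only chooses among them.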
In a specific embodiment, a first control displayed in a first display style is included in the graphical user interface on which the game scene is displayed, and the first triggering operation is implemented by triggering the first control displayed in the first display style; after the target game scene map identifier is determined from the at least one game scene map identifier in response to the second triggering operation, the display style of the first control is controlled to be switched from the first display style to a second display style, and the first control is displayed in the second display style in the graphical user interface.
In a specific implementation, the first control displayed in the first display style is displayed in the graphical user interface in the following manner: in response to a triggering operation on a prop selection control displayed in the graphical user interface, displaying a plurality of virtual props in the graphical user interface, the virtual props being used for controlling the second virtual object to execute preset game behaviors; and in response to a selection operation for a target virtual prop among the plurality of virtual props, displaying the first control corresponding to the target virtual prop in the graphical user interface. The plurality of virtual props displayed in the graphical user interface may be virtual props owned by the second virtual object, or virtual props available to the second virtual object; each virtual prop can control the second virtual object to execute a preset game behavior, where the specific behavior corresponding to the preset game behavior may be determined according to development requirements. For example, the preset game behavior may be waving a greeting, jumping, hugging, or the like. The target virtual prop may be one of the plurality of virtual props; this prop can display the first control, and by triggering the first control, the action with which the second virtual object lifts the display model, the content displayed in the display model, and the like can be edited.
When the player triggers the first control displayed in the first display style, a setting panel containing at least one game scene map identifier is displayed in the graphical user interface. As shown in fig. 3, which is a schematic diagram of a setting panel provided by an embodiment of the disclosure, the rectangles with data displayed below the map collection list in the setting panel represent the game scene map identifiers; the number of game scene map identifiers currently displayed in the setting panel may be less than the total number of game scene map identifiers contained in the map collection list, and the player can view the other game scene map identifiers in the map collection list through the sliding button displayed on the right side of the game scene map identifiers. By executing a second triggering operation, the player can determine the target game scene map identifier from the at least one game scene map identifier; specifically, the second triggering operation may be a clicking operation of clicking the target game scene map identifier in the setting panel and then clicking a confirmation control displayed in the setting panel. After the player determines the target game scene map identifier from the setting panel, the display style of the first control displayed in the graphical user interface is switched from the first display style to the second display style. The first display style and the second display style are different, and the specific display styles corresponding to them may be determined according to development requirements; for example, the first display style may be that the control icon is empty or shows a first pattern, and the second display style may be that the control icon shows the prop identifier corresponding to the target virtual prop or a second pattern.
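The style switch of the first control can be sketched as a small state holder; the class name, style labels, and map identifier below are illustrative assumptions:

```python
# Sketch of the first-control style switch: the control starts in the first
# display style and switches to the second style once a target map is chosen.
class FirstControl:
    def __init__(self):
        self.style = "first"        # e.g. empty icon or first pattern
        self.target_map_id = None

    def on_second_trigger(self, map_id: str):
        """Player confirms a target game scene map in the setting panel."""
        self.target_map_id = map_id
        self.style = "second"       # e.g. prop identifier or second pattern

control = FirstControl()
control.on_second_trigger("map_001")
```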
In a specific embodiment, the target action may be determined by: displaying, in response to a third triggering operation for the first control displayed in the second display style, action identifiers corresponding to a plurality of preset actions in the graphical user interface; and in response to a sliding operation taking the triggering position of the third triggering operation as a starting point and a target action identifier among the action identifiers corresponding to the plurality of preset actions as an end point, selecting the target action identifier and determining the preset action corresponding to the target action identifier as the target action. In response to the end of the sliding operation, the second virtual object is controlled to lift the presentation model in the game scene through the target action.
The third triggering operation may be a clicking operation or a long-press operation on the first control displayed in the second display style. When the player performs the third triggering operation on the first control displayed in the second display style, the action identifiers corresponding to the plurality of preset actions are displayed in the graphical user interface; then, while the third triggering operation continues, the player slides a finger to the position corresponding to the target action identifier and releases it, so that the target action identifier is selected. The target action identifier may be any one of the action identifiers corresponding to the plurality of preset actions.
In a specific implementation, when the player presses the first control displayed in the second display style, an operation wheel is displayed in the graphical user interface, and the action identifiers corresponding to the plurality of preset actions are displayed in the operation wheel. As shown in fig. 4, which is a display schematic diagram of the operation wheel provided by the embodiment of the disclosure, the operation wheel is displayed in the lower right corner of fig. 4, and three action identifiers corresponding to preset actions are displayed in it: the preset action corresponding to the action identifier displayed at the leftmost side of the operation wheel is lifting the display model level, the preset action corresponding to the action identifier displayed at the uppermost side is lifting the display model over the top of the head, and the preset action corresponding to the action identifier displayed at the rightmost side is holding the display model in one hand. An exit control is also displayed in the operation wheel; when the player clicks the exit control, the operation wheel is cancelled from display in the graphical user interface.
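The slide-to-select gesture on the operation wheel amounts to picking the action identifier whose slot is nearest the release point; the slot coordinates and identifier names below are illustrative assumptions:

```python
import math

# Sketch of slide-to-select on the operation wheel: the press position is the
# start point, and the identifier nearest the release position is selected.
WHEEL_SLOTS = {
    "id_flat_lift": (-1.0, 0.0),   # leftmost slot
    "id_overhead": (0.0, 1.0),     # topmost slot
    "id_one_hand": (1.0, 0.0),     # rightmost slot
}

def select_on_release(release_pos) -> str:
    """Return the action identifier whose wheel slot is nearest the release point."""
    return min(
        WHEEL_SLOTS,
        key=lambda k: math.dist(WHEEL_SLOTS[k], release_pos),
    )

# Sliding from the press point to near the top slot selects "id_overhead".
chosen = select_on_release((0.1, 0.9))
```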
In this manner, the action with which the virtual object lifts the display model in the game scene can be edited, so that the actions people make when cheering for something in real life can be simulated, and the player can publicize and support a favorite map in a more personalized way.
The following preferred embodiments describe the manner in which a propaganda word is displayed on the display model, as well as the different manners in which the display model is displayed.
Specifically, the display model further includes a target propaganda word for a game scene map corresponding to the target game scene map identifier, so that the target propaganda word is displayed while information corresponding to the game scene map corresponding to the target game scene map identifier is displayed through the display model. In specific implementation, the target propaganda may be a fixed propaganda or a player self-defined propaganda.
In practical application, the display model in the game scene may display only the game scene map data corresponding to the target game scene map identifier, or may display both the game scene map data and the target propaganda word of the game scene map corresponding to the target game scene map identifier. Specifically, the content displayed on the presentation model varies in different application scenes.
When the target propaganda word displayed on the display model is a player-defined propaganda word, the target propaganda word is determined in the following manner: in response to a propaganda word input operation, the content input by the propaganda word input operation is determined as the target propaganda word for the game scene map corresponding to the target game scene map identifier. Specifically, the specific operation corresponding to the propaganda word input operation may be determined according to development requirements; for example, the propaganda word input operation may be a content input operation on an input control displayed in the graphical user interface, where the input control may be displayed in the main interface of the graphical user interface or in the setting panel displayed through the first triggering operation, and the position of "click to input a propaganda word" shown in fig. 3 corresponds to the input control. The propaganda word input operation may also be an operation of clicking the display model and then inputting content.
In an alternative embodiment, the initial presentation model data further includes: a third model data layer located above the second model data layer; the third model data layer is used for embedding the target propaganda word of the game scene map corresponding to the target game scene map identifier, so that the target propaganda word is displayed above the game scene map data displayed in the display model. Specifically, the initial display model data may include three model data layers. As shown in fig. 5, which is a schematic diagram of initial display model data provided by an embodiment of the present disclosure, the third layer in fig. 5 is the first model data layer, used to indicate the model style of the virtual model; the second layer in fig. 5 is the second model data layer, used to embed the map, that is, the game scene map data, which may be an image or a video; the first layer in fig. 5 is the third model data layer, used to embed the target propaganda word. The playing form of the target propaganda word is also produced on the third model data layer, and the specific form corresponding to the playing form may be determined according to development requirements; for example, the playing form may be scrolling loop playing, or display at a fixed position, etc.
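The three-layer initial display model data can be sketched as a simple layered record; all field names, the play-form label, and the sample values below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the layered initial display model data: a style layer, a map data
# layer above it, and a propaganda-word layer on top.
@dataclass
class PresentationModelData:
    model_style: str                   # first layer: model style of the virtual model
    map_data: Optional[str] = None     # second layer: embedded map data (image/video)
    slogan: Optional[str] = None       # third layer: target propaganda word
    slogan_play_mode: str = "scroll"   # playing form, e.g. scrolling loop or fixed position

def build_presentation_model(initial: PresentationModelData,
                             map_data: str, slogan: str) -> PresentationModelData:
    """Embed the target map data and propaganda word into the initial model data."""
    initial.map_data = map_data
    initial.slogan = slogan
    return initial

model = build_presentation_model(
    PresentationModelData(model_style="rectangular board"),
    map_data="map_001_cover.png",
    slogan="Come play my map!",
)
```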
In an alternative embodiment, after a presentation model is displayed in the game scene, the player may further modify and switch information corresponding to the game scene map displayed in the presentation model, and specifically, in response to the fourth triggering operation, control a map switch control to be displayed in the graphical user interface; responding to the triggering operation for the map switching control, and displaying a setting panel in a graphical user interface; and responding to the selection operation of the first game scene map identifier in the setting panel, and switching the information corresponding to the game scene map corresponding to the target game scene map identifier displayed in the display model in the game scene to the information corresponding to the game scene map corresponding to the first game scene map identifier. The first game scene map identifier may be any one of game scene map identifiers in a setting panel.
In a specific implementation, the specific operation corresponding to the fourth triggering operation may be determined according to development requirements; for example, the fourth triggering operation may be a triggering operation on a map switching control displayed in the graphical user interface, or may be the same operation as the first triggering operation. In a specific embodiment, a map switching control may be added to the operation wheel shown in fig. 4; on this basis, the player can click the first control to display the operation wheel, and then select the map switching control in response to a sliding operation taking the touch point position of the clicking operation as a starting point and the map switching control as an end point; in response to the end of the sliding operation, the setting panel is displayed in the graphical user interface.
In order to avoid excessive information loading and cluttered propaganda word display when multiple virtual objects carry display models, a distance judging mechanism is set to ensure the presentation of the display model effect.
Specifically, in response to the second virtual object being hooked with the display model, the stay time of the second virtual object at the target position in the game scene is longer than the first preset time, the display model is controlled to display game scene map data corresponding to the target game map identifier, and the target propaganda is controlled to be played on the game scene map data according to the preset playing form.
The target position may be any position in the game scene, and the specific duration corresponding to the first preset duration may be determined according to development requirements; for example, the first preset duration may be 1 second or 2 seconds, etc. In a specific implementation, while the virtual object carrying the display model moves in the game scene, only the game scene map data corresponding to the target game map identifier is displayed on the display model; after the virtual object stays at a certain position in the game scene for longer than the first preset duration, the target propaganda word, played according to the preset playing form, appears above the game scene map data in the display model.
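The stay-duration check can be sketched as follows; the threshold value and function name are illustrative assumptions:

```python
# Sketch of the stay-duration check: the propaganda word only plays after the
# object carrying the display model has stayed in place longer than the first
# preset duration; before that, only the map data is shown.
FIRST_PRESET_DURATION = 2.0  # seconds; determined by development requirements

def model_display_content(stay_seconds: float) -> list:
    """Return what the display model shows given the current stay time."""
    content = ["map_data"]                 # always shows the target map data
    if stay_seconds > FIRST_PRESET_DURATION:
        content.append("slogan")           # propaganda word plays in its preset form
    return content
```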
Further, in response to the second virtual object being hooked with the display model and the distance between a target first virtual object and the second virtual object being greater than a preset distance threshold, the second virtual object hooked with the display model is displayed in the game scene provided by the terminal device controlling the target first virtual object, and the display model is controlled to display only the game scene map data corresponding to the target game map identifier; in response to the duration for which the distance between the target first virtual object and the second virtual object is not greater than the preset distance threshold exceeding a second preset duration, the second virtual object hooked with the display model is displayed in the game scene provided by the terminal device controlling the target first virtual object, and the display model is controlled to display the game scene map data corresponding to the target game map identifier together with the target propaganda word played on the game scene map data according to the preset playing form.
In a specific implementation, the distance value corresponding to the preset distance threshold may be determined according to development requirements; for example, the preset distance threshold may be 1 meter or 3 meters. The second preset duration may likewise be determined according to development requirements; for example, it may be set to 1 second or 2 seconds, etc. Specifically, when other players see that a virtual object controlled by a certain player is hung with a display model, the content displayed on the display model is related to the distance between the virtual characters: when the distance between the virtual object controlled by another player and the virtual object hung with the display model is greater than the preset distance threshold, only the game scene map data corresponding to the target game scene map identifier is displayed on the display model; after that distance becomes not greater than the preset distance threshold and remains within that range for the second preset duration, the target propaganda word played according to the preset playing form, for example a propaganda word played with a bullet-screen scrolling effect, is added to the display model.
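A minimal sketch of the distance judging mechanism as it might run on a viewing player's terminal; the threshold values and names are illustrative assumptions:

```python
# Sketch of the distance judging mechanism: far viewers see only the map data;
# viewers who stay within the preset distance threshold long enough also see
# the propaganda word (e.g. played with a bullet-screen scrolling effect).
PRESET_DISTANCE_THRESHOLD = 3.0   # meters; determined by development requirements
SECOND_PRESET_DURATION = 1.0      # seconds; determined by development requirements

def viewer_display_content(distance: float, seconds_within: float) -> list:
    """Return what the viewing terminal renders on the display model."""
    content = ["map_data"]
    if (distance <= PRESET_DISTANCE_THRESHOLD
            and seconds_within > SECOND_PRESET_DURATION):
        content.append("slogan")
    return content
```

Gating the propaganda word this way keeps distant terminals from loading the extra content at all, which is the stated load-reduction goal.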
This manner provides a novel way of promotion, support, and chart-boosting in scene-based interactive games, creating an immersive support experience; by combining the characteristics of in-game scene interaction with the content attributes of UGC-created maps, it can satisfy the creation space of the author (i.e., the party being supported) while also providing the sharer (i.e., the party giving support) with a certain space for promotional creation. In addition, the manner determines the content displayed on the display model according to the distance judging mechanism, which reduces the loading load of scene content, ensures the viewing experience of viewers, and avoids information overload caused by excessive information.
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides an interaction device in a game, as shown in fig. 6, where the device includes:
The interface display module 60 is configured to provide a graphical user interface through the terminal device, where at least part of a game scene is displayed in the graphical user interface, and the game scene includes: at least one first virtual object and a second virtual object controlled by the terminal device.
The panel display module 61 is configured to display a setting panel in response to the first trigger operation, where the setting panel includes at least one game scene map identifier, and a game scene map corresponding to the game scene map identifier is used to provide a game running scene, so that the first virtual object and/or the second virtual object play a game in the game running scene.
The map selection module 62 is configured to determine a target game scene map identifier from the at least one game scene map identifier in response to the second trigger operation.
The model exhibition module 63 is configured to control generation of an exhibition model in the game scene, where the exhibition model includes information corresponding to a game scene map corresponding to the target game scene map identifier, and the exhibition model is a virtual model generated according to game scene map data corresponding to the target game scene map identifier and initial exhibition model data.
According to the interaction device in the game, the player can display a game scene map of interest in the game scene through the display model so as to support the game scene map, and the player can independently edit the content displayed in the display model, thereby meeting the player's personalized requirements for support.
Specifically, the model display module 63 is configured to: generating a display model based on game scene map data and initial display model data corresponding to the target game scene map identification; controlling to attach the presentation model to a second virtual object in the game scene.
In a specific implementation, the initial presentation model data includes: a first model data layer and a second model data layer located above the first model data layer; the first model data layer is used for configuring the model style of the virtual model, and the second model data layer is used for embedding game scene map data corresponding to the target game scene map identification so as to display the game scene map data on the virtual model and obtain the display model.
In an alternative embodiment, the game scene map data corresponding to the target game scene map identifier includes at least one of the following: the map name, map author, and map cover of the game scene map corresponding to the target game scene map identifier; a map video obtained by capturing the game scene map; and a game fragment of a virtual object playing the game in the game running scene corresponding to the game scene map.
In an alternative embodiment, the model display module 63 is configured to: controlling the second virtual object to lift the display model through the appointed action; wherein the specified actions include default actions or target actions determined based on action setting operations.
Further, the above target action is determined by: responding to the action setting operation, and displaying action identifiers corresponding to a plurality of preset actions in a graphical user interface; wherein the default action is an action of a plurality of preset actions; responding to a selection operation aiming at action identifiers corresponding to a plurality of preset actions, and determining the preset action corresponding to the action identifier selected by the selection operation as a target action.
In a specific implementation, the graphical user interface includes a first control displayed in a first display style, and the first triggering operation is implemented by triggering the first control displayed in the first display style; after the step of determining the target game scene map identifier from the at least one game scene map identifier in response to the second triggering operation, the method further includes: controlling the display style of the first control to be switched from the first display style to the second display style, and displaying the first control in the second display style in the graphical user interface; responding to the action setting operation, and displaying action identifiers corresponding to a plurality of preset actions in a graphical user interface, wherein the step comprises the following steps: responding to a third triggering operation aiming at the first control displayed in the second display mode, and displaying action identifiers corresponding to a plurality of preset actions in a graphical user interface; responding to a selection operation of action identifiers corresponding to a plurality of preset actions, and determining the preset action corresponding to the action identifier selected by the selection operation as a target action, wherein the step comprises the following steps: responding to a sliding operation taking a triggering position of the third triggering operation as a starting point and taking a target action identifier in actions corresponding to a plurality of preset actions as an end point, selecting the target action identifier, and determining the preset action corresponding to the target action identifier as a target action.
Further, the model display module 63 is further configured to: in response to the end of the sliding operation, the second virtual object is controlled to lift the presentation model in the game scene by the target action.
In a specific implementation, the first control is displayed in the graphical user interface by: responding to the triggering operation of the prop selection control displayed on the graphical user interface, and displaying a plurality of virtual props in the graphical user interface; the virtual prop is used for controlling the second virtual object to execute a preset game behavior; and responding to the selected operation aiming at the target virtual prop in the plurality of virtual props, and displaying a first control corresponding to the target virtual prop in the graphical user interface.
Further, the display model further comprises a target propaganda word of the game scene map corresponding to the target game scene map identification, so that the target propaganda word is displayed while the information corresponding to the game scene map corresponding to the target game scene map identification is displayed through the display model.
Further, the device further comprises a propaganda word determining module for: in response to the publicity input operation, the content input by the publicity input operation is determined as a target publicity for the game scene map corresponding to the target game scene map identification.
In a specific implementation, the initial presentation model data further includes: a third model data layer located above the second model data layer; the third model data layer is used for embedding a target propaganda word of the game scene map corresponding to the target game scene map identification so as to display the target propaganda word above the game scene map data displayed in the display model.
In an alternative embodiment, the apparatus further includes a first display module configured to: and responding to the situation that the second virtual object is hung with the display model, and the stay time of the second virtual object at the target position in the game scene is longer than the first preset time, controlling the display model to display game scene map data corresponding to the target game map mark, and controlling the game scene map data to play the target propaganda according to the preset playing form.
In an alternative embodiment, the apparatus further includes a second display module configured to: in response to the second virtual object being hung with the display model and the distance between a target first virtual object and the second virtual object being greater than a preset distance threshold, display the second virtual object hung with the display model in the game scene provided by the terminal device controlling the target first virtual object, and control the display model to display only the game scene map data corresponding to the target game map identifier; and in response to the duration for which the distance between the target first virtual object and the second virtual object is not greater than the preset distance threshold exceeding a second preset duration, display the second virtual object hung with the display model in the game scene provided by the terminal device controlling the target first virtual object, and control the display model to display the game scene map data corresponding to the target game map identifier together with the target propaganda word played on the game scene map data according to the preset playing form.
Further, the device further comprises a map switching module for: after the step of controlling the generation of a presentation model in the game scene, controlling the display of a map switching control in the graphical user interface in response to a fourth triggering operation; responding to the triggering operation for the map switching control, and displaying a setting panel in a graphical user interface; and responding to the selection operation of the first game scene map identifier in the setting panel, and switching the information corresponding to the game scene map corresponding to the target game scene map identifier displayed in the display model in the game scene to the information corresponding to the game scene map corresponding to the first game scene map identifier.
Further, the device further comprises a map generation module for: creating at least one scene component in the game editing scene based on the first editing operation in response to the first editing operation; responsive to the second editing operation, determining component parameters of the at least one scene component based on the second editing operation; in response to the map generation operation, controlling component parameters of scene components created in the game editing scene, generating a game scene map corresponding to the game editing scene, and transmitting the game scene map to a server; the server is configured to be in communication connection with the terminal equipment, the terminal equipment is configured with a game program, the terminal equipment is configured to obtain a game scene map from the server, and a corresponding game running scene is generated according to the game scene map through the game program.
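The editing-to-generation flow of the map generation module can be sketched as follows; all class and method names are illustrative assumptions:

```python
# Sketch of the map-generation flow: create scene components in the editing
# scene (first editing operation), set their parameters (second editing
# operation), then generate a map to be sent to the server.
class GameEditingScene:
    def __init__(self):
        self.components = []

    def create_component(self, kind: str) -> dict:
        """First editing operation: create a scene component."""
        component = {"kind": kind, "params": {}}
        self.components.append(component)
        return component

    def set_params(self, component: dict, **params):
        """Second editing operation: determine the component's parameters."""
        component["params"].update(params)

    def generate_map(self) -> dict:
        """Map generation operation: bundle the components into a map."""
        return {"components": self.components}

scene = GameEditingScene()
block = scene.create_component("platform")
scene.set_params(block, width=4, height=1)
scene_map = scene.generate_map()
# scene_map would then be uploaded to the server, from which terminal devices
# download it and generate the corresponding game running scene.
```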
The interaction device in the game provided by the embodiments of the present disclosure has the same implementation principle and technical effects as those of the foregoing method embodiments, and for the sake of brevity, reference may be made to corresponding contents in the foregoing method embodiments where the device embodiment portion is not mentioned.
The disclosed embodiments also provide an electronic device, as shown in fig. 7, which includes a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the processor executes the machine executable instructions to implement the interaction method in the game.
Specifically, the interaction method in the game comprises the following steps: providing a graphical user interface through the terminal equipment, wherein at least part of game scenes are displayed in the graphical user interface, and the game scenes comprise: at least one first virtual object and a second virtual object controlled by the terminal device; responding to the first triggering operation, displaying a setting panel in a graphical user interface, wherein the setting panel comprises at least one game scene map identifier, and a game scene map corresponding to the game scene map identifier is used for providing a game running scene so that the first virtual object and/or the second virtual object play a game in the game running scene; responding to the second triggering operation, and determining a target game scene map identifier from at least one game scene map identifier; and controlling to generate a demonstration model in the game scene, wherein the demonstration model comprises information corresponding to a game scene map corresponding to the target game scene map identifier, and the demonstration model is a virtual model generated according to the game scene map data corresponding to the target game scene map identifier and the initial demonstration model data.
In the interaction method in the game, the player can display a game scene map of interest in the game scene through the display model so as to support the game scene map, and the player can independently edit the content displayed in the display model, thereby meeting the player's personalized requirements for support.
In an alternative embodiment, the step of controlling to generate a presentation model in the game scene includes: generating a display model based on game scene map data and initial display model data corresponding to the target game scene map identification; controlling to attach the presentation model to a second virtual object in the game scene.
In an alternative embodiment, the initial presentation model data includes: a first model data layer and a second model data layer located above the first model data layer; the first model data layer is used for configuring the model style of the virtual model, and the second model data layer is used for embedding game scene map data corresponding to the target game scene map identification so as to display the game scene map data on the virtual model and obtain the display model.
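A hedged sketch of this two-layer structure, assuming plain dictionaries for the layers and hypothetical field names (`first_layer`, `second_layer`, `embedded_map_data`) that are not from the disclosure:

```python
import copy

INITIAL_MODEL_DATA = {
    "first_layer": {"style": "hand-held signboard"},   # configures the model style
    "second_layer": {"embedded_map_data": None},       # slot above the first layer
}


def build_presentation_model(initial_model_data, map_data):
    # Deep-copy so the shared initial template is never mutated.
    model = copy.deepcopy(initial_model_data)
    # The second layer sits above the first and embeds the selected map's
    # data, so the map data is displayed on top of the styled virtual model.
    model["second_layer"]["embedded_map_data"] = map_data
    return model
```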
In an alternative embodiment, the game scene map data corresponding to the target game scene map identifier includes at least one of the following: the map name, the map author, and the map cover of the game scene map corresponding to the target game scene map identifier; a map video obtained by capturing the game scene map; and a game clip of a virtual object playing the game in the game running scene corresponding to the game scene map.
In an alternative embodiment, the step of controlling the display model to be attached to the second virtual object in the game scene includes: controlling the second virtual object to lift the display model through a specified action; wherein the specified action includes a default action or a target action determined based on an action setting operation.
In an alternative embodiment, the above-described target actions are determined by: responding to the action setting operation, and displaying action identifiers corresponding to a plurality of preset actions in a graphical user interface; wherein the default action is an action of a plurality of preset actions; responding to a selection operation aiming at action identifiers corresponding to a plurality of preset actions, and determining the preset action corresponding to the action identifier selected by the selection operation as a target action.
In an optional embodiment, the graphical user interface includes a first control displayed in a first display style, and the first trigger operation is implemented by triggering the first control displayed in the first display style; after the step of determining the target game scene map identifier from the at least one game scene map identifier in response to the second trigger operation, the method further includes: controlling the display style of the first control to be switched from the first display style to a second display style, and displaying the first control in the second display style in the graphical user interface; the step of displaying, in response to the action setting operation, the action identifiers corresponding to the plurality of preset actions in the graphical user interface includes: in response to a third trigger operation for the first control displayed in the second display style, displaying the action identifiers corresponding to the plurality of preset actions in the graphical user interface; and the step of determining, in response to the selection operation for the action identifiers corresponding to the plurality of preset actions, the preset action corresponding to the action identifier selected by the selection operation as the target action includes: in response to a sliding operation that takes the trigger position of the third trigger operation as a starting point and a target action identifier among the action identifiers corresponding to the plurality of preset actions as an end point, selecting the target action identifier, and determining the preset action corresponding to the target action identifier as the target action.
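The press-and-slide selection above reduces to a hit test on the slide end point. The 1-D slot geometry and all names below are illustrative assumptions, not part of the disclosure:

```python
def pick_target_action(action_slots, slide_end, default_action="hold"):
    """Resolve a slide gesture to a preset action.

    action_slots: {action_id: (lo, hi)} screen ranges of the action
    identifiers laid out around the trigger position of the third trigger
    operation. slide_end: coordinate where the slide ends.
    """
    for action_id, (lo, hi) in action_slots.items():
        if lo <= slide_end <= hi:
            return action_id  # identifier under the slide end point wins
    # Slide ended outside every identifier: fall back to the default action.
    return default_action
```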
In an alternative embodiment, the method further comprises: in response to the end of the sliding operation, the second virtual object is controlled to lift the presentation model in the game scene by the target action.
In an alternative embodiment, the first control displayed in the first display style is displayed in the graphical user interface in the following manner: in response to a trigger operation for a prop selection control displayed on the graphical user interface, displaying a plurality of virtual props in the graphical user interface, wherein the virtual props are used for controlling the second virtual object to execute preset game behaviors; and in response to a selection operation for a target virtual prop among the plurality of virtual props, displaying, in the graphical user interface, a first control that corresponds to the target virtual prop and is displayed in the first display style.
In an alternative embodiment, the display model further includes a target propaganda word for a game scene map corresponding to the target game scene map identifier, so that the target propaganda word is displayed while information corresponding to the game scene map corresponding to the target game scene map identifier is displayed through the display model.
In an alternative embodiment, the target propaganda word is determined by: in response to a propaganda input operation, determining the content input by the propaganda input operation as the target propaganda word for the game scene map corresponding to the target game scene map identifier.
In an alternative embodiment, the initial presentation model data further includes: a third model data layer located above the second model data layer; the third model data layer is used for embedding a target propaganda word of the game scene map corresponding to the target game scene map identification so as to display the target propaganda word above the game scene map data displayed in the display model.
In an alternative embodiment, the method further comprises: in response to the second virtual object being attached with the display model and the stay duration of the second virtual object at the target position in the game scene being longer than a first preset duration, controlling the display model to display the game scene map data corresponding to the target game scene map identifier and controlling the target propaganda word to be played on the game scene map data in a preset play form.
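The dwell-time condition can be sketched as a single predicate; the threshold value and the function name are assumptions:

```python
def should_play_promo(model_attached, stay_seconds, first_preset_duration=3.0):
    # Play the target propaganda word only while the display model is attached
    # and the avatar has stayed at the target position longer than the
    # first preset duration.
    return model_attached and stay_seconds > first_preset_duration
```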
In an alternative embodiment, the method further comprises: in response to the second virtual object being attached with the display model and the distance between a target first virtual object and the second virtual object being greater than a preset distance threshold, displaying the second virtual object attached with the display model in the game scene provided by the terminal device controlling the target first virtual object, and controlling the display model to display only the game scene map data corresponding to the target game scene map identifier; and in response to the duration for which the distance between the target first virtual object and the second virtual object is not greater than the preset distance threshold exceeding a second preset duration, displaying the second virtual object attached with the display model in the game scene provided by the terminal device controlling the target first virtual object, and controlling the display model to display the game scene map data corresponding to the target game scene map identifier and the target propaganda word played on the game scene map data in the preset play form.
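The distance- and duration-gated display logic might be sketched as follows; the 2-D positions, threshold values, and the returned content tags are all assumptions:

```python
import math


def distance(a, b):
    # Euclidean distance between two (x, y) positions in the game scene.
    return math.hypot(a[0] - b[0], a[1] - b[1])


def visible_content(viewer_pos, holder_pos, near_seconds,
                    distance_threshold=10.0, second_preset_duration=2.0):
    """What the target first virtual object's terminal shows on the model."""
    if distance(viewer_pos, holder_pos) > distance_threshold:
        return {"map_data"}            # far away: map data only
    if near_seconds > second_preset_duration:
        return {"map_data", "promo"}   # stayed close long enough: add promo
    return {"map_data"}                # close, but not yet long enough
```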
In an alternative embodiment, after the step of controlling to generate a presentation model in the game scene, the method further includes: responding to the fourth triggering operation, and controlling a map switching control to be displayed in the graphical user interface; responding to the triggering operation for the map switching control, and displaying a setting panel in a graphical user interface; and responding to the selection operation of the first game scene map identifier in the setting panel, and switching the information corresponding to the game scene map corresponding to the target game scene map identifier displayed in the display model in the game scene to the information corresponding to the game scene map corresponding to the first game scene map identifier.
In an alternative embodiment, the game scene map is generated by: in response to a first editing operation, creating at least one scene component in a game editing scene based on the first editing operation; in response to a second editing operation, determining component parameters of the at least one scene component based on the second editing operation; and in response to a map generation operation, generating a game scene map corresponding to the game editing scene based on the scene components created in the game editing scene and their component parameters, and transmitting the game scene map to a server; wherein the server is communicatively connected with the terminal device, the terminal device is configured with a game program, and the terminal device is configured to obtain the game scene map from the server and generate a corresponding game running scene according to the game scene map through the game program.
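The editor pipeline (create components, set their parameters, generate the map) can be sketched as below. The JSON serialization, the class and method names, and the omitted upload step are all assumptions, not details from the disclosure:

```python
import json


class MapEditor:
    def __init__(self):
        self.components = []

    def create_component(self, kind):
        # First editing operation: create a scene component in the editing scene.
        comp = {"kind": kind, "params": {}}
        self.components.append(comp)
        return comp

    def set_params(self, comp, **params):
        # Second editing operation: determine the component's parameters.
        comp["params"].update(params)

    def generate_map(self, scene_name):
        # Map generation operation: serialize the scene components and their
        # parameters into a map payload (to be transmitted to the server).
        return json.dumps({"scene": scene_name, "components": self.components})
```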
The electronic device shown in fig. 7 further includes a bus 102 and a communication interface 103; the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bidirectional arrow is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 101 or by instructions in the form of software. The processor 101 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be embodied as being performed directly by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The embodiments of the present disclosure further provide a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the interaction method in the game; for specific implementation, reference may be made to the method embodiments, and details are not repeated herein.
Specifically, the interaction method in the game includes the following steps: providing a graphical user interface through a terminal device, wherein at least part of a game scene is displayed in the graphical user interface, and the game scene includes: at least one first virtual object and a second virtual object controlled by the terminal device; in response to a first trigger operation, displaying a setting panel in the graphical user interface, wherein the setting panel includes at least one game scene map identifier, and a game scene map corresponding to the game scene map identifier is used for providing a game running scene so that the first virtual object and/or the second virtual object play a game in the game running scene; in response to a second trigger operation, determining a target game scene map identifier from the at least one game scene map identifier; and controlling generation of a display model in the game scene, wherein the display model includes information corresponding to the game scene map corresponding to the target game scene map identifier, and the display model is a virtual model generated according to the game scene map data corresponding to the target game scene map identifier and initial display model data.
In the interaction method in the game, the player can display a game scene map of interest in the game scene through the display model so as to show support for the game scene map, and the player can independently edit the content displayed in the display model, thereby meeting the player's personalized requirements for showing support.
In an alternative embodiment, the step of controlling generation of a display model in the game scene includes: generating the display model based on the game scene map data corresponding to the target game scene map identifier and the initial display model data; and controlling the display model to be attached to the second virtual object in the game scene.
In an alternative embodiment, the initial presentation model data includes: a first model data layer and a second model data layer located above the first model data layer; the first model data layer is used for configuring the model style of the virtual model, and the second model data layer is used for embedding game scene map data corresponding to the target game scene map identification so as to display the game scene map data on the virtual model and obtain the display model.
In an alternative embodiment, the game scene map data corresponding to the target game scene map identifier includes at least one of the following: the map name, the map author, and the map cover of the game scene map corresponding to the target game scene map identifier; a map video obtained by capturing the game scene map; and a game clip of a virtual object playing the game in the game running scene corresponding to the game scene map.
In an alternative embodiment, the step of controlling the display model to be attached to the second virtual object in the game scene includes: controlling the second virtual object to lift the display model through a specified action; wherein the specified action includes a default action or a target action determined based on an action setting operation.
In an alternative embodiment, the above-described target actions are determined by: responding to the action setting operation, and displaying action identifiers corresponding to a plurality of preset actions in a graphical user interface; wherein the default action is an action of a plurality of preset actions; responding to a selection operation aiming at action identifiers corresponding to a plurality of preset actions, and determining the preset action corresponding to the action identifier selected by the selection operation as a target action.
In an optional embodiment, the graphical user interface includes a first control displayed in a first display style, and the first trigger operation is implemented by triggering the first control displayed in the first display style; after the step of determining the target game scene map identifier from the at least one game scene map identifier in response to the second trigger operation, the method further includes: controlling the display style of the first control to be switched from the first display style to a second display style, and displaying the first control in the second display style in the graphical user interface; the step of displaying, in response to the action setting operation, the action identifiers corresponding to the plurality of preset actions in the graphical user interface includes: in response to a third trigger operation for the first control displayed in the second display style, displaying the action identifiers corresponding to the plurality of preset actions in the graphical user interface; and the step of determining, in response to the selection operation for the action identifiers corresponding to the plurality of preset actions, the preset action corresponding to the action identifier selected by the selection operation as the target action includes: in response to a sliding operation that takes the trigger position of the third trigger operation as a starting point and a target action identifier among the action identifiers corresponding to the plurality of preset actions as an end point, selecting the target action identifier, and determining the preset action corresponding to the target action identifier as the target action.
In an alternative embodiment, the method further comprises: in response to the end of the sliding operation, the second virtual object is controlled to lift the presentation model in the game scene by the target action.
In an alternative embodiment, the first control displayed in the first display style is displayed in the graphical user interface in the following manner: in response to a trigger operation for a prop selection control displayed on the graphical user interface, displaying a plurality of virtual props in the graphical user interface, wherein the virtual props are used for controlling the second virtual object to execute preset game behaviors; and in response to a selection operation for a target virtual prop among the plurality of virtual props, displaying, in the graphical user interface, a first control that corresponds to the target virtual prop and is displayed in the first display style.
In an alternative embodiment, the display model further includes a target propaganda word for a game scene map corresponding to the target game scene map identifier, so that the target propaganda word is displayed while information corresponding to the game scene map corresponding to the target game scene map identifier is displayed through the display model.
In an alternative embodiment, the target propaganda word is determined by: in response to a propaganda input operation, determining the content input by the propaganda input operation as the target propaganda word for the game scene map corresponding to the target game scene map identifier.
In an alternative embodiment, the initial presentation model data further includes: a third model data layer located above the second model data layer; the third model data layer is used for embedding a target propaganda word of the game scene map corresponding to the target game scene map identification so as to display the target propaganda word above the game scene map data displayed in the display model.
In an alternative embodiment, the method further comprises: in response to the second virtual object being attached with the display model and the stay duration of the second virtual object at the target position in the game scene being longer than a first preset duration, controlling the display model to display the game scene map data corresponding to the target game scene map identifier and controlling the target propaganda word to be played on the game scene map data in a preset play form.
In an alternative embodiment, the method further comprises: in response to the second virtual object being attached with the display model and the distance between a target first virtual object and the second virtual object being greater than a preset distance threshold, displaying the second virtual object attached with the display model in the game scene provided by the terminal device controlling the target first virtual object, and controlling the display model to display only the game scene map data corresponding to the target game scene map identifier; and in response to the duration for which the distance between the target first virtual object and the second virtual object is not greater than the preset distance threshold exceeding a second preset duration, displaying the second virtual object attached with the display model in the game scene provided by the terminal device controlling the target first virtual object, and controlling the display model to display the game scene map data corresponding to the target game scene map identifier and the target propaganda word played on the game scene map data in the preset play form.
In an alternative embodiment, after the step of controlling to generate a presentation model in the game scene, the method further includes: responding to the fourth triggering operation, and controlling a map switching control to be displayed in the graphical user interface; responding to the triggering operation for the map switching control, and displaying a setting panel in a graphical user interface; and responding to the selection operation of the first game scene map identifier in the setting panel, and switching the information corresponding to the game scene map corresponding to the target game scene map identifier displayed in the display model in the game scene to the information corresponding to the game scene map corresponding to the first game scene map identifier.
In an alternative embodiment, the game scene map is generated by: in response to a first editing operation, creating at least one scene component in a game editing scene based on the first editing operation; in response to a second editing operation, determining component parameters of the at least one scene component based on the second editing operation; and in response to a map generation operation, generating a game scene map corresponding to the game editing scene based on the scene components created in the game editing scene and their component parameters, and transmitting the game scene map to a server; wherein the server is communicatively connected with the terminal device, the terminal device is configured with a game program, and the terminal device is configured to obtain the game scene map from the server and generate a corresponding game running scene according to the game scene map through the game program.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a terminal device, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or various other media capable of storing program code.
In the description of the present disclosure, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present disclosure and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present disclosure. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the technical field may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (19)

1. A method of interaction in a game, the method comprising:
Providing a graphical user interface through a terminal device, wherein at least part of a game scene is displayed in the graphical user interface, and the game scene comprises: at least one first virtual object and a second virtual object controlled by the terminal device;
responding to a first triggering operation, displaying a setting panel in the graphical user interface, wherein the setting panel comprises at least one game scene map identifier, and a game scene map corresponding to the game scene map identifier is used for providing a game running scene so that the first virtual object and/or the second virtual object play a game in the game running scene;
Responding to a second triggering operation, and determining a target game scene map identifier from the at least one game scene map identifier;
And controlling to generate a display model in the game scene, wherein the display model comprises information corresponding to a game scene map corresponding to the target game scene map identifier, and the display model is a virtual model generated according to game scene map data corresponding to the target game scene map identifier and initial display model data.
2. The method of claim 1, wherein the step of controlling generation of a presentation model in the game scene comprises:
Generating a display model based on game scene map data and initial display model data corresponding to the target game scene map identification;
controlling the display model to be attached to the second virtual object in the game scene.
3. The method of claim 2, wherein the initial presentation model data comprises: a first model data layer, a second model data layer located above the first model data layer; the first model data layer is used for configuring a model style of a virtual model, and the second model data layer is used for embedding game scene map data corresponding to the target game scene map identification so as to display the game scene map data on the virtual model and obtain the display model.
4. The method of claim 3, wherein the game scene map data corresponding to the target game scene map identifier comprises at least one of the following: the map name, the map author, and the map cover of the game scene map corresponding to the target game scene map identifier; a map video obtained by capturing the game scene map; and a game clip of a virtual object playing the game in the game running scene corresponding to the game scene map.
5. The method of claim 2, wherein the step of controlling hooking the presentation model on the second virtual object in the game scene comprises:
Controlling the second virtual object to lift the display model through a specified action; wherein the specified action includes a default action or a target action determined based on an action setting operation.
6. The method of claim 5, wherein the target action is determined by:
responding to the action setting operation, and displaying action identifiers corresponding to a plurality of preset actions in the graphical user interface; wherein the default action is an action of the plurality of preset actions;
Responding to a selection operation aiming at action identifiers corresponding to the plurality of preset actions, and determining the preset action corresponding to the action identifier selected by the selection operation as the target action.
7. The method of claim 6, wherein the graphical user interface includes a first control displayed in a first display style therein, the first triggering operation being accomplished by triggering the first control displayed in the first display style;
After the step of determining the target game scene map identifier from the at least one game scene map identifier in response to the second trigger operation, the method further includes: controlling the display style of the first control to be switched from a first display style to a second display style, and displaying the first control in the second display style in the graphical user interface;
The step of displaying, in response to the action setting operation, action identifiers corresponding to a plurality of preset actions in the graphical user interface comprises: in response to a third trigger operation for the first control displayed in the second display style, displaying the action identifiers corresponding to the plurality of preset actions in the graphical user interface;
The step of determining, in response to the selection operation for the action identifiers corresponding to the plurality of preset actions, the preset action corresponding to the action identifier selected by the selection operation as the target action comprises: in response to a sliding operation that takes the trigger position of the third trigger operation as a starting point and a target action identifier among the action identifiers corresponding to the plurality of preset actions as an end point, selecting the target action identifier, and determining the preset action corresponding to the target action identifier as the target action.
8. The method of claim 7, wherein the method further comprises:
and controlling the second virtual object to lift the display model through the target action in the game scene in response to the end of the sliding operation.
9. The method of claim 7, wherein the first control displayed in the first display style is displayed in the graphical user interface by:
responding to the triggering operation of the prop selection control displayed on the graphical user interface, and displaying a plurality of virtual props in the graphical user interface; the virtual prop is used for controlling the second virtual object to execute a preset game behavior;
And in response to a selection operation for a target virtual prop among the plurality of virtual props, displaying, in the graphical user interface, a first control that corresponds to the target virtual prop and is displayed in the first display style.
10. The method of claim 1, wherein the presentation model further comprises a target promo for a game scene map corresponding to the target game scene map identification to present the target promo while presenting information corresponding to the game scene map corresponding to the target game scene map identification through the presentation model.
11. The method of claim 10, wherein the target promotional slogan is determined by:
in response to a slogan input operation, determining the content input by the slogan input operation as the target promotional slogan for the game scene map corresponding to the target game scene map identification.
12. The method according to claim 3, wherein the initial display model data further comprises a third model data layer located above the second model data layer; the third model data layer is used for embedding the target promotional slogan of the game scene map corresponding to the target game scene map identification, so that the target promotional slogan is displayed above the game scene map data shown in the display model.
13. The method according to claim 10, wherein the method further comprises:
in response to the second virtual object carrying the display model and the stay duration of the second virtual object at a target position in the game scene exceeding a first preset duration, controlling the display model to display the game scene map data corresponding to the target game scene map identification and to play the target promotional slogan on that map data in a preset play form.
14. The method according to claim 10, wherein the method further comprises:
in response to the second virtual object carrying the display model and the distance between a target first virtual object and the second virtual object being greater than a preset distance threshold, displaying, in the game scene provided by the terminal device that controls the target first virtual object, the second virtual object carrying the display model, and controlling the display model to display only the game scene map data corresponding to the target game scene map identification; and
in response to the duration for which that distance remains not greater than the preset distance threshold exceeding a second preset duration, displaying, in the game scene provided by the terminal device that controls the target first virtual object, the second virtual object carrying the display model, and controlling the display model to display the game scene map data corresponding to the target game scene map identification together with the target promotional slogan played on that map data in a preset play form.
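Claim 14 gates what the display model shows to an observing player on distance and dwell time: far away, map data only; close for longer than the second preset duration, map data plus the slogan. A minimal sketch of that decision, assuming Euclidean distance and placeholder threshold values (the patent gives no concrete numbers, and `model_display_state` is an invented name):

```python
import math


def model_display_state(observer_pos, carrier_pos, near_duration,
                        distance_threshold=10.0, second_preset_duration=2.0):
    """Decide what the display model shows to a target first virtual object.

    Beyond the distance threshold: map data only. Within the threshold for
    longer than the second preset duration: map data plus the slogan.
    """
    dist = math.dist(observer_pos, carrier_pos)
    if dist > distance_threshold:
        return "map_only"
    if near_duration > second_preset_duration:
        return "map_and_slogan"
    return "map_only"
```

Deferring the slogan until the observer has stayed close for a while avoids playing it to players who merely pass by.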
15. The method of claim 1, wherein, after the step of controlling to generate the display model in the game scene, the method further comprises:
in response to a fourth triggering operation, controlling a map switching control to be displayed in the graphical user interface;
in response to a triggering operation on the map switching control, displaying the setting panel in the graphical user interface; and
in response to a selection operation on a first game scene map identification in the setting panel, switching the information displayed in the display model in the game scene from the information corresponding to the game scene map of the target game scene map identification to the information corresponding to the game scene map of the first game scene map identification.
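The map-switching flow in claim 15 can be sketched as a small state update: re-opening the setting panel and choosing a different identifier replaces the map shown by the display model. Class and function names below are invented for illustration:

```python
class DisplayModel:
    def __init__(self, map_id):
        self.map_id = map_id  # game scene map identification currently shown


def switch_map(display_model, setting_panel_ids, selected_id):
    # Selecting a first game scene map identification on the setting panel
    # switches what the display model presents in the game scene.
    if selected_id not in setting_panel_ids:
        raise ValueError("identification not present on the setting panel")
    display_model.map_id = selected_id
    return display_model


model = DisplayModel("map_a")
switch_map(model, ["map_a", "map_b"], "map_b")
```

Validating the selection against the panel's identifiers keeps the display model from being switched to a map the player was never offered.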
16. The method of claim 1, wherein the game scene map is generated by:
in response to a first editing operation, creating at least one scene component in a game editing scene based on the first editing operation;
in response to a second editing operation, determining component parameters of the at least one scene component based on the second editing operation; and
in response to a map generation operation, generating, based on the scene components created in the game editing scene and their component parameters, a game scene map corresponding to the game editing scene, and transmitting the game scene map to a server; the server is communicatively connected to the terminal device, the terminal device is configured with a game program, and the terminal device acquires the game scene map from the server and generates a corresponding game running scene from the game scene map through the game program.
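Claim 16 outlines an editor pipeline: create components, parameterize them, then serialize the whole editing scene into a map payload a server can hand back to clients. A minimal sketch, assuming JSON serialization (the patent does not specify a format) and with `create_component` / `generate_map` as invented names:

```python
import json


def create_component(scene, kind, **params):
    # First editing operation creates the component; the second editing
    # operation's parameters are stored alongside it.
    component = {"kind": kind, "params": params}
    scene.append(component)
    return component


def generate_map(scene, scene_name):
    # Map generation operation: serialize components and their parameters
    # into a payload that could be uploaded to a server and later used by
    # the game program to rebuild the game running scene.
    return json.dumps({"name": scene_name, "components": scene})


scene = []
create_component(scene, "platform", width=4, height=1)
create_component(scene, "spawn_point", x=0, y=2)
payload = generate_map(scene, "my_map")
```

Because the payload carries component kinds plus parameters rather than baked geometry, any client holding the same game program can regenerate the running scene from it.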
17. An interactive apparatus in a game, the apparatus comprising:
an interface display module, configured to provide a graphical user interface through the terminal device, wherein at least part of a game scene is displayed in the graphical user interface, and the game scene comprises at least one first virtual object and a second virtual object controlled by the terminal device;
a panel display module, configured to display, in response to a first triggering operation, a setting panel in the graphical user interface, wherein the setting panel comprises at least one game scene map identification, and the game scene map corresponding to a game scene map identification is used for providing a game running scene in which the first virtual object and/or the second virtual object play;
a map selection module, configured to determine, in response to a second triggering operation, a target game scene map identification from the at least one game scene map identification; and
a model display module, configured to control generation of a display model in the game scene, wherein the display model comprises information corresponding to the game scene map corresponding to the target game scene map identification, and is a virtual model generated from the game scene map data corresponding to the target game scene map identification and initial display model data.
18. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of interaction in a game of any of claims 1 to 16.
19. A computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of interaction in a game of any one of claims 1 to 16.
CN202311687262.5A 2023-12-08 2023-12-08 Interaction method and device in game and electronic equipment Pending CN117899460A (en)


Publication: CN117899460A (pending), published 2024-04-19
Family ID: 90694444
Country: CN


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination