CN117815670A - Scene element collaborative construction method, device, computer equipment and storage medium - Google Patents

Info

Publication number
CN117815670A
Authority
CN
China
Prior art keywords
scene
scene element
player
terminal
basic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410042699.1A
Other languages
Chinese (zh)
Inventor
许展昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410042699.1A priority Critical patent/CN117815670A/en
Publication of CN117815670A publication Critical patent/CN117815670A/en
Pending legal-status Critical Current

Abstract

The embodiments of the present application disclose a scene element collaborative construction method, an apparatus, a computer device, and a storage medium. The method includes: displaying, through a first terminal and a second terminal, a virtual scene and at least one first basic scene element and at least one second basic scene element located in the virtual scene, where the first basic scene element is a scene element that a first player triggers through the first terminal to arrange in the virtual scene, and the second basic scene element is a scene element that a second player triggers through the second terminal to arrange in the virtual scene; in response to a combination operation by the first player on the at least one first basic scene element and the at least one second basic scene element when arranging scene elements, performing element combination on the at least one first basic scene element and the at least one second basic scene element to obtain a target scene element; and synchronizing the target scene element to the second terminal. The embodiments of the present application enable a player to construct scene elements collaboratively with friends.

Description

Scene element collaborative construction method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for collaborative construction of scene elements, a computer device, and a storage medium.
Background
A user-generated content (UGC) game is a game in which the game content is created, shared, distributed, and evaluated by the players themselves. The emergence of the UGC game mode allows players to participate more freely in the creation and development of games, and also makes games more diverse and creative.
UGC games may involve the construction of scene elements, and scene elements in UGC games are currently usually constructed on Vision Pro devices. In this construction mode, a scene element can only be completed by a single player independently; a player cannot construct scene elements in cooperation with multiple friends.
Disclosure of Invention
The embodiments of the present application provide a scene element collaborative construction method, an apparatus, a computer device, and a storage medium, which enable a player to construct scene elements collaboratively with friends.
In a first aspect, an embodiment of the present application provides a method for collaborative construction of a scene element, where the method includes:
displaying a virtual scene, at least one first basic scene element and at least one second basic scene element in the virtual scene through a first terminal and a second terminal, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene;
in response to a combination operation by the first player on the at least one first basic scene element and the at least one second basic scene element when the first player arranges scene elements, performing element combination on the at least one first basic scene element and the at least one second basic scene element to obtain a target scene element;
the target scene element is synchronized to the second terminal such that the second player confirms the target scene element based on the second terminal.
In a second aspect, an embodiment of the present application further provides a scene element collaboration building apparatus, where the scene element collaboration building apparatus includes:
the first display module is used for displaying a virtual scene through a first terminal and a second terminal, and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene, wherein the first basic scene element is a scene element which is triggered by a first player through the first terminal and is arranged in the virtual scene, and the second basic scene element is a scene element which is triggered by a second player through the second terminal and is arranged in the virtual scene;
the element combination module is used for responding to the combination operation of the first player on at least one first basic scene element and at least one second basic scene element when the first player arranges the scene elements, and carrying out element combination on the at least one first basic scene element and the at least one second basic scene element to obtain target scene elements;
And the element confirmation module is used for synchronizing the target scene element to the second terminal so that the second player confirms the target scene element based on the second terminal.
The present application further provides a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the steps in the scene element co-construction method of any of the above embodiments.
The embodiment of the application also provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the steps in the scene element collaborative building method according to any embodiment by calling the computer program stored in the memory.
According to the scene element collaborative construction method, apparatus, computer device, and storage medium of the embodiments of the present application, a virtual scene and at least one first basic scene element and at least one second basic scene element located in the virtual scene are displayed through the first terminal and the second terminal; in response to a combination operation by the first player on the at least one first basic scene element and the at least one second basic scene element when arranging scene elements, element combination is performed on the at least one first basic scene element and the at least one second basic scene element to obtain a target scene element; and the target scene element is synchronized to the second terminal, so that the second player confirms the target scene element based on the second terminal. By simultaneously displaying the basic scene elements triggered by multiple players in the virtual scene and combining those basic scene elements, the embodiments of the present application enable multiple players to construct scene elements collaboratively, improving the interest and experience of the multiple players in the scene element construction process.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system schematic diagram of a scene element collaboration building apparatus provided in an embodiment of the present application.
Fig. 2 is a first schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 3 is a flow chart of a scene element collaborative building method provided in an embodiment of the present application.
Fig. 4 is a second schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 5 is a third schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 6 is a fourth schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 7 is a fifth schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 8 is a sixth schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 9 is a seventh schematic diagram of a graphical user interface provided in an embodiment of the present application.
Fig. 10 is a schematic sub-flowchart of a scene element collaborative building method provided in an embodiment of the present application.
Fig. 11 is another flow chart of a scene element collaborative building method provided in an embodiment of the present application.
Fig. 12 is a flow chart of a scene element scaling process according to an embodiment of the present application.
Fig. 13 is a schematic flow chart of a scene element rotation process according to an embodiment of the present application.
Fig. 14 is a schematic structural diagram of a scene element collaboration building apparatus provided in an embodiment of the present application.
Fig. 15 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments herein without inventive effort fall within the scope of the present application. In the description of the embodiments of the present application, the terms "first", "second", "third", and the like are used only to distinguish the description and are not to be construed as indicating or implying relative importance. Thus, a feature defined by "first", "second", or "third" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
The embodiment of the application provides a scene element collaborative construction method, a scene element collaborative construction device, computer equipment and a computer readable storage medium. In particular, the present embodiment will be described from the perspective of a scene element co-construction apparatus, which may be integrated in a computer device in particular, that is, the scene element co-construction method of the embodiment of the present application may be executed by a computer device, and optionally, the computer device may include: and a terminal device. The terminal device may be a mobile phone, a tablet computer, a smart bluetooth device, a notebook computer, a game console, or a personal computer (Personal Computer, PC), etc.
The scene element collaborative construction method provided by the embodiment of the application can be applied to a scene element collaborative construction system shown in fig. 1. The scene element collaborative building system may include a server, a first terminal (terminal 1) and a second terminal (terminals 2, …, terminal N), where the first terminal and the second terminal may be devices that include both receiving and transmitting hardware, i.e., devices that have receiving and transmitting hardware capable of performing bidirectional communications over a bidirectional communication link. The first terminal and the second terminal respectively communicate with the server to realize data communication.
Alternatively, the server may be a stand-alone server, or may be a server network or a server cluster of servers, including but not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud server of multiple servers. Wherein the Cloud server is composed of a large number of computers or web servers based on Cloud Computing (Cloud Computing).
In one possible implementation, the first terminal and the second terminal may be, for example, terminal devices in the same game, and the game account corresponding to the first terminal and the game account corresponding to the second terminal may currently be in the same team or the same camp in the game.
In one possible implementation, the first terminal may provide a graphical user interface for the first player, and the second terminal may provide a graphical user interface for the second player. The content displayed by the graphical user interface includes a virtual scene and a scene element list. The virtual scene is a preset terrain scene, including but not limited to grassland, desert, wilderness, rivers and trees, and other terrain scenes. The scene element list includes a plurality of basic scene element identifiers, and a target scene element for constructing a scene can be built from the basic scene elements corresponding to those identifiers. For example, as shown in fig. 2, taking a wilderness-action game as an example, the basic scene elements include breakable house 0 through breakable house 11, a countdown timer, a small red-team crown, a small blue-team crown, a large red-team crown, and the like.
The virtual scene and the scene element list may be located at any position on the graphical user interface, such as a lower right, a lower left, and a middle position, which is not limited in this embodiment. For example, with continued reference to FIG. 2, the virtual scene is located on the left side of the graphical user interface and the list of scene elements is located on the right side of the graphical user interface.
In an actual implementation, the plurality of terminal devices participating in the collaborative construction of scene elements may be exemplified by a first terminal and a second terminal. During the collaborative construction process, the virtual scene, and at least one first basic scene element and at least one second basic scene element located in the virtual scene, are displayed through the first terminal and the second terminal. The first basic scene element is a scene element that the first player triggers through the first terminal to arrange in the virtual scene, and the second basic scene element is a scene element that the second player triggers through the second terminal to arrange in the virtual scene. The first player may then issue, through the first terminal, a combination operation for the at least one first basic scene element and the at least one second basic scene element. Based on the combination operation, the server may perform element combination on the at least one first basic scene element and the at least one second basic scene element to obtain a target scene element, and synchronize the target scene element to the second terminal, so that the second player can confirm the target scene element based on the second terminal, thereby realizing collaborative construction of scene elements by multiple players.
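As a purely illustrative aid (not part of the claimed method), the flow above can be sketched in Python. All names here (`SceneElement`, `VirtualScene`, `place`, `combine`, `sync`) are hypothetical and stand in for whatever server-side representation an implementation would actually use:

```python
from dataclasses import dataclass, field

@dataclass
class SceneElement:
    owner: str   # player who triggered the arrangement of this element
    kind: str    # e.g. "breakable_house_0"

@dataclass
class VirtualScene:
    elements: list = field(default_factory=list)
    synced_terminals: set = field(default_factory=set)

    def place(self, owner: str, kind: str) -> SceneElement:
        # A player triggers the arrangement of a basic scene element.
        el = SceneElement(owner, kind)
        self.elements.append(el)
        return el

    def combine(self, first: SceneElement, second: SceneElement) -> SceneElement:
        # Element combination replaces the two basic elements with one target element.
        target = SceneElement(owner="combined", kind=f"{first.kind}+{second.kind}")
        self.elements = [e for e in self.elements if e not in (first, second)]
        self.elements.append(target)
        return target

    def sync(self, terminal_id: str):
        # Synchronize the target element so the other terminal can confirm it.
        self.synced_terminals.add(terminal_id)

scene = VirtualScene()
a = scene.place("player_A", "breakable_house_0")   # first basic scene element
b = scene.place("player_B", "countdown_timer")     # second basic scene element
target = scene.combine(a, b)
scene.sync("terminal_2")
```

The sketch only captures the ordering of the three steps (arrange, combine, synchronize); geometry, networking, and confirmation are elided.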
The scene element collaborative construction method in one embodiment of the present disclosure may run on a local terminal device or on a server. When the method runs on a server, it may be implemented and executed based on a cloud interaction system, where the cloud interaction system includes the server and a client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking cloud games as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and running of the in-game interaction method are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the cloud game server in the cloud performs the information processing. When playing the game, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
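The cloud round trip described above (instruction in, encoded frame out, decode on the client) can be sketched as follows; this is an assumption-laden toy, with `zlib` standing in for a real video codec and the function names (`cloud_server_step`, `client_step`) invented for illustration:

```python
import zlib

def cloud_server_step(instruction: str, game_state: dict) -> bytes:
    # Run the game according to the operation instruction, then
    # encode/compress the resulting game picture for transmission.
    game_state["last_op"] = instruction
    frame = f"frame after {instruction}".encode()
    return zlib.compress(frame)

def client_step(encoded: bytes) -> str:
    # The client device only decodes the data and presents the picture.
    return zlib.decompress(encoded).decode()

state = {}
encoded = cloud_server_step("place_element", state)
print(client_step(encoded))   # frame after place_element
```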
In an alternative embodiment, taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with the player through the graphical user interface, namely, conventionally downloading and installing the game program through the electronic device and running. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a virtual scene and a list of scene elements, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
The embodiments are described in detail below with reference to the drawings, with a server as the execution subject. The following description of the embodiments is not intended to limit the preferred embodiments. Although a logical order is depicted in the flowchart, in some cases the steps shown or described may be performed in an order different from that depicted in the figures.
First, a scenario element collaboration construction method provided in an embodiment of the present application will be described with reference to fig. 3, and fig. 3 is a flowchart of the scenario element collaboration construction method provided in the embodiment of the present application.
Referring to fig. 3, the specific flow of the scene element collaborative construction method may be as follows steps 101 to 104, where:
101, displaying a virtual scene, at least one first basic scene element and at least one second basic scene element, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene.
The first player is the player corresponding to the first terminal, and the second player is any other player in the game besides the first player. For example, if the wilderness-action game includes player 1, player 2, …, and player N, the first player is player 1, and the second player is at least one of player 2 through player N.
The first terminal is terminal equipment corresponding to a first player, the first terminal can provide a virtual scene for the first player, and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene, the second terminal is terminal equipment corresponding to a second player, and the second terminal can provide a virtual scene for the second player, and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene.
A virtual scene is a virtual environment created by computer technology. It can simulate various situations and scenes in the real world; it may be an entirely fictitious environment, or it may simulate or reproduce the real world, and it allows the user to interact in real time with computer-generated virtual characters, environments, and the like. For example, referring to fig. 2, the first terminal and the second terminal may be Vision Pro devices, and the game screen of the wilderness-action game may be projected by the Vision Pro devices, thereby obtaining the graphical user interface shown in fig. 2.
The virtual scene contains scene objects, such as houses, automobiles, tables, and chairs. The first basic scene element and the second basic scene element are the basic elements that make up a scene object, and they may be the same scene element or different scene elements. For example, referring to fig. 2, the first basic scene element may be one or more of breakable house 0 through breakable house 11, the countdown timer, the small red-team crown, the small blue-team crown, and the large red-team crown, and the second basic scene element may likewise be one or more of these elements.
It will be understood that when the second player is one player, the second terminal is a terminal device corresponding to the one player, and when the second player is a plurality of players, the second terminal is a plurality of terminal devices corresponding to the plurality of players, respectively.
102, in response to a combining operation of the first player on at least one first basic scene element and at least one second basic scene element when the first player arranges the scene elements, element combination is performed on the at least one first basic scene element and the at least one second basic scene element, and a target scene element is obtained.
The combination operation may be a second selection operation on a plurality of combination controls, where the second selection operation includes, but is not limited to, an eye-gaze operation, an air-click operation, a long-press operation, a short-press operation, and the like, which is not limited in this embodiment. For example, referring to fig. 5 and 6, the combination controls include an addition control, a subtraction control, an intersection control, and an exclusion control; the first player gazes at the addition control and clicks it using an air-click gesture, thereby selecting the addition control.
In one possible implementation, the plurality of combination controls may be located at any position on the graphical user interface, such as, but not limited to, the lower right, lower left, or middle. In addition, the plurality of combination controls may be arranged in any manner on the graphical user interface, for example, displayed in a column from top to bottom or side by side from left to right, which is not limited in this embodiment. For example, with continued reference to figs. 5 and 6, the plurality of combination controls are located in the upper-middle area of the graphical user interface and are displayed side by side from left to right.
In one possible implementation, the plurality of combination controls correspond to a plurality of combination strategies, the plurality of combination strategies including at least one of adding, subtracting, intersecting, and excluding, and the step of obtaining the target scene element by element combining the at least one first base scene element and the at least one second base scene element in response to a combination operation of the at least one first base scene element and the at least one second base scene element by the first player when arranging the scene elements includes:
displaying a plurality of combination controls through a first terminal, wherein the plurality of combination controls correspond to a plurality of combination strategies;
determining a target combination control in response to a second selection operation of the plurality of combination controls by the first player when arranging the scene elements;
determining a target combination strategy based on the target combination control; the target combination strategy is a combination strategy corresponding to the target combination control;
and carrying out element combination on at least one first basic scene element and at least one second basic scene element based on the target combination strategy to obtain target scene elements.
The target combination control is the combination control selected by the first player through the second selection operation. After receiving the second selection operation of the first player on the plurality of combination controls when arranging scene elements, the target combination control may be determined according to the second selection operation. For example, as shown in figs. 5 and 6, the first player gazes at the addition control, clicks it with an air-click gesture, and the addition control is determined to be the target combination control.
The target combination strategy is the combination strategy corresponding to the target combination control. For example, if the target combination control is the addition control, the target combination strategy is addition; if it is the subtraction control, the target combination strategy is subtraction; if it is the intersection control, the target combination strategy is intersection; and if it is the exclusion control, the target combination strategy is exclusion.
The target scene element is the combined scene element obtained by combining the at least one first basic scene element and the at least one second basic scene element based on the target combination strategy. For example, when the target combination strategy is addition, the target scene element is a variant scene element obtained by adding the at least one first basic scene element and the at least one second basic scene element; when the target combination strategy is subtraction, the target scene element is a variant scene element obtained by subtraction between the at least one first basic scene element and the at least one second basic scene element.
In one possible implementation, the first player may select a corresponding combination control from the plurality of combination controls, and perform element combination on at least one first base scene element and at least one second base scene element based on a combination policy corresponding to the combination control, so as to obtain the target scene element. For example, with continued reference to fig. 6 and 7, when the first player selects the add control, the server performs an add process on at least one first base scene element and at least one second base scene element; when the first player switches from the add control to the subtract control, the server performs a subtract process on the at least one first base scene element and the at least one second base scene element.
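The four combination strategies can be illustrated with a minimal Python sketch under an assumed geometry model: if each basic scene element is represented by the set of grid cells it occupies (an assumption not stated in the patent, which leaves the combination mechanics abstract), then addition, subtraction, intersection, and exclusion map naturally onto set operations:

```python
def combine_elements(first_cells: set, second_cells: set, strategy: str) -> set:
    """Combine two elements' occupied-cell sets under a combination strategy."""
    if strategy == "add":
        return first_cells | second_cells        # union
    if strategy == "subtract":
        return first_cells - second_cells        # difference
    if strategy == "intersect":
        return first_cells & second_cells        # intersection
    if strategy == "exclude":
        return first_cells ^ second_cells        # symmetric difference
    raise ValueError(f"unknown combination strategy: {strategy}")

# Hypothetical footprints of two basic scene elements on a 2D grid.
house = {(0, 0), (0, 1), (1, 0), (1, 1)}
timer = {(1, 1), (1, 2)}
combined = combine_elements(house, timer, "add")   # union of both footprints
```

A real engine would operate on meshes or voxels rather than bare cell sets, but the strategy-to-operation mapping would be the same.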
It should be noted that the plurality of combination controls may also be displayed through the second terminal. The second player may select a corresponding combination control from the plurality of combination controls and perform element combination on the at least one first basic scene element and the at least one second basic scene element based on the combination strategy corresponding to that control, to obtain the target scene element. The specific steps are the same as those in which the first player selects a combination control and performs element combination, and reference may be made to the discussion of the corresponding steps.
103, synchronizing the target scene element to the second terminal, so that the second player confirms the target scene element based on the second terminal.
In one possible implementation, the target scene element may be displayed in the virtual scene by the first terminal, such that the first player may view the target scene element through the first terminal and confirm the target scene element. Meanwhile, the server can synchronize the target scene element to the second terminal, so that the second player can view the target scene element through the second terminal and confirm the target scene element.
In one possible implementation manner, referring to fig. 5 to 8, after the step of combining the elements of the at least one first base scene element and the at least one second base scene element to obtain the target scene element, the method further includes: and displaying a first identifier and a second identifier through the first terminal, wherein the first identifier is used for representing the confirmation condition of the first player on the target scene element, and the second identifier is used for representing the confirmation condition of the second player on the target scene element.
In one possible implementation, the first identifier and the second identifier may be located at any position on the graphical user interface, such as a lower right, a lower left, and a middle position, which is not limited in this embodiment.
In one possible implementation, the first identifier includes player information of the first player and confirmation information of the first player. For example, as shown in figs. 5 to 8, where the first player is player A, the first identifier includes "A"; when the first player has not yet confirmed the target scene element, the first identifier shows a "to be confirmed" label, which changes to "confirmed" after the first player confirms.
Similarly, the second identifier includes player information of the second player and confirmation information of the second player. For example, as shown in figs. 5 to 8, where the second player is player B, the second identifier includes "B"; when the second player has not yet confirmed the target scene element, the second identifier shows a "to be confirmed" label, which changes to "confirmed" after the second player confirms.
In one possible implementation manner, the first identifier and the second identifier are obtained by dividing a circle into segments based on the actual number of players. For example, when the actual number of players is 2, the circle is divided into a left half and a right half, in which the player information and confirmation information of the first player and the second player are respectively displayed; when the actual number of players is 4, the circle is divided into four quarter circles, in which the player information and confirmation information of the four players are respectively displayed.
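The segmentation of the circular identifier can be sketched as follows (a minimal illustration only; the function name and angle convention are assumptions, not part of the disclosed embodiment):

```python
def circle_segments(num_players):
    """Divide the circular identifier into equal segments, one per player.

    Returns (start_angle, end_angle) pairs in degrees, measured from the
    top of the circle, so each player's information and confirmation
    status can be drawn in its own segment.
    """
    if num_players < 1:
        raise ValueError("need at least one player")
    step = 360.0 / num_players
    return [(i * step, (i + 1) * step) for i in range(num_players)]

# 2 players -> two half circles; 4 players -> four quarter circles.
halves = circle_segments(2)
quarters = circle_segments(4)
```

The same function covers both examples in the embodiment, since the segment count simply tracks the actual number of players.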
In one possible implementation manner, after the step of displaying the first identifier and the second identifier through the first terminal, the method further includes:
modifying the display state of the first identifier in response to a first confirmation operation of the target scene element by the first player;
and modifying the display state of the second identifier in response to a second confirmation operation of the target scene element by the second player.
The first confirmation operation may be a gesture of performing a single tap between the index finger and the thumb, an "OK" gesture, etc., which is not limited in this embodiment. For example, the first player may confirm the target scene element by a single tap gesture with the thumb and index finger.
The second confirmation operation may be a gesture of a single tap between the index finger and the thumb, an "OK" gesture, etc., which is not limited in this embodiment. For example, the second player may confirm the target scene element by a single tap gesture with the thumb and index finger.
After receiving the first confirmation operation of the target scene element by the first player, the server may modify the display state of the first identifier based on the first confirmation operation, for example, modify the "to-be-confirmed" word in the first identifier into a "confirm" word.
Meanwhile, after receiving a second confirmation operation of the target scene element by the second player, the server modifies the display state of the second identifier based on the second confirmation operation, for example, modifies the "to confirm" word in the second identifier into a "confirm" word.
In one possible implementation, after the step of synchronizing the target scene element to the second terminal, the method further includes: the first identifier and the second identifier are displayed through the second terminal, so that the second player can determine the confirmation condition of other players on the target scene element through the first identifier and the second identifier displayed by the second terminal.
In one possible implementation, after all players confirm the target scene element, the target scene element is successfully built. For example, referring to fig. 8 and 9, the first player is player A, the second player is player B, and when both player A and player B confirm the target scene element, the scene interface displays a message that the collaborative construction of the target scene element is successful.
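The per-player confirmation flow described above can be sketched as follows (an illustrative sketch; the class and method names are assumptions, not part of the disclosed embodiment):

```python
class TargetSceneElement:
    """Tracks each player's confirmation status for a combined element."""

    def __init__(self, player_ids):
        # Every identifier starts in the "to confirm" display state.
        self.status = {pid: "to confirm" for pid in player_ids}

    def confirm(self, player_id):
        # A confirmation operation flips that player's identifier state.
        self.status[player_id] = "confirm"

    def build_succeeded(self):
        # Construction succeeds only once every player has confirmed.
        return all(s == "confirm" for s in self.status.values())

element = TargetSceneElement(["A", "B"])
element.confirm("A")
# Player B has not confirmed yet, so the build is not complete.
element.confirm("B")
# Both players have confirmed; the success message can now be shown.
```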
In this embodiment, as shown in fig. 10, displaying, by a first terminal and a second terminal, a virtual scene, and at least one first basic scene element and at least one second basic scene element located in the virtual scene may include the following steps 201 to 204:
201, displaying the virtual scene and the scene element list through the first terminal.
The scene element list includes a plurality of basic scene element identifiers, and a plurality of basic scene elements corresponding to the plurality of basic scene element identifiers can be combined into a variant scene element (i.e. a target scene element) in the game scene, for example, with continued reference to fig. 2, the breakable houses 0 to 11 can be combined with each other to form a new breakable house.
202, determining at least one first base scene element identifier in response to a first selection operation of the first player on the base scene element identifiers in the scene element list.
The first base scene element identifier is a base scene element identifier selected by the first player from the scene element list through a first selection operation, where the first selection operation may be any one or more of eye gaze operation, click operation, long press operation, and short press operation, which is not limited in this embodiment. For example, the first player may select the breakable house 0 in the scene element list by eye gaze.
After receiving the first selection operation of the first player on the base scene element identifiers in the scene element list, the server may determine at least one first base scene element identifier from the scene element list according to the first selection operation. For example, with continued reference to fig. 4, upon detecting through eye gaze that the first player gazes at the breakable house 2, the at least one first base scene element identifier is determined to be the breakable house 2.
203, determining first position information of at least one first basic scene element corresponding to the at least one first basic scene element identifier in the virtual scene in response to a first movement operation of the first player on the at least one first basic scene element identifier.
After the first player selects at least one first base scene element identifier from the scene element list through the first selection operation, the first player also needs to move the at least one first base scene element identifier to a specified position in the virtual scene through the first movement operation. The first movement operation may be any one or more of thumb and index finger pinching movement, thumb and middle finger pinching movement, and fist making movement, which is not limited in this embodiment. For example, after the first player selects the breakable house 0 through eye gaze, the breakable house 0 is dragged to a designated position of the virtual scene by a gesture of pinching movement of the thumb and the index finger.
The first location information is location information of at least one first basic scene element in the virtual scene determined based on the first movement operation, and after the first movement operation of the first player on the at least one first basic scene element identification is received, the first location information of the at least one first basic scene element in the virtual scene can be determined according to the first movement operation so as to display the at least one first basic scene element in the virtual scene.
204, displaying at least one first base scene element in the virtual scene based on the first location information.
After the first location information is determined, at least one first base scene element may be displayed in the virtual scene based on the first location information. For example, if the position information of the breakable house 2 in the virtual scene is determined to be position A based on the first movement operation and the position information of the breakable house 9 in the virtual scene is determined to be position B based on the first movement operation, the breakable house 2 is displayed at position A in the virtual scene and the breakable house 9 is displayed at position B in the virtual scene.
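Steps 201 to 204 amount to mapping a selected element identifier to the position chosen by the movement operation; a minimal sketch (the element names and coordinates below are illustrative assumptions):

```python
def place_element(scene, element_id, position):
    """Record the position chosen by the first movement operation, so the
    element can be displayed at that position in the virtual scene.

    `scene` maps element identifiers to (x, y, z) positions.
    """
    scene[element_id] = position
    return scene

scene = {}
place_element(scene, "breakable_house_2", (10.0, 0.0, 5.0))   # position A
place_element(scene, "breakable_house_9", (20.0, 0.0, -3.0))  # position B
```

A later movement operation simply overwrites the stored position, which is also how the position movement of steps 301 to 302 can be modeled.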
In this embodiment, as shown in fig. 11, after displaying at least one first basic scene element in the virtual scene based on the first location information, the method may include the following steps 301 to 302:
301, determining second location information of at least one first base scene element in the virtual scene in response to a second movement operation of the first player on the at least one first base scene element in the virtual scene.
In one possible implementation manner, after displaying the at least one first basic scene element in the virtual scene, a second movement operation of the first player for the at least one first basic scene element in the virtual scene may be further received, where the second movement operation may be any one of thumb and index finger pinching movement, thumb and middle finger pinching movement, and fist making movement, and this embodiment is not limited thereto. For example, the first player may drag the breakable house 2 displayed in the virtual scene by pinching with the thumb and the index finger.
The second location information is location information of at least one first base scene element in the virtual scene determined based on the second movement operation, and after the second movement operation is received, the second location information of the at least one first base scene element in the virtual scene may be determined according to the second movement operation.
302, performing a position movement on at least one first base scene element in the virtual scene based on the second position information.
After the second position information is determined, the position of at least one first base scene element in the virtual scene can be moved based on the second position information. For example, if it is determined that the position information of the breakable house 2 in the virtual scene is position C based on the second movement operation, the breakable house 2 is moved from position A to position C.
In one possible implementation, after the step of moving the position of the at least one first base scene element in the virtual scene based on the second position information, the method includes:
and synchronizing the first basic scene element after the position movement to the second terminal so that the second terminal displays the first basic scene element after the position movement.
After the at least one first basic scene element in the virtual scene is subjected to position movement based on the second position information, the first basic scene element after the position movement can be synchronized to the second terminal, so that the second terminal can synchronously display the first basic scene element after the position movement. For example, after the first terminal moves the breakable house 2 from position A to position C, the server may synchronize the position-moved breakable house 2 to the second terminal, so that the second terminal may synchronously display the breakable house 2 at position C.
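The synchronization step can be sketched as a server broadcast message; the message format below is purely an assumption for illustration:

```python
import json

def make_move_sync_message(element_id, new_position):
    """Build the message the server sends to the other terminals so they
    can redraw the moved element at its new position."""
    return json.dumps({
        "type": "element_moved",
        "element_id": element_id,
        "position": list(new_position),
    })

# e.g. after an element is moved from position A to position C
msg = make_move_sync_message("breakable_house_2", (3.0, 0.0, 7.0))
```

The same message shape would serve for the reverse direction, where the second player's movements are synchronized to the first terminal.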
In one possible implementation, the step of displaying, by the first terminal and the second terminal, the virtual scene and at least one first base scene element and at least one second base scene element located in the virtual scene further includes: displaying a virtual scene and a scene element list through a second terminal; determining at least one second base scene element identifier in response to a third selection operation of the second player on the base scene element identifiers in the scene element list; responding to a third movement operation of the second player on at least one second base scene element identifier, and determining third position information of at least one second base scene element corresponding to the at least one second base scene element identifier in the virtual scene; and displaying at least one second base scene element in the virtual scene based on the third location information. The details of this step are the same as those of steps 201 to 204 described above, and reference may be made in particular to the discussion of steps 201 to 204.
In one possible implementation manner, after the step of displaying at least one second base scene element in the virtual scene based on the third location information, the method further includes: determining fourth location information of at least one second base scene element in the virtual scene in response to a fourth movement operation of the second player on the at least one second base scene element in the virtual scene; and performing position movement on the at least one second base scene element in the virtual scene based on the fourth position information. The details of this step are the same as those of steps 301 to 302 described above, and reference may be made in particular to the discussion of steps 301 to 302.
Further, after the step of moving the position of the at least one second base scene element in the virtual scene based on the fourth position information, the method includes: synchronizing the second basic scene element after the position movement to the first terminal, so that the first terminal can synchronously display the second basic scene element after the position movement.
In one possible implementation manner, as shown in fig. 12, after the steps of displaying the virtual scene and at least one first base scene element and at least one second base scene element located in the virtual scene by the first terminal and the second terminal, the following steps 401 to 402 may be included:
401, determining a zoom parameter of at least one first base scene element in the virtual scene in response to a first trigger operation of the zoom control by the first player when arranging the scene elements.
Specifically, the zoom control may be displayed through the first terminal, and the zoom control may be located at any position on the graphical user interface, such as a lower right, a lower left, and an intermediate position, which is not limited in this embodiment. For example, with continued reference to fig. 4-8, the zoom control is located below the graphical user interface.
In one possible implementation manner, after displaying the virtual scene and at least one first basic scene element located in the virtual scene through the first terminal, a first trigger operation of the zoom control by the first player may also be received, where the first trigger operation on the zoom control may include a fourth selection operation of the zoom control and a first sliding operation. The fourth selection operation may be any one or more of eye gaze operation, click operation, long press operation, and short press operation, and the first sliding operation may be a gesture of sliding an index finger, a gesture of sliding a fist, or the like, which is not limited in this embodiment. For example, the first player may gaze at the zoom control through eye gaze and then control slider movement of the zoom control through an index finger swipe in the air.
The scaling parameter is a scaling parameter of at least one first base scene element in the virtual scene determined based on the first trigger operation, e.g., the scaling parameter may be a scaling of the at least one first base scene element. Upon receiving a first trigger operation of the zoom control by the first player when arranging the scene elements, a zoom parameter of at least one first base scene element in the virtual scene may be determined according to the first trigger operation.
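For illustration, the zoom parameter might be derived from the slider position roughly as follows (the slider range and the minimum and maximum scales are assumed values, not taken from the embodiment):

```python
def slider_to_scale(slider_fraction, min_scale=0.5, max_scale=2.0):
    """Map a zoom-control slider position in [0, 1] linearly to a
    scaling ratio between the assumed min_scale and max_scale."""
    if not 0.0 <= slider_fraction <= 1.0:
        raise ValueError("slider fraction must be within [0, 1]")
    return min_scale + slider_fraction * (max_scale - min_scale)

# Slider at the midpoint yields a 1.25x ratio under these assumed bounds.
ratio = slider_to_scale(0.5)
```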
402, scaling at least one first base scene element in the virtual scene based on the scaling parameter.
The scaling process refers to a process of adjusting the size of at least one first basic scene element in the virtual scene based on the scaling parameter. After determining the scaling parameter of the at least one first basic scene element, the scaling process may be performed on the at least one first basic scene element in the virtual scene based on the scaling parameter, so as to obtain a scaled first basic scene element. For example, if the scaling parameter of the first base scene element A is 80%, the first base scene element A is scaled down to 80% of its original size in equal proportion based on the scaling parameter.
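Applying the scaling parameter to an element's size can be sketched as follows (a minimal illustration; representing the size as a width-height-depth triple is an assumption):

```python
def scale_element(size, scaling_ratio):
    """Uniformly scale an element's (width, height, depth) size by the
    scaling ratio determined from the zoom control."""
    if scaling_ratio <= 0:
        raise ValueError("scaling ratio must be positive")
    return tuple(dim * scaling_ratio for dim in size)

# An 80% scaling parameter shrinks every dimension in equal proportion.
scaled = scale_element((10.0, 20.0, 5.0), 0.8)
```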
In one possible implementation, after scaling at least one first base scene element in the virtual scene based on the scaling parameter, the server may synchronize the scaled first base scene element to the second terminal, so that the second terminal may synchronously display the scaled first base scene element.
It should be noted that, the second player may also perform scaling processing on at least one second basic scene element in the same manner to obtain a scaled second basic scene element, and the server may synchronize the scaled second basic scene element to the first terminal, so that the first terminal may synchronously display the scaled second basic scene element, thereby implementing collaborative construction of scene elements by multiple players.
In one possible implementation manner, as shown in fig. 12, after the step of displaying, by the first terminal and the second terminal, the virtual scene and at least one first base scene element and at least one second base scene element located in the virtual scene, the following steps 501 to 502 may further be included:
501, determining a rotation parameter of at least one first base scene element in the virtual scene in response to a second trigger operation of the rotation control by the first player when arranging the scene elements.
Specifically, the rotation control may be displayed through the first terminal, and the rotation control may be located at any position on the graphical user interface, such as a lower right, a lower left, and an intermediate position, which is not limited in this embodiment. For example, referring to fig. 4-8, the rotation control may be located below the graphical user interface, and the zoom control and the rotation control may be disposed side-by-side along the length of the graphical user interface.
In one possible implementation manner, after the virtual scene and at least one first basic scene element located in the virtual scene are displayed through the first terminal, a second trigger operation of the first player on the rotation control may be further received, where the second trigger operation on the rotation control may include a fifth selection operation of the rotation control and a second sliding operation. The fifth selection operation may be any one or more of eye gaze operation, click operation, long press operation, and short press operation, and the second sliding operation may be a gesture of sliding an index finger, a gesture of sliding a fist, or the like, which is not limited in this embodiment. For example, the first player may gaze at the rotation control through eye gaze and then control slider movement of the rotation control through an index finger swipe in the air.
The rotation parameter is a rotation parameter of at least one first base scene element in the virtual scene determined based on the second trigger operation, e.g., the rotation parameter may be a rotation angle of at least one first base scene element in the virtual scene. Upon receiving a second trigger operation of the rotation control by the first player when arranging the scene elements, a rotation parameter of at least one first base scene element in the virtual scene may be determined according to the second trigger operation.
502, performing rotation processing on at least one first basic scene element in the virtual scene based on the rotation parameters.
The rotation processing refers to a process of rotating at least one first basic scene element in the virtual scene in a three-dimensional space based on the rotation parameter. After the rotation parameter of the at least one first basic scene element in the virtual scene is determined, the rotation processing can be performed on the at least one first basic scene element in the virtual scene based on the rotation parameter, so as to obtain the rotated first basic scene element. For example, if the rotation parameter of the first base scene element A is a 90° counterclockwise rotation, the first base scene element A is rotated 90° counterclockwise.
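A rotation such as the 90° example above can be sketched with the standard rotation matrix about the vertical axis (a minimal illustration; the choice of axis and handedness convention is an assumption):

```python
import math

def rotate_about_y(point, angle_deg):
    """Rotate a point in the x-z plane about the vertical (y) axis by
    angle_deg degrees, leaving the vertical coordinate unchanged."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) - z * math.sin(a),
            y,
            x * math.sin(a) + z * math.cos(a))

# A 90-degree rotation carries the x-axis onto the z-axis
# under this convention.
rotated = rotate_about_y((1.0, 0.0, 0.0), 90.0)
```

In practice each vertex (or the element's orientation quaternion) would be transformed this way before redisplay.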
In one possible implementation, after performing rotation processing on at least one first basic scene element in the virtual scene based on the rotation parameter, the server may synchronize the rotated first basic scene element to the second terminal, so that the second terminal may synchronously display the rotated first basic scene element.
It should be noted that, the second player may also perform rotation processing on at least one second basic scene element in the same manner to obtain a rotated second basic scene element, and the server may synchronize the rotated second basic scene element to the first terminal, so that the first terminal may synchronously display the rotated second basic scene element, thereby implementing collaborative construction of scene elements by multiple players.
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
In order to facilitate better implementation of the scene element collaborative construction method in the embodiment of the application, the embodiment of the application also provides a scene element collaborative construction device. Referring to fig. 14, fig. 14 is a schematic structural diagram of a scene element collaboration building apparatus according to an embodiment of the present application. The scene element co-construction apparatus 600 may include a first display module 601, an element combination module 602, and an element confirmation module 603.
The first display module 601 is configured to display, through a first terminal and a second terminal, a virtual scene, and at least one first basic scene element and at least one second basic scene element located in the virtual scene, where the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene.
In an embodiment, the first display module 601 specifically performs, when performing the step of displaying the virtual scene and at least one first basic scene element and at least one second basic scene element located in the virtual scene by the first terminal and the second terminal: displaying a virtual scene and a scene element list through a first terminal; determining at least one first basic scene element identifier in response to a first selection operation of the first player on the basic scene element identifiers in the scene element list; responding to a first moving operation of a first player on at least one first basic scene element identifier, and determining first position information of at least one first basic scene element corresponding to the at least one first basic scene element identifier in a virtual scene; at least one first base scene element is displayed in the virtual scene based on the first location information.
The element combination module 602 is configured to respond to a combination operation of the first player on at least one first base scene element and at least one second base scene element when the first player arranges the scene elements, and perform element combination on the at least one first base scene element and the at least one second base scene element to obtain a target scene element.
In one embodiment, the element combination module 602, when performing the step of combining the at least one first base scene element and the at least one second base scene element to obtain the target scene element in response to the combining operation of the at least one first base scene element and the at least one second base scene element by the first player when the scene element is arranged, specifically performs: displaying a plurality of combination controls through a first terminal, wherein the plurality of combination controls correspond to a plurality of combination strategies; determining a target combination control in response to a second selection operation of the plurality of combination controls by the first player when arranging the scene elements; determining a target combination strategy based on the target combination control; the target combination strategy is a combination strategy corresponding to the target combination control; and carrying out element combination on at least one first basic scene element and at least one second basic scene element based on the target combination strategy to obtain target scene elements.
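The strategy dispatch in this module might be modeled as follows (a sketch only; the embodiment does not specify concrete combination strategies, so the "group" strategy here is a hypothetical example):

```python
def combine_elements(first_elements, second_elements, strategy):
    """Combine the selected base scene elements into one target scene
    element using the strategy bound to the chosen combination control."""
    return strategy(first_elements, second_elements)

def group_strategy(first_elements, second_elements):
    # Hypothetical strategy: keep every base element as a part of a
    # single combined target element.
    return {"type": "target",
            "parts": list(first_elements) + list(second_elements)}

target = combine_elements(["breakable_house_2"],
                          ["breakable_house_9"],
                          group_strategy)
```

Each combination control would then map to one such strategy function, matching the one-to-one correspondence between controls and strategies described above.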
An element confirmation module 603, configured to synchronize the target scene element to the second terminal, so that the second player confirms the target scene element based on the second terminal.
In an embodiment, the scene element co-construction device further includes:
A first response module, configured to determine second location information of at least one first base scene element in the virtual scene in response to a second movement operation of the first player on the at least one first base scene element in the virtual scene;
and the moving module is used for carrying out position movement on at least one first basic scene element in the virtual scene based on the second position information.
In an embodiment, the scene element co-construction device further includes:
and the synchronization module is used for synchronizing the first basic scene element after the position movement to the second terminal so that the second terminal displays the first basic scene element after the position movement.
In an embodiment, the scene element co-construction device further includes:
the second response module is used for responding to a first triggering operation of the zoom control by the first player when the scene elements are arranged, and determining zoom parameters of at least one first basic scene element in the virtual scene;
and the scaling processing module is used for scaling at least one first basic scene element in the virtual scene based on the scaling parameters.
In an embodiment, the scene element co-construction device further includes:
a third response module for determining a rotation parameter of at least one first basic scene element in the virtual scene in response to a second trigger operation of the rotation control by the first player when arranging the scene elements;
And the rotation processing module is used for performing rotation processing on at least one first basic scene element in the virtual scene based on the rotation parameters.
In an embodiment, the scene element co-construction device further includes:
the second display module is used for displaying a first identifier and a second identifier through the first terminal, wherein the first identifier is used for representing the confirmation condition of the first player on the target scene element, and the second identifier is used for representing the confirmation condition of the second player on the target scene element.
In an embodiment, the scene element co-construction device further includes:
the first modification module is used for responding to a first confirming operation of the first player on the target scene element and modifying the display state of the first mark;
and the second modification module is used for responding to a second confirmation operation of the target scene element by the second player and modifying the display state of the second mark.
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
Correspondingly, the embodiment of the application also provides computer equipment, which can be a terminal or a server. As shown in fig. 15, fig. 15 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 700 includes a processor 701 having one or more processing cores, a memory 702 having one or more computer readable storage media, and a computer program stored on the memory 702 and executable on the processor. The processor 701 is electrically connected to the memory 702. It will be appreciated by those skilled in the art that the computer device structure shown in the figures is not limiting of the computer device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 701 is a control center of the computer device 700, connects various parts of the entire computer device 700 using various interfaces and lines, and performs various functions of the computer device 700 and processes data by running or loading software programs (computer programs) and/or modules stored in the memory 702, and calling data stored in the memory 702, thereby performing overall monitoring of the computer device 700.
In the embodiment of the present application, the processor 701 in the computer device 700 loads the instructions corresponding to the processes of one or more application programs into the memory 702 according to the following steps, and the processor 701 executes the application programs stored in the memory 702, so as to implement various functions:
displaying a virtual scene and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene through a first terminal and a second terminal, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene; responding to the combination operation of the first player on at least one first basic scene element and at least one second basic scene element when the first player arranges the scene elements, and carrying out element combination on the at least one first basic scene element and the at least one second basic scene element to obtain target scene elements; the target scene element is synchronized to the second terminal such that the second player confirms the target scene element based on the second terminal.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 15, the computer device 700 further includes: a touch display 703, a radio frequency circuit 704, an audio circuit 705, an input unit 706, and a power supply 707. The processor 701 is electrically connected to the touch display 703, the radio frequency circuit 704, the audio circuit 705, the input unit 706, and the power supply 707, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 15 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display 703 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display screen 703 may include a display panel and a touch panel. The display panel may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of a computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations on or near it by the user (such as operations on or near the touch panel by the user using any suitable object or accessory such as a finger or stylus) and generate corresponding operation instructions, and the operation instructions execute corresponding programs. The touch panel may overlay the display panel; upon detection of a touch operation on or near it, the touch operation is transferred to the processor 701 to determine the type of touch event, and the processor 701 then provides a corresponding visual output on the display panel based on the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 703 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 703 may also implement an input function as part of the input unit 706.
In the embodiment of the present application, the touch display screen 703 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 704 may be configured to transmit and receive radio frequency signals so as to establish wireless communication with a network device or other computer devices.
The audio circuit 705 may be used to provide an audio interface between the user and the computer device through a speaker, a microphone, and the like. On one hand, the audio circuit 705 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which the audio circuit 705 receives and converts into audio data. The audio data is then output to the processor 701 for processing before being sent, for example, to another computer device via the radio frequency circuit 704, or output to the memory 702 for further processing. The audio circuit 705 may also include an earphone jack to provide communication between a peripheral headset and the computer device.
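As an illustrative aid (not part of the patent disclosure), the two audio paths described above can be sketched as a pair of conversion pipelines; the function names and data representations are assumptions chosen only to make the flow concrete.

```python
def decode_to_electrical(audio_data: bytes) -> list:
    # Audio circuit, playback direction: received audio data -> electrical signal.
    return list(audio_data)

def speaker_output(electrical: list) -> str:
    # Speaker: electrical signal -> sound signal for output.
    return f"sound({len(electrical)} samples)"

def microphone_capture(sound: str) -> list:
    # Microphone, capture direction: collected sound signal -> electrical signal.
    return [ord(c) for c in sound]

def encode_to_audio_data(electrical: list) -> bytes:
    # Audio circuit: electrical signal -> audio data, which the processor can
    # then send over the radio frequency circuit or write to memory.
    return bytes(b % 256 for b in electrical)

# Playback path: audio data -> electrical signal -> speaker.
print(speaker_output(decode_to_electrical(b"\x01\x02\x03")))  # sound(3 samples)
# Capture path: sound -> electrical signal -> audio data.
print(encode_to_audio_data(microphone_capture("hi")))  # b'hi'
```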
The input unit 706 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), as well as to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 707 is used to supply power to the various components of the computer device 700. Optionally, the power supply 707 may be logically connected to the processor 701 through a power management system, so that functions such as charging, discharging, and power consumption management are performed through the power management system. The power supply 707 may also include any components such as one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 15, the computer device 700 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which will not be described herein.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the scene element co-construction methods provided by embodiments of the present application. For example, the computer program may perform the steps of:
Displaying a virtual scene, and at least one first basic scene element and at least one second basic scene element in the virtual scene, through a first terminal and a second terminal, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene; responding to a combination operation of the first player on the at least one first basic scene element and the at least one second basic scene element when the first player arranges the scene elements, and carrying out element combination on the at least one first basic scene element and the at least one second basic scene element to obtain a target scene element; and synchronizing the target scene element to the second terminal so that the second player confirms the target scene element based on the second terminal.
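Purely for illustration (not part of the patent disclosure), the three steps the computer program performs — displaying both players' basic scene elements, combining them into a target scene element on the first terminal, and synchronizing the result to the second terminal for confirmation — can be sketched as follows. The data structures, the `"merge"` strategy, and all names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SceneElement:
    owner: str  # which player placed the element
    name: str

@dataclass
class Terminal:
    player: str
    scene: list = field(default_factory=list)
    pending_confirmation: list = field(default_factory=list)

def place(terminal, peers, element):
    # Step 1: a placed element is displayed on every terminal viewing the scene.
    for t in [terminal, *peers]:
        t.scene.append(element)

def combine(first_elems, second_elems, strategy="merge"):
    # Step 2: element combination produces one target element from both sets.
    names = [e.name for e in first_elems + second_elems]
    return SceneElement(owner="shared", name=f"{strategy}({'+'.join(names)})")

def synchronize(target, second_terminal):
    # Step 3: the target element is sent to the second terminal, where the
    # second player must confirm it.
    second_terminal.pending_confirmation.append(target)

t1, t2 = Terminal("player1"), Terminal("player2")
wall = SceneElement("player1", "wall")   # first basic scene element
roof = SceneElement("player2", "roof")   # second basic scene element
place(t1, [t2], wall)
place(t2, [t1], roof)
target = combine([wall], [roof])
synchronize(target, t2)
print(target.name)                    # merge(wall+roof)
print(len(t2.pending_confirmation))   # 1
```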
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The computer program stored in the storage medium may execute the steps in any of the scene element collaborative construction methods provided in the embodiments of the present application, and can therefore achieve the beneficial effects achievable by any of those methods; details are given in the previous embodiments and are not repeated herein.
The method, apparatus, computer device, and storage medium for collaborative construction of scene elements provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.

Claims (12)

1. A method for collaborative construction of scene elements, the method comprising:
displaying a virtual scene and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene through a first terminal and a second terminal, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene;
responding to the combined operation of the first player on the at least one first basic scene element and the at least one second basic scene element when the first player arranges the scene elements, and carrying out element combination on the at least one first basic scene element and the at least one second basic scene element to obtain target scene elements;
and synchronizing the target scene element to the second terminal so that the second player confirms the target scene element based on the second terminal.
2. The scene element co-construction method according to claim 1, wherein the displaying the virtual scene and the at least one first base scene element and the at least one second base scene element located in the virtual scene by the first terminal and the second terminal comprises:
displaying a virtual scene and a scene element list through the first terminal;
determining at least one first basic scene element identifier in response to a first selection operation of the first player on basic scene element identifiers in the scene element list;
determining first position information of at least one first basic scene element corresponding to the at least one first basic scene element identifier in the virtual scene in response to a first movement operation of the first player on the at least one first basic scene element identifier;
the at least one first base scene element is displayed in the virtual scene based on the first location information.
3. The scene element co-construction method according to claim 2, wherein after the step of displaying the at least one first base scene element in the virtual scene based on the first position information, the method comprises:
determining second location information of the at least one first base scene element in the virtual scene in response to a second movement operation of the at least one first base scene element in the virtual scene by the first player;
and performing position movement on the at least one first basic scene element in the virtual scene based on the second position information.
4. A scene element co-construction method according to claim 3, wherein after the step of moving the position of the at least one first base scene element in the virtual scene based on the second position information, the method comprises:
synchronizing the first basic scene element after the position movement to the second terminal, so that the second terminal displays the first basic scene element after the position movement.
5. The method according to claim 1, wherein the step of obtaining the target scene element in response to the combined operation of the first player on the at least one first base scene element and the at least one second base scene element when the first player arranges the scene elements by element combination of the at least one first base scene element and the at least one second base scene element comprises:
displaying a plurality of combination controls through the first terminal, wherein the plurality of combination controls correspond to a plurality of combination strategies;
determining a target combination control in response to a second selection operation of the plurality of combination controls by the first player when arranging the scene elements;
determining a target combination strategy based on the target combination control; the target combination strategy is a combination strategy corresponding to the target combination control;
and carrying out element combination on the at least one first basic scene element and the at least one second basic scene element based on the target combination strategy to obtain a target scene element.
6. The method according to claim 1, wherein after the step of displaying the virtual scene and the at least one first base scene element and the at least one second base scene element located in the virtual scene by the first terminal and the second terminal, further comprising:
determining scaling parameters of at least one first basic scene element in the virtual scene in response to a first triggering operation of the scaling control by the first player when the scene elements are arranged;
and scaling at least one first basic scene element in the virtual scene based on the scaling parameters.
7. The method according to claim 1, wherein after the step of displaying the virtual scene and the at least one first base scene element and the at least one second base scene element located in the virtual scene by the first terminal and the second terminal, further comprising:
determining a rotation parameter of at least one first basic scene element in the virtual scene in response to a second trigger operation of the rotation control by the first player when arranging scene elements;
and rotating at least one first basic scene element in the virtual scene based on the rotation parameters.
8. The method for collaborative construction of scene elements according to claim 1, wherein after the step of combining the at least one first base scene element and the at least one second base scene element to obtain a target scene element, the method further comprises:
displaying a first identifier and a second identifier through the first terminal, wherein the first identifier is used for representing the confirmation condition of the first player on the target scene element, and the second identifier is used for representing the confirmation condition of the second player on the target scene element.
9. The scene element co-construction method according to claim 8, wherein after the step of displaying the first identifier and the second identifier by the first terminal, the method comprises:
modifying the display state of the first identifier in response to a first confirmation operation of the first player on the target scene element;
and modifying the display state of the second identifier in response to a second confirmation operation of the target scene element by the second player.
10. A scene element co-construction apparatus, characterized in that the scene element co-construction apparatus comprises:
the first display module is used for displaying a virtual scene through a first terminal and a second terminal, and at least one first basic scene element and at least one second basic scene element which are positioned in the virtual scene, wherein the first basic scene element is a scene element triggered by a first player through the first terminal and arranged in the virtual scene, and the second basic scene element is a scene element triggered by a second player through the second terminal and arranged in the virtual scene;
an element combination module, configured to respond to a combination operation of the first player on the at least one first base scene element and the at least one second base scene element when the first player arranges the scene elements, and perform element combination on the at least one first base scene element and the at least one second base scene element to obtain a target scene element;
and the element confirmation module is used for synchronizing the target scene element to the second terminal so that the second player confirms the target scene element based on the second terminal.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor for performing the steps of the scene element co-construction method according to any of claims 1-9.
12. A computer device, characterized in that it comprises a memory in which a computer program is stored and a processor which performs the steps in the scene element co-construction method according to any of claims 1-9 by calling the computer program stored in the memory.
CN202410042699.1A 2024-01-11 2024-01-11 Scene element collaborative construction method, device, computer equipment and storage medium Pending CN117815670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410042699.1A CN117815670A (en) 2024-01-11 2024-01-11 Scene element collaborative construction method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117815670A (en) 2024-04-05

Family

ID=90507793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410042699.1A Pending CN117815670A (en) 2024-01-11 2024-01-11 Scene element collaborative construction method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117815670A (en)

Similar Documents

Publication Publication Date Title
US11684858B2 (en) Supplemental casting control with direction and magnitude
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN113082712A (en) Control method and device of virtual role, computer equipment and storage medium
CN113398590B (en) Sound processing method, device, computer equipment and storage medium
CN113350793B (en) Interface element setting method and device, electronic equipment and storage medium
CN113426124A (en) Display control method and device in game, storage medium and computer equipment
CN113398566A (en) Game display control method and device, storage medium and computer equipment
CN113332716A (en) Virtual article processing method and device, computer equipment and storage medium
CN112245914B (en) Viewing angle adjusting method and device, storage medium and computer equipment
CN115193043A (en) Game information sending method and device, computer equipment and storage medium
CN113521724B (en) Method, device, equipment and storage medium for controlling virtual character
CN114225412A (en) Information processing method, information processing device, computer equipment and storage medium
CN117815670A (en) Scene element collaborative construction method, device, computer equipment and storage medium
CN113426115A (en) Game role display method and device and terminal
CN113350801A (en) Model processing method and device, storage medium and computer equipment
CN113413600A (en) Information processing method, information processing device, computer equipment and storage medium
CN114189731B (en) Feedback method, device, equipment and storage medium after giving virtual gift
WO2024051414A1 (en) Hot area adjusting method and apparatus, device, storage medium, and program product
CN116966544A (en) Region prompting method, device, storage medium and computer equipment
CN115430151A (en) Game role control method and device, electronic equipment and readable storage medium
CN115430150A (en) Game skill release method and device, computer equipment and storage medium
CN116999835A (en) Game control method, game control device, computer equipment and storage medium
CN116328310A (en) Virtual model processing method, device, computer equipment and storage medium
CN117942556A (en) Game center adjusting method and device, electronic equipment and readable storage medium
CN116370960A (en) Virtual character selection method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination