WO2023088012A1 - Virtual character control method, apparatus, device, storage medium and program product - Google Patents

Virtual character control method, apparatus, device, storage medium and program product

Info

Publication number
WO2023088012A1
WO2023088012A1 · PCT/CN2022/125747 · CN2022125747W
Authority
WO
WIPO (PCT)
Prior art keywords
target
virtual character
control
layout
position
Prior art date
Application number
PCT/CN2022/125747
Other languages
English (en)
French (fr)
Inventor
高昊
林琳
钱杉杉
梁皓辉
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to KR1020247009540A priority Critical patent/KR20240046595A/ko
Publication of WO2023088012A1 publication Critical patent/WO2023088012A1/zh
Priority to US18/214,306 priority patent/US20230330539A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 Processing input control signals involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F13/44 Processing input control signals involving timing of operations, e.g. performing an action within a time slot
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals involving aspects of the displayed game scene
    • A63F13/53 Controlling the output signals involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals involving additional visual information for prompting the player, e.g. by displaying a game menu
    • A63F13/537 Controlling the output signals involving additional visual information using indicators, e.g. showing the condition of a game character on screen
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Definitions

  • the embodiments of the present application relate to the technical field of virtual scenes, and in particular to a virtual character control method, apparatus, device, storage medium and program product.
  • the position of the virtual character in the virtual scene is usually adjustable.
  • a layout switch button is usually provided, through which the user can realize "one-key transposition" of the virtual characters in the virtual scene.
  • Embodiments of the present application provide a virtual character control method, apparatus, device, storage medium, and program product, which can improve the flexibility of position adjustment of virtual characters in a virtual scene.
  • the technical solution is as follows:
  • a method for controlling a virtual character comprising:
  • in response to receiving a trigger operation on the position control, the position of the target virtual character is adjusted based on a target adjustment mode; at least one of the target adjustment mode and the target virtual character is determined based on a user-defined operation; the target virtual character belongs to the at least one virtual character.
  • a virtual character control device comprising:
  • a screen display module configured to display a scene screen of a virtual scene, where at least one virtual character is included in the virtual scene
  • a control display module, configured to display a position control superimposed on the scene picture;
  • a position adjustment module, configured to adjust the position of the target virtual character based on a target adjustment mode in response to receiving a trigger operation on the position control; at least one of the target adjustment mode and the target virtual character is determined based on a user-defined operation; the target virtual character belongs to the at least one virtual character.
  • the position control includes at least two sub-controls;
  • the position adjustment module is configured to adjust the position of the target virtual character based on the adjustment mode corresponding to a target sub-control in response to receiving a trigger operation on the target sub-control, the target sub-control being any one of the at least two sub-controls.
  • the position adjustment module includes:
  • a state adjustment submodule, configured to adjust the selection state of the at least one virtual character to a selectable state in response to receiving a trigger operation on a first sub-control of the at least two sub-controls;
  • a character acquisition submodule, configured to acquire the virtual character selected by a character selection operation as the target virtual character in response to receiving the character selection operation;
  • a position switching submodule, configured to switch the position of the target virtual character from a first position to a second position in response to the end of the character selection operation; the positions of the target virtual character before and after the position switching are symmetric along a target axis.
  • the character selection operation includes at least one of: a continuous sliding operation on the scene picture, a range selection operation on the scene picture, or a click operation on a virtual character.
  • in response to the character selection operation being a continuous sliding operation on the scene picture, the target virtual character is the virtual character selected by the continuous sliding operation among the at least one virtual character;
  • in response to the character selection operation being a range selection operation, the target virtual character is a virtual character within the range determined by the range selection operation among the at least one virtual character;
  • in response to the character selection operation being a click operation, the target virtual character is the virtual character selected by the click operation among the at least one virtual character.
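The range selection described above can be sketched as a simple point-in-rectangle test: characters whose positions fall inside the dragged rectangle become the target virtual characters. This is an illustrative sketch only; the character records, field names, and coordinates below are assumptions, not the patent's actual data structures.

```python
# Illustrative sketch: determine target virtual characters from a range
# selection operation (a rectangle dragged on the scene picture).
characters = [
    {"name": "warrior", "x": 1.0, "y": 1.0},
    {"name": "archer",  "x": 4.0, "y": 2.0},
    {"name": "mage",    "x": 2.5, "y": 3.0},
]

def select_in_range(chars, top_left, bottom_right):
    """Return the characters whose positions fall inside the selected range."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    return [c for c in chars if x0 <= c["x"] <= x1 and y0 <= c["y"] <= y1]

targets = select_in_range(characters, (0.0, 0.0), (3.0, 3.5))
print([c["name"] for c in targets])  # warrior and mage fall inside the range
```

A continuous sliding operation or a click operation would feed the same selection step with a different hit test (e.g. distance to the slide path, or a single point).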
  • in response to the character selection operation being a continuous sliding operation on the scene picture, the position switching submodule is configured to switch the position of the target virtual character from the first position to the second position in response to interruption of the continuous sliding operation.
  • the device further includes:
  • a determination control display module configured to display a determination control in response to receiving a trigger operation on the first sub-control
  • the position switching submodule is configured to switch the position of the target virtual character from the first position to the second position in response to receiving a trigger operation based on the determination control.
  • the target axis includes any one of: the central axis of the scene picture, the central axis relative to the target virtual character, or an arbitrary axis determined based on a drawing operation by the user.
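The axis-symmetric position switching described above amounts to reflecting each selected character's first position across the target axis to obtain the second position. A minimal sketch, assuming a vertical target axis at `x = axis_x` (e.g. the central axis of the scene picture); the function name and coordinate convention are illustrative.

```python
# Sketch of position switching symmetric along a target axis: each selected
# character's first position is reflected across the vertical line x = axis_x.
def mirror_across_axis(positions, axis_x):
    """Reflect (x, y) positions across the vertical line x = axis_x."""
    return [(2 * axis_x - x, y) for (x, y) in positions]

first_positions = [(1.0, 2.0), (3.0, 0.0)]
second_positions = mirror_across_axis(first_positions, axis_x=2.0)
print(second_positions)  # [(3.0, 2.0), (1.0, 0.0)]
```

An axis drawn by the user at an arbitrary angle would use the general point-line reflection formula instead of this vertical-axis special case.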
  • the position adjustment module includes:
  • an area display submodule, configured to display a layout display area in response to receiving a trigger operation on a second sub-control of the at least two sub-controls, at least one position layout being displayed in the layout display area;
  • the position layout is determined based on the user's layout settings;
  • a position adjustment submodule, configured to adjust the position of the target virtual character based on a target position layout in response to receiving a selection operation on the target position layout; the target position layout is one of the at least one position layout.
  • the position adjustment submodule includes:
  • an attack range obtaining unit, configured to obtain the attack range of the target virtual character;
  • a position adjustment unit, configured to adjust the position of the target virtual character based on the attack range of the target virtual character and the target position layout.
  • the position adjustment submodule is configured to, in response to the number of positions corresponding to the target position layout being greater than the number of target virtual characters, adjust the positions of the target virtual characters to the positions in the target position layout.
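The slot-count check described above can be sketched as follows: a selected layout is applied only when it provides enough positions for the target characters, otherwise the current positions are kept. The data shapes are illustrative assumptions, and the sketch uses "at least as many slots" rather than the strictly-greater-than wording above.

```python
# Sketch of applying a selected position layout; proceeds only when the
# layout has enough slots for all target characters.
def apply_layout(targets, layout_slots):
    if len(layout_slots) < len(targets):
        return False  # too few slots: keep current positions unchanged
    for character, slot in zip(targets, layout_slots):
        character["pos"] = slot
    return True

team = [{"name": "mage", "pos": (0, 0)}, {"name": "archer", "pos": (1, 0)}]
ok = apply_layout(team, [(2, 1), (0, 2), (3, 3)])
print(ok, [c["pos"] for c in team])  # True [(2, 1), (0, 2)]
```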
  • the device further includes:
  • the planning interface display module is used to display the layout planning interface
  • a layout generating module, configured to generate the position layout based on the positions determined in the layout planning interface;
  • a layout adding module configured to add the station layout to the layout display area.
  • the layout display area includes a layout adding control
  • the planning interface display module is configured to display the layout planning interface in response to receiving a trigger operation on the layout adding control.
  • the control display module is configured to display the position control superimposed on the scene picture in response to receiving an activation operation;
  • the activation operation includes at least one of: a target operation performed on a target area in the scene picture, or a trigger operation on an activation control.
  • the target operation includes a long press operation on the target area.
  • the position adjustment module is configured to switch the position of the at least one virtual character from the first position to the second position; the positions of the at least one virtual character before and after the position switching are symmetric along the target axis.
  • In another aspect, a computer device includes a processor and a memory; the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the above virtual character control method.
  • a computer-readable storage medium wherein at least one computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the above virtual character control method.
  • a computer program product or computer program includes at least one computer program, and the computer program is loaded and executed by a processor to implement the virtual character control method provided in the various optional implementations above.
  • In the solution shown in the embodiments of the present application, when the computer device receives a trigger operation based on the position control, it can uniformly adjust the positions of the virtual characters in the virtual scene, or uniformly adjust the positions of the virtual characters selected by the user through a custom adjustment method. In this way, when adjusting the positions of the virtual characters in the virtual scene, the collective adjustment of the positions of multiple virtual characters can be realized based on the position control, while the flexibility of adjusting the positions of the virtual characters in the virtual scene is ensured.
  • Fig. 1 is a schematic diagram of a chess game battle screen provided by an exemplary embodiment of the present application
  • Fig. 2 shows a computer system block diagram provided by an exemplary embodiment of the present application
  • Fig. 3 shows a schematic diagram of a state synchronization technology shown in an exemplary embodiment of the present application
  • Fig. 4 shows a schematic diagram of a frame synchronization technology shown in an exemplary embodiment of the present application
  • Fig. 5 shows a flowchart of a virtual character control method shown in an exemplary embodiment of the present application
  • Fig. 6 shows a schematic diagram of a scene picture shown in an exemplary embodiment of the present application
  • Fig. 7 shows a flowchart of a virtual character control method shown in an exemplary embodiment of the present application
  • Fig. 8 shows a schematic diagram of a scene picture shown in an exemplary embodiment of the present application
  • Fig. 9 shows a schematic diagram of scenes before and after mirroring a virtual character shown in an exemplary embodiment of the present application.
  • Fig. 10 shows a schematic diagram of determining a target virtual character based on continuous sliding operations according to an exemplary embodiment of the present application
  • Fig. 11 shows a schematic diagram of determining a target virtual character through a range selection operation according to an exemplary embodiment of the present application
  • Fig. 12 shows a schematic diagram of a scene picture shown in an exemplary embodiment of the present application
  • Fig. 13 shows a schematic diagram of scenes before and after mirroring a virtual character shown in an exemplary embodiment of the present application
  • Fig. 14 shows a schematic diagram of an added position layout shown in an exemplary embodiment of the present application
  • Fig. 15 shows a schematic diagram of a layout display area shown in an exemplary embodiment of the present application
  • Fig. 16 shows a flowchart of a virtual character control method shown in an exemplary embodiment of the present application
  • Fig. 17 shows a block diagram of a virtual character control device shown in an exemplary embodiment of the present application
  • Fig. 18 is a structural block diagram of a computer device according to an exemplary embodiment
  • Fig. 19 is a structural block diagram of a computer device according to an exemplary embodiment.
  • the present application provides a method for controlling a virtual character, which can improve the efficiency of adjusting the position of the virtual character. For ease of understanding, several terms involved in this application are explained below.
  • Virtual scene refers to the virtual scene displayed (or provided) when the application program is running on the terminal.
  • the virtual scene may be a simulation environment scene of the real world, or a semi-simulation and semi-fictional three-dimensional environment scene, or a purely fictional three-dimensional environment scene.
  • the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene.
  • In the following embodiments, the virtual scene being a three-dimensional virtual scene is taken as an example for illustration, but this is not limited thereto.
  • Auto chess: refers to a chess game in which the "chess pieces" are laid out in advance before a battle, and during the battle, the pieces automatically play against each other according to the pre-arranged layout.
  • "Chess pieces" are usually represented by virtual characters. During the battle, the virtual characters automatically release various skills to fight. The battle usually adopts a turn-based system. When all the chess pieces of one party are killed (that is, their life values are reduced to zero), that party loses the battle.
  • both parties are also provided with a virtual character representing the user participating in the battle.
  • This virtual character cannot be used as a "chess piece" to move to the battle area or the preparation area.
  • the virtual character is also provided with a life value (or health), and the life value of the virtual character is correspondingly reduced (lost the game) or unchanged (won the game) according to the result of each game.
  • When the life value of the virtual character is reduced to zero, the user corresponding to the virtual character exits the battle, and the remaining users continue to fight.
  • Chessboard refers to the area used to prepare for and conduct battles in the battle interface of the auto chess game.
  • the chessboard can be any one of a two-dimensional virtual chessboard, a 2.5-dimensional virtual chessboard, and a three-dimensional virtual chessboard, and this application does not limit this.
  • the board is divided into a battle area and a preparation area.
  • the battle area includes several battle chess grids of the same size, and the battle chess grid is used to place the chess pieces for the battle in the battle process;
  • the preparation pieces will not participate in the battle during the battle, and can be dragged and placed in the battle area during the preparation phase.
  • In the embodiments of the present application, the chess pieces in the game including both the chess pieces in the battle area and the chess pieces in the preparation area is taken as an example for illustration.
  • the battle area includes n (rows) × m (columns) battle grids; schematically, n is an integer multiple of 2, and two adjacent rows of grids are aligned, or two adjacent rows of grids are staggered.
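The n×m grid just described can be sketched by computing cell centers, shifting every other row by half a cell when the rows are staggered (as in the hexagonal-cell chessboard of FIG. 1). Cell sizes and the coordinate convention are illustrative assumptions.

```python
# Sketch of an n (rows) x m (columns) battle grid; with staggered rows,
# odd rows are offset by half a cell width.
def grid_centers(n_rows, m_cols, cell=1.0, staggered=True):
    centers = []
    for r in range(n_rows):
        offset = cell / 2 if (staggered and r % 2 == 1) else 0.0
        centers.append([(c * cell + offset, r * cell) for c in range(m_cols)])
    return centers

rows = grid_centers(2, 3)
print(rows[0])  # [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(rows[1])  # [(0.5, 1.0), (1.5, 1.0), (2.5, 1.0)]
```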
  • the battle area is divided into two parts according to the rows, which are the own battle area and the enemy battle area.
  • the users participating in the battle are located on the upper and lower sides of the battle interface, and in the preparation stage, users can only place chess pieces in their own battle area.
  • the battle area is divided into two parts according to columns, namely one's own battle area and the enemy battle area, and the users participating in the battle are respectively located on the left and right sides of the battle interface.
  • the shape of the chess grid may be any one of a square, rectangle, circle, or hexagon, and the embodiment of the present application does not limit the shape of the chess grid.
  • In some embodiments, the battle grid is always displayed on the chessboard. In other embodiments, the battle grid is displayed when the user lays out the battle pieces, and the display of the battle grid is canceled when the battle pieces are placed in the grid.
  • FIG. 1 is a schematic diagram of a chess game battle screen provided by an exemplary embodiment of the present application. As shown in FIG. 1, the battle area 111 includes 3×7 battle grids, the shape of the grids is hexagonal, two adjacent rows of grids are staggered, and the preparation area 112 includes 9 preparation grids.
  • Virtual characters in the auto chess game refer to the chess pieces placed on the chessboard in the auto chess game, including the chess piece roles in battle and the candidate chess roles in the candidate chess role list (i.e., the candidate chess roles in the virtual store); the chess piece roles in battle include the chess piece roles in the battle area and the chess piece roles in the preparation area.
  • the virtual character can be a virtual chess piece, a virtual person, a virtual animal, an anime character, etc., and the virtual character can be displayed using a three-dimensional model.
  • Candidate chess pieces can be combined with the user's existing chess pieces to trigger the battle effect, and can also participate in the chess game as a chess piece alone.
  • the positions of the playing chess pieces on the chessboard can be changed.
  • the user can adjust the positions of the chess pieces in the battle area, adjust the positions of the chess pieces in the preparation area, move chess pieces from the battle area to the preparation area (when there is an idle preparation grid in the preparation area), or move chess pieces from the preparation area to the battle area. It should be noted that, during the battle phase, the positions of chess pieces in the preparation area can also be adjusted.
  • In the battle phase, the position adjustment of the chess pieces in the battle area is different from that in the preparation phase.
  • During the battle, a chess piece can automatically move from its own battle area to the enemy battle area and attack the enemy's chess pieces.
  • chess piece characters can only be set in the battle area of one's own side, and chess piece roles set by the enemy are invisible on the chessboard.
  • the user can use virtual currency to purchase chess pieces in the preparation stage.
  • a virtual character is used to represent a user participating in a battle
  • this virtual character can be a virtual person, a virtual animal, an anime character, etc., and the following embodiments refer to this type of virtual character as a player virtual character or user avatar.
  • A first battle chess piece role 111a, a second battle chess piece role 111b, and a third battle chess piece role 111c are displayed in the battle area 111, and a first preparation chess piece role 112a, a second preparation chess piece role 112b, and a third preparation chess piece role 112c are displayed in the preparation area 112.
  • a player avatar 113 is displayed next to the battle area and the preparation area.
  • each chess piece role in the auto chess game has its own attributes, which include at least two of the following: the camp to which the chess piece belongs (such as A alliance, B alliance, neutral faction, etc.), the occupation of the chess piece (such as warrior, archer, mage, assassin, guard, swordsman, gunner, fighter, etc.), the attack type of the chess piece (such as magic, physical, etc.), and the identity of the chess piece (such as nobleman, demon, elf, etc.); the embodiment of this application does not limit the specific types of attributes.
  • each chess piece role has attributes of at least two dimensions, and the equipment carried by the chess piece role can improve the attributes of the chess piece role.
  • When the number of chess pieces with a certain attribute in the battle area reaches a threshold, a bonus effect can be obtained for the chess pieces with that attribute or for all chess pieces in the battle area.
  • For example, all the chess pieces get a 10% defense bonus;
  • when there are 4 chess pieces with the fighter attribute in the battle area, all the fighter chess pieces get a 20% defense bonus;
  • when there are 3 chess pieces with the elf attribute in the battle area, all the battle chess pieces get a 20% dodge probability bonus.
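The attribute-bonus rule illustrated above can be sketched as counting pieces per attribute and activating an effect once a threshold is met. The thresholds below are taken from the examples (4 fighters, 3 elves); the table layout and field names are assumptions for illustration.

```python
from collections import Counter

# Hypothetical bonus table drawn from the examples above:
# (trait, threshold) -> (stat, bonus)
TRAIT_BONUSES = {
    ("fighter", 4): ("defense", 0.20),
    ("elf", 3): ("dodge", 0.20),
}

def active_bonuses(battle_pieces):
    """Count traits among pieces in the battle area; return triggered bonuses."""
    counts = Counter(t for piece in battle_pieces for t in piece["traits"])
    return {
        stat: value
        for (trait, threshold), (stat, value) in TRAIT_BONUSES.items()
        if counts[trait] >= threshold
    }

board = [{"traits": ["fighter"]}] * 4 + [{"traits": ["elf"]}] * 2
print(active_bonuses(board))  # {'defense': 0.2} -- only the fighter threshold met
```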
  • the virtual character control method shown in this application can be applied to scenes that require virtual characters to perform position control, such as the above-mentioned auto chess game scene, card game scene, etc.
  • the auto chess game scene is taken as an example to illustrate the virtual character control method provided by this application.
  • Fig. 2 shows a block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system includes: a first terminal 120 , a server 140 and a second terminal 160 .
  • the first terminal 120 has an auto chess game application installed and running.
  • the first terminal 120 is the terminal used by the first user.
  • The first user uses the first terminal 120 to lay out the chess pieces in the battle area of the chessboard during the preparation stage of the game; during the battle, the chess pieces automatically fight according to their attributes, skills, and layout.
  • the first terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • the server 140 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 140 includes a processor 144 and a memory 142
  • the memory 142 includes a receiving module 1421 , a control module 1422 and a sending module 1423 .
  • the server 140 is used to provide background services for auto chess game applications, such as providing image rendering services for auto chess games.
  • the receiving module 1421 is used to receive the layout information of the chess pieces sent by the client;
  • the control module 1422 is used to control the chess pieces to automatically fight according to the layout information of the chess pieces;
  • the sending module 1423 is used to send the result of the game to the client.
  • the server 140 undertakes the main calculation work, and the first terminal 120 and the second terminal 160 undertake the secondary calculation work; or, the server 140 undertakes the secondary calculation work, and the first terminal 120 and the second terminal 160 undertake the main calculation work; Alternatively, the server 140, the first terminal 120, and the second terminal 160 use a distributed computing architecture to perform collaborative computing.
  • the server 140 may use a synchronization technology to make the screen performances of multiple clients consistent.
  • the synchronization technology adopted by the server 140 includes: state synchronization technology or frame synchronization technology.
  • FIG. 3 shows a schematic diagram of a state synchronization technology shown in an exemplary embodiment of the present application.
  • the combat logic runs in the server 140.
  • the server 140 sends a status synchronization result to all clients, such as clients 1 to 10.
  • the client 1 sends a request to the server 140, the request carrying the chess pieces participating in the chess game and the layout of the chess pieces, and the server 140 is used to generate the states of the chess pieces during the game according to the chess pieces and the chess piece layout.
  • the server 140 sends the state of the pawn role to the client 1 during the battle.
  • the server 140 sends the data of virtual props sent to client 1 to all clients, and all clients update local data and interface presentation according to the data.
  • FIG. 4 shows a schematic diagram of a frame synchronization technology shown in an exemplary embodiment of the present application.
  • the combat logic runs in each client.
  • Each client sends a frame synchronization request to the server, and the frame synchronization request carries local data changes of the client.
  • the server 140 forwards the frame synchronization request to all clients.
  • After each client receives the frame synchronization request, it processes the frame synchronization request according to its local battle logic, and updates the local data and interface presentation.
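The frame-synchronization flow above (a client sends its local data change; the server only forwards it to every client; each client applies it with its own battle logic) can be sketched as follows. This is an illustrative model only; the class and method names (`FrameSyncServer`, `Client`, `on_frame_request`) are not from the application:

```python
# Minimal sketch of the frame-synchronization relay described above.
# All names and data structures are illustrative assumptions.

class Client:
    def __init__(self, name):
        self.name = name
        self.local_data = {}

    def handle_frame(self, change):
        # Each client processes the forwarded frame request with its
        # own local battle logic and updates local data / interface.
        self.local_data.update(change)

class FrameSyncServer:
    def __init__(self):
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def on_frame_request(self, change):
        # The server runs no battle logic of its own; it only forwards
        # the frame synchronization request to all clients.
        for client in self.clients:
            client.handle_frame(change)

server = FrameSyncServer()
c1, c2 = Client("client1"), Client("client2")
server.register(c1)
server.register(c2)
server.on_frame_request({"piece_7_hp": 120})
```

Because every client applies the same forwarded changes with the same logic, the screen performances of all clients stay consistent, which is the point of the technique.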
  • the second terminal 160 is connected to the server 140 through a wireless network or a wired network.
  • the second terminal 160 has an auto chess game application installed and running.
  • the second terminal 160 is the terminal used by the second user.
  • the second user uses the second terminal 160 to lay out the chess piece roles in the battle area of the chessboard during the preparation stage of the game; during the battle stage, the chess pieces are automatically controlled to fight according to their attributes, skills, and layouts.
  • the pawn roles laid out by the first user through the first terminal 120 and the second user through the second terminal 160 are located in different areas on the same chessboard, that is, the first user and the second user are in the same game.
  • the application programs installed on the first terminal 120 and the second terminal 160 are the same, or the application programs installed on the two terminals are the same type of application programs on different control system platforms.
  • the first terminal 120 may generally refer to one of the multiple terminals
  • the second terminal 160 may generally refer to one of the multiple terminals. This embodiment only uses the first terminal 120 and the second terminal 160 as an example for illustration.
  • the device types of the first terminal 120 and the second terminal 160 are the same or different, and the device types include: at least one of a smart phone, a tablet computer, an e-book reader, a digital player, a laptop computer and a desktop computer.
  • the number of the foregoing terminals may be more or less.
  • the above-mentioned terminal may be only one (that is, the user plays a game against an artificial intelligence), or the number of the above-mentioned terminals may be 8 (1v1v1v1v1v1v1v1, where 8 users play and are eliminated in rounds until a winner is determined), or an even larger number.
  • the embodiment of the present application does not limit the number of terminals and device types.
  • Fig. 5 shows a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application.
  • the method can be executed by a computer device, and the computer device can be implemented as a terminal or a server.
  • The virtual character control method may include the following steps:
  • Step 510 displaying a scene picture of a virtual scene including at least one virtual character.
  • the target area of the virtual scene contains at least one virtual character.
  • the virtual scene may include a battle area and a battle preparation area
  • the target area may be a battle area in the virtual scene
  • the at least one virtual character is a virtual character in the battle area.
  • the at least one virtual character is a virtual character controlled by the current user.
  • Step 520, superimposing and displaying the station control control on the upper layer of the scene picture.
  • The at least one virtual character in the target area can be in either the game stage or the preparation stage of the virtual scene. In the preparation stage, the user can adjust the position of the at least one virtual character in the virtual scene so as to deploy the battle situation. Therefore, in a possible implementation, the station control control is superimposed on the upper layer of the scene picture, so that the user can quickly perform custom deployment of the stations of the virtual characters in the virtual scene through the station control control.
  • Fig. 6 shows a schematic diagram of a scene picture shown in an exemplary embodiment of the present application. As shown in Fig. 6, a station control control 610 is superimposed on the upper layer of the scene picture. The station control control can be a single control, or a collection of multiple sub-controls each having a station control function.
  • The station control control shown in Figure 6 contains at least one sub-control, and each sub-control is used to trigger a different custom station control. Optionally, when the station control control is a single control, the custom station control corresponding to the station control control can be changed based on the user's operation mode; schematically, different custom station controls can be determined based on the number of consecutive clicks on the station control control.
  • Step 530, in response to receiving a trigger operation on the station control control, adjusting the station position of the target virtual character based on the target adjustment method; at least one of the target adjustment method and the target virtual character is determined based on a user-defined operation; the target virtual character belongs to the at least one virtual character.
  • The target virtual character may be all or part of the virtual characters in the target area of the virtual scene, determined based on the user-defined operation; alternatively, the target virtual character may be a default set of virtual characters corresponding to the target adjustment method determined based on the user-defined operation. For example, the target adjustment method may default to adjusting the positions of all virtual characters in the target area; that is to say, the system defaults the above-mentioned target virtual characters to be all of the at least one virtual character.
  • The user can customize the target virtual character, including its position and quantity, with the target adjustment method being the default adjustment method; or the user can customize the target adjustment method, with the target virtual character being part or all of the default virtual characters; or the user can customize both the target virtual character and its target adjustment method.
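The combinations just described, default versus user-defined target characters and adjustment method, can be sketched as a small dispatcher. All names (`resolve_adjustment`, `DEFAULT_METHOD`) are illustrative assumptions, not from the application:

```python
# Illustrative sketch of combining default and user-defined parts of
# a station adjustment; names and defaults are assumptions.

DEFAULT_METHOD = "mirror"  # default adjustment described in the text

def resolve_adjustment(all_characters, custom_characters=None, custom_method=None):
    """Return (target_characters, method) for a station adjustment.

    Either part may come from a user-defined operation; otherwise the
    defaults described in the text apply (all characters in the target
    area, default mirror adjustment).
    """
    targets = custom_characters if custom_characters is not None else list(all_characters)
    method = custom_method if custom_method is not None else DEFAULT_METHOD
    return targets, method

chars = ["A", "B", "C"]
# Default characters, default method:
assert resolve_adjustment(chars) == (["A", "B", "C"], "mirror")
# Custom characters, default method:
assert resolve_adjustment(chars, custom_characters=["B"]) == (["B"], "mirror")
# Default characters, custom method:
assert resolve_adjustment(chars, custom_method="layout-1") == (["A", "B", "C"], "layout-1")
```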
  • the custom operation may include at least one of a character selection operation and a layout setting operation.
  • To sum up, the virtual character control method provided by the embodiment of the present application provides a station control control, so that when the computer device receives a trigger operation based on the station control control, it can uniformly adjust the positions of virtual characters selected through a user-defined operation, or uniformly adjust the positions of virtual characters through a user-defined adjustment method, or both. In this way, the collective adjustment of the positions of multiple virtual characters can be realized based on the station control control, while ensuring the flexibility of position adjustment of the virtual characters in the virtual scene.
  • Fig. 7 shows a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application.
  • the method can be executed by a computer device, and the computer device can be implemented as a terminal or a server.
  • The virtual character control method may include the following steps:
  • Step 710 displaying a scene picture of a virtual scene including at least one virtual character.
  • The virtual scene can be a scene in which virtual characters controlled by two operating users play against each other; each operating user has his or her own battle area and preparation area. When the first user controls virtual characters in the virtual scene, only the virtual characters corresponding to the first user can be controlled; in other words, the at least one virtual character contained in the target area of the virtual scene may be virtual characters belonging to the same faction.
  • the target area may be a battle area in the current camp.
  • Step 720, superimposing and displaying the station control control on the upper layer of the scene picture.
  • In one possible implementation, the station control control is superimposed when the scene picture is displayed; alternatively, in another possible implementation, the station control control is not superimposed when the scene picture is displayed, but is superimposed on the upper layer of the scene picture only after an activation operation is received.
  • the activation operation includes at least one of: a target operation performed on the target area in the scene picture, and a trigger operation on an activation control.
  • the target operation may include a long-press operation on the target area; optionally, the target operation may also include a double-tap operation on the target area, or a triple-tap operation on the target area, or a sliding operation on the target area, and so on.
  • the activation control can be a control set at any position on the upper layer of the scene picture, and is used to call out the station control control when a trigger operation is received.
  • FIG. 8 shows an example embodiment of the present application.
  • Step 730 in response to receiving a trigger operation on the target sub-control, adjust the position of the target virtual character based on the adjustment method corresponding to the target sub-control, where the target sub-control is any one of the at least two sub-controls.
  • In a possible implementation, the first sub-control among the at least two sub-controls can trigger a character selection operation, so as to customize which virtual characters undergo position adjustment and obtain the target virtual character; after that, the position of the target virtual character is adjusted in the default position adjustment mode.
  • The default position adjustment method may refer to performing mirror switching on the position of the target virtual character, that is, adjusting the positions symmetrically along a mirror line.
  • the second sub-control among the at least two sub-controls can trigger a layout setting operation to customize the position adjustment method of the virtual characters that need position adjustment, wherein the virtual characters that need position adjustment can be the default-set virtual characters, such as all virtual characters in the target area.
  • The fourth sub-control among the at least two sub-controls can trigger the layout setting operation after triggering the character selection operation, so as to change the positions of the customized virtual characters to a custom-set layout mode. This improves the user's autonomy in the overall adjustment of virtual character positions, which in turn improves the flexibility of changing the positions of virtual characters.
  • the process of adjusting the position of the virtual character based on the first sub-control may be implemented as the following process:
  • the position of the target virtual character is switched from the first position to the second position; the positions of the target virtual character before and after position switching are symmetrical along the target axis.
  • Switching the position of the target virtual character from the first position to the second position, with the positions before and after the switch symmetrical along the target axis, can be called mirror switching. That is to say, the first sub-control is used to control at least one virtual character in the target area, at the position determined by the character selection operation, to perform mirror switching: the first position indicates the position of the virtual character before mirror switching, and the second position indicates the position after mirror switching. The target virtual character may be all or part of the at least one virtual character.
  • The target axis may be one of: the central axis of the scene picture, the central axis relative to the target virtual character, or any axis determined based on the user's drawing operation.
  • The target axis can be set by the developer, or preset or drawn by the user. Alternatively, in a possible implementation, in response to the end of the character selection operation, a mirror line setting interface is displayed; the mirror line setting interface includes at least two mirror line selection controls corresponding to different mirror line setting modes, and based on the user's selection operation, the target axis corresponding to one of the mirror line selection controls is determined as the mirror line. The target axes corresponding to the at least two mirror line selection controls include at least two of: the central axis of the scene picture, the central axis relative to the target virtual character, or any axis determined based on the user's drawing operation.
  • Fig. 9 shows a schematic diagram of the scene picture before and after mirroring of the virtual characters shown in an exemplary embodiment of the present application, taking as an example a mirroring line that is a target axis determined by the user's drawing operation. As shown in Fig. 9, the mirroring line 910 is an axis drawn by the user based on actual needs, and the target axis is used as the mirror line to perform mirror switching on the positions of the target virtual characters in the target area; here the target virtual characters determined based on the character selection operation are all virtual characters in the target area:
  • switch virtual character 921 from the current position to position 931
  • switch virtual character 922 from the current position to position 932
  • switch virtual character 923 to position 933 from the current position.
  • If the position of a virtual character after mirroring based on the mirroring line exceeds the target area, the position of that virtual character is not adjusted; the positions of virtual characters that do not exceed the target area after mirroring are adjusted.
  • In response to a second virtual character existing at the second position after mirroring a first virtual character among the target virtual characters, the position of the first virtual character can be adjusted in any of the following ways, for example: the position of the first virtual character is not adjusted.
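As a hedged sketch of the mirror switching described above, the following reflects selected characters' positions across the central vertical axis of a 3x7 target area (the grid size mentioned later for the layout planning interface), and leaves a character unadjusted when its mirrored cell is occupied by a character that is not being adjusted, which is one of the options just described. The function and data names are illustrative:

```python
# Sketch of mirror switching across the central axis of a 3x7 target
# area (columns 0..6). Names, grid size, and conflict handling are
# illustrative; targets mirroring into each other are not resolved here.

COLS = 7

def mirror_switch(positions, targets):
    """positions: dict name -> (row, col); targets: names to mirror.

    Each target's column is reflected across the central axis
    (col -> COLS - 1 - col). If the mirrored cell is occupied by a
    character that is not being adjusted, the target keeps its
    position, matching one of the options described above.
    """
    occupied = {pos for name, pos in positions.items() if name not in targets}
    new_positions = dict(positions)
    for name in targets:
        row, col = positions[name]
        mirrored = (row, COLS - 1 - col)
        if mirrored in occupied:
            continue  # leave the first virtual character unadjusted
        new_positions[name] = mirrored
    return new_positions

pos = {"a": (0, 0), "b": (1, 2), "c": (0, 6)}
# "a" would mirror onto "c" (not selected), so it stays put;
# "b" mirrors from column 2 to column 4.
print(mirror_switch(pos, {"a", "b"}))
```

The same helper covers the third sub-control's "mirror everything" case by passing all character names as `targets`.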
  • the character selection operation includes at least one of: a continuous sliding operation based on the scene picture, or a range selection operation based on the scene picture, or a click operation on the virtual character.
  • In response to the character selection operation being a continuous sliding operation based on the scene picture, the target virtual character is the virtual character selected by the continuous sliding operation among the at least one virtual character. The continuous sliding operation may be a single uninterrupted movement operation on the terminal screen, or a single movement operation performed with a control device while the user holds down a target physical button (such as the left mouse button).
  • Figure 10 shows a schematic diagram of determining the target virtual character based on a continuous sliding operation, shown in an exemplary embodiment of the present application. As shown in FIG. 10, the selection state of the virtual characters is adjusted to a selectable state; the user can take any virtual character in the target area as the starting point of the operation and complete the continuous sliding operation without interrupting it (for example, without lifting the finger, or without releasing the target physical button). The virtual characters through which the continuous sliding operation passes are acquired as the target virtual characters 1020.
  • In response to the character selection operation being a range selection operation based on the scene picture, the target virtual character is a virtual character within the range determined by the range selection operation among the at least one virtual character. Schematically, the range selection operation based on the scene picture may be a frame selection operation, and the frame selection range corresponding to the range selection operation may be larger than the position range of the target virtual characters in the target area.
  • Schematically, FIG. 11 shows a schematic diagram of determining the target virtual character based on a range selection operation. As shown in Figure 11, the user determines a larger frame selection range 1110 through the range selection operation on the scene picture, and the computer device acquires the virtual characters within the frame selection range 1110 as the target virtual characters 1120. The shape of the frame selection range can be any of a square, rectangle, ellipse, circle, and so on, which is not limited in this application.
  • the target virtual character is a virtual character selected based on the click operation among at least one virtual character.
  • the user can realize decentralized selection of virtual characters without considering the selection order.
  • the above three ways of determining the target virtual character can be used separately, or can also be used in combination of two, or can also be used in combination of the three ways, which is not limited in this application.
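The three selection ways above can be sketched as set-building helpers: a frame (range) selection keeps characters whose cells fall inside a rectangle, a continuous slide keeps characters whose cells the slide path passes through, and click selections can be unioned in, matching the statement that the ways may be used separately or in combination. Names and grid coordinates are illustrative:

```python
# Illustrative helpers for the three character selection ways.
# Coordinates are (row, col) grid cells; all names are assumptions.

def select_by_range(positions, top_left, bottom_right):
    """Frame selection: characters whose cell falls inside the box."""
    (r0, c0), (r1, c1) = top_left, bottom_right
    return {n for n, (r, c) in positions.items() if r0 <= r <= r1 and c0 <= c <= c1}

def select_by_path(positions, path):
    """Continuous slide: characters whose cell the slide passes through."""
    cells = set(path)
    return {n for n, p in positions.items() if p in cells}

pos = {"a": (0, 0), "b": (1, 2), "c": (2, 6)}
# A frame covering rows 0-1, columns 0-3 selects a and b:
assert select_by_range(pos, (0, 0), (1, 3)) == {"a", "b"}
# A slide passing through (2, 6) and (1, 2) selects b and c, and a
# click selection can then be unioned in, combining the ways:
targets = select_by_path(pos, [(2, 6), (2, 5), (1, 2)]) | {"a"}
assert targets == {"a", "b", "c"}
```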
  • the operation end nodes corresponding to different role selection operations may be the same or different.
  • When the character selection operation is a continuous sliding operation based on the scene picture, the position of the target virtual character is switched from the first position to the second position in response to the end of that operation. For example, when the continuous sliding operation is interrupted, the computer device automatically takes the target axis as the mirror line and performs mirror switching on the position of the target virtual character. The character selection operation ends, for example, when the user lifts the finger performing the continuous sliding operation, or releases the target physical button when performing the operation through a physical device.
  • In a possible implementation, a determination control is displayed in response to receiving a trigger operation on the first sub-control. In this case, in response to receiving a trigger operation on the determination control, the position of the target virtual character is switched from the first position to the second position.
  • FIG. 12 shows a schematic diagram of a scene picture shown in an exemplary embodiment of the present application, taking as an example determining the target virtual character based on a frame selection operation on the scene picture. As shown in FIG. 12, the target virtual character is determined in at least one of the above three ways; when the selection operation on the determination control 1210 is received, the selection of the virtual characters is determined to be completed, and the position of the determined target virtual character is mirror-switched based on the target axis.
  • the determination control is used to trigger the mirroring operation based on the target axis, the target axis is a preset or default mirroring line; or, the determination control is used to trigger entering the mirroring line setting interface, based on The mirror line set in the mirror line setting interface mirrors the target avatar.
  • the position of the target virtual character is switched from position 1220 to position 1230 .
  • the at least one sub-control also includes a third sub-control, which is used to mirror and switch the positions of all virtual characters in the target area.
  • the process of adjusting the position of the virtual character can be implemented as the following process:
  • the position of the at least one virtual character is switched from the first position to the second position; the positions of the at least one virtual character before and after the position switching are symmetrical along the target axis.
  • In the case where the mirroring line 1330 is the central axis of the scene picture, the position of the target avatar is mirror-switched based on the mirroring line 1330 to obtain the target avatar 1340 after the position change.
  • Alternatively, the mirror line setting interface can be entered to set the mirror line based on the user's selection operation, and the position of the target avatar is mirror-switched based on that mirror line.
  • the process of adjusting the position of the virtual character based on the second sub-control may be implemented as the following process:
  • the station of the target virtual character is adjusted based on the target station layout; the target station layout is one of at least one station layout.
  • The at least one station layout contained in the layout display area may be preset by the user before controlling the virtual character to enter the virtual scene, or may be added by the user after controlling the virtual character to enter the virtual scene, or may include both layouts preset before entering the virtual scene and layouts added afterwards.
  • FIG. 14 shows a schematic diagram of an added station layout shown in an exemplary embodiment of the present application.
  • A layout planning interface 1420 is displayed; the layout planning interface 1420 includes all station points in the target area, such as a 3×7 chess grid. The user can select station points in the layout planning interface to form a new station layout; in response to receiving a selection operation on the layout generation control 1430 displayed in the layout planning interface, it is determined to generate a new station layout 1440, and the station layout is added to the layout display area 1450.
  • a layout adding control may be displayed in the layout display area, so that the user can re-enter the layout planning interface after adding a station layout once, and add the station layout again.
  • The maximum number of stations that can be planned in the layout planning interface is equal to the maximum number of virtual characters that can be added to the target area of the virtual scene, or equal to the maximum number of virtual characters that can currently be added to the target area. For example, the number of virtual characters that can be added to the target area increases as the user level increases: if the maximum number of virtual characters that can be added to the target area is 9, while the maximum number that can be added at the user's current level is 5, then the maximum planned number of the layout planning interface can be set to 9 to plan a final-version station layout, or set to 5 to plan a station layout suitable for the current game.
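As an illustrative sketch of the two caps just described, a planned layout can be validated against either the area's absolute maximum (9 in the example) or the level-based cap (5 in the example). The function name, parameters, and duplicate check are assumptions for illustration:

```python
# Sketch of validating a planned station layout against the two caps
# described above. The example numbers (9 and 5) come from the text;
# all names are illustrative.

def validate_layout(stations, area_max=9, level_cap=5, use_level_cap=True):
    """stations: iterable of (row, col) cells chosen in the planning UI."""
    stations = list(stations)
    cells = set(stations)
    if len(cells) != len(stations):
        return False  # duplicate station points are not allowed
    limit = level_cap if use_level_cap else area_max
    return len(cells) <= limit

assert validate_layout([(0, 0), (0, 1), (1, 3)])                 # 3 <= 5
assert not validate_layout([(0, c) for c in range(6)])           # 6 > level cap 5
assert validate_layout([(0, c) for c in range(6)],
                       use_level_cap=False)                      # 6 <= area max 9
```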
  • In another possible implementation, the layout planning interface is displayed after receiving the user's touch operation on a layout adding control displayed in the layout display area; that is, in response to receiving the trigger operation on the second sub-control, the layout display area is displayed, and the layout display area displays the layout adding control. In response to receiving a trigger operation on the layout adding control, the layout planning interface is displayed.
  • FIG. 15 shows a schematic diagram of a layout display area shown in an exemplary embodiment of the present application, as shown in FIG. 15
  • the layout display area 1520 is displayed, and the layout display area 1520 contains a layout addition control 1530.
  • through the layout addition control, a station layout can be added or adjusted again.
  • the above two ways of opening the layout planning interface can be realized through different sub-controls in at least one sub-control, for example, the fifth sub-control is used to directly open the layout planning interface when a trigger operation is received, and the sixth sub-control is used to First open the layout display area containing the layout add controls.
  • The station layouts displayed in the layout display area may also include historical station layouts saved based on the user's save operation; a historical station layout may be a station layout previously used by the user in the corresponding target area, or a station layout used in the target area by another user whose match the user watched in spectating mode.
  • a station layout saving control can be set in the scene screen to record the station layout composed of the stations of the target avatar included in the current target area when a selection operation is received, and save it to the layout In the display area, it is used for the user to choose in the next game.
  • The attack range of each virtual character can be used as the basis for the change; that is to say, based on the target station layout, the process of changing the stations of the target virtual characters can include: changing the positions of the target virtual characters according to their attack ranges.
  • the avatar with a small attack range is in a position adjacent to the enemy battle area in the target station layout, and the avatar with a large attack range is in a position far away from the enemy battle area in the target station layout.
  • In another possible implementation, the positions of the target virtual characters can be changed according to the character attributes of the virtual characters, where the character attributes include occupational attributes, life value attributes, virtual character levels, and so on. For example, virtual characters whose occupational attribute is fighter are set at stations in the target station layout adjacent to the enemy's battle area, and virtual characters whose occupational attribute is shooter are set at stations far away from the enemy's battle area; or, virtual characters with a higher upper limit of life value are set at stations adjacent to the enemy's battle area, and those with a lower upper limit at stations far away from it; or, virtual characters with a higher level are set at stations adjacent to the enemy's battle area, and those with a lower level at stations far away from it.
  • The bases for setting the positions of virtual characters provided in the embodiment of the present application are only illustrative; relevant personnel can also set the positions of the target virtual characters randomly, or the basis for setting the positions can be user-defined. This application does not limit the basis for setting the positions of virtual characters.
  • When adjusting the station layout of the target virtual characters based on the target station layout, the stations of the target virtual characters are adjusted to the stations in the target station layout according to the target station order.
  • The target station order may refer to the order from left to right of the stations in the target station layout, or the order from the middle to both sides, or the order from front to back, and so on. The present application does not restrict how the target station order is set.
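Combining a target station order with the attack-range basis described earlier, a hedged sketch might fill the target layout's stations front to back (assuming row 0 is adjacent to the enemy battle area), then left to right, with characters sorted so that small attack ranges land nearest the enemy. All names and data are illustrative assumptions:

```python
# Sketch of adjusting target characters onto a target station layout
# in a target station order. Assumes row 0 is adjacent to the enemy
# battle area; the order and sort key are just one of the options
# described in the text.

def apply_station_layout(characters, layout):
    """characters: list of (name, attack_range); layout: list of (row, col)."""
    # Target station order: front to back, then left to right.
    order = sorted(layout, key=lambda cell: (cell[0], cell[1]))
    # Small attack range first, so those characters take the stations
    # adjacent to the enemy battle area.
    by_range = sorted(characters, key=lambda ch: ch[1])
    return {name: cell for (name, _), cell in zip(by_range, order)}

chars = [("archer", 4), ("fighter", 1), ("mage", 3)]
layout = [(1, 3), (0, 2), (0, 4)]
# fighter (range 1) takes (0, 2); mage takes (0, 4); archer takes (1, 3).
print(apply_station_layout(chars, layout))
```

Swapping the sort keys covers the other orders mentioned above (left to right only, middle outward) without changing the structure of the sketch.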
  • In a possible implementation, the process of mirror-switching the target virtual character determined by the character selection operation through the layout setting operation may be a combination of the character selection process corresponding to the first sub-control and the layout setting process corresponding to the second sub-control. Schematically: in response to receiving a trigger operation on the fourth sub-control, the selection state of the at least one virtual character is adjusted to a selectable state; in response to receiving a character selection operation, the virtual characters selected by that operation are acquired as the target virtual characters; the layout display area is displayed, with at least one station layout shown in it, each station layout determined based on the user's layout settings; and in response to receiving a selection operation on the target station layout, the stations of the target virtual characters are adjusted based on the target station layout.
  • For details, refer to the relevant content of the character selection process corresponding to the first sub-control and the layout setting process corresponding to the second sub-control, which will not be repeated here.
  • To sum up, the virtual character control method provided by the embodiment of the present application provides a station control control, so that when the computer device receives a trigger operation based on the station control control, it can uniformly adjust the positions of virtual characters selected through a user-defined operation, or uniformly adjust the positions of virtual characters through a user-defined adjustment method, or both. In this way, the collective adjustment of the positions of multiple virtual characters can be realized based on the station control control, while ensuring the flexibility of position adjustment of the virtual characters in the virtual scene.
  • FIG. 16 shows a flow chart of a virtual character control method shown in an example embodiment of the present application.
  • the method can be executed by a computer device, and the computer device can be implemented as a terminal or a server, as shown in FIG. 16
  • the virtual character control method includes:
  • the chessboard area is the game area corresponding to the virtual character controlled by one's own side in the auto chess scene.
  • the station control control includes a first sub-control, a second sub-control, and a third sub-control.
  • the first child control is one of the controls in the station control control.
  • the second sub-control is used to call out the preset station layout.
  • the target station layout is one of at least one station layout.
  • the third child control is one of the controls in the station control control.
  • To sum up, the virtual character control method provided by the embodiment of the present application provides a station control control, so that when the computer device receives a trigger operation based on the station control control, it can uniformly adjust the positions of virtual characters selected through a user-defined operation, or uniformly adjust the positions of virtual characters through a user-defined adjustment method, or both. In this way, the collective adjustment of the positions of multiple virtual characters can be realized based on the station control control, while ensuring the flexibility of position adjustment of the virtual characters in the virtual scene.
  • Fig. 17 shows a block diagram of a virtual character control device shown in an exemplary embodiment of the present application. As shown in Fig. 17, the virtual character control device includes:
  • a screen display module 1710, configured to display a scene picture of a virtual scene, where the virtual scene contains at least one virtual character;
  • a control display module 1720, configured to superimpose and display a station control control on the upper layer of the scene picture;
  • a position adjustment module 1730, configured to, in response to receiving a trigger operation on the position control control, adjust the position of the target virtual character based on a target adjustment mode; at least one of the target adjustment mode and the target virtual character is determined based on a user's custom operation; the target virtual character belongs to the at least one virtual character.
  • the station control control includes at least two sub-controls;
  • the position adjustment module is configured to, in response to receiving a trigger operation on a target sub-control, adjust the position of the target virtual character based on the adjustment mode corresponding to the target sub-control, the target sub-control being any one of the at least two sub-controls.
  • the station position adjustment module 1730 includes:
  • a state adjustment submodule configured to adjust the selection state of the at least one virtual character to an optional state in response to receiving a trigger operation on the first sub-control of the at least two sub-controls;
  • a character acquisition submodule, configured to, in response to receiving a character selection operation, acquire the virtual character selected by the character selection operation as the target virtual character;
  • a position switching submodule, configured to, in response to the end of the character selection operation, switch the position of the target virtual character from a first position to a second position; the positions of the target virtual character before and after the position switching are symmetrical along a target axis.
  • the character selection operation includes at least one of: a continuous sliding operation based on the scene picture, a range selection operation based on the scene picture, and a click selection operation on a virtual character.
  • in response to the character selection operation being a continuous sliding operation based on the scene picture, the target virtual character is the virtual character selected based on the continuous sliding operation among the at least one virtual character;
  • in response to the character selection operation being a range selection operation based on the scene picture, the target virtual character is the virtual character within the range determined based on the range selection operation among the at least one virtual character;
  • in response to the character selection operation being a click selection operation on a virtual character, the target virtual character is the virtual character selected based on the click operation among the at least one virtual character.
  • in response to the character selection operation being a continuous sliding operation based on the scene picture, the position switching submodule is configured to, in response to interruption of the continuous sliding operation based on the scene picture, switch the position of the target virtual character from the first position to the second position.
  • the device further includes:
  • a determination control display module configured to display a determination control in response to receiving a trigger operation on the first sub-control
  • the position switching submodule is configured to switch the position of the target virtual character from the first position to the second position in response to receiving a trigger operation based on the determination control.
  • the target axis includes any one of: the central axis of the scene picture, the central axis relative to the target virtual character, and any axis determined based on the user's drawing operation.
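As a concrete illustration of the mirror switch about a target axis described above, the reflection of a grid station across a vertical axis can be sketched as follows. This is a minimal sketch in Python; the (column, row) coordinates, the axis-by-column representation and the function names are illustrative assumptions, not part of the patent.

```python
# Minimal sketch: mirror-switch stations about a vertical target axis.
# A station is a (column, row) pair on the battle grid; the target axis is
# given by its column coordinate (e.g. the central axis of a 7-column board
# is at column 3). These names and coordinates are illustrative assumptions.

def mirror_station(station, axis_col):
    """Reflect a single (col, row) station across the vertical axis."""
    col, row = station
    return (2 * axis_col - col, row)

def mirror_switch(stations, axis_col, n_cols):
    """Mirror every selected station; a station whose reflection would fall
    outside the target area is left unchanged (one of the fallback rules
    described in the embodiments)."""
    result = []
    for s in stations:
        mirrored = mirror_station(s, axis_col)
        if 0 <= mirrored[0] < n_cols:
            result.append(mirrored)
        else:
            result.append(s)  # reflection leaves the board: keep old station
    return result
```

For a 7-column board mirrored about its central column 3, a piece at column 0 moves to column 6 and vice versa, matching the symmetric switch described for the first and third sub-controls.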
  • the station position adjustment module 1730 includes:
  • the area display submodule is configured to display a layout display area in response to receiving a trigger operation on the second sub-control of at least two of the sub-controls, and at least one station layout is displayed in the layout display area;
  • the station layout is a station layout determined based on the user's layout setting;
  • a station adjustment submodule, configured to, in response to receiving a selection operation on a target station layout, adjust the position of the target virtual character based on the target station layout; the target station layout is one of the at least one station layout.
  • the station position adjustment submodule includes:
  • an attack range obtaining unit configured to obtain the attack range of the target virtual character
  • a position adjustment unit, configured to adjust the position of the target virtual character based on the attack range of the target virtual character and the target station layout.
  • the station adjustment submodule is configured to, in response to the number of stations corresponding to the target station layout being greater than the number of the target virtual characters, adjust the stations of the target virtual characters into the stations of the target station layout in a target station order.
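The slot-filling rule above, for the case where the layout has more stations than there are selected characters, can be sketched as follows. This is a hedged sketch: the left-to-right, front-to-back target order and the data shapes are illustrative assumptions.

```python
# Minimal sketch: when the target layout has more stations than there are
# selected characters, assign characters to layout slots in a target order
# (here: front-to-back by row, then left-to-right by column). The slot
# representation (col, row) and ordering rule are assumptions.

def assign_in_order(characters, layout_slots):
    """Map each character to a slot, consuming slots in the target order."""
    ordered = sorted(layout_slots, key=lambda s: (s[1], s[0]))  # row, then col
    if len(ordered) < len(characters):
        raise ValueError("layout has fewer slots than characters")
    return {c: slot for c, slot in zip(characters, ordered)}
```

Unused trailing slots of the layout are simply left empty, which matches the described behavior when the station count exceeds the character count.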
  • the device further includes:
  • the planning interface display module is used to display the layout planning interface
  • a layout generating module configured to generate the station layout based on the station determined in the layout planning interface
  • a layout adding module configured to add the station layout to the layout display area.
  • the layout display area includes a layout adding control
  • the planning interface display module is configured to display the layout planning interface in response to receiving a trigger operation on the layout adding control.
  • the control display module 1720 is configured to, in response to receiving an activation operation, superimpose and display the station control control on the upper layer of the scene picture;
  • the activation operation includes at least one of: a target operation performed on the target area in the scene picture, and a trigger operation on an activation control.
  • the target operation includes a long press operation on the target area.
  • the position adjustment module 1730 is configured to, in response to receiving a trigger operation on a third sub-control of the at least two sub-controls, switch the positions of the at least one virtual character from a first position to a second position; the positions of the at least one virtual character before and after the position switching are symmetrical along the target axis.
  • The virtual character control device provided by the embodiment of the present application provides a position control control, so that when the computer device receives a trigger operation based on the position control control, it can uniformly adjust the positions of custom-selected virtual characters, or uniformly adjust the position of the target object through a custom adjustment mode, or uniformly adjust the positions of the virtual characters custom-selected by the user through a custom adjustment mode, so that when adjusting the positions of virtual characters in the virtual scene, collective adjustment of the positions of multiple virtual characters can be realized based on the position control control, while the flexibility of position adjustment of the virtual characters in the virtual scene is ensured.
  • Fig. 18 shows a structural block diagram of a computer device 1800 shown in an exemplary embodiment of the present application.
  • the computer device can be implemented as the server in the above solutions of the present application.
  • the computer device 1800 includes a central processing unit (Central Processing Unit, CPU) 1801, a system memory 1804 including a random access memory (Random Access Memory, RAM) 1802 and a read-only memory (Read-Only Memory, ROM) 1803, and a system bus 1805 connecting the system memory 1804 and the central processing unit 1801.
  • the computer device 1800 also includes a mass storage device 1806 for storing an operating system 1809 , application programs 1810 and other program modules 1811 .
  • the mass storage device 1806 is connected to the central processing unit 1801 through a mass storage controller (not shown) connected to the system bus 1805 .
  • the mass storage device 1806 and its associated computer-readable media provide non-volatile storage for the computer device 1800 . That is, the mass storage device 1806 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state storage technology, CD-ROM, Digital Versatile Disc (DVD) or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • the computer storage medium is not limited to the above-mentioned ones.
  • the above-mentioned system memory 1804 and mass storage device 1806 may be collectively referred to as memory.
  • the computer device 1800 can also be connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 1800 can be connected to the network 1808 through the network interface unit 1807 connected to the system bus 1805; in other words, the network interface unit 1807 can also be used to connect to other types of networks or remote computer systems (not shown).
  • the memory stores at least one computer program, and the central processing unit 1801 implements all or part of the steps of the virtual character control method shown in the above embodiments by executing the at least one computer program.
  • Fig. 19 shows a structural block diagram of a computer device 1900 provided by an exemplary embodiment of the present application.
  • the computer device 1900 can be implemented as the above-mentioned terminal, such as a smart phone, a tablet computer, a notebook computer or a desktop computer.
  • the computer device 1900 may also be called user equipment, portable terminal, laptop terminal, desktop terminal, or other names.
  • a computer device 1900 includes: a processor 1901 and a memory 1902 .
  • the processor 1901 may include one or more processing cores, for example, a 4-core processor, an 8-core processor, and the like.
  • The processor 1901 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array).
  • the processor 1901 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1901 may also include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
  • Memory 1902 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1902 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one computer program, and the at least one computer program is to be executed by the processor 1901 to implement all or part of the steps of the virtual character control method provided by the method embodiments of this application.
  • the computer device 1900 may optionally further include: a peripheral device interface 1903 and at least one peripheral device.
  • the processor 1901, the memory 1902, and the peripheral device interface 1903 may be connected through buses or signal lines.
  • Each peripheral device can be connected to the peripheral device interface 1903 through a bus, a signal line or a circuit board.
  • the peripheral equipment includes: at least one of a radio frequency circuit 1904 , a display screen 1905 , a camera assembly 1906 , an audio circuit 1907 and a power supply 1909 .
  • the computer device 1900 also includes one or more sensors 1910.
  • the one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911 , a gyro sensor 1912 , a pressure sensor 1913 , an optical sensor 1915 and a proximity sensor 1916 .
  • the structure shown in FIG. 19 does not constitute a limitation to the computer device 1900, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • a computer-readable storage medium is provided for storing at least one computer program, and the at least one computer program is loaded and executed by a processor to realize all or part of the steps of the above virtual character control method.
  • the computer-readable storage medium can be a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a read-only optical disc (Compact Disc Read-Only Memory, CD-ROM), Magnetic tapes, floppy disks, and optical data storage devices, etc.
  • a computer program product or computer program is also provided, the computer program product or computer program includes at least one computer program, and the at least one computer program is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the at least one computer program, so that the computer device executes all or part of the steps of the method shown in any of the embodiments of FIG. 5, FIG. 7 or FIG. 16.


Abstract

A virtual character control method, apparatus, device, storage medium and program product, relating to the technical field of virtual scenes. The method includes: displaying a scene picture of a virtual scene (510), the virtual scene containing at least one virtual character; superimposing and displaying a station control control on the upper layer of the scene picture (520); in response to receiving a trigger operation on the station control control, adjusting the station of a target virtual character based on a target adjustment mode (530); at least one of the target adjustment mode and the target virtual character is determined based on a user's custom operation; the target virtual character belongs to the at least one virtual character. With the above method, when adjusting the stations of virtual characters in the virtual scene, collective adjustment of the stations of multiple virtual characters can be realized based on the station control control, while the flexibility of station adjustment of the virtual characters in the virtual scene is ensured.

Description

Virtual character control method, apparatus, device, storage medium and program product
This application claims priority to Chinese Patent Application No. 202111372101.8, entitled "Virtual character control method, apparatus, device, storage medium and product", filed on November 18, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
Embodiments of this application relate to the technical field of virtual scenes, and in particular to a virtual character control method, apparatus, device, storage medium and program product.
BACKGROUND
In applications with auto chess virtual scenes, to improve the flexibility of virtual matches, the stations of virtual characters in the virtual scene are usually adjustable.
In the related art, to quickly change the stations of multiple virtual characters, a layout switching button is usually provided, through which the user can achieve "one-click repositioning" of the virtual characters in the virtual scene.
However, the layout types that can be switched by the above "one-click repositioning" method are fixed, so the flexibility of station adjustment of virtual characters in the virtual scene is poor.
SUMMARY
Embodiments of this application provide a virtual character control method, apparatus, device, storage medium and program product, which can improve the flexibility of station adjustment of virtual characters in a virtual scene. The technical solution is as follows:
In one aspect, a virtual character control method is provided, the method including:
displaying a scene picture of a virtual scene, the virtual scene containing at least one virtual character;
superimposing and displaying a station control control on the upper layer of the scene picture;
in response to receiving a trigger operation on the station control control, adjusting the station of a target virtual character based on a target adjustment mode; at least one of the target adjustment mode and the target virtual character is determined based on a user's custom operation; the target virtual character belongs to the at least one virtual character.
In another aspect, a virtual character control apparatus is provided, the apparatus including:
a picture display module, configured to display a scene picture of a virtual scene, the virtual scene containing at least one virtual character;
a control display module, configured to superimpose and display a station control control on the upper layer of the scene picture;
a station adjustment module, configured to, in response to receiving a trigger operation on the station control control, adjust the station of a target virtual character based on a target adjustment mode; at least one of the target adjustment mode and the target virtual character is determined based on a user's custom operation; the target virtual character belongs to the at least one virtual character.
In a possible implementation, the station control control contains at least two sub-controls;
the station adjustment module is configured to, in response to receiving a trigger operation on a target sub-control, adjust the station of the target virtual character based on the adjustment mode corresponding to the target sub-control, the target sub-control being any one of the at least two sub-controls.
In a possible implementation, the station adjustment module includes:
a state adjustment submodule, configured to, in response to receiving a trigger operation on a first sub-control of the at least two sub-controls, adjust the selection state of the at least one virtual character to a selectable state;
a character acquisition submodule, configured to, in response to receiving a character selection operation, acquire the virtual character selected by the character selection operation as the target virtual character;
a position switching submodule, configured to, in response to the end of the character selection operation, switch the station of the target virtual character from a first position to a second position; the stations of the target virtual character before and after the position switching are symmetrical along a target axis.
In a possible implementation, the character selection operation includes at least one of: a continuous sliding operation based on the scene picture, a range selection operation based on the scene picture, and a tap selection operation on a virtual character.
In a possible implementation, in response to the character selection operation being a continuous sliding operation based on the scene picture, the target virtual character is the virtual character selected based on the continuous sliding operation among the at least one virtual character;
in response to the character selection operation being a range selection operation based on the scene picture, the target virtual character is the virtual character within the range determined based on the range selection operation among the at least one virtual character;
in response to the character selection operation being a tap selection operation on a virtual character, the target virtual character is the virtual character selected based on the tap operation among the at least one virtual character.
In a possible implementation, in response to the character selection operation being a continuous sliding operation based on the scene picture, the position switching submodule is configured to, in response to interruption of the continuous sliding operation based on the scene picture, switch the station of the target virtual character from the first position to the second position.
In a possible implementation, the apparatus further includes:
a confirmation control display module, configured to, in response to receiving the trigger operation on the first sub-control, display a confirmation control;
the position switching submodule is configured to, in response to receiving a trigger operation based on the confirmation control, switch the station of the target virtual character from the first position to the second position.
In a possible implementation, the target axis includes any one of: the central axis of the scene picture, the central axis relative to the target virtual character, and any axis determined based on the user's drawing operation.
In a possible implementation, the station adjustment module includes:
an area display submodule, configured to, in response to receiving a trigger operation on a second sub-control of the at least two sub-controls, display a layout display area in which at least one station layout is displayed; the station layout is determined based on the user's layout setting;
a station adjustment submodule, configured to, in response to receiving a selection operation on a target station layout, adjust the station of the target virtual character based on the target station layout; the target station layout is one of the at least one station layout.
In a possible implementation, the station adjustment submodule includes:
an attack range acquisition unit, configured to acquire the attack range of the target virtual character;
a station adjustment unit, configured to adjust the station of the target virtual character based on the attack range of the target virtual character and the target station layout.
In a possible implementation, the station adjustment submodule is configured to, in response to the number of stations corresponding to the target station layout being greater than the number of target virtual characters, adjust the stations of the target virtual characters into the stations of the target station layout in a target station order.
In a possible implementation, the apparatus further includes:
a planning interface display module, configured to display a layout planning interface;
a layout generation module, configured to generate the station layout based on the stations determined in the layout planning interface;
a layout adding module, configured to add the station layout to the layout display area.
In a possible implementation, the layout display area includes a layout adding control, and the planning interface display module is configured to, in response to receiving a trigger operation on the layout adding control, display the layout planning interface.
In a possible implementation, the control display module is configured to, in response to receiving an activation operation, superimpose and display the station control control on the upper layer of the scene picture;
wherein the activation operation includes at least one of: a target operation performed on the target area in the scene picture, and a trigger operation on an activation control.
In a possible implementation, the target operation includes a long press operation on the target area.
In a possible implementation, the station adjustment module is configured to, in response to receiving a trigger operation on a third sub-control of the at least two sub-controls, switch the stations of the at least one virtual character from a first position to a second position; the stations of the at least one virtual character before and after the position switching are symmetrical along the target axis.
In another aspect, a computer device is provided, the computer device including a processor and a memory, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor to implement the above virtual character control method.
In another aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing at least one computer program, the computer program being loaded and executed by a processor to implement the above virtual character control method.
In another aspect, a computer program product or computer program is provided, the computer program product including at least one computer program, the computer program being loaded and executed by a processor to implement the virtual character control method provided in the above optional implementations.
The technical solution provided by this application may include the following beneficial effects:
By providing a station control control, when the computer device receives a trigger operation based on the station control control, it can uniformly adjust the stations of custom-selected virtual characters, or uniformly adjust the station of the target object through a custom adjustment mode, or uniformly adjust the stations of custom-selected virtual characters through a custom adjustment mode, so that when adjusting the stations of virtual characters in the virtual scene, collective adjustment of the stations of multiple virtual characters can be realized based on the station control control, while the flexibility of station adjustment of the virtual characters in the virtual scene is ensured.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of a chess match picture provided by an exemplary embodiment of this application;
FIG. 2 shows a block diagram of a computer system provided by an exemplary embodiment of this application;
FIG. 3 shows a schematic diagram of a state synchronization technology according to an exemplary embodiment of this application;
FIG. 4 shows a schematic diagram of a frame synchronization technology according to an exemplary embodiment of this application;
FIG. 5 shows a flow chart of a virtual character control method according to an exemplary embodiment of this application;
FIG. 6 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application;
FIG. 7 shows a flow chart of a virtual character control method according to an exemplary embodiment of this application;
FIG. 8 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application;
FIG. 9 shows a schematic diagram of scene pictures before and after mirroring of virtual characters according to an exemplary embodiment of this application;
FIG. 10 shows a schematic diagram of determining a target virtual character based on a continuous sliding operation according to an exemplary embodiment of this application;
FIG. 11 shows a schematic diagram of determining a target virtual character through a range selection operation according to an exemplary embodiment of this application;
FIG. 12 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application;
FIG. 13 shows a schematic diagram of scene pictures before and after mirroring of virtual characters according to an exemplary embodiment of this application;
FIG. 14 shows a schematic diagram of an added station layout according to an exemplary embodiment of this application;
FIG. 15 shows a schematic diagram of a layout display area according to an exemplary embodiment of this application;
FIG. 16 shows a flow chart of a virtual character control method according to an exemplary embodiment of this application;
FIG. 17 shows a block diagram of a virtual character control apparatus according to an exemplary embodiment of this application;
FIG. 18 is a structural block diagram of a computer device according to an exemplary embodiment;
FIG. 19 is a structural block diagram of a computer device according to an exemplary embodiment.
DETAILED DESCRIPTION
This application provides a virtual character control method that can improve the efficiency of adjusting the stations of virtual characters. For ease of understanding, several terms involved in this application are explained below.
1) Virtual scene: the virtual scene displayed (or provided) when an application runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene and a three-dimensional virtual scene. The following embodiments take a three-dimensional virtual scene as an example, but this is not limited.
2) Auto chess: a chess-like game in which "pieces" are laid out before a match, and during the match the "pieces" fight automatically according to the pre-arranged layout. "Pieces" are usually represented by virtual characters, which automatically release various skills during combat. Matches are usually round-based; when all "pieces" of one side die (i.e., the health of the virtual characters drops to zero), that side loses. In some embodiments, besides the "piece-type virtual characters" used in the match, each side also has a virtual character representing the participating user; this character cannot be moved to the battle area or preparation area as a "piece", and also has health (or a health bar), which decreases according to each round's result (on loss) or remains unchanged (on win). When this character's health drops to zero, the corresponding user exits the match, and the remaining users continue.
Chessboard: the area in the auto chess battle interface used for preparing and conducting battles. The chessboard may be any of a two-dimensional, 2.5-dimensional or three-dimensional virtual chessboard, which is not limited in this application.
The chessboard is divided into a battle area and a preparation area. The battle area includes several battle squares of the same size for placing pieces that fight during the battle; the preparation area includes several preparation squares for placing reserve pieces, which do not participate in the battle but can be dragged into the battle area during the preparation stage. The embodiments of this application are described with battle piece characters including piece characters located in the battle area and piece characters located in the preparation area.
Regarding the arrangement of squares in the battle area: in some embodiments, the battle area includes n (rows) × m (columns) battle squares; illustratively, n is an integer multiple of 2, and adjacent rows of squares are aligned, or adjacent rows are staggered. In addition, the battle area is divided evenly by rows into an own-side battle area and an enemy battle area, with the participating users located at the top and bottom of the battle interface respectively; during the preparation stage, a user can only place pieces in the own-side battle area. In other embodiments, the battle area is divided evenly by columns into an own-side battle area and an enemy battle area, with the participating users at the left and right sides of the battle interface. A square may be any of a square, rectangle, circle or hexagon; the shape of the squares is not limited in the embodiments of this application.
In some embodiments, the battle squares are always displayed on the chessboard; in other embodiments, they are displayed while the user is laying out battle pieces, and are hidden after a piece is placed in a square.
Illustratively, FIG. 1 is a schematic diagram of a chess match picture provided by an exemplary embodiment of this application. As shown in FIG. 1, the chessboard 11 in the battle interface includes a battle area 111 and a preparation area 112, wherein the battle area 111 includes 3×7 hexagonal battle squares with adjacent rows staggered, and the preparation area 112 includes 9 preparation squares.
3) Virtual characters in auto chess: the pieces placed on the chessboard, including battle piece characters and candidate piece characters in the candidate piece list (i.e., candidate pieces in the virtual shop), where battle piece characters include pieces located in the battle area and pieces located in the preparation area. A virtual character may be a virtual piece, a virtual person, a virtual animal, an anime character, etc., and may be displayed using a three-dimensional model. A candidate piece character may be combined with the user's existing battle pieces to trigger a gain effect in battle, or may participate in the match as a piece on its own.
Optionally, the position of a battle piece on the chessboard can be changed. During the preparation stage, the user can adjust the positions of pieces in the battle area, adjust the positions of pieces in the preparation area, move a piece from the battle area to the preparation area (when there is a free preparation square), or move a piece from the preparation area to the battle area. It should be noted that during the battle stage, the positions of pieces in the preparation area can also be adjusted.
Optionally, during the battle stage, a piece's position in the battle area differs from the preparation stage. For example, during the battle stage a piece may automatically move from the own-side battle area to the enemy battle area and attack enemy pieces, or a piece may automatically move from position A to position B within the own-side battle area.
In addition, during the preparation stage, pieces can only be placed in the own-side battle area, and pieces placed by the enemy are not visible on the chessboard.
Regarding the acquisition of battle pieces: in some embodiments, during a match the user can purchase piece characters with virtual currency during the preparation stage.
It should be noted that in some embodiments a virtual character represents the participating user; this character may be a virtual person, virtual animal, anime character, etc., and is referred to in the following embodiments as a player virtual character or user virtual character.
Illustratively, as shown in FIG. 1, a first battle piece character 111a, a second battle piece character 111b and a third battle piece character 111c are displayed in the battle area 111, and a first reserve piece character 112a, a second reserve piece character 112b and a third reserve piece character 112c are set in the preparation area 112. A player virtual character 113 is displayed beside the battle area and the preparation area.
Attributes: each piece character in auto chess has its own attributes, including at least two of the following: the camp the piece belongs to (e.g., Alliance A, Alliance B, neutral, etc.), the piece's profession (e.g., warrior, archer, mage, assassin, guard, swordsman, gunner, fighter, etc.), the piece's attack type (e.g., magic, physical, etc.), the piece's identity (e.g., noble, demon, elf, etc.), and so on. The embodiments of this application do not limit the specific attribute types.
Optionally, each piece character has attributes of at least two dimensions, and equipment carried by a piece can improve its attributes.
Optionally, in the battle area, when different piece characters have associated attributes (including different pieces having the same attribute, or different pieces having complementary attributes) and their number reaches a quantity threshold (also called a synergy), the piece characters with that attribute, or all pieces in the battle area, gain the gain effect corresponding to that attribute. For example, when the battle area contains 2 warrior pieces at the same time, all battle pieces gain a 10% defense bonus; when it contains 4 warrior pieces, all battle pieces gain a 20% defense bonus; when it contains 3 elf pieces, all battle pieces gain a 20% dodge probability bonus.
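The threshold-based synergy rule described above can be sketched as a small lookup. This is a minimal illustrative sketch: the thresholds and bonus values follow the examples in the text (2 warriors for 10% defense, 4 warriors for 20% defense, 3 elves for 20% dodge), while the table structure and function names are assumptions, not part of the patent.

```python
# Minimal sketch of the synergy rule: when the number of battle pieces
# sharing an attribute reaches a threshold, the corresponding gain effect
# applies. Tiers are listed highest-threshold first so the strongest
# matching tier wins. Names and data shapes are illustrative assumptions.
from collections import Counter

SYNERGY_TABLE = {
    "warrior": [(4, {"defense": 0.20}), (2, {"defense": 0.10})],
    "elf": [(3, {"dodge": 0.20})],
}

def active_synergies(piece_attributes):
    """Return the gain effects triggered by the current battle-area pieces."""
    counts = Counter(piece_attributes)
    effects = {}
    for attr, tiers in SYNERGY_TABLE.items():
        for threshold, bonus in tiers:  # highest tier first
            if counts[attr] >= threshold:
                effects.update(bonus)
                break
    return effects
```

For instance, a battle area with two warriors and one elf yields only the 10% defense bonus, since the elf count is below its threshold.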
It should be noted that the virtual character control method shown in this application can be applied to scenarios in which virtual characters require station control, such as the above auto chess match scenario, card match scenarios, and so on. This application takes the auto chess match scenario as an example to describe the virtual character control method provided herein.
FIG. 2 shows a block diagram of a computer system provided by an exemplary embodiment of this application. The computer system includes: a first terminal 120, a server 140 and a second terminal 160.
An auto chess game application is installed and runs on the first terminal 120. The first terminal 120 is the terminal used by a first user, who uses it during the preparation stage of a match to lay out piece characters in the battle area of the chessboard; during the battle stage, the first terminal 120 automatically controls the pieces to fight according to the attributes, skills and layout of the pieces in the battle area.
The first terminal 120 is connected to the server 140 through a wireless or wired network.
The server 140 includes at least one of a single server, multiple servers, a cloud computing platform and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142; the memory 142 includes a receiving module 1421, a control module 1422 and a sending module 1423. The server 140 provides background services for the auto chess game application, such as picture rendering services. Illustratively, the receiving module 1421 is used to receive the layout information of piece characters sent by a client; the control module 1422 is used to control the pieces to fight automatically according to the layout information; the sending module 1423 is used to send the battle result to the client. Optionally, the server 140 performs the primary computing work and the first terminal 120 and second terminal 160 the secondary work; or the server 140 performs the secondary work and the terminals the primary work; or the server 140, first terminal 120 and second terminal 160 perform collaborative computing using a distributed computing architecture.
The server 140 may use synchronization technology to keep the pictures of multiple clients consistent. Exemplarily, the synchronization technology used by the server 140 includes state synchronization or frame synchronization.
State synchronization: in an optional embodiment based on FIG. 2, the server 140 synchronizes with multiple clients using state synchronization. FIG. 3 shows a schematic diagram of the state synchronization technology according to an exemplary embodiment of this application. In state synchronization, as shown in FIG. 3, the battle logic runs in the server 140. When the state of a piece character on the battle chessboard changes, the server 140 sends the state synchronization result to all clients, e.g., clients 1 to 10.
In an illustrative example, client 1 sends a request to the server 140 carrying the piece characters participating in the match and their layout; the server 140 generates the state of the pieces during battle according to the pieces and their layout, and sends that state to client 1. The server 140 then sends the data sent to client 1 to all clients, and all clients update their local data and interface presentation based on that data.
Frame synchronization: in an optional embodiment based on FIG. 2, the server 140 synchronizes with multiple clients using frame synchronization. FIG. 4 shows a schematic diagram of the frame synchronization technology according to an exemplary embodiment of this application. In frame synchronization, as shown in FIG. 4, the battle logic runs in each client. Each client sends a frame synchronization request to the server carrying the client's local data changes. After receiving a frame synchronization request, the server 140 forwards it to all clients. After receiving the frame synchronization request, each client processes it according to its local battle logic, updating local data and interface presentation.
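The frame synchronization flow described above (server relays frame requests, each client applies the change with its own local battle logic) can be sketched as a simple relay loop. This is a hedged sketch: the class names, message shape and in-process delivery are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of frame synchronization: the server runs no battle logic,
# it only forwards every frame request to all clients; each client applies
# the carried data changes deterministically with its local battle logic.
# Class and field names are illustrative assumptions.

class Client:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def on_frame(self, request):
        # Local battle logic: apply the data change deterministically so
        # every client converges to the same state.
        self.state.update(request["changes"])

class FrameSyncServer:
    def __init__(self, clients):
        self.clients = clients

    def on_frame_request(self, request):
        # Relay only: forward the frame request to every client.
        for client in self.clients:
            client.on_frame(request)
```

With two clients registered, relaying a single frame request such as `{"changes": {"piece_1": (2, 3)}}` leaves both clients with identical local state, which is the consistency property the technique relies on.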
The second terminal 160 is connected to the server 140 through a wireless or wired network.
An auto chess game application is installed and runs on the second terminal 160. The second terminal 160 is the terminal used by a second user, who uses it during the preparation stage to lay out piece characters in the battle area of the chessboard; during the battle stage, the second terminal 160 automatically controls the pieces to fight according to the attributes, skills and layout of the pieces in the battle area.
Optionally, the pieces laid out by the first user through the first terminal 120 and by the second user through the second terminal 160 are located in different areas of the same chessboard, i.e., the first user and the second user are in the same match.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 120 may generally refer to one of multiple terminals, as may the second terminal 160; this embodiment only takes the first terminal 120 and the second terminal 160 as examples. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of: a smartphone, a tablet computer, an e-book reader, a digital player, a laptop portable computer and a desktop computer.
Those skilled in the art will appreciate that the number of the above terminals may be more or fewer. For example, there may be only one terminal (i.e., the user plays against an artificial intelligence), or there may be eight terminals (1v1v1v1v1v1v1v1, eight users playing round-robin and elimination matches until a winner emerges), or more. The embodiments of this application do not limit the number of terminals or their device types.
FIG. 5 shows a flow chart of a virtual character control method according to an exemplary embodiment of this application. The method may be executed by a computer device, which may be implemented as a terminal or a server. As shown in FIG. 5, the virtual character control method may include the following steps:
Step 510: display a scene picture of a virtual scene, the virtual scene containing at least one virtual character.
Optionally, the target area of the virtual scene contains at least one virtual character.
In a possible implementation, the virtual scene may contain a battle area and a preparation area; the target area may be the battle area in the virtual scene, and the at least one virtual character is a virtual character in the battle area. The at least one virtual character is a virtual character controlled by the current user.
Step 520: superimpose and display a station control control on the upper layer of the scene picture.
In the embodiments of this application, the at least one virtual character in the target area may be in a match stage or a preparation stage in the virtual scene; during the preparation stage, the user may adjust the stations of the at least one virtual character in the virtual scene for tactical deployment. Therefore, in a possible implementation, a station control control may be superimposed on the upper layer of the scene picture so that the user can quickly perform custom deployment of the stations of the virtual characters through the control. FIG. 6 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application. As shown in FIG. 6, a station control control 610 is superimposed on the upper layer of the scene picture. The station control control may be a single control, or a collection of multiple sub-controls with station control functions; the station control control shown in FIG. 6 contains at least one sub-control, each sub-control being used to trigger a different custom station control. Optionally, when the station control control is a single control, the custom station control corresponding to the control may change based on the user's operation mode. Illustratively, different custom station controls may be determined based on the number of consecutive taps on the control: for example, a single tap determines the custom adjustment mode as a first adjustment mode, a double tap determines it as a second adjustment mode, and so on. Alternatively, different custom station controls may be determined based on the duration of a long press on the control: for example, when the long press duration is less than a first duration threshold, the custom adjustment mode is determined as the first adjustment mode; when the duration is greater than the first threshold but less than a second duration threshold, it is determined as the second adjustment mode, and so on.
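The gesture-to-mode dispatch described above (long-press duration selecting among custom adjustment modes) can be sketched as a small threshold check. This is a hedged sketch: the concrete thresholds in seconds and the mode names are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: pick a custom adjustment mode from the duration of a long
# press on a single station control control, per the thresholds described
# above. Threshold values and mode names are illustrative assumptions.

FIRST_THRESHOLD = 0.5   # seconds, assumed
SECOND_THRESHOLD = 1.5  # seconds, assumed

def mode_from_press(duration):
    """Map a press duration (seconds) to a custom adjustment mode."""
    if duration < FIRST_THRESHOLD:
        return "first_adjustment_mode"
    if duration < SECOND_THRESHOLD:
        return "second_adjustment_mode"
    return "third_adjustment_mode"
```

A tap-count dispatcher would look the same with the duration replaced by a consecutive-tap counter.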
Step 530: in response to receiving a trigger operation on the station control control, adjust the station of a target virtual character based on a target adjustment mode; at least one of the target adjustment mode and the target virtual character is determined based on a user's custom operation; the target virtual character belongs to the at least one virtual character.
The target virtual character may be all or part of the virtual characters in the target area of the virtual scene as determined based on the user's custom operation; or, the target virtual character may be a default number of virtual characters corresponding to the target adjustment mode determined by the user's custom operation. For example, the target adjustment mode may by default adjust the stations of all virtual characters in the target area; that is, the system defaults the target virtual character to all of the at least one virtual character.
In the embodiments of this application, the user may customize the target virtual character (including the position and number of target virtual characters) with the target adjustment mode being a default adjustment mode; or the user may customize the target adjustment mode with the target virtual character being part or all of the virtual characters by default; or the user may customize the target adjustment mode on the basis of customizing the target virtual character. That is, the custom operation may include at least one of a character selection operation and a layout setting operation.
In summary, the virtual character control method provided by the embodiments of this application provides a station control control, so that when the computer device receives a trigger operation based on the station control control, it can uniformly adjust the stations of custom-selected virtual characters, or uniformly adjust the station of the target object through a custom adjustment mode, or uniformly adjust the stations of custom-selected virtual characters through a custom adjustment mode. Thus, when adjusting the stations of virtual characters in the virtual scene, collective adjustment of the stations of multiple virtual characters can be realized based on the station control control, while the flexibility of station adjustment of the virtual characters in the virtual scene is ensured.
FIG. 7 shows a flow chart of a virtual character control method according to an exemplary embodiment of this application. The method may be executed by a computer device, which may be implemented as a terminal or a server. As shown in FIG. 7, the virtual character control method may include the following steps:
Step 710: display a scene picture of a virtual scene, the virtual scene containing at least one virtual character.
The virtual scene may be a scene in which virtual characters respectively controlled by two operating users play a match; each operating user may have a corresponding battle area and preparation area. When a first user controls virtual characters in the virtual scene, the first user can control the virtual characters corresponding to that first user; in other words, the at least one virtual character contained in the target area of the virtual scene may be virtual characters belonging to the same camp. The target area may be the battle area of the current camp.
Step 720: superimpose and display a station control control on the upper layer of the scene picture.
In a possible implementation, the station control control may be superimposed on the scene picture as soon as the scene picture is displayed. Or, in another possible implementation, the station control control is not superimposed on the upper layer of the scene picture when the scene picture is displayed, but is superimposed only after a specified operation is received. Illustratively, in response to receiving an activation operation, the station control control is superimposed and displayed on the upper layer of the scene picture;
wherein the activation operation includes at least one of: a target operation performed on the target area in the scene picture, and a trigger operation on an activation control.
The target operation may include a long press operation on the target area; optionally, the target operation may also include a double-tap operation on the target area, a triple-tap operation on the target area, a sliding operation on the target area, and so on. The activation control may be a control set at any position on the upper layer of the scene picture, used to call out the station control control when a trigger operation is received.
Taking the station control control in the embodiments of this application as a collection of multiple sub-controls with station control functions as an example, the station control control contains at least two sub-controls. FIG. 8 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application. Taking the target operation as a long press on the target area as an example, as shown in FIG. 8, after the user's long press on the target area 810 is received, a station control control 820 is superimposed and displayed on the upper layer of the scene picture.
Step 730: in response to receiving a trigger operation on a target sub-control, adjust the station of the target virtual character based on the adjustment mode corresponding to the target sub-control, the target sub-control being any one of the at least two sub-controls.
Optionally, different sub-controls trigger different custom functions. Illustratively, a first sub-control of the at least two sub-controls may trigger a character selection operation, so as to custom-select the virtual characters whose stations are to be adjusted and obtain the target virtual character, after which the target virtual character's station is adjusted in a default station adjustment mode.
In an illustrative solution, the default station adjustment mode may refer to a mirror switch of the target virtual character's station, where the mirror switch takes a target axis as the mirror line and adjusts the target virtual character's station symmetrically about that mirror line.
A second sub-control of the at least two sub-controls may trigger a layout setting operation, so as to custom-set the station adjustment mode of the virtual characters to be adjusted, where the virtual characters to be adjusted may be default-set virtual characters, for example all virtual characters in the target area.
A fourth sub-control of the at least two sub-controls may trigger a layout setting operation after triggering a character selection operation, so as to change the stations of the custom-selected virtual characters to a custom-set layout, which improves the user's autonomy in overall station adjustment of the virtual characters and thereby improves the flexibility of station changes for the virtual characters.
In a possible implementation, the process of adjusting virtual characters' stations based on the first sub-control may be implemented as follows:
in response to receiving a trigger operation on the first sub-control of the at least two sub-controls, adjusting the selection state of the at least one virtual character to a selectable state;
in response to receiving a character selection operation, acquiring the virtual character selected by the character selection operation as the target virtual character;
in response to the end of the character selection operation, switching the target virtual character's station from a first position to a second position; the stations of the target virtual character before and after the position switching are symmetrical along a target axis.
The operation of switching the target virtual character's station from the first position to the second position, with the stations before and after the switch symmetrical along the target axis, may be called a mirror switch. That is, the first sub-control is used to mirror-switch the stations of the virtual characters, among the at least one virtual character in the target area, that are determined by the character selection operation. The first position indicates the virtual character's position before the mirror switch, and the second position indicates the position after the mirror switch. The target virtual character may be all or part of the at least one virtual character.
Illustratively, the target axis may be the central axis of the scene picture, or the central axis relative to the target virtual character, or any axis determined based on the user's drawing operation.
Optionally, the target axis may be set by developers, or pre-set or drawn by the user. Or, in a possible implementation, in response to the end of the character selection operation, a mirror line setting interface is displayed, containing at least two mirror line selection controls corresponding to different mirror line setting modes; based on the user's selection operation, the target axis corresponding to one of the mirror line selection controls is determined as the mirror line. The target axes corresponding to the at least two mirror line selection controls include at least two of: the central axis of the scene picture, the central axis relative to the target virtual character, and any axis determined based on the user's drawing operation. Illustratively, when the user selects the mirror line as an axis determined by a drawing operation, a mirror line drawing interface is entered, and the axis drawn by the user in that interface is determined as the mirror line; the mirror line drawing interface may be the scene picture in a drawable state. FIG. 9 shows a schematic diagram of scene pictures before and after mirroring of virtual characters according to an exemplary embodiment of this application. Taking the mirror line as a target axis determined by the user's drawing operation as an example, as shown in FIG. 9, the mirror line 910 is an axis drawn by the user based on actual needs; taking this target axis as the mirror line, the stations of the target virtual characters in the target area are mirror-switched. Taking the target virtual characters determined by the character selection operation as all virtual characters in the target area as an example, virtual character 921 is mirror-switched from its current station to station 931, virtual character 922 to station 932, and virtual character 923 to station 933.
In a possible implementation, if a virtual character's station after mirroring about the mirror line would exceed the target area, that virtual character's station is not adjusted; the stations of virtual characters whose mirrored positions do not exceed the target area are adjusted.
In a possible implementation, in response to a second virtual character already existing at the second position obtained by mirroring a first virtual character among the target virtual characters, the station adjustment of the first virtual character may be implemented in any of the following ways:
not adjusting the first virtual character's station;
swapping the stations of the first virtual character and the second virtual character;
moving the second virtual character to an empty station adjacent to the second position, and moving the first virtual character to the second position;
keeping the second virtual character's position unchanged, and moving the first virtual character to an empty station adjacent to the second position.
In a possible implementation, the character selection operation includes at least one of: a continuous sliding operation based on the scene picture, a range selection operation based on the scene picture, and a tap selection operation on virtual characters.
Wherein, in response to the character selection operation being a continuous sliding operation based on the scene picture, the target virtual character is the virtual character selected by the continuous sliding operation among the at least one virtual character. Illustratively, the sliding operation refers to a single continuous movement on the terminal screen without the user's finger lifting, or a single continuous movement of a control device without releasing a target physical button (e.g., the left mouse button). FIG. 10 shows a schematic diagram of determining the target virtual character based on a continuous sliding operation according to an exemplary embodiment of this application. As shown in FIG. 10, after the user's trigger operation on the first sub-control 1010 is received, the selection state of the at least one virtual character in the target area is adjusted to a selectable state; the user may take any virtual character in the target area as the starting point of the operation and, without interruption (e.g., without the finger lifting or the target physical button being released), complete a continuous sliding operation; the virtual characters passed by the continuous sliding operation are acquired as the target virtual character 1020.
In response to the character selection operation being a range selection operation based on the scene picture, the target virtual character is the virtual character, among the at least one virtual character, within the range determined by the range selection operation. Illustratively, the range selection operation based on the scene picture may be a box selection operation on the scene picture, and the selection range corresponding to the range selection operation may be larger than the station range of the target virtual characters in the target area. Illustratively, FIG. 11 shows a schematic diagram of determining the target virtual character through a range selection operation according to an exemplary embodiment of this application. As shown in FIG. 11, the user determines a larger selection range 1110 through a range selection operation on the scene picture, and the computer device may acquire the virtual characters within the selection range 1110 as the target virtual character 1120. The shape of the selection range may be any of a square, rectangle, ellipse, circle and other shapes, which is not limited in this application.
In response to the character selection operation being a tap selection operation on virtual characters, the target virtual character is the virtual character, among the at least one virtual character, selected by the tap operation. In this case, the user can achieve scattered selection of virtual characters without regard to selection order.
Illustratively, the above three ways of determining the target virtual character may be used separately, or combined in pairs, or all three may be combined, which is not limited in this application.
The operation end nodes corresponding to different character selection operations may be the same or different. Illustratively, if the character selection operation is a continuous sliding operation based on the scene picture, in response to interruption of the continuous sliding operation based on the scene picture, the target virtual character's station is switched from the first position to the second position.
That is, when the continuous sliding operation is interrupted, the continuous sliding operation is determined to have ended; at this point, the computer device can switch the target virtual character's station from the first position to the second position. For example, when the continuous sliding operation is interrupted, the computer device automatically mirror-switches the target virtual character's station taking the target axis as the mirror line; for instance, when the user lifts the finger during a finger-based continuous sliding operation, or releases the target physical button during a device-based continuous sliding operation, the character selection operation is determined to have ended.
In another possible implementation, in response to receiving the trigger operation on the first sub-control, a confirmation control is displayed. In this case, in response to receiving a trigger operation based on the confirmation control, the target virtual character's station is switched from the first position to the second position.
That is, when the target virtual character is determined to be selected through the character selection operation based on the first sub-control, a confirmation control may be superimposed on the upper layer of the scene picture. FIG. 12 shows a schematic diagram of a scene picture according to an exemplary embodiment of this application. Taking determining the target virtual character by a box selection operation on the scene picture as an example, as shown in FIG. 12, after the user's trigger operation on the first sub-control is received, a confirmation control 1210 is superimposed on the upper layer of the scene picture. After the user determines the target virtual character through at least one of the above three ways of determining the target virtual character, upon receiving a selection operation on the confirmation control 1210, the character selection operation is determined to have ended, and the determined target virtual character's station is mirror-switched based on the target axis. The confirmation control is used to trigger a mirror operation based on the target axis, the target axis being a pre-set or default mirror line; or, the confirmation control is used to trigger entry into the mirror line setting interface, so as to mirror the target virtual character based on the mirror line set in that interface. As shown in FIG. 12, taking mirror-switching the target virtual character's station about the central axis of the scene picture as an example, the target virtual character's station 1220 is switched to station 1230.
In a possible implementation, the at least one sub-control further includes a third sub-control, used to mirror-switch the stations of all virtual characters in the target area. Illustratively, the process of adjusting virtual characters' stations based on the third sub-control may be implemented as follows:
in response to receiving a trigger operation on the third sub-control of the at least two sub-controls, switching the stations of the at least one virtual character from a first position to a second position; the stations of the at least one virtual character before and after the position switching are symmetrical along the target axis.
That is, when the trigger operation on the third sub-control is received, all virtual characters in the target area are acquired as target virtual characters by default. FIG. 13 shows a schematic diagram of scene pictures before and after mirroring of virtual characters according to an exemplary embodiment of this application. As shown in FIG. 13, when the user selects the third sub-control 1310 of the station operation controls, all virtual characters in the target area are determined as target virtual characters 1320 by default; with the mirror line 1330 being the central axis of the scene picture, the stations of the target virtual characters are mirror-switched based on the mirror line 1330 to obtain the target virtual characters 1340 with changed stations.
Optionally, after the target virtual character is determined based on the trigger operation on the third sub-control and the character selection operation is determined to have ended, the mirror line setting interface may be entered, so as to set the mirror line based on the user's selection operation and mirror-switch the target virtual character's station based on the mirror line.
In a possible implementation, the process of adjusting virtual characters' stations based on the second sub-control may be implemented as follows:
in response to receiving a trigger operation on the second sub-control of the at least two sub-controls, displaying a layout display area in which at least one station layout is displayed; the station layout is a station layout determined based on the user's layout setting;
in response to receiving a selection operation on a target station layout, adjusting the target virtual character's station based on the target station layout; the target station layout is one of the at least one station layout.
The at least one station layout contained in the layout display area may be pre-set by the user before controlling the virtual character to enter the virtual scene; or, the at least one station layout may be added by the user after controlling the virtual character to enter the virtual scene; or, the at least one station layout includes both layouts pre-set before entering the virtual scene and layouts added afterward.
Illustratively, the process by which the user adds a station layout after controlling the virtual character to enter the virtual scene may be implemented as:
displaying a layout planning interface;
generating the station layout based on the stations determined in the layout planning interface;
adding the station layout to the layout display area.
Optionally, the layout planning interface may be displayed directly after the trigger operation on the second sub-control is received. FIG. 14 shows a schematic diagram of an added station layout according to an exemplary embodiment of this application. As shown in FIG. 14, after the user triggers the second sub-control 1410 of the station control control, a layout planning interface 1420 is displayed, containing all station points of the target area, such as 3×7 battle squares. The user may select station points in the layout planning interface to form a new station layout; in response to receiving a selection operation on the layout generation control 1430 displayed in the layout planning interface, a new station layout 1440 is determined to be generated and is added to the layout display area 1450.
Optionally, a layout adding control may be displayed in the layout display area, so that after completing one station layout addition the user can re-enter the layout planning interface and add another station layout.
Wherein, the maximum number of stations that can be planned in the layout planning interface equals the maximum number of virtual characters that can be added to the target area of the virtual scene, or equals the maximum number of virtual characters that can currently be added to the target area. For example, if the number of virtual characters that can be added to the target area rises with the user's level, and the maximum number at the highest level is 9 while the maximum number at the user's current level is 5, then the maximum planning number of the layout planning interface may be set to 9 to plan a final-version station layout, or may be set to 5 to plan a station layout suited to the current match.
Or, in another possible implementation, the layout planning interface is displayed after receiving the user's touch operation on a layout adding control displayed in the layout display area. That is, in response to receiving the trigger operation on the second sub-control, the layout display area is displayed, with a layout adding control displayed therein;
in response to receiving a trigger operation on the layout adding control, the layout planning interface is displayed.
In this case, the layout display area may not include a station layout, or may include station layouts. FIG. 15 shows a schematic diagram of a layout display area according to an exemplary embodiment of this application. As shown in FIG. 15, after the user taps the second sub-control 1510 of the station control control, a layout display area 1520 is displayed containing a layout adding control 1530; when the user performs a trigger operation on the layout adding control, the layout planning interface 1420 shown in FIG. 14 can be called out, so as to generate a new station layout based on the layout planning interface and add it to the layout display area 1520.
The above two ways of opening the layout planning interface may be realized through different sub-controls of the at least one sub-control; for example, a fifth sub-control is used to directly open the layout planning interface when a trigger operation is received, and a sixth sub-control is used to first open the layout display area containing the layout adding control.
In a possible implementation, the station layouts displayed in the layout display area may further include historical station layouts saved based on the user's save operation; a historical station layout may be a station layout in the target area corresponding to the user, or may be a station layout in the target area corresponding to another user whom the user is spectating in spectator mode. Illustratively, a station layout save control may be set in the scene picture, used to record, when a selection operation is received, the station layout formed by the stations of the target virtual characters contained in the current target area, and save it to the layout display area for the user to select and use in the next match.
In a possible implementation, when changing virtual characters' stations based on the target station layout, the attack range of each virtual character may serve as the basis for the change. That is, the process of changing the target virtual character's station based on the target station layout may include:
acquiring the attack range of the target virtual character;
changing the target virtual character's station based on the attack range of the target virtual character and the target station layout.
Illustratively, virtual characters with a small attack range occupy stations adjacent to the enemy battle area in the target station layout, while virtual characters with a large attack range occupy stations away from the enemy battle area in the target station layout.
Or, in another possible implementation, the target virtual character's station may be changed according to the character attributes of the virtual characters, where character attributes include profession attributes, health attributes, virtual character level, and so on. For example, virtual characters with the warrior profession are set at stations adjacent to the enemy battle area in the target station layout, and virtual characters with the archer profession at stations away from the enemy battle area; or, virtual characters with a higher maximum health are set at stations adjacent to the enemy battle area and those with a lower maximum health at stations away from it; or, virtual characters with a higher level are set at stations adjacent to the enemy battle area and those with a lower level at stations away from it. The bases for station setting provided in the embodiments of this application are merely illustrative; relevant personnel may also set the stations of the target virtual characters randomly, or the basis for station setting may be user-defined. This application does not limit the basis for setting virtual characters' stations.
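The attack-range-based placement rule above can be sketched as a sort-and-zip assignment. This is a hedged sketch: the row convention (row 0 nearest the enemy battle area), the data shapes and the function name are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of station assignment based on attack range: pieces with a
# small attack range take layout slots near the enemy battle area (front
# rows), pieces with a large range take slots away from it. Row 0 is taken
# to be nearest the enemy; all names and shapes are assumptions.

def assign_by_attack_range(pieces, layout_slots):
    """pieces: list of (name, attack_range); layout_slots: list of
    (col, row) with row 0 nearest the enemy battle area."""
    front_first = sorted(layout_slots, key=lambda s: (s[1], s[0]))
    short_range_first = sorted(pieces, key=lambda p: p[1])
    return {name: slot
            for (name, _), slot in zip(short_range_first, front_first)}
```

The attribute-based variants described above (profession, maximum health, character level) follow the same shape with a different sort key on the pieces.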
当用户设置或者存储的目标站位布局中的站位数量大于当前目标区域中的虚拟角色的数量时,在一种可能的实现方式中,在基于目标站位布局对目标虚拟角色的站位布局进行更改的过程中,响应于目标站位布局对应的站位数量大于目标虚拟角色的数量,按照目标站位顺序,将目标虚拟角色的站位调整到目标站位布局中的站位中。
示意性的,该目标站位顺序可以是指基于目标站位布局中的站位从左到右的顺序,或者,从中间到两侧的顺序,或者,从前到后的顺序等等,本申请对目标站位顺序的设置方式不进行限制。
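当目标站位布局的站位数量多于目标虚拟角色数量时，按目标站位顺序取前若干站位的做法可以用如下 Python 草图示意（仅实现"从左到右"与"从中间到两侧"两种示例顺序，函数命名为示例性假设）：

```python
# 示意性草图：从目标站位布局中按目标站位顺序取出 n 个站位，
# 用于站位数量大于目标虚拟角色数量的情形。
def pick_slots(target_layout, n, order="left_to_right"):
    if order == "left_to_right":
        # 先按列、再按行排序，即从左到右的目标站位顺序
        slots = sorted(target_layout, key=lambda s: (s[1], s[0]))
    elif order == "center_out":
        # 按与中间列的距离排序，即从中间到两侧的目标站位顺序
        cols = [s[1] for s in target_layout]
        center = (min(cols) + max(cols)) / 2
        slots = sorted(target_layout, key=lambda s: (abs(s[1] - center), s[1], s[0]))
    else:
        raise ValueError(f"未知的目标站位顺序: {order}")
    return slots[:n]

layout = [(0, 0), (0, 2), (0, 4), (0, 6)]
left_first = pick_slots(layout, 2)                    # 从左到右取前两个站位
center_first = pick_slots(layout, 2, order="center_out")  # 从中间向两侧取
```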
基于第四子控件触发的、在进行角色选择操作之后通过布局设置操作对角色选择操作确定的目标虚拟角色进行站位调整的过程，可以是上述第一子控件对应的角色选择过程与第二子控件对应的布局设置过程的结合。示意性的，响应于接收到对第四子控件的触发操作，将至少一个虚拟角色的选择状态调整为可选状态；响应于接收到角色选择操作，将基于角色选择操作选择的虚拟角色获取为目标虚拟角色；显示布局显示区域，布局显示区域中显示有至少一种站位布局；该站位布局是基于用户的布局设置确定的站位布局；响应于接收到对目标站位布局的选择操作，基于目标站位布局，对目标虚拟角色的站位进行调整。其中，上述过程的实现方式可以参考第一子控件对应的角色选择过程，以及第二子控件对应的布局设置过程中的相关内容，此处不再赘述。
综上所述，本申请实施例提供的虚拟角色控制方法，通过提供站位控制控件，使得计算机设备在接收到基于站位控制控件的触发操作时，能够对自定义选择的虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对目标虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对自定义选择的虚拟角色的站位进行统一调整，从而使得在对虚拟场景中的虚拟角色进行站位调整时，可以基于站位控制控件实现对多个虚拟角色的站位的集体调整，同时又保证了对虚拟场景中的虚拟角色的站位调整的灵活性。
以自走棋场景为例,图16示出了本申请一示例实施例示出的虚拟角色控制方法的流程图,该方法可以由计算机设备执行,该计算机设备可以实现为终端或者服务器,如图16所示,该虚拟角色控制方法包括:
S1601,接收对棋盘区域的触控操作。
该棋盘区域为自走棋场景中,对应于己方控制的虚拟角色的对局区域。
S1602,判断触控操作是否为长按操作,若是,则执行S1603,否则,结束。
S1603,激活站位控制控件。
该站位控制控件中包含第一子控件,第二子控件,以及第三子控件。
S1604,接收到对第一子控件的触发操作,将棋盘区域中的棋子设置为可选状态。
第一子控件是站位控制控件中的其中一个控件。
S1605,接收对棋子的圈选操作。
S1606,接收到对确定控件的触发操作,将圈选获得的棋子的站位基于目标轴线镜像。
S1607,接收到对第二子控件的触发操作。
该第二子控件用以调出预设站位布局。
S1608，弹出布局显示区域，该布局显示区域中包含至少一个站位布局。
S1609,接收到对目标站位布局的选择操作。
该目标站位布局是至少一个站位布局中的一个。
S1610,基于目标站位布局对目标区域中的棋子的站位进行调整。
S1611,接收到对第三子控件的触发操作。
第三子控件是站位控制控件中的其中一个控件。
S1612,将棋盘区域的全部棋子的站位基于目标轴线镜像。
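S1606 与 S1612 中"基于目标轴线镜像"的站位计算可以用如下 Python 草图示意（以 3×7 棋盘的纵向中轴线为目标轴线，棋盘尺寸与数据表示为示例性假设）：

```python
# 示意性草图：将棋子的站位基于纵向中轴线镜像。
# selected 为 None 时对应 S1612 的整体镜像；传入圈选集合时对应 S1606。
COLS = 7  # 假设棋盘每行 7 列，中轴线位于第 3 列

def mirror_positions(positions, selected=None):
    """positions: 棋子 -> (row, col)；返回镜像后的站位映射。
    镜像前后的站位沿中轴线对称：col -> COLS - 1 - col。"""
    targets = positions if selected is None else selected
    return {
        piece: ((r, COLS - 1 - c) if piece in targets else (r, c))
        for piece, (r, c) in positions.items()
    }

board = {"a": (0, 0), "b": (1, 3), "c": (2, 6)}
all_mirrored = mirror_positions(board)                  # 全部棋子镜像
part_mirrored = mirror_positions(board, selected={"a"})  # 仅镜像圈选棋子 a
```

位于中轴线上的棋子（如 b）镜像前后站位不变，与"沿目标轴线对称"的描述一致。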
综上所述，本申请实施例提供的虚拟角色控制方法，通过提供站位控制控件，使得计算机设备在接收到基于站位控制控件的触发操作时，能够对自定义选择的虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对目标虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对自定义选择的虚拟角色的站位进行统一调整，从而使得在对虚拟场景中的虚拟角色进行站位调整时，可以基于站位控制控件实现对多个虚拟角色的站位的集体调整，同时又保证了对虚拟场景中的虚拟角色的站位调整的灵活性。
图17示出了本申请一示例性实施例示出的虚拟角色控制装置的方框图,如图17所示,该虚拟角色控制装置包括:
画面显示模块1710,用于显示虚拟场景的场景画面,所述虚拟场景中包含至少一个虚拟角色;
控制控件显示模块1720,用于在所述场景画面的上层叠加显示站位控制控件;
站位调整模块1730,用于响应于接收到对所述站位控制控件的触发操作,基于目标调整方式,对目标虚拟角色的站位进行调整;所述目标调整方式以及所述目标虚拟角色中的至少一种是基于用户的自定义操作确定的;所述目标虚拟角色属于所述至少一个虚拟角色。
在一种可能的实现方式中,所述站位控制控件包含至少两个子控件;
所述站位调整模块,用于响应于接收到对目标子控件的触发操作,基于所述目标子控件对应的调整方式,对所述目标虚拟角色的站位进行调整,所述目标子控件是所述至少两个子控件中的任意一个。
在一种可能的实现方式中,所述站位调整模块1730,包括:
状态调整子模块,用于响应于接收到对至少两个所述子控件中的第一子控件的触发操作,将所述至少一个虚拟角色的选择状态调整为可选状态;
角色获取子模块，用于响应于接收到角色选择操作，将基于所述角色选择操作选择的虚拟角色获取为所述目标虚拟角色；
位置切换子模块，用于响应于所述角色选择操作结束，将所述目标虚拟角色的站位由第一位置切换至第二位置；位置切换前后的所述目标虚拟角色的站位沿目标轴线对称。
在一种可能的实现方式中,所述角色选择操作包括:基于所述场景画面的连续滑动操作,或者,基于所述场景画面的范围选择操作,或者,对虚拟角色的点选操作中的至少一种。
在一种可能的实现方式中,响应于所述角色选择操作为基于所述场景画面的连续滑动操作,所述目标虚拟角色是所述至少一个虚拟角色中,基于所述连续滑动操作选中的虚拟角色;
响应于所述角色选择操作为基于所述场景画面的范围选择操作,所述目标虚拟角色是所述至少一个虚拟角色中,处于基于所述范围选择操作确定的范围内的虚拟角色;
响应于所述角色选择操作为对虚拟角色的点选操作,所述目标虚拟角色是所述至少一个虚拟角色中,基于所述点选操作选中的虚拟角色。
在一种可能的实现方式中,响应于所述角色选择操作为基于所述场景画面的连续滑动操作,所述位置切换子模块,用于响应于基于所述场景画面的连续滑动操作中断,将所述目标虚拟角色的站位由第一位置切换至第二位置。
在一种可能的实现方式中,所述装置还包括:
确定控件显示模块,用于响应于接收到对所述第一子控件的触发操作,显示确定控件;
所述位置切换子模块,用于响应于接收到基于所述确定控件的触发操作,将所述目标虚拟角色的站位由第一位置切换至第二位置。
在一种可能的实现方式中,所述目标轴线包括所述场景画面的中轴线,或者,相对于所述目标虚拟角色的中轴线,或者,基于用户的绘制操作确定的任意轴线中的任意一种。
在一种可能的实现方式中,所述站位调整模块1730,包括:
区域显示子模块，用于响应于接收到对至少两个所述子控件中的第二子控件的触发操作，显示布局显示区域，所述布局显示区域中显示有至少一种站位布局；所述站位布局是基于用户的布局设置确定的所述站位布局；
站位调整子模块,用于响应于接收到对目标站位布局的选择操作,基于所述目标站位布局,对所述目标虚拟角色的站位进行调整;所述目标站位布局是至少一种所述站位布局中的一种。
在一种可能的实现方式中,所述站位调整子模块,包括:
攻击范围获取单元,用于获取所述目标虚拟角色的攻击范围;
站位调整单元，用于基于所述目标虚拟角色的攻击范围，以及所述目标站位布局，对所述目标虚拟角色的站位进行调整。
在一种可能的实现方式中,所述站位调整子模块,用于响应于所述目标站位布局对应的站位数量大于所述目标虚拟角色的数量,按照目标站位顺序,将所述目标虚拟角色的站位调整到所述目标站位布局中的站位中。
在一种可能的实现方式中,所述装置还包括:
规划界面显示模块,用于显示布局规划界面;
布局生成模块,用于基于在所述布局规划界面中确定的站位,生成所述站位布局;
布局添加模块,用于将所述站位布局添加到所述布局显示区域中。
在一种可能的实现方式中,所述布局显示区域中包括布局添加控件,所述规划界面显示模块,用于响应于接收到对所述布局添加控件的触发操作,显示所述布局规划界面。
在一种可能的实现方式中,所述控制控件显示模块1720,用于响应于接收到激活操作,在所述场景画面的上层叠加显示所述站位控制控件;
其中，所述激活操作包括：对所述场景画面中的目标区域执行的目标操作，以及，对激活控件的触发操作中的至少一种。
在一种可能的实现方式中,所述目标操作包括对所述目标区域的长按操作。
在一种可能的实现方式中,所述站位调整模块1730,用于响应于接收到对至少两个所述子控件中的第三子控件的触发操作,将所述至少一个虚拟角色的站位由第一位置切换至第二位置;位置切换前后的所述至少一个虚拟角色的站位沿目标轴线对称。
综上所述，本申请实施例提供的虚拟角色控制装置，通过提供站位控制控件，使得计算机设备在接收到基于站位控制控件的触发操作时，能够对自定义选择的虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对目标虚拟角色的站位进行统一调整，或者，能够通过自定义的调整方式对自定义选择的虚拟角色的站位进行统一调整，从而使得在对虚拟场景中的虚拟角色进行站位调整时，可以基于站位控制控件实现对多个虚拟角色的站位的集体调整，同时又保证了对虚拟场景中的虚拟角色的站位调整的灵活性。
图18示出了本申请一示例性实施例示出的计算机设备1800的结构框图。该计算机设备可以实现为本申请上述方案中的服务器。该计算机设备1800包括中央处理单元(Central Processing Unit,CPU)1801、包括随机存取存储器(Random Access Memory,RAM)1802和只读存储器(Read-Only Memory,ROM)1803的系统存储器1804,以及连接系统存储器1804和中央处理单元1801的系统总线1805。该计算机设备1800还包括用于存储操作系统1809、应用程序1810和其他程序模块1811的大容量存储设备1806。
该大容量存储设备1806通过连接到系统总线1805的大容量存储控制器(未示出)连接到中央处理单元1801。该大容量存储设备1806及其相关联的计算机可读介质为计算机设备1800提供非易失性存储。也就是说,该大容量存储设备1806可以包括诸如硬盘或者只读光盘(Compact Disc Read-Only Memory,CD-ROM)驱动器之类的计算机可读介质(未示出)。
不失一般性，该计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、可擦除可编程只读存储器（Erasable Programmable Read Only Memory，EPROM）、电子抹除式可复写只读存储器（Electrically-Erasable Programmable Read-Only Memory，EEPROM）、闪存或其他固态存储技术，CD-ROM、数字多功能光盘（Digital Versatile Disc，DVD）或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然，本领域技术人员可知该计算机存储介质不局限于上述几种。上述的系统存储器1804和大容量存储设备1806可以统称为存储器。
根据本公开的各种实施例，该计算机设备1800还可以通过诸如因特网等网络连接到网络上的远程计算机来运行。也即计算机设备1800可以通过连接在该系统总线1805上的网络接口单元1807连接到网络1808，或者，也可以使用网络接口单元1807来连接到其他类型的网络或远程计算机系统（未示出）。
上述存储器还包括至少一条计算机程序,该至少一条计算机程序存储于存储器中,中央处理器1801通过执行该至少一条计算机程序来实现上述各个实施例所示的虚拟角色控制方法中的全部或部分步骤。
图19示出了本申请一个示例性实施例提供的计算机设备1900的结构框图。该计算机设备1900可以实现为上述的终端,比如:智能手机、平板电脑、笔记本电脑或台式电脑。计算机设备1900还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
通常,计算机设备1900包括有:处理器1901和存储器1902。
处理器1901可以包括一个或多个处理核心，比如4核心处理器、8核心处理器等。处理器1901可以采用DSP（Digital Signal Processing，数字信号处理）、FPGA（Field-Programmable Gate Array，现场可编程门阵列）、PLA（Programmable Logic Array，可编程逻辑阵列）中的至少一种硬件形式来实现。处理器1901也可以包括主处理器和协处理器，主处理器是用于对在唤醒状态下的数据进行处理的处理器，也称CPU（Central Processing Unit，中央处理器）；协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中，处理器1901可以集成有GPU（Graphics Processing Unit，图像处理器），GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中，处理器1901还可以包括AI（Artificial Intelligence，人工智能）处理器，该AI处理器用于处理有关机器学习的计算操作。
存储器1902可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1902还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1902中的非暂态的计算机可读存储介质用于存储至少一条计算机程序,该至少一条计算机程序用于被处理器1901所执行以实现本申请中方法实施例提供的虚拟角色控制方法中的全部或部分步骤。
在一些实施例中,计算机设备1900还可选包括有:外围设备接口1903和至少一个外围设备。处理器1901、存储器1902和外围设备接口1903之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1903相连。具体地,外围设备包括:射频电路1904、显示屏1905、摄像头组件1906、音频电路1907和电源1909中的至少一种。
在一些实施例中,计算机设备1900还包括有一个或多个传感器1910。该一个或多个传感器1910包括但不限于:加速度传感器1911、陀螺仪传感器1912、压力传感器1913、光学传感器1915以及接近传感器1916。
本领域技术人员可以理解,图19中示出的结构并不构成对计算机设备1900的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在一示例性实施例中,还提供了一种计算机可读存储介质,用于存储有至少一条计算机程序,该至少一条计算机程序由处理器加载并执行以实现上述虚拟角色控制方法中的全部或部分步骤。例如,该计算机可读存储介质可以是只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)、磁带、软盘和光数据存储设备等。
在一示例性实施例中，还提供了一种计算机程序产品或计算机程序，该计算机程序产品或计算机程序包括至少一条计算机程序，该至少一条计算机程序存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质中读取该至少一条计算机程序，并执行该至少一条计算机程序，使得该计算机设备执行上述图5、图7或图16任一实施例所示方法的全部或部分步骤。

Claims (20)

  1. 一种虚拟角色控制方法,所述方法由计算机设备执行,所述方法包括:
    显示虚拟场景的场景画面,所述虚拟场景中包含至少一个虚拟角色;
    在所述场景画面的上层叠加显示站位控制控件;
    响应于接收到对所述站位控制控件的触发操作,基于目标调整方式,对目标虚拟角色的站位进行调整;所述目标调整方式以及所述目标虚拟角色中的至少一种是基于用户的自定义操作确定的;所述目标虚拟角色属于所述至少一个虚拟角色。
  2. 根据权利要求1所述的方法,所述站位控制控件包含至少两个子控件;
    所述响应于接收到对所述站位控制控件的触发操作,基于目标调整方式,对目标虚拟角色的站位进行调整,包括:
    响应于接收到对目标子控件的触发操作,基于所述目标子控件对应的调整方式,对所述目标虚拟角色的站位进行调整,所述目标子控件是所述至少两个子控件中的任意一个。
  3. 根据权利要求2所述的方法,所述响应于接收到对目标子控件的触发操作,基于所述目标子控件对应的调整方式,对所述目标虚拟角色的站位进行调整,包括:
    响应于接收到对至少两个所述子控件中的第一子控件的触发操作,将所述至少一个虚拟角色的选择状态调整为可选状态;
    响应于接收到角色选择操作,将基于所述角色选择操作选择的虚拟角色获取为所述目标虚拟角色;
    响应于所述角色选择操作结束,将所述目标虚拟角色的站位由第一位置切换至第二位置;位置切换前后的所述目标虚拟角色的站位沿目标轴线对称。
  4. 根据权利要求3所述的方法,所述角色选择操作包括:基于所述场景画面的连续滑动操作,或,基于所述场景画面的范围选择操作,或,对虚拟角色的点选操作中的至少一种。
  5. 根据权利要求4所述的方法,响应于所述角色选择操作为基于所述场景画面的连续滑动操作,所述目标虚拟角色是所述至少一个虚拟角色中,基于所述连续滑动操作选中的虚拟角色;
    响应于所述角色选择操作为基于所述场景画面的范围选择操作,所述目标虚拟角色是所述至少一个虚拟角色中,处于基于所述范围选择操作确定的范围内的虚拟角色;
    响应于所述角色选择操作为对虚拟角色的点选操作,所述目标虚拟角色是所述至少一个虚拟角色中,基于所述点选操作选中的虚拟角色。
  6. 根据权利要求5所述的方法,响应于所述角色选择操作为基于所述场景画面的连续滑动操作,所述响应于角色选择操作结束,将所述目标虚拟角色的站位由第一位置切换至第二位置,包括:
    响应于基于所述场景画面的连续滑动操作中断,将所述目标虚拟角色的站位由第一位置切换至第二位置。
  7. 根据权利要求5所述的方法,所述方法还包括:
    响应于接收到对所述第一子控件的触发操作,显示确定控件;
    所述响应于所述角色选择操作结束，将所述目标虚拟角色的站位由第一位置切换至第二位置；位置切换前后的所述目标虚拟角色的站位沿目标轴线对称，包括：
    响应于接收到基于所述确定控件的触发操作,将所述目标虚拟角色的站位由第一位置切换至第二位置。
  8. 根据权利要求3所述的方法,所述目标轴线包括所述场景画面的中轴线,或者,相对于所述目标虚拟角色的中轴线,或者,基于用户的绘制操作确定的任意轴线中的任意一种。
  9. 根据权利要求2所述的方法,所述响应于接收到对目标子控件的触发操作,基于所述目标子控件对应的调整方式,对所述目标虚拟角色的站位进行调整,包括:
    响应于接收到对至少两个所述子控件中的第二子控件的触发操作,显示布局显示区域, 所述布局显示区域中显示有至少一种站位布局;所述站位布局是基于用户的布局设置确定的所述站位布局;
    响应于接收到对目标站位布局的选择操作,基于所述目标站位布局,对所述目标虚拟角色的站位进行调整;所述目标站位布局是至少一种所述站位布局中的一种。
  10. 根据权利要求9所述的方法,所述响应于接收到对目标站位布局的选择操作,基于所述目标站位布局,对所述目标虚拟角色的站位进行调整,包括:
    获取所述目标虚拟角色的攻击范围;
    基于所述目标虚拟角色的攻击范围,以及所述目标站位布局,对所述目标虚拟角色的站位进行调整。
  11. 根据权利要求9所述的方法,所述响应于接收到对目标站位布局的选择操作,基于所述目标站位布局,对所述目标虚拟角色的站位进行调整,包括:
    响应于所述目标站位布局对应的站位数量大于所述目标虚拟角色的数量,按照目标站位顺序,将所述目标虚拟角色的站位调整到所述目标站位布局中的站位中。
  12. 根据权利要求9所述的方法,所述方法还包括:
    显示布局规划界面;
    基于在所述布局规划界面中确定的站位,生成所述站位布局;
    将所述站位布局添加到所述布局显示区域中。
  13. 根据权利要求12所述的方法,所述布局显示区域中包括布局添加控件,所述显示布局规划界面,包括:
    响应于接收到对所述布局添加控件的触发操作,显示所述布局规划界面。
  14. 根据权利要求1至13任一所述的方法,所述在所述场景画面的上层叠加显示站位控制控件,包括:
    响应于接收到激活操作,在所述场景画面的上层叠加显示所述站位控制控件;
    其中,所述激活操作包括:对所述场景画面中的目标区域执行的目标操作,以及,对激活控件的触发操作中的至少一种。
  15. 根据权利要求14所述的方法,所述目标操作包括对所述目标区域的长按操作。
  16. 根据权利要求2所述的方法,所述响应于接收到对目标子控件的触发操作,基于所述目标子控件对应的调整方式,对所述目标虚拟角色的站位进行调整,包括:
    响应于接收到对至少两个所述子控件中的第三子控件的触发操作,将所述至少一个虚拟角色的站位由第一位置切换至第二位置;位置切换前后的所述至少一个虚拟角色的站位沿目标轴线对称。
  17. 一种虚拟角色控制装置,所述装置包括:
    画面显示模块,用于显示虚拟场景的场景画面,所述虚拟场景中包含至少一个虚拟角色;
    控制控件显示模块,用于在所述场景画面的上层叠加显示站位控制控件;
    站位调整模块,用于响应于接收到对所述站位控制控件的触发操作,基于目标调整方式,对目标虚拟角色的站位进行调整;所述目标调整方式以及所述目标虚拟角色中的至少一种是基于用户的自定义操作确定的;所述目标虚拟角色属于所述至少一个虚拟角色。
  18. 一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器存储有至少一条计算机程序,所述至少一条计算机程序由所述处理器加载并执行以实现如权利要求1至16任一所述的虚拟角色控制方法。
  19. 一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至16任一所述的虚拟角色控制方法。
  20. 一种计算机程序产品,所述计算机程序产品包括至少一条计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至16任一所述的虚拟角色控制方法。
PCT/CN2022/125747 2021-11-18 2022-10-17 虚拟角色控制方法、装置、设备、存储介质及程序产品 WO2023088012A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020247009540A KR20240046595A (ko) 2021-11-18 2022-10-17 아바타 제어 방법 및 장치, 그리고 디바이스, 저장 매체 및 프로그램 제품
US18/214,306 US20230330539A1 (en) 2021-11-18 2023-06-26 Virtual character control method and apparatus, device, storage medium, and program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111372101.8 2021-11-18
CN202111372101.8A CN114082189A (zh) 2021-11-18 2021-11-18 虚拟角色控制方法、装置、设备、存储介质及产品

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/214,306 Continuation US20230330539A1 (en) 2021-11-18 2023-06-26 Virtual character control method and apparatus, device, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023088012A1 true WO2023088012A1 (zh) 2023-05-25

Family

ID=80302215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125747 WO2023088012A1 (zh) 2021-11-18 2022-10-17 虚拟角色控制方法、装置、设备、存储介质及程序产品

Country Status (4)

Country Link
US (1) US20230330539A1 (zh)
KR (1) KR20240046595A (zh)
CN (1) CN114082189A (zh)
WO (1) WO2023088012A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114082189A (zh) * 2021-11-18 2022-02-25 腾讯科技(深圳)有限公司 虚拟角色控制方法、装置、设备、存储介质及产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462307A (zh) * 2020-03-31 2020-07-28 腾讯科技(深圳)有限公司 虚拟对象的虚拟形象展示方法、装置、设备及存储介质
US20210038982A1 (en) * 2018-01-29 2021-02-11 Intellisports Inc. System, computing device, and method for mapping an activity of a player in a non-virtual environment into a virtual environment
CN112426719A (zh) * 2020-11-27 2021-03-02 网易(杭州)网络有限公司 一种足球游戏控制方法、装置、设备和存储介质
CN112891944A (zh) * 2021-03-26 2021-06-04 腾讯科技(深圳)有限公司 基于虚拟场景的互动方法、装置、计算机设备及存储介质
CN113101660A (zh) * 2021-04-16 2021-07-13 网易(杭州)网络有限公司 一种游戏的显示控制方法及装置
CN114082189A (zh) * 2021-11-18 2022-02-25 腾讯科技(深圳)有限公司 虚拟角色控制方法、装置、设备、存储介质及产品


Also Published As

Publication number Publication date
CN114082189A (zh) 2022-02-25
US20230330539A1 (en) 2023-10-19
KR20240046595A (ko) 2024-04-09

Similar Documents

Publication Publication Date Title
WO2022151946A1 (zh) 虚拟角色的控制方法、装置、电子设备、计算机可读存储介质及计算机程序产品
CN112891944B (zh) 基于虚拟场景的互动方法、装置、计算机设备及存储介质
JP2022526456A (ja) 仮想オブジェクト制御方法並びにその、装置、コンピュータ装置及びプログラム
CN112755527B (zh) 虚拟角色的显示方法、装置、设备及存储介质
CN110339564B (zh) 虚拟环境中的虚拟对象显示方法、装置、终端及存储介质
JP7309917B2 (ja) 情報表示方法、装置、機器及びプログラム
JP7451563B2 (ja) 仮想キャラクタの制御方法並びにそのコンピュータ機器、コンピュータプログラム、及び仮想キャラクタの制御装置
CN113893560B (zh) 虚拟场景中的信息处理方法、装置、设备及存储介质
CN111672111A (zh) 界面显示方法、装置、设备及存储介质
US20220305384A1 (en) Data processing method in virtual scene, device, storage medium, and program product
CN112691366B (zh) 虚拟道具的显示方法、装置、设备及介质
JP2023552212A (ja) 対局決済インタフェースの表示方法、装置、機器及びコンピュータプログラム
US20220355202A1 (en) Method and apparatus for selecting ability of virtual object, device, medium, and program product
WO2023088012A1 (zh) 虚拟角色控制方法、装置、设备、存储介质及程序产品
JP2024500929A (ja) 仮想シーンに表情を表示する方法と装置及びコンピュータ機器とプログラム
CN110801629B (zh) 虚拟对象生命值提示图形的显示方法、装置、终端及介质
JP2024519880A (ja) 仮想環境画面の表示方法、装置、端末及びコンピュータプログラム
JP7314311B2 (ja) 仮想環境の画面表示方法、装置、機器及びコンピュータプログラム
WO2023138175A1 (zh) 卡牌施放方法、装置、设备、存储介质及程序产品
CN113018861B (zh) 虚拟角色显示方法、装置、计算机设备和存储介质
CN114225372A (zh) 虚拟对象的控制方法、装置、终端、存储介质及程序产品
KR20220161252A (ko) 가상 환경에서 특수 효과를 생성하기 위한 방법 및 장치, 디바이스, 및 저장 매체
CN111589114A (zh) 虚拟对象的选择方法、装置、终端及存储介质
KR102648210B1 (ko) 가상 객체 제어 방법 및 장치, 단말, 및 저장 매체
WO2023231557A1 (zh) 虚拟对象的互动方法、装置、设备、存储介质及程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894549

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20247009540

Country of ref document: KR

Kind code of ref document: A