WO2023065964A1 - Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium - Google Patents

Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium

Info

Publication number
WO2023065964A1
WO2023065964A1 PCT/CN2022/120775
Authority
WO
WIPO (PCT)
Prior art keywords: button, action, attack, connection, target
Prior art date
Application number
PCT/CN2022/120775
Other languages
English (en)
French (fr)
Inventor
崔维健
刘博艺
仇蒙
田聪
何晶晶
邹聃成
邓昱
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority to JP2023571193A priority Critical patent/JP2024519364A/ja
Publication of WO2023065964A1 publication Critical patent/WO2023065964A1/zh
Priority to US18/214,903 priority patent/US20230330536A1/en

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 — Input arrangements for video game devices
    • A63F 13/22 — Setup operations, e.g. calibration, key configuration or button assignment
    • A63F 13/21 — Input arrangements characterised by their sensors, purposes or types
    • A63F 13/214 — Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145 — Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/40 — Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 — Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/422 — Mapping of input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F 13/426 — Mapping of input signals into game commands involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/44 — Processing input control signals involving timing of operations, e.g. performing an action within a time slot
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/52 — Controlling the output signals involving aspects of the displayed game scene
    • A63F 13/53 — Controlling the output signals involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 — Additional visual information for prompting the player, e.g. by displaying a game menu
    • A63F 13/537 — Additional visual information using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/55 — Controlling game characters or game objects based on the game progress
    • A63F 13/56 — Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/80 — Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 — Strategy games; Role-playing games
    • A63F 13/837 — Shooting of targets
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • A63F 2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 — Features characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1018 — Calibration; Key and button assignment
    • A63F 2300/30 — Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 — Details of the user interface
    • A63F 2300/80 — Features specially adapted for executing a specific type of game
    • A63F 2300/8076 — Shooting
    • A63F 2300/8082 — Virtual reality
    • G06T 2200/00 — Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 — Indexing scheme involving graphical user interfaces [GUIs]
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present application relates to human-computer interaction technology, and in particular to an object control method and apparatus for a virtual scene, an electronic device, a computer program product, and a computer-readable storage medium.
  • Display technology based on graphics-processing hardware expands the channels for perceiving the environment and obtaining information, especially multimedia technology for virtual scenes.
  • Virtual objects controlled by users or by artificial intelligence can be realized according to actual application requirements, with various typical application scenarios. For example, in virtual scenes such as games, the process of fighting between virtual objects can be simulated.
  • the human-computer interaction between the virtual scene and the user is realized through the human-computer interaction interface.
  • In a battle scene, the virtual object sometimes needs to complete shooting and other actions at the same time. To do so, the user needs to use multiple fingers to click frequently, which demands high operational difficulty and precision and results in low efficiency of human-computer interaction.
  • Embodiments of the present application provide an object control method, device, electronic device, computer program product, and computer-readable storage medium for a virtual scene, which can improve the control efficiency of the virtual scene.
  • An embodiment of the present application provides an object control method of a virtual scene, the method is executed by an electronic device, and the method includes:
  • displaying a virtual scene, the virtual scene including a virtual object holding an attack prop; displaying an attack button and at least one action button, and displaying at least one connection button, wherein each of the connection buttons is used to connect the attack button and one of the action buttons;
  • in response to a trigger operation on a target connection button, controlling the virtual object to execute the action associated with the target action button and, synchronously, to perform an attack operation using the attack prop; wherein the target action button is the action button, among the at least one action button, that is connected to the target connection button, and the target connection button is any selected connection button among the at least one connection button.
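The claimed control flow, in which a single trigger on a connection button executes both the linked action and the attack synchronously, can be illustrated with a minimal sketch. All class and method names below are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the claimed control flow: triggering a connection
# button dispatches both the action bound to its linked action button
# and the attack operation in the same dispatch.

class VirtualObject:
    def __init__(self):
        self.performed = []  # records operations executed by the object

    def perform_action(self, action):
        self.performed.append(action)

    def attack(self):
        self.performed.append("attack")


class ConnectionButton:
    """Connects the attack button with exactly one action button."""

    def __init__(self, action):
        self.action = action  # action associated with the linked action button

    def trigger(self, virtual_object):
        # A single trigger executes the action and the attack synchronously.
        virtual_object.perform_action(self.action)
        virtual_object.attack()


obj = VirtualObject()
crouch_and_fire = ConnectionButton("crouch")
crouch_and_fire.trigger(obj)
print(obj.performed)  # ['crouch', 'attack']
```

One trigger produces both operations, which is the single-button, multi-function behavior the claim describes.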
  • An embodiment of the present application provides an object control device for a virtual scene, including:
  • a display module configured to display a virtual scene; wherein, the virtual scene includes a virtual object holding an attack prop;
  • the display module is further configured to display an attack button and at least one action button, and display at least one connection button; wherein, each of the connection buttons is used to connect the attack button and one of the action buttons;
  • a control module configured to, in response to a trigger operation on a target connection button, control the virtual object to execute the action associated with the target action button and, synchronously, to perform an attack operation using the attack prop; wherein the target action button is the action button, among the at least one action button, that is connected to the target connection button, and the target connection button is any selected connection button among the at least one connection button.
  • An embodiment of the present application provides an electronic device, including:
  • a memory for storing computer-executable instructions; and
  • a processor configured to implement, when executing the computer-executable instructions stored in the memory, the object control method of the virtual scene provided in the embodiments of the present application.
  • the embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for implementing the object control method of the virtual scene provided in the embodiment of the present application when executed by a processor.
  • An embodiment of the present application provides a computer program product including a computer program or computer-executable instructions; when the computer program or computer-executable instructions are executed by a processor, the object control method for a virtual scene provided by the embodiments of the present application is implemented.
  • The embodiments of the present application display an attack button and an action button, display a connection button connecting the attack button with an action button, and, in response to a trigger operation on the target connection button, control the virtual object to perform the action associated with the target action button while synchronously performing the attack operation with the attack prop. Through the layout of connection buttons, action and attack operations can be executed at the same time, which is equivalent to realizing multiple functions with a single button, saving operation time and thereby improving control efficiency in the virtual scene.
  • FIG. 1 is a schematic diagram of a display interface of an object control method for a virtual scene in the related art;
  • FIG. 2A is a schematic diagram of an application mode of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIG. 2B is a schematic diagram of an application mode of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of an electronic device applying the object control method of the virtual scene provided by an embodiment of the present application;
  • FIGS. 4A-4C are schematic flowcharts of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIGS. 5A-5E are schematic diagrams of the display interface of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIGS. 6A-6C are schematic diagrams of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIGS. 7A-7C are schematic diagrams of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIG. 8 is a logical schematic diagram of the object control method of the virtual scene provided by an embodiment of the present application;
  • FIGS. 9A-9E are schematic diagrams of the display interface of the object control method of the virtual scene provided by an embodiment of the present application.
  • The terms "first", "second", and "third" are only used to distinguish similar objects and do not denote a specific ordering of objects. It is understood that, where permitted, the specific order or sequence of "first", "second", and "third" may be interchanged so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
  • Virtual scene: a scene, different from the real world, that is output by the device. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example a two-dimensional image output through a display screen, or a three-dimensional image output through three-dimensional display technologies such as stereoscopic projection, virtual reality, and augmented reality. In addition, various possible hardware can be used to form various perceptions simulating the real world, such as auditory, tactile, olfactory, and motion perception.
  • In response to: represents the condition or state on which an executed operation depends. When the condition or state is satisfied, the one or more operations to be executed may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
  • Client: an application running in the terminal to provide various services, such as a game client.
  • Virtual objects: objects that interact in the virtual scene, controlled by users or by robot programs (for example, artificial-intelligence-based robot programs); they can stand still, move, and perform various behaviors in the virtual scene, such as the various characters in a game.
  • Button: a control for human-computer interaction in the human-computer interaction interface of the virtual scene. It has a graphical identifier and is bound to specific processing logic; when the user triggers the button, the corresponding processing logic is executed.
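The button definition above, a control bound to specific processing logic, can be sketched as a mapping from a button to a handler; the names here are hypothetical, not from the patent:

```python
# Hypothetical sketch: a button bound to processing logic that runs
# when the user triggers it.

class Button:
    def __init__(self, label, handler):
        self.label = label        # the button's graphical identifier
        self._handler = handler   # the processing logic bound to the button

    def on_trigger(self):
        # Triggering the button executes its bound processing logic.
        return self._handler()


fired = []
attack_button = Button("attack", lambda: fired.append("attack"))
attack_button.on_trigger()
print(fired)  # ['attack']
```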
  • Fig. 1 is a schematic diagram of the display interface of an object control method for a virtual scene in the related art.
  • In battle, the virtual object needs to complete shooting and actions at the same time, for example attacking the enemy while staying concealed. In the related art, to complete shooting and actions simultaneously (actions include leaning left and right, squatting, and lying prone), users need to use multiple fingers to click frequently, which demands high operational difficulty and precision.
  • The human-computer interaction interface 301 in Fig. 1 displays a direction button 302, an attack button 303, and an action button 304; usually the direction button 302 is controlled by the left thumb, and the attack button 303 or the action button 304 is controlled by the right thumb.
  • The human-computer interaction interface is usually controlled by the thumbs of both hands; that is, the default operation mode is a two-finger mode in which one thumb controls direction and the other controls the virtual object to perform specific operations. It is therefore difficult for users to perform shooting and action operations at the same time in the default two-finger mode.
  • Only by adjusting the button layout can shooting and action operations be performed at the same time, and this requires multi-finger operation (at least three fingers); even then, multi-finger operation entails high learning costs and proficiency requirements.
  • In addition, adjusting the layout increases the proportion of the screen occupied by buttons, which is likely to interfere with the user's field of vision and is difficult for most users to operate.
  • Embodiments of the present application provide a method, device, electronic device, computer-readable storage medium, and computer program product for object control in a virtual scene.
  • Actions and attack operations can be executed simultaneously after the connection button is triggered, which is equivalent to a single button realizing multiple functions at the same time, thereby improving user operation efficiency.
  • the exemplary application of the electronic device provided by the embodiment of the present application is described below.
  • The electronic device provided by the embodiment of the present application can be implemented as various types of user terminals, such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device).
  • The virtual scene can be output entirely by the terminal, or output collaboratively by the terminal and the server.
  • The virtual scene can be an environment in which game characters interact, for example one in which game characters fight; by controlling the actions of virtual objects, both sides can interact in the virtual scene, letting the user relieve the stress of daily life during the game.
  • FIG. 2A is a schematic diagram of an application mode of the object control method of the virtual scene provided by the embodiment of the present application, suitable for application modes in which the calculation of the relevant data of the virtual scene can rely entirely on the computing power of the terminal 400, such as a stand-alone/offline game, where the output of the virtual scene is completed through a terminal 400 such as a smartphone, tablet computer, or virtual reality/augmented reality device.
  • Taking the formation of visual perception of the virtual scene as an example, the terminal 400 calculates the data required for display through graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, on graphics output hardware, video frames capable of forming visual perception of the virtual scene, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames realizing a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perception effect, the device can also use different hardware to form one or more of auditory perception, tactile perception, motion perception, and taste perception.
  • the terminal 400 runs the client (such as a stand-alone game application), and outputs a virtual scene including role-playing during the running of the client.
  • The virtual scene is an environment in which game characters interact, for example the plains, streets, or valleys where game characters fight; the virtual scene includes a virtual object 110, a connection button 120, an action button 130, and an attack button 140.
  • The virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, voice-activated switch, keyboard, mouse, joystick, etc.); for example, when the real user moves the joystick to the left, the virtual object moves to the left in the virtual scene.
  • In response to a trigger operation on the action button 130, the virtual object is controlled to perform an action in the virtual scene; in response to a trigger operation on the attack button 140, the virtual object is controlled to perform an attack operation in the virtual scene; and in response to a trigger operation on the connection button 120, the virtual object is controlled to perform the action and the attack operation synchronously.
  • FIG. 2B is a schematic diagram of an application mode of the object control method of the virtual scene provided by the embodiment of the present application, applied to the terminal 400 and the server 200, and generally suitable for application modes that rely on the computing power of the server 200 to complete the calculation of the virtual scene and output the virtual scene on the terminal 400.
  • Taking the formation of visual perception of the virtual scene as an example, the server 200 calculates the display data related to the virtual scene and sends it to the terminal 400; the terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the calculated display data, and relies on graphics output hardware to output the virtual scene to form visual perception. For example, a two-dimensional video frame can be presented on the display screen of a smartphone, or a video frame realizing a three-dimensional display effect can be projected onto the lenses of augmented reality/virtual reality glasses. To perceive the virtual scene in other forms, the corresponding hardware output of the terminal can be used; for example, microphone output forms auditory perception, vibrator output forms tactile perception, and so on.
  • the terminal 400 runs a client (such as a game application in the online version), and the virtual scene includes a virtual object 110, a connection button 120, an action button 130, and an attack button 140.
  • In response to a trigger operation on the connection button 120, the client sends, through the network 300 to the server 200, the action configuration information for the virtual object 110 to perform the action and the operation configuration information for the synchronous attack operation using the attack prop. The server 200 calculates display data based on the operation configuration information and the action configuration information and sends the display data to the client. The client relies on graphics computing hardware to complete the loading, parsing, and rendering of the calculated display data, and relies on graphics output hardware to output the virtual scene to form visual perception, i.e., to display a screen in which the virtual object 110 executes the action associated with the target action button and synchronously performs the attack operation with the attack prop.
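The networked mode described above, in which the client sends action and operation configuration information to the server and the server returns computed display data for rendering, can be sketched as a simple request/response exchange; the message fields below are assumptions for illustration, not from the patent:

```python
# Hypothetical sketch of the networked application mode: the client
# sends configuration info for the triggered connection button; the
# server computes display data and returns it for rendering.

def server_compute_display_data(message):
    # Server side: derive display data from the action and attack
    # configuration supplied by the client (simplified).
    return {
        "frames": [message["action_config"], "attack"],
        "synchronous": True,
    }


def client_on_connection_button(action):
    # Client side: build the configuration message for the trigger
    # and render whatever display data the server returns.
    message = {"action_config": action, "operation_config": "use_attack_prop"}
    display_data = server_compute_display_data(message)
    return display_data


result = client_on_connection_button("squat")
print(result["frames"])  # ['squat', 'attack']
```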
  • the terminal 400 can implement the object control method of the virtual scene provided by the embodiment of the present application by running a computer program.
  • The computer program can be a native program or a software module in the operating system; a native application (APP), i.e., a program that must be installed in the operating system to run, such as a game APP (the client mentioned above); a mini program, i.e., a program that only needs to be downloaded into a browser environment to run; or a mini game program that can be embedded into any APP.
  • the above-mentioned computer program can be any form of application program, module or plug-in.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide-area or local-area network to realize the calculation, storage, processing, and sharing of data. Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud-computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud-computing technology will become an important support: the background services of a technical network system require a large amount of computing and storage resources.
  • the server 200 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, or a smart watch, but is not limited thereto.
  • the terminal 400 and the server 200 may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the terminal 400 shown in FIG. 3 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together through a bus system 440.
  • the bus system 440 is used to realize connection and communication among these components.
  • the bus system 440 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled as bus system 440 in FIG. 3 .
  • the processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers and/or one or more visual displays.
  • the user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touch screen display, a camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 450 may include one or more storage devices located physically remote from processor 410 .
  • Memory 450 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
  • the non-volatile memory can be a read-only memory (ROM, Read Only Memory), and the volatile memory can be a random access memory (RAM, Random Access Memory).
  • the memory 450 described in the embodiment of the present application is intended to include any suitable type of memory.
  • memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • Operating system 451 including system programs for processing various basic system services and performing hardware-related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • the network communication module 452 is used to reach other electronic devices via one or more (wired or wireless) network interfaces 420.
  • Exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
  • Presentation module 453, for enabling the presentation of information via one or more output devices 431 (e.g., a display screen, speakers) associated with the user interface 430 (e.g., a user interface for operating peripherals and displaying content and information);
  • the input processing module 454 is configured to detect one or more user inputs or interactions from one or more of the input devices 432 and translate the detected inputs or interactions.
  • the object control device for the virtual scene provided by the embodiment of the present application can be realized by software.
  • FIG. 3 shows the object control device 455 for the virtual scene stored in the memory 450, which can be a program and a plug-in, etc.
  • the software form includes the following software modules: a display module 4551 and a control module 4552. These modules are logical, so they can be combined arbitrarily or further divided according to the functions to be realized. The function of each module will be explained below.
  • the terminal or the server can implement the object control method of the virtual scene provided by the embodiment of the present application by running a computer program.
  • a computer program can be a native program or a software module in the operating system; it can be a native application (APP, Application), that is, a program that needs to be installed in the operating system to run, such as a game APP or an instant messaging APP; it can also be a mini program, that is, a program that only needs to be downloaded into the browser environment to run; it can also be a mini program that can be embedded in any APP.
  • the above-mentioned computer program can be any form of application program, module or plug-in.
  • the object control method of the virtual scene provided by the embodiment of the present application can be executed independently by the terminal 400 in FIG. 2A, or can be executed cooperatively by the terminal 400 and the server 200 in FIG. 2B. For example, in step 103, in response to the trigger operation on the target connection button, controlling the virtual object to execute the action associated with the target action button and controlling the virtual object to use the attack prop to perform the attack operation synchronously can be performed by the terminal 400 and the server 200 in cooperation: the server 200 determines the execution result of the virtual object executing the action associated with the target action button and using the attack prop to synchronously perform the attack operation, and then returns the execution result to the terminal 400 for display.
  • FIG. 4A is a schematic flowchart of a method for controlling an object in a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 4A .
  • the method shown in FIG. 4A can be executed by various forms of computer programs run by the terminal 400, and is not limited to the above-mentioned client; for example, it can also be executed by the above-mentioned operating system 451, software modules, and scripts, so the client should not be regarded as a limitation on the embodiments of the present application.
  • in the following, a virtual scene used for a game is taken as an example, but this should not be regarded as a limitation on the virtual scene.
  • step 101 a virtual scene is displayed.
  • the terminal runs the client, and the virtual scene including role-playing is output during the running of the client.
  • the virtual scene is an environment for game characters to interact in, such as plains, streets, valleys, etc. for game characters to fight in; the virtual scene includes a virtual object holding an attack prop, and the virtual object can be a game character controlled by a user (or player), that is, the virtual object is controlled by a real user and will respond to the real user's operations on a controller (including touch controls).
  • attack props are virtual props that can be used and held by virtual objects and have an attack function, and the attack props include at least one of the following: shooting props, throwing props, and fighting props.
  • step 102 an attack button and at least one action button are displayed, and at least one connection button is displayed.
  • each connection button is used to connect an attack button and an action button. For example, the attack button A, action button B1, action button C1, and action button D1 are displayed; the connection button B2 is displayed between the action button B1 and the attack button A, the connection button C2 is displayed between the action button C1 and the attack button A, and the connection button D2 is displayed between the action button D1 and the attack button A.
  • the number of connection buttons is the same as the number of action buttons, and each action button corresponds to a connection button.
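The one-connection-button-per-action-button layout described above can be sketched as a small data model. This is an illustrative sketch only; the names `ConnectionButton` and `bind_buttons` are hypothetical and not from the source.

```python
# Sketch of the button layout: each connection button pairs exactly one
# attack button with exactly one action button, so the number of connection
# buttons equals the number of action buttons.
from dataclasses import dataclass


@dataclass(frozen=True)
class ConnectionButton:
    attack_button: str   # e.g. "A"
    action_button: str   # e.g. "B1"


def bind_buttons(attack_button: str, action_buttons: list[str]) -> list[ConnectionButton]:
    """Create one connection button per action button."""
    return [ConnectionButton(attack_button, action) for action in action_buttons]


layout = bind_buttons("A", ["B1", "C1", "D1"])
# One connection button per action button, as the text requires.
```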
  • step 103 in response to the trigger operation on the target connection button, the virtual object is controlled to execute the action associated with the target action button, and the virtual object is controlled to use the attack props to perform attack operations synchronously.
  • the target action button is an action button connected to the target connection button among at least one action button
  • the target connection button is any selected connection button among the at least one connection button.
  • for example, the attack button A, action button B1, action button C1, and action button D1 are displayed; the connection button B2 is displayed between the action button B1 and the attack button A, the connection button C2 is displayed between the action button C1 and the attack button A, and the connection button D2 is displayed between the action button D1 and the attack button A. Taking the connection button B2 as an example, in response to the trigger operation on the connection button B2, the action button B1 connected to the connection button B2 is identified as the target action button, thereby controlling the virtual object to execute the action associated with the action button B1 and controlling the virtual object to use the attack prop to perform the attack operation synchronously.
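The trigger handling just described can be sketched as a small dispatch function. The mapping and function names below are illustrative assumptions, not part of the embodiment.

```python
# Sketch of step 103: triggering a connection button resolves the action
# button it connects, then the action and the attack are executed
# synchronously as one compound operation.
CONNECTIONS = {"B2": "B1", "C2": "C1", "D2": "D1"}  # connection -> action button


def on_connection_trigger(connection_button: str, attack_button: str = "A") -> tuple[str, str]:
    """Return the (target action button, attack button) pair to execute synchronously."""
    target_action_button = CONNECTIONS[connection_button]
    return (target_action_button, attack_button)


# Triggering B2 yields the compound "perform B1's action + attack with A".
```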
  • FIG. 9A is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application.
  • a connection button 902A is displayed in the human-computer interaction interface 901A, and the connection button 902A is used to connect the attack button 903A and the action button 904A; the connection button 902A is set between the attack button 903A and the action button 904A. The areas where the connection button 902A, the attack button 903A, and the action button 904A are located all belong to the operation area, and the connection button 902A, the attack button 903A, and the action button 904A are all embedded in the operation area. As shown in FIG. 9A, the buttons can be displayed in an operation area embedded in the virtual scene. See FIG. 9C, which is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application: a connection button 902C is displayed on the human-computer interaction interface 901C.
  • connection button 902C is used to connect the attack button 903C and the action button 904C.
  • the connection button 902C is set between the attack button 903C and the action button 904C.
  • the areas where the connection button 902C, the attack button 903C, and the action button 904C are located belong to the operation area, and the operation area is independent of the virtual scene. As shown in FIG. 9C, the buttons can be displayed in an operation area independent of the virtual scene.
  • FIG. 4B is a schematic flowchart of a method for controlling an object in a virtual scene provided by an embodiment of the present application.
  • the display of an attack button and at least one action button in step 102 can be realized through step 1021 to step 1022 in FIG. 4B.
  • step 1021 an attack button associated with the attack prop currently held by the virtual object is displayed.
  • the virtual object uses the attack props to perform an attack operation.
  • for example, when the attack prop currently held by the virtual object is a pistol, the attack button of the pistol is displayed; when the attack prop currently held is a crossbow, the attack button of the crossbow is displayed; and when the attack prop currently held is a grenade, the attack button of the grenade is displayed.
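The prop-to-button association of step 1021 amounts to a simple lookup. The mapping below is a hedged sketch; the button identifiers are placeholders.

```python
# Sketch of step 1021: the displayed attack button follows the attack prop
# the virtual object currently holds.
ATTACK_BUTTON_BY_PROP = {
    "pistol": "pistol_attack_button",
    "crossbow": "crossbow_attack_button",
    "grenade": "grenade_attack_button",
}


def attack_button_for(held_prop: str) -> str:
    """Return the attack button associated with the currently held prop."""
    return ATTACK_BUTTON_BY_PROP[held_prop]
```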
  • step 1022 at least one action button is displayed around the attack button.
  • FIG. 5A is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application.
  • three connection buttons 502A are displayed on the human-computer interaction interface 501A, one between the attack button 503A and each of the three action buttons 504A; that is, three connection buttons 502A and three action buttons 504A are displayed around the attack button 503A.
  • Each action button is associated with an action, for example, an action button 504A is associated with the action of squatting, and the other two action buttons are associated with the action of lying down and the action of jumping respectively.
  • the convenience of operation can be improved by displaying at least one action button around the attack button.
  • the type of at least one action button includes at least one of the following: an action button associated with a high-frequency action, where the high-frequency action is a candidate action whose operation frequency is higher than an operation frequency threshold among multiple candidate actions; and an action button associated with a target action, where the target action is adapted to the state of the virtual object in the virtual scene. That the target action is adapted to the state of the virtual object in the virtual scene indicates that the target action is suitable for the virtual object to execute in the current virtual scene. For example, if the state of the virtual object in the virtual scene is the state of being attacked, then the action suitable for execution in the current virtual scene is a jumping action, and the jumping action is the target action adapted to the state of the virtual object in the virtual scene.
  • Each state of the virtual object in the virtual scene is configured with at least one adapted target action.
  • the operation frequency threshold is obtained based on previous data statistics.
  • the server may count the actual operation frequency of each candidate action in the interaction data of the most recent week, and then average the actual operation frequencies of the multiple candidate actions; the averaged result is used as the operation frequency threshold. The interaction data here can be all the interaction data of the last week.
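The threshold computation above can be sketched directly: average the per-action frequencies, then keep the actions above that average. The function names and sample counts are illustrative.

```python
# Sketch of the server-side statistics: the operation frequency threshold is
# the average of the candidate actions' actual operation frequencies over
# the most recent week, and high-frequency actions exceed that threshold.
def operation_frequency_threshold(frequencies: dict[str, float]) -> float:
    """Average operation frequency across all candidate actions."""
    return sum(frequencies.values()) / len(frequencies)


def high_frequency_actions(frequencies: dict[str, float]) -> list[str]:
    """Candidate actions whose frequency exceeds the averaged threshold."""
    threshold = operation_frequency_threshold(frequencies)
    return [action for action, freq in frequencies.items() if freq > threshold]


# Hypothetical last-week counts: threshold is (120 + 30 + 90) / 3 = 80,
# so "squat" and "jump" qualify as high-frequency actions.
freqs = {"squat": 120, "prone": 30, "jump": 90}
```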
  • Figure 5E is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application
  • the human-computer interaction interface 501E displays a squat action button 504-1E, a squat action button 504-2E, and a jump action button 504-3E.
  • in response to the user's trigger operation on the squat action button 504-1E, the virtual object is controlled to perform the squat action alone; in response to the user's trigger operation on the prone action button 504-2E, the virtual object is controlled to perform the prone action alone; and in response to the user's trigger operation on the jump action button 504-3E, the virtual object is controlled to perform the jump action alone.
  • squat action button 504-1E, prone action button 504-2E, and jump action button 504-3E in FIG. 5E may be set by default.
  • the action button may also be personalized.
  • the action button is an action button associated with a high-frequency action
  • the high-frequency action is a candidate action whose operating frequency is higher than the operating frequency threshold of the virtual object A among multiple candidate actions.
  • a high-frequency action is a candidate action whose operating frequency is higher than the operating frequency threshold of the virtual object B of the same camp among multiple candidate actions.
  • for example, based on the operation data of virtual object A, it is determined that the number of times virtual object A performs the jumping action is higher than the operating frequency threshold of virtual object A, where the operating frequency threshold of virtual object A is the average number of times virtual object A performs each action; then the jumping action is a high-frequency action among the multiple candidate actions. Similarly, based on the operation data of virtual object B, it is determined that the number of times virtual object B of the same camp performs the jumping action is higher than the operating frequency threshold of virtual object B, where the operating frequency threshold of virtual object B is the average number of times virtual object B performs each action; then the jumping action is a high-frequency action among the multiple candidate actions.
  • the action button can also be associated with the target action.
  • the target action is adapted to the state of the virtual object in the virtual scene. For example, if there are many enemies in the virtual scene, virtual object A needs to hide itself, so the action adapted to the state of virtual object A in the virtual scene is the lying-down action, and the lying-down action is the target action at this time.
  • at least one connection button is displayed in step 102, which can be achieved through the following technical solution: for each action button in the at least one action button, a connection button for connecting the action button and the attack button is displayed, where the connection button has at least one of the following display attributes: the connection button includes a disabled icon when in a disabled state, and includes an available icon when in an available state.
  • the connection buttons in different states are displayed through different display attributes, thereby effectively prompting the user whether the connection button can be triggered or not, so as to improve the user's operation efficiency and avoid outputting invalid operations.
  • when the connection button is set to off, a disabled icon is displayed on the upper layer of the layer where the connection button is located; when the connection button is set to on, an available icon is displayed on the upper layer of the layer where the connection button is located. For example, the available icon can be the icon of the connection button itself. See FIG. 5D, which is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application: when the connection button is set to off, the disabled icon 505D is displayed on the connection button 503D; when the connection button is set to on, the disabled icon 505D is hidden on the connection button 503D and only the icon of the connection button 503D itself is displayed. When the disabled icon is displayed, it can be directly superimposed on the icon of the connection button 503D itself.
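The layering rule above can be sketched as a function that returns which icons to draw, bottom layer first. The icon names are placeholders chosen for illustration.

```python
# Sketch of the disabled/available display attribute: a disabled connection
# button superimposes a disabled icon over its own icon; an enabled one
# shows only its own icon.
def connection_button_icons(enabled: bool) -> list[str]:
    """Icons to draw for the connection button, bottom layer first."""
    icons = ["connection_icon"]        # the button's own icon
    if not enabled:
        icons.append("disabled_icon")  # superimposed on the upper layer
    return icons
```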
  • at least one connection button is displayed in step 102, which can be achieved through the following technical solution: an action adapted to the state of the virtual object in the virtual scene is identified, the action button associated with the adapted action is used as the target action button, and only the connection button used to connect the target action button and the attack button is displayed. Since only the target connection button associated with the target action button is displayed, the proportion of the field of vision occupied by displaying multiple connection buttons simultaneously is saved, providing a larger display area for the virtual scene; moreover, the displayed connection button is exactly the connection button the user needs to use, which improves the efficiency of the user in finding a suitable connection button and improves the intelligence of human-computer interaction.
  • See FIG. 9D, in which a connection button 902D is displayed; the connection button 902D is used to connect the attack button 903D and the action button 904D, and the connection button 902D is set between the attack button 903D and the action button 904D. As shown in FIG. 9D, the squatting action is adapted to the state of the virtual object in the virtual scene, that is, the action adapted to the state of the virtual object in the virtual scene is squatting.
  • at least one connection button is displayed in step 102, which can be achieved through the following technical solution: for the target action button in the at least one action button, the connection button for connecting the target action button and the attack button is displayed based on a first display mode, and for the other action buttons except the target action button in the at least one action button, the connection buttons connecting the other action buttons and the attack button are displayed based on a second display mode. This more prominently prompts the user to trigger the connection button associated with the target action button, so as to improve the user's operation efficiency.
  • FIG. 9E is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application.
  • a connection button 902E is displayed in the human-computer interaction interface 901E, and the connection button 902E is used to connect the attack button 903E and the squat action button 904E; a connection button 905E is also displayed on the human-computer interaction interface 901E, and the connection button 905E is used to connect the attack button 903E and the prone action button 906E. The connection button 902E for connecting the attack button 903E with the squat action button 904E is displayed based on the first display mode, and the connection button 905E for connecting the attack button 903E with the prone action button 906E is displayed based on the second display mode, where the first display mode is more prominent than the second display mode; for example, the brightness of the first display mode is higher than that of the second display mode, or the color contrast of the first display mode is higher than that of the second display mode.
  • connection button can be displayed all the time.
  • the connection button can be displayed on demand, that is, the connection button is switched from the non-display state to the display state.
  • display on demand refers to displaying the connection button when an on-demand display condition is met. The on-demand display conditions include at least one of the following: the group to which the virtual object belongs interacts with other groups, for example, the group to which the virtual object belongs has a battle with another group, where the group to which the virtual object belongs refers to the team to which the virtual object belongs, and at least one virtual object in the virtual scene can form a team to carry out activities in the virtual scene; or the distance between the virtual object and other virtual objects of other groups is less than a distance threshold. The connection button can also be highlighted on demand, that is, highlighted when displayed, for example by displaying dynamic effects of the connection button; highlighting on demand refers to highlighting when the highlighting conditions are met, and the highlighting conditions include at least one of the following: the group to which the virtual object belongs interacts with other groups; the distance between the virtual object and other virtual objects of other groups is less than a distance threshold.
  • the interaction data of the virtual object and the scene data of the virtual scene are acquired, where the scene data includes at least one of the environment data of the virtual scene, the weather data of the virtual scene, and the battle situation data of the virtual scene, and the interaction data of the virtual object includes at least one of the position of the virtual object in the virtual scene, the life value of the virtual object, the equipment data of the virtual object, and the comparison data between the two sides. Based on the interaction data and the scene data, a neural network model is called to predict a compound action, where the compound action includes an attack operation and a target action, and the action button associated with the target action is used as the target action button. The neural network prediction method can determine the target action more accurately and then determine the associated target action button, so that the compound action has a higher degree of adaptation to the current virtual scene, thereby improving the user's operation efficiency.
  • during training, the sample interaction data of each sample virtual object in each sample virtual scene and the sample scene data of each sample virtual scene are collected; training samples are constructed from the collected sample interaction data and sample scene data, the training samples are used as the input of the neural network model to be trained, and the sample compound action adapted to the sample virtual scene is used as the labeled data to train the neural network model, so that the neural network model can be called to predict the compound action based on the interaction data and the scene data.
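The inference step above can be sketched as a function that scores candidate target actions from interaction and scene features. The scoring rule below is a toy stand-in for the trained neural network's output, included only to show the input/output shape; all names and thresholds are assumptions.

```python
# Hedged sketch of compound-action prediction: interaction data and scene
# data go in, and an (attack operation, target action) pair comes out. The
# hand-written scores stand in for a trained neural network model.
def predict_compound_action(interaction: dict, scene: dict,
                            candidates=("squat", "prone", "jump")) -> tuple[str, str]:
    """Return ("attack", target_action) adapted to the current situation."""
    scores = {action: 0.0 for action in candidates}
    # Toy rules standing in for the model: jump when under attack,
    # lie prone when many enemies are present.
    if interaction.get("under_attack"):
        scores["jump"] += 1.0
    if scene.get("enemy_count", 0) > 3:
        scores["prone"] += 2.0
    target_action = max(scores, key=scores.get)
    return ("attack", target_action)
```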
  • the way each connection button is used to connect an attack button and an action button includes: the connection button overlaps with an attack button and an action button respectively;
  • each connection button connects an attack button to an action button, and the connection button is associated with the attack button and the action button in a display manner by overlapping, so that the user can be reminded of the connection relationships among the multiple buttons arranged in the human-computer interaction interface without affecting the field of vision, thereby avoiding triggering a connection button by mistake. For example, the user originally wanted to control the virtual object to shoot and jump simultaneously, but because the connection relationship represented by the button layout was not clear, the user triggered the connection button between the squat action button and the shooting button, making the virtual object crouch and shoot simultaneously.
  • FIG. 9A is a schematic diagram of the display interface of the object control method of the virtual scene provided by the embodiment of the present application.
  • a connection button 902A is also displayed in the human-computer interaction interface 901A, and the connection button 902A is used to connect the attack button 903A with the action button 904A; the connection button 902A is set between the attack button 903A and the action button 904A, and partially overlaps the display areas of the attack button 903A and the action button 904A. See FIG. 9B.
  • connection identification includes at least one of the following: arrows, curves, and line segments
  • a connection button 902B is also displayed in the human-computer interaction interface 901B, and the connection button 902B is used to connect the attack button 903B and the action button 904B,
  • the connection button 902B is arranged between the attack button 903B and the action button 904B, and does not overlap with the display areas of the attack button 903B and the action button 904B; a connection identification is displayed to represent the connection between the attack button 903B and the action button 904B.
  • FIG. 4C is a schematic flowchart of a method for controlling an object in a virtual scene provided by an embodiment of the present application.
  • before at least one connection button is displayed in step 102, step 104 is executed.
  • step 104 it is determined that a condition for automatically displaying at least one connection button is met.
  • the condition includes at least one of the following: the group to which the virtual object belongs interacts with other virtual objects of other groups, for example, the group to which the virtual object belongs engages in a battle with virtual objects of other groups; or the distance between the virtual object and other virtual objects of other groups is less than a distance threshold.
  • the connection button can be displayed according to the condition: when the condition is not met, only the attack button and the action button are displayed, and after the condition is met, the connection button is displayed, so that the user's battle field of vision can be guaranteed.
  • at least one connection button is automatically displayed when the group to which the virtual object belongs interacts with other groups, for example, when a battle occurs, and at least one connection button is automatically displayed when the distance between the virtual object and other virtual objects in other groups is less than a distance threshold. Alternatively, when the attack button and at least one action button are displayed, at least one connection button is always displayed synchronously, so that even if there is no interaction between the virtual object's group and other virtual objects in other groups, or the distance between the virtual object and other virtual objects in other groups is not less than the distance threshold, that is, in any case, the connection button is kept displayed, so that the user can trigger the connection button at any time, improving the flexibility of user operations.
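The automatic display condition of step 104 can be sketched as a single predicate. The function name and the threshold value are illustrative assumptions.

```python
# Sketch of step 104: the connection buttons are automatically displayed
# when the virtual object's group is interacting with another group, or
# when an opposing virtual object is closer than a distance threshold.
def should_display_connection_buttons(group_interacting: bool,
                                      nearest_enemy_distance: float,
                                      distance_threshold: float = 50.0) -> bool:
    """True when at least one automatic-display condition is met."""
    return group_interacting or nearest_enemy_distance < distance_threshold
```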
  • a plurality of candidate actions are displayed in response to a replacement operation for any one of the action buttons, where the plurality of candidate actions are all different from the actions associated with the at least one action button; in response to a selection operation on the plurality of candidate actions, the action associated with the action button is replaced with the selected candidate action.
  • the object control method of the virtual scene provided by the embodiment of the present application provides the adjustment function of the action button.
  • the replacement function of the action button is provided to replace the action associated with the action button with another action, so as to flexibly switch among various actions.
  • a connection button is displayed in the human-computer interaction interface. The connection button is used to connect the attack button and the action button.
  • the attack button is associated with the virtual prop currently held by the virtual object by default.
  • in response to the replacement operation for the action button, multiple candidate key positions to be replaced, that is, multiple candidate actions, are displayed. For example, the key position content of the action button is a squatting action; in response to the selection operation on the multiple candidate actions to be replaced, the selected candidate key position content is updated to the action button to replace the squatting action. That is, it is supported to replace the key position content of the action button whose original key position content is squatting with another action, for example a probe action, which can realize the combined attack mode of the shooting operation and the probe operation, so that a variety of action combinations can be realized without occupying too much display area, thereby realizing a variety of combined attack methods.
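The replacement flow above can be sketched as two steps: list the candidates that differ from every currently bound action, then rebind the chosen one. All names here are illustrative.

```python
# Sketch of the action-button replacement function: candidates exclude
# actions already bound to any action button, and the selected candidate
# becomes the button's new associated action.
def candidate_actions(all_actions: set[str], bound_actions: set[str]) -> set[str]:
    """Candidate actions differ from every currently bound action."""
    return all_actions - bound_actions


def replace_action(bindings: dict[str, str], button: str, chosen: str) -> dict[str, str]:
    """Rebind the given action button to the chosen candidate action."""
    updated = dict(bindings)
    updated[button] = chosen
    return updated


bindings = {"btn1": "squat", "btn2": "prone"}
cands = candidate_actions({"squat", "prone", "jump", "lean"}, set(bindings.values()))
# cands excludes "squat" and "prone", which are already bound.
```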
  • the object control method of the virtual scene provides the adjustment function of the action button, and the action button can also be automatically replaced according to the user's operating habits, replacing the action associated with the action button with another action in order to flexibly switch among various actions.
  • the attack button is associated with the virtual prop currently held by the virtual object by default.
• The key-position content obtained by automatic matching is applied to the action button to replace the squat action; that is, an action button whose original key-position content is squatting can have its content replaced with automatically matched content, for example a prone (lying down) action.
• The automatic matching is performed against the virtual scene, that is, an action adapted to the virtual scene is obtained as the key-position content, so that various action combinations, and thus various combined attack modes, can be realized intelligently without occupying too much display area.
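The scene-based automatic matching above can be sketched as a simple lookup from scene type to a suitable action. The scene-to-action table and function names below are assumptions for illustration only; the patent does not specify the matching rule.

```python
# Hypothetical scene-to-action matching table.
SCENE_ACTION = {
    "open_field": "prone",  # lying down lowers exposure on flat terrain
    "street": "lean",       # leaning suits corner fights
    "indoor": "squat",
}

def auto_match_action(scene, default="squat"):
    # return the action adapted to the scene, falling back to a default
    return SCENE_ACTION.get(scene, default)

def auto_replace(button_action, scene):
    # replace the button's original content (e.g. squat) with the matched action
    matched = auto_match_action(scene)
    return matched if matched != button_action else button_action
```

For example, an action button originally bound to squatting would be rebound to prone when the virtual object is in an open field.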
• The attack prop is in a single-shot attack mode. In step 103, controlling the virtual object to execute the action associated with the target action button while synchronously controlling the virtual object to use the attack prop to perform an attack operation can be achieved through the following technical solution: control the virtual object to execute the action associated with the target action button once; when the posture after the action differs from the posture before the action, restore the virtual object to the posture it held before performing the action; and, starting from the moment the virtual object begins to execute the action associated with the target action button, control the virtual object to use the attack prop to perform one attack operation. Controlling the virtual object through such a transient operation keeps the interaction lightweight and lets the user operate flexibly during battle.
• Actions whose posture after execution differs from the posture before execution include lying prone and squatting.
• The trigger operation for the connection button is not draggable and is a transient operation, for example a click operation.
• The virtual object is controlled to perform the action associated with the target action button once; afterwards the virtual object is restored to the posture it held before performing the action, for example restored to standing.
• The posture after the action may be the same as the posture before the action. For example, when the action is a jump, the virtual object has already returned to its pre-action posture once the jump completes; the action restores itself, so there is no need to restore the virtual object's posture separately. Starting from the moment the virtual object begins to execute the action associated with the target action button, the virtual object is controlled to use the attack prop to perform an attack operation, and the viewing angle does not change during the whole process.
• In step 701C, the connection button between the attack button and the squat action button, or the connection button between the attack button and the prone action button, is triggered. In step 702C, the virtual object is controlled to perform a single shooting operation (firing a single bullet), and step 703C is executed synchronously. In step 703C, the virtual object is controlled to complete the corresponding action and then return to the posture held before the action, for example returning to standing after squatting or lying prone. Because the trigger operation is not draggable and is transient, no further actions are performed after steps 702C and 703C are executed.
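The transient-click flow in single-shot mode can be summarized as: one shot, one action, then restoration of the pre-action posture if the action changed it. The following sketch encodes that rule; the function name and posture table are assumptions for illustration.

```python
# Hypothetical model of a transient tap on a connection button in single-shot mode.
POSTURE_AFTER = {"squat": "squat", "prone": "prone", "jump": "stand"}

def transient_tap(action, posture_before="stand"):
    posture_after = POSTURE_AFTER[action]
    shots = 1  # single-shot mode: exactly one attack per trigger
    if posture_after != posture_before:
        # e.g. squat/prone: restore the posture held before the action
        final_posture = posture_before
    else:
        # e.g. jump: the action restores itself, nothing to undo
        final_posture = posture_after
    return shots, final_posture
```

A tap on the attack-squat connection button thus yields one bullet and leaves the virtual object standing again, matching steps 701C to 703C.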
• The trigger operation is a continuous operation on the target connection button. Before the virtual object's pre-action posture is restored, the posture after the action is maintained until the trigger operation is released. When the trigger operation produces a movement track, the viewing angle of the virtual scene is updated synchronously according to the direction and angle of the movement track; in response to the trigger operation being released, updating of the viewing angle stops.
• The connection button is thus reused: the viewing angle is updated by dragging on the connection button, which simplifies the user's operations during battle and improves the efficiency of human-computer interaction.
• Actions whose posture after execution differs from the posture before execution include squatting and lying prone.
• The trigger operation for the connection button is a continuous, draggable operation, for example a press operation.
• When the posture after the action differs from the posture before the action, that posture is maintained even if a movement track is generated; when the posture after the action is the same as the posture before the action, the pre-action posture, such as standing, is maintained. In response to the trigger operation being released, updating of the viewing angle of the virtual scene stops.
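The press-and-drag view control described above can be sketched as accumulating drag deltas into the camera's yaw and pitch while the press is held, and ignoring updates once the press is released. The sensitivity constant and event shapes are assumptions, not values from the patent.

```python
# Hypothetical view-angle update driven by dragging on the connection button.
SENSITIVITY = 0.1  # degrees of view change per pixel of drag (assumed)

def update_view(view, drag_track, released=False):
    """view is (yaw, pitch); drag_track is a list of (dx, dy) pixel moves."""
    if released:
        return view  # release: stop updating the viewing angle
    yaw, pitch = view
    for dx, dy in drag_track:
        yaw += dx * SENSITIVITY
        pitch -= dy * SENSITIVITY
    return (yaw, pitch)
```

While the finger stays down, each movement sample shifts the camera; on release the view freezes at its last orientation.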
• FIG. 7A is a schematic diagram of the object control method for a virtual scene provided by the embodiments of the present application. The single-shot firing mode means that each trigger of the connection button executes only one attack operation.
• In step 701A, the connection button between the attack button and the squat action button, or the connection button between the attack button and the prone action button, is triggered.
• In step 702A, the virtual object is controlled to perform a single shooting operation (firing a single bullet), and step 703A is executed synchronously.
• In step 703A, the virtual object is controlled to complete the corresponding action, for example squatting or lying prone.
• In step 704A, the virtual object is controlled not to shoot again, and step 705A is executed synchronously to control the virtual object to keep squatting or lying prone on the basis of step 703A.
• In step 706A, it is judged whether the trigger operation on the connection button produces a movement track, which is equivalent to judging whether the user drags the connection button. When there is no dragging, steps 705A and 704A continue to be executed.
• In step 707A, on the basis of steps 705A and 704A, the viewing angle of the virtual object is controlled to follow the movement track of the trigger operation.
• In step 708A, it is judged whether the trigger operation has stopped, that is, whether the user's finger has been released. When the trigger operation has not stopped, step 707A is executed; when it has stopped, step 709A is executed.
• In step 709A, the virtual object's action is restored to standing, and the viewing angle stops moving.
• The attack prop is in a continuous attack mode. In step 103, controlling the virtual object to execute the action associated with the target action button while synchronously controlling the virtual object to use the attack prop to perform attack operations can be achieved through the following technical solution: when the posture after the action differs from the posture before the action, control the virtual object to execute the action associated with the target action button once and maintain the post-action posture; when the posture after the action is the same as the posture before the action, control the virtual object to execute the action associated with the target action button once; starting from the moment the virtual object begins to execute the action, control the target object to use the attack prop to perform continuous attack operations; and, in response to the trigger operation being released, stop controlling the virtual object to use the attack prop to continue attacking, restoring the virtual object's pre-action posture when the posture after the action differs from the posture before the action.
• When the posture after the action is the same as the posture before the action, the virtual object can also be controlled to perform the action associated with the target action button multiple times until the trigger operation is released. For example, when the action is a jump, the virtual object can be controlled to complete the jump repeatedly until the trigger operation is released, that is, the virtual object keeps jumping while it keeps shooting.
• Actions whose posture after execution differs from the posture before execution include at least one of squatting and lying prone; actions whose posture after execution is the same as the posture before execution include jumping.
• The trigger operation for the connection button cannot be dragged and is a transient operation, for example a click operation. The virtual object may keep attacking continuously for a set duration and then stop, or stop after a set number of consecutive attacks. Because the operation is transient, the virtual object's pre-action posture is restored; alternatively, the virtual object maintains the post-action posture until the attack ends and is restored to the pre-action posture afterwards. Since the trigger operation is never dragged, the viewing angle of the virtual scene does not change.
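The continuous-fire behavior above distinguishes self-restoring actions (jump, which may repeat) from posture-changing actions (squat and prone, which are performed once and held). The sketch below encodes that distinction; the per-frame shot rate and function names are assumptions for illustration.

```python
# Hypothetical model of holding a connection button in continuous-fire mode.
def continuous_hold(action, held_frames):
    self_restoring = action == "jump"
    shots = held_frames              # one shot per held frame (assumed rate)
    if self_restoring:
        actions_done = held_frames   # jump can repeat until release
        posture = "stand"
    else:
        actions_done = 1             # squat/prone performed once, then held
        posture = action
    return shots, actions_done, posture

def on_release(action):
    # releasing the trigger stops fire and restores the pre-action posture
    return "stand"
```

Holding the attack-squat connection button therefore keeps the gun firing while the squat posture is held; holding the attack-jump button keeps both firing and jumping going.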
• In step 601C, the connection button between the attack button and the squat action button, or the connection button between the attack button and the prone action button, is triggered.
• In step 602C, the virtual object is controlled to perform the shooting operation.
• In step 603C, the virtual object is controlled to complete the corresponding action.
• In step 604C, the virtual object is controlled to maintain the continuous shooting operation on the basis of step 602C.
• Synchronously, in step 605C, the virtual object is controlled to keep squatting or lying prone on the basis of step 603C.
• In step 606C, it is judged whether the trigger operation has stopped, that is, whether the finger has been released. When the trigger operation stops, step 607C is executed.
• In step 607C, the shooting operation is stopped and the action returns to standing.
• The trigger operation is a continuous operation on the target connection button, for example a continuous press operation.
• A movement track is generated by the trigger operation, and the viewing angle of the virtual scene is updated synchronously according to the direction and angle of the movement track.
• In response to the trigger operation being released, updating of the viewing angle of the virtual scene stops.
• Whereas the viewing angle would otherwise be changed through the direction button 302, here the viewing angle is updated through the connection button itself, which simplifies the user's operations during battle and improves the efficiency of human-computer interaction.
• FIG. 6A is a schematic diagram of the object control method for a virtual scene provided by the embodiments of the present application.
• In step 601A, when the virtual prop is in the burst firing mode, the connection button between the attack button and the squat action button, or the connection button between the attack button and the prone action button, is triggered.
• In step 602A, the virtual object is controlled to perform the shooting operation.
• In step 603A, the virtual object is controlled to complete the corresponding action.
• In step 604A, the virtual object is controlled to maintain the continuous shooting operation on the basis of step 602A.
• Synchronously, in step 605A, the virtual object is controlled to keep squatting or lying prone on the basis of step 603A.
• In step 606A, it is judged whether the trigger operation on the connection button produces a movement track, that is, whether the finger is dragged. When the finger is not dragged, steps 605A and 604A are executed; when the finger is dragged, step 607A is executed.
• In step 607A, on the basis of steps 605A and 604A, the viewing angle of the virtual object is controlled to follow the movement track of the trigger operation.
• In step 608A, it is judged whether the trigger operation has stopped, that is, whether the finger has been released. When the trigger operation has not stopped, step 607A is executed; when it has stopped, step 609A is executed. In step 609A, shooting stops, the action returns to standing, and the viewing angle stops moving.
• The working modes of the target connection button include a manual mode and a lock mode. The manual mode stops triggering after the trigger operation is released, and the lock mode continues to trigger automatically after the trigger operation is released. In step 103, controlling the virtual object to execute the action associated with the target action button while synchronously controlling the virtual object to use the attack prop to perform attack operations can be achieved through the following technical solution: when the trigger operation puts the target connection button into the manual mode, control the virtual object, for as long as the trigger operation is not released, to execute the action associated with the target action button and to synchronously use the attack prop to perform attack operations; when the trigger operation puts the target connection button into the lock mode, control the virtual object, both while the trigger operation is held and after it is released, to execute the action associated with the target action button and to synchronously use the attack prop to perform attack operations.
• In the lock mode, the attack can stop after continuing for a set duration, after a set number of consecutive attacks, or when another trigger operation for the lock mode is received; controlling the virtual object to use the attack prop to continue attacking is then stopped, and when the posture after the action differs from the posture before the action, the virtual object's pre-action posture is restored.
• The connection button in the object control method for a virtual scene provided by the embodiments of the present application can be triggered automatically and continuously; that is, the connection button has not only a manual mode but also a lock mode.
• In the lock mode, when the connection button is triggered, the virtual object can automatically repeat the compound action (such as a single-shot shooting operation together with a jump operation), reducing the difficulty of operation.
• In response to a lock trigger operation on the connection button, the single-shot shooting operation and the jump operation are repeated automatically. For example, when the user presses the connection button for a preset duration, the press operation is determined to be a lock trigger operation and the connection button is locked; even after the user lifts the finger, the virtual object still maintains the behavior corresponding to the connection button, for example continuous single-shot shooting and continuous jumping.
• In response to the user clicking the connection button again, the connection button is unlocked and the virtual object releases the behavior corresponding to the connection button, for example stopping shooting and stopping jumping. Locking the connection button helps the virtual object perform attacks and actions continuously, improving operational efficiency; in particular, for single attacks and single actions, locking the connection button realizes automatic continuous attacks.
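The lock-mode behavior above (long press locks, a further tap unlocks) can be sketched as a small state machine. The press-duration threshold and the class and attribute names are assumptions for illustration; the patent only says the duration is a preset value.

```python
# Hypothetical lock-mode state machine for a connection button.
LOCK_PRESS_SECONDS = 1.0  # assumed long-press threshold

class ConnectionButton:
    def __init__(self):
        self.locked = False
        self.firing = False

    def press(self, duration):
        self.firing = True
        if duration >= LOCK_PRESS_SECONDS:
            self.locked = True  # long press: treated as a lock trigger operation

    def release(self):
        # in lock mode the compound action continues after the finger lifts
        if not self.locked:
            self.firing = False

    def tap_again(self):
        # a further tap unlocks the button and stops the compound action
        self.locked = False
        self.firing = False

locked_btn = ConnectionButton()
locked_btn.press(1.5)   # long press engages lock mode
locked_btn.release()    # firing continues after release

manual_btn = ConnectionButton()
manual_btn.press(0.2)   # short press stays in manual mode
manual_btn.release()    # firing stops on release
```

Calling `locked_btn.tap_again()` would then model the unlock click that stops the repeated shooting and jumping.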
• The virtual scene being in the button-setting state indicates that the virtual scene is not in the battle state, so the user can configure buttons safely. In response to a selection operation for at least one connection button, each selected connection button is displayed in a target display manner, where the target display manner is more prominent than the display manner of unselected connection buttons. The following processing is then performed for each selected connection button:
• when the connection button is disabled, in response to an enabling operation on the connection button, the disabled icon of the connection button is hidden and the connection button is marked as enabled;
• when the connection button is enabled, in response to a disabling operation on the connection button, a disabled icon is displayed for the connection button and the connection button is marked as disabled.
• The availability of the connection button is thus set and indicated through the user's personalized settings, which improves the efficiency of human-computer interaction, the degree of personalization, and the user's operating efficiency.
• In step 801, a switch-setting operation for the target connection button is received.
• In step 802, the switch option of the target connection button is displayed, and step 803 is executed at the same time. In step 803, the outer frame of the connection button is highlighted and the connection guide line is displayed.
• In step 804, it is judged whether a click operation on the blank area is received. If not, steps 802 and 803 continue to be executed; if a click operation on the blank area is received, step 805 is executed. In step 805, the switch option is hidden, and step 806 is executed.
• In step 806, the highlighting of the outer frame of the connection button is cancelled and the connection guide line is hidden.
• After steps 802 and 803, step 807 may be executed. In step 807, a click operation on the switch option is received. In step 808, it is judged whether the switch option is "on". When the switch option is "on", step 809 is executed: the switch option is switched to "off" and a disabled icon is displayed over the connection button. When the switch option is "off", step 810 is executed: the switch option is switched to "on" and the disabled icon over the connection button is hidden.
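The toggle at steps 808 to 810 is a simple two-state switch whose disabled icon mirrors the "off" state. The sketch below models just that toggle; the class and attribute names are illustrative assumptions.

```python
# Hypothetical per-button switch setting mirroring steps 808-810.
class ConnectionButtonSetting:
    def __init__(self):
        self.enabled = True            # switch option starts "on"
        self.disabled_icon_shown = False

    def toggle(self):
        # "on" -> "off": show the disabled icon; "off" -> "on": hide it
        self.enabled = not self.enabled
        self.disabled_icon_shown = not self.enabled

setting = ConnectionButtonSetting()
setting.toggle()  # the switch was "on", so it is now "off" with the icon shown
```

A second `toggle()` would return the button to the enabled state and hide the icon again, matching step 810.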
• The terminal runs a client (such as a stand-alone game application) and outputs a virtual scene, including role playing, while the client runs.
• The virtual scene is an environment in which game characters interact; for example, it can be a plain, a street, or a valley in which game characters fight. The virtual scene includes virtual objects, connection buttons, action buttons, and attack buttons.
• A virtual object can be a game character controlled by a user; that is, the virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, voice-controlled switch, keyboard, mouse, joystick, and the like). For example, when the real user moves the joystick to the left, the virtual object moves to the left in the virtual scene. In response to a trigger operation on an action button, the virtual object is controlled to perform an action in the virtual scene; in response to a trigger operation on the attack button, the virtual object is controlled to perform an attack operation in the virtual scene; and in response to a trigger operation on a connection button, the virtual object is controlled to perform an action and an attack operation synchronously.
• The following description takes the attack button being a shooting button and the attack operation being a shooting operation as an example; the attack operation is not limited to shooting.
• The attack button can also serve as a button for using other attack props, so different attack props can be used to attack; the attack props include at least one of guns, crossbows, and grenades.
• The attack button displayed in the human-computer interaction interface is associated by default with the attack prop currently held by the virtual object. When the virtual prop held by the virtual object is switched from a pistol to a crossbow, the virtual prop associated with the attack button automatically switches from the pistol to the crossbow.
• A connection button 502A is also displayed on the human-computer interaction interface 501A; three connection buttons 502A are displayed between the attack button 503A and the three action buttons 504A.
• Through one key, the virtual object 505A is controlled to complete the shooting operation and the corresponding action at the same time, for example the shooting operation and the squat action. In response to the user's trigger operation on the attack button 503A, the virtual object is controlled to perform the attack operation alone; in response to the user's trigger operation on an action button 504A, the virtual object is controlled to perform the squat action alone.
• Taking the attack button as the origin, the attack button can also be connected with more action buttons, such as a connection button between the shooting button and the scope button.
• In response to a trigger operation on the connection button between the shooting button and the lean (probe) button, the shooting operation and the lean operation are performed simultaneously; in response to a trigger operation on the connection button between the shooting button and the slide-shovel button, the shooting operation and the sliding-shovel operation are performed simultaneously.
• FIG. 5B is a schematic diagram of the display interface of the object control method for a virtual scene provided by the embodiments of the present application.
• The human-computer interaction interface 501B also displays a connection button 502B, which connects the attack button 503B and the action button 504B.
• When the action button 504B is triggered, the virtual object performs a prone action; the connection button 502B is displayed between the attack button 503B and the action button 504B.
• In response to the user's trigger operation on the connection button 502B, the virtual object 505B can be controlled with one key to complete the shooting operation and the prone action at the same time. In response to the user's trigger operation on the attack button 503B, the virtual object 505B is controlled to perform the attack operation alone; in response to the user's trigger operation on the action button 504B, the virtual object is controlled to perform the prone action alone.
• FIG. 5C is a schematic diagram of the display interface of the object control method for a virtual scene provided by the embodiments of the present application.
• The human-computer interaction interface 501C also displays a connection button 502C between the attack button 503C and the action button 504C.
• In response to the user's trigger operation on the connection button 502C, the virtual object 505C can be controlled to complete the shooting operation and the jump action at the same time; in response to the user's trigger operation on the action button 504C, the virtual object is controlled to perform the jump action alone.
• The human-computer interaction interface 501E displays a squat action button 504-1E, a prone action button 504-2E, and a jump action button 504-3E, and an attack button 503E is also displayed in the human-computer interaction interface 501E.
• A squat connection button 502-1E is displayed between the attack button 503E and the squat action button 504-1E, a prone connection button 502-2E is displayed between the attack button 503E and the prone action button 504-2E, and a jump connection button 502-3E is displayed between the attack button 503E and the jump action button 504-3E.
• In response to the user's trigger operation on the attack button 503E, the virtual object 505E is controlled to perform the attack operation alone.
• In response to the user's trigger operation on the squat action button 504-1E, the virtual object is controlled to perform the squat action alone; in response to the user's trigger operation on the prone action button 504-2E, the virtual object is controlled to perform the prone action alone; and in response to the user's trigger operation on the jump action button 504-3E, the virtual object is controlled to perform the jump action alone.
• The user can separately control whether each connection button is enabled in the custom settings. A button-customization interface 506D is displayed in the human-computer interaction interface 501D, indicating that the user can customize the buttons in the human-computer interaction interface 501D.
• An open button 502D and a close button 504D are displayed above the connection button 503D.
• The open button 502D and the close button 504D control the opening and closing of the connection button 503D, that is, whether the connection button 503D is shown or hidden during battle. Only one of the open button and the close button is in an operable state at a time.
• In response to a trigger operation on the close button 504D, the open button 502D is displayed in an operable state and a disabled icon 505D is displayed on the connection button 503D; in response to a trigger operation on the open button 502D, the close button 504D is displayed in an operable state and the disabled icon 505D on the connection button 503D is hidden. After the disabled icon 505D is displayed on the connection button 503D, in response to a trigger operation on the blank area of the human-computer interaction interface 501D, the open button 502D and the close button 504D are hidden.
• FIG. 6B is a schematic diagram of the object control method for a virtual scene provided by the embodiments of the present application.
• In step 601B, the connection button between the attack button and the jump action button is triggered. In step 602B, the virtual object is controlled to perform the shooting operation, and step 603B is executed synchronously. In step 603B, the virtual object is controlled to complete a single jump action. In step 604B, the virtual object is controlled to maintain the continuous shooting operation on the basis of step 602B, and step 605B is executed synchronously: on the basis of step 603B, the virtual object no longer jumps and its action returns to the standing state. In step 606B, it is judged whether the trigger operation on the connection button produces a movement track, that is, whether the finger is dragged.
• When the finger is not dragged, steps 605B and 604B are executed; when the finger is dragged, step 607B is executed.
• In step 607B, on the basis of steps 605B and 604B, the viewing angle of the virtual object is controlled to follow the movement track of the trigger operation.
• In step 608B, it is judged whether the trigger operation has stopped, that is, whether the finger has been released. When the trigger operation has not stopped, step 607B is executed; when it has stopped, step 609B is executed.
• In step 609B, shooting stops and the viewing angle stops moving.
• The continuous shooting operation continues to be triggered, but the character's action returns to standing and the jump action is not triggered repeatedly.
• If the user keeps pressing the connection button and drags the finger, continuous shooting is triggered and the viewing angle is moved while the action is maintained; once the jump action has ended, only continuous shooting is controlled while the viewing angle moves. If the user does not release the finger, continuous shooting is maintained, but no further jump action is triggered. If the user releases the finger, continuous shooting stops and the viewing-angle movement stops.
• The user clicks the connection button between the attack button and the squat action button, or the operation of clicking the connection button between the attack button and the prone action button is received. Clicking the connection button is equivalent to triggering the single-shot shooting and the action operation at the same time: the virtual object completes the single-shot shooting and completes the corresponding squat or prone action. If the user keeps pressing the connection button without releasing the finger, shooting is not triggered again after the single shot completes; only the squat or prone action continues to be maintained. If the user keeps pressing the connection button and drags the finger, the viewing-angle movement is controlled at the same time, on the basis of the single shot and the maintained action.
• If the user does not release the finger, the viewing-angle movement is controlled while the squat or prone action is maintained; after the single shot completes, shooting stops and is not triggered again. If the user releases the finger, the virtual object's squat or prone action reverts to standing, and the viewing angle stops moving.
  • In step 701B, when the virtual prop is in the single-shot firing mode, the connection button between the attack button and the jump action button is triggered. In step 702B, the virtual object is controlled to perform a shooting operation (shooting a single bullet), and step 703B is executed synchronously.
  • In step 703B, the virtual object is controlled to complete a single jump action; on the basis of step 703B, the virtual object no longer jumps and the action returns to the standing state.
  • In step 706B, it is judged whether the trigger operation on the connection button generates a movement track, that is, whether the finger is dragged.
  • After steps 705B and 704B, when the finger is dragged, step 707B is executed: on the basis of steps 705B and 704B, the viewing angle of the virtual object is controlled to move according to the movement track of the trigger operation. In step 708B, it is judged whether the trigger operation has stopped, that is, whether the finger is released; if the trigger operation has not stopped, step 707B continues to be executed; when the trigger operation stops, step 709B is executed: the viewing angle stops moving.
  • In step 801, a switch setting operation for the target connection button is received. In step 802, the switch option of the target connection button is displayed, and step 803 is executed at the same time: in step 803, the outer frame of the connection button is highlighted and the connection guide line is displayed.
  • In step 804, it is judged whether a click operation on a blank area is received. If no click operation on a blank area is received, steps 802 and 803 continue to be executed. When a blank area is clicked, step 805 is executed: the switch option is hidden, and step 806 is executed: the highlighting of the connection button's outer frame is canceled and the connection guide line is hidden. After steps 802 and 803, step 807 is executed.
  • In step 807, a click operation on the switch option is received.
  • In step 808, it is judged whether the switch option is "on". When the switch option is "on", step 809 is executed: the switch option is switched to "off" and a disabled icon is displayed on the upper layer of the connection button. When the switch option is "off", step 810 is executed: the switch option is switched to "on" and the disabled icon on the upper layer of the connection button is hidden.
  • The human-computer interaction interface is in a state where the layout can be configured.
  • The switch option is displayed above the corresponding connection button, the frame of the triggered connection button is highlighted, and the connection guide line is displayed. At this time, in response to a click on a blank area, the switch option can be hidden; at the same time, the highlighting of the previously triggered connection button's frame is canceled and the guide line is hidden.
  • A disabled icon is displayed on the upper layer of the connection button, or the connection button is not displayed, which means that the function of the connection button is not enabled and cannot be used, or cannot be perceived, during the battle.
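  • The enable/disable flow in steps 801-810 can be sketched as a small toggle model. The state field names below are illustrative assumptions, not the patent's identifiers.

```python
class ConnectionButtonSwitch:
    """Sketch of the switch-setting flow for one connection button (steps 801-810)."""
    def __init__(self):
        self.enabled = True                # switch option currently "on"
        self.switch_option_visible = False
        self.frame_highlighted = False
        self.disabled_icon_shown = False

    def open_settings(self):
        # steps 801-803: show the switch option, highlight the frame
        self.switch_option_visible = True
        self.frame_highlighted = True

    def click_blank_area(self):
        # steps 804-806: hide the option, cancel the highlight
        self.switch_option_visible = False
        self.frame_highlighted = False

    def click_switch_option(self):
        # steps 807-810: toggle "on"/"off" and the disabled icon
        if self.enabled:
            self.enabled = False
            self.disabled_icon_shown = True
        else:
            self.enabled = True
            self.disabled_icon_shown = False
```

Toggling the option while settings are open flips both the enabled state and the visibility of the disabled icon, matching steps 809 and 810.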
  • The switch settings of the connection buttons can be configured in batches or individually.
  • The object control method for a virtual scene provided by the embodiments of the present application provides an adjustment function for action buttons.
  • A replacement function for action buttons is provided to replace the action associated with an action button with another action, so that various actions can be switched flexibly.
  • The attack button is associated by default with the virtual prop currently held by the virtual object.
  • The content assigned to the action button is a squat action.
  • In response to a selection operation on the multiple candidate actions to be replaced, the selected candidate content is updated to the action button to replace the squat action. That is, the content of an action button whose original content is a squat action can be replaced with another action, for example a prone action, or a lean (peek) action that enables a combined attack mode of shooting while leaning out. In this way, without occupying additional button space, a variety of action combinations can be realized, thereby realizing a variety of combined attack methods.
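  • The replacement function above can be sketched as swapping the action bound to one button for a candidate that is not already bound elsewhere. Button identifiers and action names below are illustrative assumptions.

```python
def replace_action(action_buttons, button_id, new_action, candidates):
    """Replace the action bound to one action button with a selected candidate.

    `action_buttons` maps button id -> currently bound action. Following the
    constraint described above, a candidate may not duplicate an action that is
    already bound to any action button.
    """
    bound = set(action_buttons.values())
    if new_action not in candidates or new_action in bound:
        raise ValueError(f"{new_action!r} is not a valid replacement candidate")
    action_buttons[button_id] = new_action
    return action_buttons
```

For example, a button originally bound to "squat" could be rebound to "lean", while rebinding it to an action already on another button is rejected.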
  • The object control method for a virtual scene provided by the embodiments of the present application provides a function for preventing false touches, confirming that the trigger operation is a valid trigger operation through a set number of presses, press duration, or press pressure. For example, when the number of presses of the trigger operation on connection button A is greater than the set number of presses of the action button corresponding to connection button A, or when the press duration of the trigger operation on connection button A is longer than the set press duration of the action button corresponding to connection button A, the virtual object is controlled to execute the compound action corresponding to connection button A, thereby preventing the user from touching connection button A by mistake.
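  • The false-touch check above can be sketched as a threshold test on the press parameters. The threshold values below are illustrative defaults, not values from the patent.

```python
def is_valid_trigger(press_count, press_time, press_pressure,
                     min_count=2, min_time=0.3, min_pressure=0.5):
    """Treat a press as a valid connection-button trigger only when it exceeds
    at least one configured threshold: number of presses, press duration in
    seconds, or press pressure (normalized)."""
    return (press_count > min_count
            or press_time > min_time
            or press_pressure > min_pressure)
```

A brief, light, single tap below all thresholds is ignored as a false touch, while exceeding any one threshold triggers the compound action.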
  • the object control method of the virtual scene provided by the embodiment of the present application provides various forms of connection buttons.
  • The connection button 902A is set between the attack button 903A and the action button 904A and partially overlaps the display areas of the attack button 903A and the action button 904A. Referring to FIG. 9B, FIG. 9B is a schematic diagram of a display interface of the object control method for a virtual scene provided by an embodiment of the present application: a connection button 902B is also displayed in the human-computer interaction interface 901B. The connection button 902B is used to connect the attack button 903B and the action button 904B and is set between the attack button 903B and the action button 904B.
  • The object control method for a virtual scene provided by the embodiments of the present application provides different display timings for the connection button. For example, the connection button may be displayed at all times; or the connection button may be displayed on demand, that is, switched from a hidden state to a displayed state, where the conditions for on-demand display include at least one of the following: the group to which the virtual object belongs interacts with other groups; the distance between the virtual object and other virtual objects of other groups is less than a distance threshold. As another example, the connection button may be highlighted on demand, that is, highlighted while it is always displayed, for example by displaying dynamic special effects of the connection button, where the highlighting conditions include at least one of the following: the group to which the virtual object belongs interacts with other groups; the distances between the virtual object and other virtual objects of other groups are less than the distance threshold.
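  • The on-demand display conditions above can be sketched as a simple predicate. The distance threshold value and position representation below are illustrative assumptions.

```python
import math

def should_display_connection_button(group_interacting, obj_pos, enemy_positions,
                                     distance_threshold=30.0):
    """Show the connection button when the virtual object's group is interacting
    with another group, or when any virtual object of another group is closer
    than the distance threshold."""
    if group_interacting:
        return True
    return any(math.dist(obj_pos, p) < distance_threshold
               for p in enemy_positions)
```

The same predicate could also gate on-demand highlighting rather than visibility, since the patent lists the same two conditions for both.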
  • the connection button in the object control method of the virtual scene provided by the embodiment of the present application can be automatically and continuously triggered.
  • The connection button has a manual mode and a lock mode.
  • In the lock mode, when the connection button is triggered, the virtual object automatically repeats the compound action (single-shot shooting operation and jumping operation), reducing the difficulty of operation.
  • Taking the attack operation associated with the connection button as a single-shot shooting operation as an example:
  • The single-shot shooting operation and the jump operation are automatically repeated. For example, when the user presses the connection button for a preset time, the pressing operation is determined to be a lock trigger operation, and the connection button is locked.
  • When the connection button is locked, even after the user releases the finger, the virtual object still maintains the corresponding actions, for example continuous single-shot shooting and continuous jumping. In response to the user's operation of clicking the connection button again, the connection button is unlocked and the virtual object stops the actions corresponding to the connection button, for example stopping single-shot shooting and stopping jumping. Locking the connection button can help the virtual object continuously perform attacks and actions, thereby improving operational efficiency; especially for single attacks and single actions, locking the connection button enables automatic continuous attacks.
  • The manual mode and the lock mode can be switched based on operating parameters, that is, they can be triggered based on different operating parameters of the same type of operation. Taking a pressing operation as an example: when the number of presses of the trigger operation on connection button A is greater than the set number of presses, or when the press duration of the trigger operation on connection button A is longer than the set press duration, or when the press pressure of the trigger operation on connection button A is greater than the set press pressure, the connection button is determined to be in the lock mode, that is, the connection button is locked; otherwise the connection button is in the manual mode. The manual mode and the lock mode can also be triggered based on different types of operations; for example, when the trigger operation on connection button A is a click operation, the connection button is determined to be in the manual mode, and when the trigger operation on connection button A is a slide operation, the connection button is determined to be in the lock mode.
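  • Both mode-selection rules above (parameter thresholds for a press, and operation type) can be sketched in one function. The threshold values are illustrative assumptions.

```python
def determine_mode(op_type, press_count=0, press_time=0.0, press_pressure=0.0,
                   set_count=2, set_time=1.0, set_pressure=0.8):
    """Decide manual vs lock mode for a connection button.

    For a press, exceeding any configured parameter threshold selects lock
    mode; for other operation types, a click selects manual mode and a slide
    selects lock mode.
    """
    if op_type == "press":
        locked = (press_count > set_count
                  or press_time > set_time
                  or press_pressure > set_pressure)
        return "lock" if locked else "manual"
    if op_type == "click":
        return "manual"
    if op_type == "slide":
        return "lock"
    raise ValueError(f"unknown operation type: {op_type!r}")
```

For example, a long press locks the button for automatic repetition, while a short tap leaves it in manual mode.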
  • The object control method for a virtual scene provided by the embodiments of the present application supports adding three connection buttons, each corresponding to the shooting button and one action button, for example a connection button corresponding to the shooting button and the squat action button, a connection button corresponding to the shooting button and the prone action button, and a connection button corresponding to the shooting button and the jump action button. This helps users quickly complete operations that originally required clicking two buttons at the same time, and the movement of the viewing angle can also be controlled at the same time; with low learning cost and easy operation, a variety of attack actions are realized, giving the method broad application prospects in the field of virtual scene interaction.
  • The object control method for a virtual scene provided by the embodiments of the present application provides connection buttons, whose connection form is to connect the shooting button with each of the three action buttons.
  • Clicking a connection button triggers the shooting operation and the corresponding action at the same time, achieving the effect of triggering two functions simultaneously with one click; for example, clicking the connection button between the shooting button and the jump action button triggers the virtual object to shoot while jumping. Because the advanced attack method that combines action and attack is opened to the user more intuitively through the connection button, it is more conducive to the user's quick operation, completes compound operations of various attacks and actions, and helps improve the user's operation efficiency.
  • The connection buttons can be turned on or off individually through custom settings, and combinations of different connection buttons can reduce the difficulty of operation and improve the flexibility of operation.
  • The software modules in the object control apparatus 455 for a virtual scene stored in the memory 450 may include: a display module 4551 configured to display a virtual scene, where the virtual scene includes a virtual object holding an attack prop; the display module 4551 further configured to display an attack button and at least one action button, and to display at least one connection button, where each connection button is used to connect the attack button and one action button; and a control module 4552 configured to, in response to a trigger operation on a target connection button, control the virtual object to execute the action associated with the target action button and control the virtual object to synchronously perform an attack operation using the attack prop, where the target action button is the action button connected to the target connection button among the at least one action button, and the target connection button is any selected one of the at least one connection button.
  • The display module 4551 is further configured to: display an attack button associated with the attack prop currently held by the virtual object, where when the attack button is triggered, the virtual object uses the attack prop to perform an attack operation; and display at least one action button around the attack button, where each action button is associated with an action.
  • The type of the at least one action button includes at least one of the following: an action button associated with a high-frequency action, where the high-frequency action is a candidate action whose operating frequency is higher than an operating frequency threshold among multiple candidate actions; and an action button associated with a target action, where the target action is adapted to the state of the virtual object in the virtual scene.
  • The display module 4551 is further configured to: for each action button in the at least one action button, display a connection button used to connect the action button and the attack button, where the connection button has at least one of the following display attributes:
  • the connection button includes a disabled icon when disabled and an enabled icon when enabled.
  • The display module 4551 is further configured to: for a target action button in the at least one action button, display a connection button for connecting the target action button and the attack button, where the action associated with the target action button is adapted to the state of the virtual object in the virtual scene; or, for a target action button in the at least one action button, display a connection button connecting the target action button and the attack button based on a first display mode, and for the other action buttons in the at least one action button except the target action button, display connection buttons connecting the other action buttons and the attack button based on a second display mode.
  • The display module 4551 is further configured to: acquire interaction data of the virtual object and scene data of the virtual scene; based on the interaction data and the scene data, invoke a neural network model to predict a compound action, where the compound action includes an attack operation and a target action; and use the action button associated with the target action as the target action button.
  • The display module 4551 is further configured to: determine a similar historical virtual scene of the virtual scene, where the similarity between the similar historical virtual scene and the virtual scene is greater than a similarity threshold; determine the highest-frequency action in the similar historical virtual scene, where the highest-frequency action is the candidate action with the highest operating frequency among multiple candidate actions; and use the action button associated with the highest-frequency action as the target action button.
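  • The frequency-based selection above (pick the most frequently used action in similar historical scenes, then take its button) can be sketched as follows. The history representation and the action-to-button mapping are illustrative assumptions.

```python
from collections import Counter

def target_action_button(history, action_to_button):
    """Pick the target action button as the button associated with the
    highest-frequency action in similar historical scenes.

    `history` is a list of action names performed in similar scenes;
    `action_to_button` maps an action name to its action button id.
    """
    if not history:
        return None  # no similar-scene data to decide from
    most_frequent, _count = Counter(history).most_common(1)[0]
    return action_to_button.get(most_frequent)
```

A fuller implementation would first filter history by scene similarity against a threshold, as described above; here the filtered history is taken as given.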
  • The manner in which each connection button connects an attack button and an action button includes: the connection button respectively overlaps an attack button and an action button; or the connection button is connected to an attack button and an action button through connection guide lines.
  • The display module 4551 is further configured to: determine that a condition for automatically displaying the at least one connection button is satisfied, where the condition includes at least one of the following: interaction occurs between the group to which the virtual object belongs and other virtual objects of other groups; the distance between the virtual object and other virtual objects of other groups is less than a distance threshold.
  • The display module 4551 is further configured to: in response to a replacement operation on any action button, display multiple candidate actions, where the multiple candidate actions are all different from the actions associated with the at least one action button; and in response to a selection operation on the multiple candidate actions, replace the action associated with the action button with the selected candidate action.
  • The attack prop is in a single attack mode; the control module 4552 is further configured to: control the virtual object to perform the action associated with the target action button once, and when the posture after the action is performed differs from the posture before the action is performed, restore the virtual object's posture to that before the action was performed; and, starting from controlling the virtual object to perform the action associated with the target action button, control the virtual object to perform one attack operation using the attack prop.
  • The trigger operation is a continuous operation on the target connection button; before restoring the virtual object's posture to that before the action was performed, the control module 4552 is further configured to: maintain the posture after the action is completed until the trigger operation is released; when the trigger operation generates a movement track, synchronously update the viewing angle of the virtual scene according to the direction and angle of the movement track; and in response to the trigger operation being released, stop updating the viewing angle of the virtual scene.
  • The attack prop is in a continuous attack mode; the control module 4552 is further configured to: when the posture after the action is performed differs from the posture before the action is performed, control the virtual object to perform the action associated with the target action button once and keep the posture after the action is completed; when the posture after the action is performed is the same as the posture before the action is performed, control the virtual object to perform the action associated with the target action button; starting from controlling the virtual object to perform the action associated with the target action button, control the virtual object to continue the attack operation using the attack prop; when the posture after the action is performed differs from the posture before the action is performed, in response to the trigger operation being released, restore the virtual object's posture to that before the action was performed and stop controlling the virtual object to continue the attack operation using the attack prop; and when the posture after the action is performed is the same as the posture before the action is performed, in response to the trigger operation being released, stop controlling the virtual object to continue the attack operation using the attack prop.
  • The control module 4552 is further configured to: in response to the trigger operation generating a movement track, synchronously update the viewing angle of the virtual scene according to the direction and angle of the movement track; and in response to the trigger operation being released, stop updating the viewing angle of the virtual scene.
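  • Updating the viewing angle from a drag track's direction and angle can be sketched as mapping the track's x/y displacement to yaw/pitch rotation. The linear mapping, the sensitivity value, and the pitch clamp are illustrative assumptions, not details from the patent.

```python
def update_view_angle(view_yaw_pitch, track_start, track_end, sensitivity=0.1):
    """Map a drag movement track to a new (yaw, pitch) viewing angle in degrees.

    Horizontal drag rotates yaw (wrapped to [0, 360)); vertical drag rotates
    pitch, clamped to [-90, 90] so the camera cannot flip over.
    """
    yaw, pitch = view_yaw_pitch
    dx = track_end[0] - track_start[0]
    dy = track_end[1] - track_start[1]
    new_yaw = (yaw + dx * sensitivity) % 360.0
    new_pitch = max(-90.0, min(90.0, pitch + dy * sensitivity))
    return (new_yaw, new_pitch)
```

While the trigger operation is held, each new track sample would be fed through this function; releasing the trigger simply stops calling it.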
  • The working mode of the target connection button includes a manual mode and a lock mode, where the manual mode is used to stop triggering the target connection button after the trigger operation is released, and the lock mode is used to continue automatically triggering the target connection button after the trigger operation is released. The control module 4552 is further configured to: when the trigger operation controls the target connection button to enter the manual mode, during the period when the trigger operation is not released, control the virtual object to perform the action associated with the target action button and control the virtual object to synchronously perform an attack operation using the attack prop; and when the trigger operation controls the target connection button to enter the lock mode, during the period when the trigger operation is not released and after the trigger operation is released, control the virtual object to perform the action associated with the target action button and control the virtual object to synchronously perform an attack operation using the attack prop.
  • The display module 4551 is further configured to: in response to a selection operation on the at least one connection button, display each selected connection button in a target display mode, where the target display mode is significantly different from the display mode of unselected connection buttons; and for each selected connection button, perform the following processing: when the connection button is in a disabled state, in response to an enabling operation on the connection button, hide the disabled icon of the connection button and mark the connection button as enabled; when the connection button is in an enabled state, in response to a disabling operation on the connection button, display a disabled icon for the connection button and mark the connection button as disabled.
  • An embodiment of the present application provides a computer program product, where the computer program product includes a computer program or computer-executable instructions, the computer-executable instructions being stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer-executable instructions from the computer-readable storage medium, and the processor executes the computer-executable instructions, so that the electronic device executes the method for controlling objects in a virtual scene described above in the embodiments of the present application.
  • An embodiment of the present application provides a computer-readable storage medium storing executable instructions. When the executable instructions are executed by a processor, the processor performs the object control method for a virtual scene provided by the embodiments of the present application, for example, the object control method for a virtual scene shown in FIGS. 4A-4C.
  • The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be various devices including one or any combination of the above memories.
  • The executable instructions may take the form of programs, software, software modules, scripts, or code written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • The executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or sections of code).
  • The executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one location, or on multiple electronic devices distributed across multiple locations and interconnected by a communication network.
  • An attack button and action buttons are displayed, along with a connection button used to connect the attack button and an action button; in response to a trigger operation on the target connection button, the virtual object is controlled to execute the action associated with the target action button and to synchronously perform an attack operation using the attack prop.
  • The action and the attack operation can be executed at the same time, which is equivalent to implementing multiple functions simultaneously with a single button, thereby improving user operation efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An object control method and apparatus for a virtual scene, an electronic device, a computer program product, and a computer-readable storage medium. The method includes: displaying a virtual scene, the virtual scene including a virtual object holding an attack prop; displaying an attack button and at least one action button, and displaying at least one connection button, each connection button being used to connect the attack button and one action button; and in response to a trigger operation on a target connection button, controlling the virtual object to perform the action associated with a target action button and controlling the virtual object to synchronously perform an attack operation using the attack prop, where the target action button is the action button connected to the target connection button among the at least one action button, and the target connection button is any selected one of the at least one connection button.

Description

Object control method and apparatus for a virtual scene, electronic device, computer program product, and computer-readable storage medium
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202111227167.8 filed on October 21, 2021, and Chinese patent application No. 202111672352.8 filed on December 31, 2021, the entire contents of both of which are incorporated herein by reference.
Technical Field
This application relates to human-computer interaction technology, and in particular to an object control method and apparatus for a virtual scene, an electronic device, a computer program product, and a computer-readable storage medium.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving environments and obtaining information, especially multimedia technology for virtual scenes. With human-computer interaction engine technology, diverse interactions between virtual objects controlled by users or artificial intelligence can be implemented according to actual application requirements, with various typical application scenarios; for example, in virtual scenes such as games, battles between virtual objects can be simulated.
Human-computer interaction between a virtual scene and a user is implemented through a human-computer interaction interface in which multiple buttons are displayed, and triggering each button controls the virtual object to perform a corresponding operation; for example, triggering a jump button controls the virtual object to jump in the virtual scene. In battle scenes the virtual object sometimes needs to shoot and perform another action at the same time, for example shooting while going prone, so that it can stay hidden in ambush while attacking the enemy. In the related art, however, to shoot and perform another action simultaneously the user needs to tap frequently with multiple fingers, which demands high operational difficulty and precision and thus results in low human-computer interaction efficiency.
Summary
Embodiments of this application provide an object control method and apparatus for a virtual scene, an electronic device, a computer program product, and a computer-readable storage medium, which can improve the control efficiency of a virtual scene.
The technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides an object control method for a virtual scene, the method being executed by an electronic device and including:
displaying a virtual scene, the virtual scene including a virtual object holding an attack prop;
displaying an attack button and at least one action button, and displaying at least one connection button, each connection button being used to connect the attack button and one action button; and
in response to a trigger operation on a target connection button, controlling the virtual object to perform the action associated with a target action button, and controlling the virtual object to synchronously perform an attack operation using the attack prop, where the target action button is the action button connected to the target connection button among the at least one action button, and the target connection button is any selected one of the at least one connection button.
An embodiment of this application provides an object control apparatus for a virtual scene, including:
a display module configured to display a virtual scene, the virtual scene including a virtual object holding an attack prop;
the display module being further configured to display an attack button and at least one action button, and to display at least one connection button, each connection button being used to connect the attack button and one action button; and
a control module configured to, in response to a trigger operation on a target connection button, control the virtual object to perform the action associated with a target action button and control the virtual object to synchronously perform an attack operation using the attack prop, where the target action button is the action button connected to the target connection button among the at least one action button, and the target connection button is any selected one of the at least one connection button.
An embodiment of this application provides an electronic device, including:
a memory for storing computer-executable instructions; and
a processor for implementing the object control method for a virtual scene provided by the embodiments of this application when executing the computer-executable instructions stored in the memory.
An embodiment of this application provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the object control method for a virtual scene provided by the embodiments of this application.
An embodiment of this application provides a computer program product, including a computer program or instructions that, when executed by a processor, implement the object control method for a virtual scene provided by the embodiments of this application.
The embodiments of this application have the following beneficial effects:
An attack button and action buttons are displayed, along with a connection button that connects the attack button and one action button. In response to a trigger operation on a target connection button, the virtual object is controlled to perform the action associated with the target action button while synchronously performing an attack operation using the attack prop. By laying out connection buttons, an action and an attack operation can be executed at the same time, which is equivalent to implementing multiple functions simultaneously with a single button, saving operation time and thereby improving control efficiency in the virtual scene.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a display interface of an object control method for a virtual scene provided in the related art;
FIG. 2A is a schematic diagram of an application mode of an object control method for a virtual scene provided by an embodiment of this application;
FIG. 2B is a schematic diagram of an application mode of an object control method for a virtual scene provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of an electronic device applying the object control method for a virtual scene provided by an embodiment of this application;
FIGS. 4A-4C are schematic flowcharts of an object control method for a virtual scene provided by an embodiment of this application;
FIGS. 5A-5E are schematic diagrams of display interfaces of an object control method for a virtual scene provided by an embodiment of this application;
FIGS. 6A-6C are schematic logic diagrams of an object control method for a virtual scene provided by an embodiment of this application;
FIGS. 7A-7C are schematic logic diagrams of an object control method for a virtual scene provided by an embodiment of this application;
FIG. 8 is a schematic logic diagram of an object control method for a virtual scene provided by an embodiment of this application;
FIGS. 9A-9E are schematic diagrams of display interfaces of an object control method for a virtual scene provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described below in further detail with reference to the accompanying drawings. The described embodiments should not be regarded as limiting this application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of this application.
In the following description, "some embodiments" describes subsets of all possible embodiments; it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other where no conflict arises.
In the following description, the terms "first/second/third" merely distinguish similar objects and do not represent a specific ordering of objects. It is understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of this application described herein can be implemented in an order other than that illustrated or described herein.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein are only for describing the embodiments of this application and are not intended to limit this application.
Before the embodiments of this application are described in further detail, the nouns and terms involved in the embodiments of this application are explained; the following interpretations apply.
1) Virtual scene: a scene output by a device that is distinct from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example two-dimensional images output through a display screen, or three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various real-world-simulating perceptions such as auditory, tactile, olfactory, and motion perception can be formed through various possible hardware.
2) In response to: indicates the condition or state on which an executed operation depends. When the condition or state is satisfied, the one or more executed operations may be real-time or have a set delay; unless otherwise specified, there is no restriction on the execution order of the multiple operations executed.
3) Client: an application running in a terminal for providing various services, such as a game client.
4) Virtual object: an object that interacts in the virtual scene, is controlled by a user or a robot program (for example, an artificial-intelligence-based robot program), and can remain still, move, and perform various behaviors in the virtual scene, such as various characters in a game.
5) Button: a control for human-computer interaction in the human-computer interaction interface of the virtual scene, which has a graphic identifier and is bound to specific processing logic; when the user triggers the button, the corresponding processing logic is executed.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a display interface of an object control method for a virtual scene provided in the related art. In a virtual scene, a virtual object needs to shoot and perform an action at the same time, for example shooting while going prone, so that it can stay hidden in ambush while attacking the enemy. In the related art, however, to shoot and perform an action (actions include leaning left/right, squatting, and going prone) simultaneously, the user needs to tap frequently with multiple fingers, which demands high operational difficulty and precision. The human-computer interaction interface 301 in FIG. 1 displays a direction button 302, an attack button 303, and action buttons 304; typically the left thumb controls the direction button 302 and the right thumb controls the attack button 303 or an action button 304. For virtual scenes on mobile phones, the interface is usually controlled by the thumbs of both hands, that is, the default operation mode is a two-finger mode in which one thumb controls direction and the other thumb controls the virtual object to perform specific operations. It is therefore difficult for users to shoot and perform actions simultaneously with the default two-finger button layout; they can only do so by adjusting the button layout and using multi-finger operation (at least three fingers). Even so, multi-finger operation requires high learning cost and proficiency, increases the proportion of the screen occupied by buttons, and has a high probability of interfering with the user's field of view, making the operation experience difficult for most users.
Embodiments of this application provide an object control method and apparatus for a virtual scene, an electronic device, a computer-readable storage medium, and a computer program product. By laying out a connection button, triggering the connection button causes an action and an attack operation to be executed simultaneously, which is equivalent to implementing multiple functions at the same time with a single button, thereby improving user operation efficiency. Exemplary applications of the electronic device provided by the embodiments of this application are described below; the electronic device may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device).
To facilitate understanding of the object control method for a virtual scene provided by the embodiments of this application, exemplary implementation scenarios are first described. The virtual scene may be output entirely by the terminal, or output through collaboration between the terminal and a server.
In some embodiments, the virtual scene may be an environment for game characters to interact in, for example an environment in which game characters battle; by controlling the actions of virtual objects, the two sides can interact in the virtual scene, enabling users to relieve the stress of daily life during the game.
In one implementation scenario, referring to FIG. 2A, FIG. 2A is a schematic diagram of an application mode of the object control method for a virtual scene provided by an embodiment of this application. It is applicable to application modes in which the data computation related to the virtual scene can be completed relying entirely on the computing capability of the terminal 400, such as a standalone/offline game, with the virtual scene output through a terminal 400 such as a smartphone, a tablet computer, or a virtual reality/augmented reality device.
When forming visual perception of the virtual scene, the terminal 400 computes the data required for display through graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, on graphics output hardware, video frames capable of forming visual perception of the virtual scene, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames achieving a three-dimensional display effect onto the lenses of augmented reality/virtual reality glasses; in addition, to enrich the perception effect, the device may also form one or more of auditory, tactile, motion, and taste perception with the aid of different hardware.
As an example, the terminal 400 runs a client (for example, a standalone game application) and outputs a virtual scene including role-playing during its operation. The virtual scene is an environment for game characters to interact in, for example a plain, street, or valley in which game characters battle. The virtual scene includes a virtual object 110, a connection button 120, an action button 130, and an attack button 140. The virtual object 110 may be a game character controlled by a user; that is, the virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operations on a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the virtual object moves to the left in the virtual scene. In response to a trigger operation on the action button 130, the virtual object is controlled to perform an action in the virtual scene; in response to a trigger operation on the attack button 140, the virtual object is controlled to perform an attack operation in the virtual scene; and in response to a trigger operation on the connection button 120, the virtual object is controlled to perform the action and synchronously perform the attack operation.
In another implementation scenario, referring to FIG. 2B, FIG. 2B is a schematic diagram of an application mode of the object control method for a virtual scene provided by an embodiment of this application, applied to a terminal 400 and a server 200. Generally, it is applicable to application modes that rely on the computing capability of the server 200 to complete virtual scene computation and output the virtual scene on the terminal 400.
Taking the formation of visual perception of the virtual scene as an example, the server 200 computes display data related to the virtual scene and sends it to the terminal 400; the terminal 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames achieving a three-dimensional display effect onto the lenses of augmented reality/virtual reality glasses; as for perception of other forms of the virtual scene, it is understood that it can be output with the aid of the corresponding hardware of the terminal, for example using microphone output to form auditory perception and vibrator output to form tactile perception.
As an example, the terminal 400 runs a client (for example, a network game application). The virtual scene includes a virtual object 110, a connection button 120, an action button 130, and an attack button 140, and the client interacts with other users in the game by connecting to a game server (that is, the server 200). In response to a trigger operation on the connection button 120, the client sends, through a network 300 to the server 200, the action configuration information of the action performed by the virtual object 110 and the operation configuration information of the attack operation performed synchronously using the attack prop. The server 200 computes display data for the operation configuration information and the action configuration information based on the above information and sends the display data to the client; the client relies on graphics computing hardware to complete the loading, parsing, and rendering of the display data, and relies on graphics output hardware to output the virtual scene to form visual perception, that is, to display a picture of the virtual object 110 performing the action associated with the target action button while synchronously performing the attack operation using the attack prop.
In some embodiments, the terminal 400 may implement the object control method for a virtual scene provided by the embodiments of this application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a native application (APP), that is, a program that must be installed in the operating system to run, such as a game APP (that is, the above client); a mini program, that is, a program that only needs to be downloaded into a browser environment to run; or a game mini program that can be embedded in any APP. In summary, the above computer program may be an application, module, or plug-in in any form.
The embodiments of this application may be implemented with the aid of cloud technology. Cloud technology is a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and so on, applied based on the cloud computing business model. It can form a resource pool to be used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, or smart watch. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of this application.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of this application. The terminal 400 shown in FIG. 3 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The components in the terminal 400 are coupled together through a bus system 440. It is understood that the bus system 440 is used to implement connection and communication between these components. In addition to a data bus, the bus system 440 also includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as the bus system 440 in FIG. 3.
The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431 enabling the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touch screen display, a camera, and other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and the like. The memory 450 may include one or more storage devices physically located away from the processor 410.
The memory 450 includes volatile memory or non-volatile memory, and may also include both. The non-volatile memory may be read-only memory (ROM) and the volatile memory may be random access memory (RAM). The memory 450 described in the embodiments of this application is intended to include any suitable type of memory.
In some embodiments, the memory 450 can store data to support various operations; examples of the data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
Operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and handle hardware-based tasks;
Network communication module 452, for reaching other electronic devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
Presentation module 453, for enabling the presentation of information (for example, a user interface for operating peripheral devices and displaying content and information) via one or more output devices 431 (for example, display screens and speakers) associated with the user interface 430;
Input processing module 454, for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
在一些实施例中,本申请实施例提供的虚拟场景的对象控制装置可以采用软件方式实现,图3示出了存储在存储器450中的虚拟场景的对象控制装置455,其可以是程序和插件等形式的软件,包括以下软件模块:显示模块4551和控制模块4552,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分。将在下文中说明各个模块的功能。
在一些实施例中,终端或服务器可以通过运行计算机程序来实现本申请实施例提供的虚拟场景的对象控制方法。举例来说,计算机程序可以是操作系统中的原生程序或软件模块;可以是本地(Native)应用程序(APP,Application),即需要在操作系统中安装才能运行的程序,如游戏APP或者即时通信APP;也可以是小程序,即只需要下载到浏览器环境中就可以运行的程序;还可以是能够嵌入至任意APP中的小程序。总而言之,上述计算机程序可以是任意形式的应用程序、模块或插件。
本申请实施例提供的虚拟场景的对象控制方法可以由图2A中的终端400单独执行,也可以由图2B中的终端400和服务器200协同执行,例如步骤103中响应于针对目标连接按钮的触发操作,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作可以由终端400和服务器200协同执行,服务器200确定出虚拟对象执行目标动作按钮关联的动作,并使用攻击道具同步进行攻击操作的执行结果后,将执行结果返回至终端400进行显示。
下面,以由图2A中的终端400单独执行本申请实施例提供的虚拟场景的对象控制方法为例说明。参见图4A,图4A是本申请实施例提供的虚拟场景的对象控制方法的流程示意图,将结合图4A示出的步骤进行说明。
需要说明的是,图4A示出的方法可以由终端400运行的各种形式计算机程序执行,并不局限于上述的客户端,例如上文的操作系统451、软件模块和脚本,因此客户端不应视为对本申请实施例的限定。在下面的示例中,以虚拟场景用于游戏为例,但是不应视为对虚拟场景的限定。
在步骤101中,显示虚拟场景。
作为示例,终端运行客户端,在客户端的运行过程中输出包括有角色扮演的虚拟场景,虚拟场景是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等等;虚拟场景包括持有攻击道具的虚拟对象,虚拟对象可以是受用户(或称玩家)控制的游戏角色,即虚拟对象受控于真实用户,将响应于真实用户针对控制器(包括触控屏、声控开关、键盘、鼠标和摇杆等)的操作而在虚拟场景中运动,例如当真实用户向左移动摇杆时,虚拟对象将在虚拟场景中向左移动,还可以保持原地静止、跳跃以及使用各种功能(如技能和道具);攻击道具是能够被虚拟对象使用和持有,且具有攻击功能的虚拟道具,攻击道具包括以下至少之一:射击道具、投掷道具、搏击道具。
在步骤102中,显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮。
作为示例,每个连接按钮用于连接一个攻击按钮和一个动作按钮,例如,显示攻击按钮A、动作按钮B1、动作按钮C1以及动作按钮D1,动作按钮B1与攻击按钮A之间显示连接按钮B2,动作按钮C1与攻击按钮A之间显示连接按钮C2,动作按钮D1与攻击按钮A之间显示连接按钮D2,连接按钮的数目与动作按钮的数目相同,每个动作按钮对应有一个连接按钮。
在步骤103中,响应于针对目标连接按钮的触发操作,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作。
作为示例,目标动作按钮是至少一个动作按钮中与目标连接按钮连接的动作按钮,目标连接按钮是至少一个连接按钮中被选中的任意一个连接按钮,例如,人机交互界面中显示攻击按钮A、动作按钮B1、动作按钮C1以及动作按钮D1,动作按钮B1与攻击按钮A之间显示连接按钮B2,动作按钮C1与攻击按钮A之间显示连接按钮C2,动作按钮D1与攻击按钮A之间显示连接按钮D2,以目标连接按钮为连接按钮B2为例,响应于针对连接按钮B2的触发操作,将与连接按钮B2连接的动作按钮B1识别为目标动作按钮,从而控制虚拟对象执行与动作按钮B1关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作。
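上述"触发目标连接按钮即同步执行动作与攻击"的分发逻辑,可以用如下极简的Python代码示意(其中的类名、函数名与动作名称均为说明用途的假设,并非实施例的实际实现):

```python
# 示意:连接按钮到(动作按钮关联的动作, 攻击按钮)的映射与一键分发
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    actions: list = field(default_factory=list)   # 已执行的动作记录
    attacks: int = 0                              # 已执行的攻击次数

    def perform_action(self, action: str):
        self.actions.append(action)

    def attack(self):
        self.attacks += 1

@dataclass
class ConnectionButton:
    action: str          # 所连接的动作按钮关联的动作,如 "下蹲"
    attack_button: str   # 所连接的攻击按钮,如 "射击"

def on_connection_button_triggered(obj: VirtualObject, btn: ConnectionButton):
    """响应针对目标连接按钮的触发操作:同步执行动作与攻击操作。"""
    obj.perform_action(btn.action)  # 执行目标动作按钮关联的动作
    obj.attack()                    # 同步使用攻击道具进行攻击操作

obj = VirtualObject()
b2 = ConnectionButton(action="下蹲", attack_button="射击")
on_connection_button_triggered(obj, b2)
print(obj.actions, obj.attacks)  # ['下蹲'] 1
```

实际实现中,连接按钮与动作按钮、攻击按钮之间的映射可以在界面布局阶段建立,触发时按映射分发即可。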
作为示例,参见图9A,图9A是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901A中显示有连接按钮902A,连接按钮902A用于连接攻击按钮903A与动作按钮904A,连接按钮902A设置在攻击按钮903A和动作按钮904A之间,连接按钮902A、攻击按钮903A与动作按钮904A所处的区域均属于操作区域,且连接按钮902A、攻击按钮903A与动作按钮904A均是嵌入在操作区域中,如图9A所示,可以在嵌入虚拟场景的操作区域中显示按钮,参见图9C,图9C是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901C中显示有连接按钮902C,连接按钮902C用于连接攻击按钮903C与动作按钮904C,连接按钮902C设置在攻击按钮903C和动作按钮904C之间,连接按钮902C、攻击按钮903C与动作按钮904C所处的区域均属于操作区域,操作区域独立于虚拟场景,如图9C所示,可以在独立于虚拟场景的操作区域中显示按钮。
在一些实施例中,参见图4B,图4B是本申请实施例提供的虚拟场景的对象控制方法的流程示意图,步骤102中显示攻击按钮和至少一个动作按钮,可以通过图4B中步骤1021至步骤1022实现。
在步骤1021中,显示与虚拟对象当前持有的攻击道具关联的攻击按钮。
作为示例,当攻击按钮被触发时,虚拟对象使用攻击道具进行攻击操作,当虚拟对象当前持有的攻击道具是手枪时,显示手枪的攻击按钮,当虚拟对象当前持有的攻击道具是弓弩时,显示弓弩的攻击按钮,当虚拟对象当前持有的攻击道具是手雷时,显示手雷的攻击按钮。
在步骤1022中,在攻击按钮的周围显示至少一个动作按钮。
作为示例,参见图5A,图5A是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面501A中显示有连接按钮502A,在攻击按钮503A与三个动作按钮504A之间显示三个连接按钮502A,图5A中三个连接按钮502A显示在攻击按钮503A的周围,且三个动作按钮504A显示在攻击按钮503A的周围,每个动作按钮关联一个动作,例如,动作按钮504A关联下蹲动作,另外两个动作按钮分别关联下趴动作以及跳跃动作。通过在攻击按钮的周围显示至少一个动作按钮的布局方式,可以提升操作的便利性。
在一些实施例中,至少一个动作按钮的类型包括以下至少之一:与高频动作关联的动作按钮;其中,高频动作是多个候选动作中操作频率高于操作频率阈值的候选动作;与目标动作关联的动作按钮;其中,目标动作与虚拟对象在虚拟场景中的状态适配,表征目标动作适合虚拟对象在当前的虚拟场景中执行,例如,虚拟对象在虚拟场景中的状态为被攻击的状态,则适合在当前的虚拟场景中执行的动作为跳跃动作,跳跃动作为与虚拟对象在虚拟场景中的状态适配的目标动作,虚拟对象在虚拟场景中的每个状态配置有至少一个适配的目标动作,通过个性化设置动作按钮关联的动作,可以提高用户的操作效率,使得用户进行人机交互操作时可以更便利地触发执行用户期望的动作。
作为示例,操作频率阈值是基于以往数据统计得到的,例如,服务器可以统计在最近一周的交互数据中每个候选动作的实际操作频率,再对多个候选动作的实际操作频率进行求平均处理,求平均处理结果作为操作频率阈值,这里的交互数据可以是最近一周内所有的交互数据。
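上述以求平均得到操作频率阈值、进而筛选高频动作的过程,可以用如下Python代码示意(统计数据为假设数据):

```python
# 示意:以各候选动作实际操作频率的平均值作为操作频率阈值,
# 高于阈值的候选动作即为高频动作
def high_frequency_actions(freq: dict) -> list:
    threshold = sum(freq.values()) / len(freq)  # 求平均处理结果作为操作频率阈值
    return [a for a, f in freq.items() if f > threshold]

weekly_freq = {"下蹲": 120, "下趴": 40, "跳跃": 200}  # 最近一周的交互统计(假设)
print(high_frequency_actions(weekly_freq))  # ['跳跃']
```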
作为示例,动作按钮被触发时,虚拟对象执行的动作可以是默认的设定动作,参见图5E,图5E是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面501E中显示有下蹲动作按钮504-1E、下趴动作按钮504-2E以及跳跃动作按钮504-3E,响应于用户针对下蹲动作按钮504-1E的触发操作,控制虚拟对象单独执行下蹲动作;响应于用户针对下趴动作按钮504-2E的触发操作,控制虚拟对象单独执行下趴动作;响应于用户针对跳跃动作按钮504-3E的触发操作,控制虚拟对象单独执行跳跃动作。
作为示例,图5E中的下蹲动作按钮504-1E、下趴动作按钮504-2E以及跳跃动作按钮504-3E可以是默认设定的。
作为示例,动作按钮还可以是个性化设置的,例如,动作按钮是高频动作关联的动作按钮,高频动作是多个候选动作中操作频率高于虚拟对象A的操作频率阈值的候选动作,或者高频动作是多个候选动作中操作频率高于相同阵营的虚拟对象B的操作频率阈值的候选动作,例如,基于虚拟对象A的自身操作数据,确定出虚拟对象A执行跳跃动作的次数高于虚拟对象A的操作频率阈值,虚拟对象A的操作频率阈值是虚拟对象A执行每个动作的次数的平均值,那么跳跃动作即为多个候选动作中的高频动作,基于相同阵营的虚拟对象B的操作数据,确定出相同阵营的虚拟对象B执行跳跃动作的次数高于虚拟对象B的操作频率阈值,虚拟对象B的操作频率阈值是虚拟对象B执行每个动作的次数的平均值,那么跳跃动作即为多个候选动作中的高频动作,动作按钮还可以与目标动作关联,目标动作与虚拟对象在虚拟场景中的状态适配,例如,虚拟场景中敌人数量较多,则虚拟对象A需要隐藏自己,因此与虚拟对象A在虚拟场景中的状态适配的动作是下趴动作,此时下趴动作是目标动作。
在一些实施例中,步骤102中显示至少一个连接按钮,可以通过以下技术方案实现:针对至少一个动作按钮中的每个动作按钮,显示用于连接动作按钮和攻击按钮的连接按钮;其中,连接按钮具有以下显示属性至少之一:当处于禁用状态时连接按钮包括禁用图标,当处于可用状态时连接按钮包括可用图标。通过不同的显示属性显示处于不同状态的连接按钮,从而有效提示用户能够触发该连接按钮或者不能够触发该连接按钮,提升用户的操作效率,避免输出无效操作。
作为示例,当连接按钮被设置为关闭时,在连接按钮所在图层的上层图层显示禁用图标,当连接按钮被设置为开启时,在连接按钮所在图层的上层图层显示可用图标,例如,可用图标可以是连接按钮本身的图标,参见图5D,图5D是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,当连接按钮被设置为关闭时,在连接按钮503D上显示禁用图标505D,当连接按钮被设置为开启时,在连接按钮503D上隐藏禁用图标505D仅显示连接按钮503D本身的图标,另外显示禁用图标时可以直接叠加在连接按钮503D本身的图标上进行显示。
在一些实施例中,步骤102中显示至少一个连接按钮,可以通过以下技术方案实现:识别出与虚拟对象在虚拟场景中的状态适配的动作,将关联有对应的动作的按钮作为目标动作按钮,仅显示用于连接目标动作按钮和攻击按钮的连接按钮。由于仅显示与目标动作按钮关联的目标连接按钮,从而可以节约由于同时显示多个连接按钮占据视野的比例,为虚拟场景提供更大的展示区域,且所显示的连接按钮正是用户需要使用的连接按钮,提高了用户找到合适的连接按钮的效率,提高了人机交互的智能化程度。
作为示例,当仅显示用于连接目标动作按钮和攻击按钮的连接按钮,不显示其他动作按钮与攻击按钮之间的连接按钮时,参见图9D,图9D是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901D中显示有连接按钮902D,连接按钮902D用于连接攻击按钮903D与动作按钮904D,连接按钮902D设置在攻击按钮903D和动作按钮904D之间,图9D中仅显示了下蹲动作对应的动作按钮904D、攻击按钮903D以及对应下蹲动作的连接按钮902D,下蹲动作对应的动作按钮904D是目标动作按钮,下蹲动作是目标动作按钮关联的动作,下蹲动作与虚拟对象在虚拟场景中的状态适配,例如,虚拟场景中存在较多敌人,用户需要攻击敌人也需要适当地隐蔽,因此与虚拟对象在虚拟场景中的状态适配的动作是下蹲动作。
在一些实施例中,步骤102中显示至少一个连接按钮,可以通过以下技术方案实现:针对至少一个动作按钮中的目标动作按钮,基于第一显示方式显示用于连接目标动作按钮和攻击按钮的连接按钮,并针对至少一个动作按钮中除目标动作按钮的其他动作按钮,基于第二显示方式显示连接其他动作按钮和攻击按钮的连接按钮,从而更显著地提示用户去触发目标动作按钮关联的连接按钮,从而提高用户的操作效率。
作为示例,参见图9E,图9E是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901E中显示有连接按钮902E,连接按钮902E用于连接攻击按钮903E与下蹲动作按钮904E,人机交互界面901E中显示有连接按钮905E,连接按钮905E用于连接攻击按钮903E与下趴动作按钮906E,若下蹲动作按钮是目标动作按钮,则基于第一显示方式显示用于连接攻击按钮903E与下蹲动作按钮904E的连接按钮902E,并基于第二显示方式显示用于连接攻击按钮903E与下趴动作按钮906E的连接按钮905E,第一显示方式的显著程度高于第二显示方式。例如,第一显示方式的亮度高于第二显示方式的亮度,例如,第一显示方式的色彩对比度高于第二显示方式的色彩对比度。
作为示例,连接按钮可以一直显示;连接按钮也可以按需显示,即连接按钮从不显示状态切换到显示状态,按需显示指的是在满足按需显示的条件时进行显示,按需显示的条件包括以下至少之一:虚拟对象所属群组与其他群组发生交互,例如,虚拟对象所属群组与其他群组发生交战,虚拟对象所属群组指的是虚拟对象所属的战队,在虚拟场景中至少一个虚拟对象可以形成一个战队在虚拟场景中进行活动;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值。此外,连接按钮还可以按需突出显示,即在一直显示的情况下进行突出显示,例如,显示连接按钮的动态特效,按需突出显示指的是在满足突出显示的条件时进行突出显示,突出显示的条件包括以下至少之一:虚拟对象所属群组与其他群组发生交互;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值。
在一些实施例中,获取虚拟对象的交互数据以及虚拟场景的场景数据,场景数据包括虚拟场景的环境数据、虚拟场景的天气数据、虚拟场景的战况数据中至少之一,虚拟对象的交互数据包括虚拟场景中虚拟对象的位置、虚拟对象的生命值、虚拟对象的装备数据、对战双方的对比数据中至少之一;基于交互数据以及场景数据,调用神经网络模型预测复合动作;其中,复合动作包括攻击操作以及目标动作;将与目标动作关联的动作按钮作为目标动作按钮,通过神经网络预测的方式可以更加准确地确定出目标动作,进而确定出关联的目标动作按钮,从而能够使得复合动作与当前的虚拟场景适配度更高,从而提高用户的操作效率。
作为示例,在样本虚拟场景对中采集每个样本虚拟场景中各个样本虚拟对象之间的样本交互数据,在样本虚拟场景对中采集每个样本虚拟场景的样本场景数据,根据所采集的样本交互数据以及样本场景数据构建训练样本,以训练样本为待训练的神经网络模型的输入,并以与样本虚拟场景适配的样本复合动作为标注数据,训练神经网络模型,从而调用神经网络模型基于交互数据以及场景数据预测复合动作。
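"交互数据 + 场景数据 → 复合动作(攻击操作 + 目标动作)"的调用方式可以用如下Python代码示意。注意这里仅为接口层面的演示:用一个可替换的规则函数代替文中训练好的神经网络模型,特征字段与候选动作均为示例性假设:

```python
# 示意:从交互数据与场景数据构建特征,调用"模型"预测复合动作
def build_features(interaction: dict, scene: dict) -> dict:
    # 实际实施例中特征可包括位置、生命值、装备数据、对战双方对比数据等
    return {"hp": interaction["hp"], "enemies": scene["enemies"]}

def predict_compound_action(features: dict) -> tuple:
    """占位的"模型":实际实施例中此处应调用训练好的神经网络。"""
    action = "下趴" if features["enemies"] >= 3 else "跳跃"
    return ("攻击", action)  # 复合动作 = 攻击操作 + 目标动作

features = build_features({"hp": 55}, {"enemies": 4})
print(predict_compound_action(features))  # ('攻击', '下趴')
```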
在一些实施例中,确定虚拟场景的相似历史虚拟场景;其中,相似历史虚拟场景与虚拟场景的相似度大于相似度阈值;确定相似历史虚拟场景中的最高频动作;其中,最高频动作是多个候选动作中操作频率最高的候选动作;将最高频动作关联的动作按钮作为目标动作按钮,通过场景神经网络模型可以更加准确地确定出场景相似度,从而提高相似历史虚拟场景的确定准确度,从而基于相似历史虚拟场景得到的最高频动作能够最适合应用到当前的虚拟场景中,从而后续用户能够准确高效地控制虚拟对象在虚拟场景中实施对应的动作,从而有效提高用户的操作效率。
作为示例,确定出虚拟场景A的相似历史虚拟场景B,虚拟场景A与相似历史虚拟场景B之间的相似度大于相似度阈值,采集虚拟场景A的交互数据以及历史虚拟场景的交互数据,基于交互数据调用场景神经网络模型进行场景相似度预测处理,得到虚拟场景A与历史虚拟场景的场景相似度;其中,交互数据包括以下至少之一:虚拟场景A中虚拟对象的位置、虚拟对象的生命值、虚拟对象的装备数据、对战双方的对比数据。
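上述"先筛选相似度大于相似度阈值的历史虚拟场景,再取其中操作频率最高的候选动作"的选取逻辑,可以用如下Python代码示意(相似度数值与动作频率均为假设数据):

```python
# 示意:基于相似历史虚拟场景确定最高频动作
def pick_target_action(similarities: dict, histories: dict, threshold: float = 0.8):
    # similarities: 历史场景 -> 与当前虚拟场景的相似度
    # histories:    历史场景 -> {候选动作: 操作频率}
    similar = [s for s, sim in similarities.items() if sim > threshold]
    if not similar:
        return None
    best_scene = max(similar, key=lambda s: similarities[s])  # 取最相似的历史场景
    freq = histories[best_scene]
    return max(freq, key=freq.get)  # 最高频动作即目标动作按钮关联的动作

sims = {"历史B": 0.92, "历史C": 0.55}
hist = {"历史B": {"下蹲": 30, "跳跃": 75}, "历史C": {"下趴": 90}}
print(pick_target_action(sims, hist))  # 跳跃
```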
在一些实施例中,每个连接按钮用于连接一个攻击按钮和一个动作按钮的方式包括:连接按钮分别与一个攻击按钮和一个动作按钮部分重合;连接按钮的显示区域通过连接标识分别与一个攻击按钮和一个动作按钮连接,通过重合的方式将连接按钮分别与攻击按钮和动作按钮进行显示上的关联,能够在不影响视野的情形下,向用户提示人机交互界面中所布局的多个按钮之间的连接关系,从而避免误触发连接按钮,例如,用户本想控制虚拟对象同步进行射击以及跳跃,但是由于按钮布局所表征的连接关系不明确,从而导致用户触发了下蹲动作按钮与射击按钮的连接按钮,使得虚拟对象同步进行下蹲以及射击。
作为示例,参见图9A,图9A是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901A中还显示有连接按钮902A,连接按钮902A用于连接攻击按钮903A与动作按钮904A,连接按钮902A设置在攻击按钮903A和动作按钮904A之间,并与攻击按钮903A和动作按钮904A的显示区域存在部分重叠,参见图9B,图9B是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,连接标识包括以下至少之一:箭头、曲线、线段,人机交互界面901B中还显示有连接按钮902B,连接按钮902B用于连接攻击按钮903B与动作按钮904B,连接按钮902B设置在攻击按钮903B和动作按钮904B之间,并与攻击按钮903B和动作按钮904B的显示区域不存在重叠,连接按钮902B通过线条(箭头、曲线、线段)与攻击按钮903B和动作按钮904B连接。
在一些实施例中,参见图4C,图4C是本申请实施例提供的虚拟场景的对象控制方法的流程示意图,步骤102中在显示至少一个连接按钮之前,执行步骤104。
在步骤104中,确定满足自动显示至少一个连接按钮的条件。
作为示例,条件包括以下至少之一:虚拟对象的群组与其他群组的其他虚拟对象之间发生交互,例如,虚拟对象的群组与其他群组的虚拟对象进行交战;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值。
作为示例,可以按条件显示连接按钮,在不满足条件时仅显示攻击按钮以及动作按钮,在满足条件后显示连接按钮,从而可以保证用户的战斗视野,当虚拟对象的群组与其他群组的其他虚拟对象之间发生交互,例如,发生战斗,自动显示至少一个连接按钮,当虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值时,自动显示至少一个连接按钮。
作为示例,可以保持连接按钮处于显示状态,当显示攻击按钮和至少一个动作按钮时,总是同步显示至少一个连接按钮,从而即便虚拟对象的群组与其他群组的其他虚拟对象之间未发生交互,或者虚拟对象与其他群组的其他虚拟对象的距离不小于距离阈值,即在任何情况下,均可以保持显示连接按钮,从而可以使得用户随时触发连接按钮,提高用户操作的灵活性。
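上述按条件自动显示连接按钮的判断(所属群组与其他群组发生交互,或与其他群组虚拟对象的距离小于距离阈值)可以用如下Python代码示意(坐标与距离计算方式为假设):

```python
# 示意:满足任一条件即自动显示至少一个连接按钮
import math

def should_show_connection_buttons(in_combat: bool, self_pos, enemy_positions,
                                   distance_threshold: float) -> bool:
    if in_combat:  # 虚拟对象的群组与其他群组发生交互(如交战)
        return True
    for pos in enemy_positions:
        if math.dist(self_pos, pos) < distance_threshold:  # 距离小于距离阈值
            return True
    return False

print(should_show_connection_buttons(False, (0, 0), [(3, 4), (30, 40)], 10.0))  # True
```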
在一些实施例中,在显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮之后,响应于针对任意一个动作按钮的替换操作,显示多个候选动作;其中,多个候选动作与至少一个动作按钮关联的动作均不同;响应于针对多个候选动作的选择操作,将与任意一个动作按钮关联的动作替换为被选中的候选动作。
作为示例,本申请实施例提供的虚拟场景的对象控制方法提供动作按钮的调整功能,在虚拟场景的对战过程中,提供动作按钮的替换功能,将动作按钮关联的动作替换为其他动作,以便灵活切换各种动作,人机交互界面中显示有连接按钮,连接按钮用于连接攻击按钮与动作按钮,攻击按钮默认与虚拟对象当前持有的虚拟道具关联,响应于针对动作按钮的替换操作,显示待替换的多个候选键位内容,即显示多个候选动作,例如,动作按钮的键位内容为下蹲动作,响应于针对待替换的多个候选动作的选择操作,将选择的候选键位内容更新至动作按钮以替换下蹲动作,即支持将原始键位内容为下蹲的动作按钮的键位内容替换为下趴,还可以替换为探头,可以实现射击操作与探头操作的组合攻击方式,从而可以在不占用过多显示区域的情况下,实现多种动作组合,从而实现多种组合攻击方式。
作为示例,本申请实施例提供的虚拟场景的对象控制方法提供动作按钮的调整功能还可以根据用户操作习惯自动替换,在虚拟场景的对战过程中,提供动作按钮的替换功能,将动作按钮关联的动作替换为其他动作,以便灵活切换各种动作,人机交互界面中显示有连接按钮,连接按钮用于连接攻击按钮与动作按钮,攻击按钮默认与虚拟对象当前持有的虚拟道具关联,响应于用户的替换操作或者响应于虚拟场景发生变化,将自动匹配得到的键位内容更新至动作按钮以替换下蹲动作,即支持将原始键位内容为下蹲的动作按钮的键位内容替换为自动匹配得到的键位内容,例如,替换为下趴,自动匹配的过程是根据虚拟场景匹配得到的,即得到与虚拟场景适配的动作作为键位内容,从而可以在不占用过多显示区域的情况下,智能化地实现多种动作组合,从而实现多种组合攻击方式。
在一些实施例中,攻击道具处于单次攻击模式;步骤103中控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作,可以通过以下技术方案实现:控制虚拟对象执行一次目标动作按钮关联的动作,当执行动作完成后的姿态与执行动作前的姿态不同时,恢复虚拟对象执行动作前的姿态,以及从控制虚拟对象执行目标动作按钮关联的动作开始,控制虚拟对象使用攻击道具进行一次攻击操作,通过瞬时性的操作控制虚拟对象执行瞬时性的动作,能够进行轻量化的操作,便于用户在对战过程中进行灵活的交互操作。
作为示例,执行动作完成后的姿态与执行动作前的姿态不同的动作包括下趴和蹲下,针对连接按钮的触发操作是不可拖动且属于瞬时性操作,例如触发操作是点击操作,控制虚拟对象执行一次目标动作按钮关联的动作,当动作是下趴动作或者蹲下动作时,恢复虚拟对象执行动作前的姿态,即恢复虚拟对象站立,执行动作完成后的姿态与执行动作前的姿态相同时,例如,动作是跳跃动作时,完成一次跳跃动作之后已经恢复至执行动作之前的姿态了,即动作本身具有恢复的能力,从而不需要再次恢复虚拟对象至执行动作之前的姿态,并从控制虚拟对象执行目标动作按钮关联的动作开始,控制虚拟对象使用攻击道具进行一次攻击操作,在整个过程中视角没变。
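单次攻击模式下"执行一次动作并进行一次攻击,若动作完成后的姿态与执行前不同则恢复执行前姿态"的处理,可以用如下Python代码示意(姿态表与返回结构均为假设):

```python
# 示意:单次攻击模式下的姿态恢复逻辑
POSTURE_AFTER = {"下蹲": "蹲姿", "下趴": "趴姿", "跳跃": "站立"}  # 假设的动作->姿态表

def single_shot_compound(action: str, posture_before: str = "站立"):
    events = ["攻击x1", f"动作:{action}"]   # 同步进行一次攻击与一次动作
    posture = POSTURE_AFTER[action]
    if posture != posture_before:           # 姿态发生变化才需要恢复
        events.append(f"恢复姿态:{posture_before}")
        posture = posture_before
    return events, posture

print(single_shot_compound("下蹲"))  # (['攻击x1', '动作:下蹲', '恢复姿态:站立'], '站立')
print(single_shot_compound("跳跃"))  # (['攻击x1', '动作:跳跃'], '站立')
```

跳跃动作本身具有恢复能力(完成后即回到站立),因此不会追加恢复事件,这与上文的描述一致。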
作为示例,参见图7C,图7C是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,在步骤701C中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤702C中,控制虚拟对象执行单次射击操作(射出单发子弹),并同步执行步骤703C,在步骤703C中,控制虚拟对象完成相应动作后恢复至执行动作前的姿态,例如,下蹲或者下趴后恢复站立的姿态,由于触发操作是不可拖动且属于瞬时性操作,因此在执行步骤702C和步骤703C之后将不再执行其他动作。
在一些实施例中,触发操作是针对目标连接按钮的持续性的操作;在恢复虚拟对象执行动作前的姿态之前,保持执行动作完成后的姿态直至触发操作被释放;当触发操作产生移动轨迹时,根据移动轨迹的方向和角度,同步更新虚拟场景的视野角度;响应于触发操作被释放,停止更新虚拟场景的视野角度,相关技术中视野改变是通过图1中方向按钮302实现的,本申请实施例中复用连接按钮,通过在连接按钮上进行拖动,实现视野角度的更新,从而降低了用户对战过程中的操作难度,提升人机交互效率以及操作效率。
作为示例,执行动作完成后的姿态与执行动作前的姿态不同的动作包括下趴和蹲下,针对连接按钮的触发操作是持续性的拖动操作,例如触发操作是按压操作,在恢复虚拟对象执行动作前的姿态之前,当执行动作完成后的姿态与执行动作前的姿态不同时,例如,动作是下趴动作或者蹲下动作,保持下趴动作或者蹲下动作的姿态直至触发操作被释放,当触发操作产生移动轨迹时,即针对连接按钮的触发操作被拖动,则根据移动轨迹的方向和角度,同步更新虚拟场景的视野角度,在产生移动轨迹时,由于触发操作未被释放,当执行动作完成后的姿态与执行动作前的姿态不同时,即便产生移动轨迹仍然会保持姿态,当执行动作完成后的姿态与执行动作前的姿态相同时,在产生移动轨迹的同时,会保持执行动作前的姿态,例如保持站立姿态,响应于触发操作被释放,停止更新虚拟场景的视野角度。
作为示例,参见图7A,图7A是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,当虚拟道具处于单发开火模式下时,单发开火模式指针对连接按钮的每次触发,仅执行一次攻击操作,在步骤701A中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤702A中,控制虚拟对象执行单次射击操作(射出单发子弹),并同步执行步骤703A,在步骤703A中,控制虚拟对象完成相应动作,例如,下蹲或者下趴,在步骤704A中,控制虚拟对象在步骤702A的基础上不再射击,并同步执行步骤705A,控制虚拟对象在步骤703A的基础上保持下蹲或者保持下趴,在步骤706A中,判断针对连接按钮的触发操作是否产生移动轨迹,相当于判断用户是否拖动连接按钮进行移动,当未拖动时,继续执行步骤705A以及步骤704A,当拖动时,执行步骤707A,在步骤707A中,在步骤705A以及步骤704A的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤708A中,判断触发操作是否停止,即用户的手指是否松开,当触发操作未停止时,执行步骤707A,当触发操作停止时,执行步骤709A,在步骤709A中,控制虚拟对象将动作复原为站立、且视角停止移动。
在一些实施例中,攻击道具处于连续攻击模式;步骤103中控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作,可以通过以下技术方案实现:当执行动作完成后的姿态与执行动作前的姿态不同时,控制虚拟对象执行一次目标动作按钮关联的动作,并保持执行动作完成后的姿态;当执行动作完成后的姿态与执行动作前的姿态相同时,控制虚拟对象执行一次目标动作按钮关联的动作;从控制虚拟对象执行目标动作按钮关联的动作开始,控制目标对象使用攻击道具持续进行攻击操作;当执行动作完成后的姿态与执行动作前的姿态不同时,响应于触发操作被释放,恢复虚拟对象执行动作前的姿态,并停止控制虚拟对象使用攻击道具持续进行攻击操作;当执行动作完成后的姿态与执行动作前的姿态相同时,响应于触发操作被释放,停止控制虚拟对象使用攻击道具持续进行攻击操作,通过连续性的攻击提升用户的攻击效率,并在连续性攻击过程中保持动作完成后的姿态,从而有效提高攻击效果。
在一些实施例中,当执行动作完成后的姿态与执行动作前的姿态相同时,还可以控制虚拟对象执行多次目标动作按钮关联的动作直至触发操作被释放,例如,当动作为跳跃动作时,可以控制虚拟对象多次完成跳跃动作直至触发操作被释放,即虚拟对象在保持射击的同时不断跳跃。
作为示例,执行动作完成后的姿态与执行动作前的姿态不同的动作包括以下至少之一:趴下、蹲下,执行动作完成后的姿态与执行动作前的姿态相同的动作包括跳跃,针对连接按钮的触发操作是不可拖动且属于瞬时性操作,例如触发操作是点击操作,可以在设定时间内保持连续攻击后停止攻击,还可以连续进行设定次数的攻击后停止攻击,由于触发操作是瞬时性的,因此会恢复虚拟对象执行动作前的姿态,或者在攻击结束之前虚拟对象保持动作后姿态,攻击结束后恢复虚拟对象执行动作前的姿态,由于触发操作并未被拖动,因此虚拟场景的视角未发生变化。
作为示例,参见图6C,图6C是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,当虚拟道具处于连射开火模式下时,在步骤601C中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤602C中,控制虚拟对象执行射击操作,并同步执行步骤603C,在步骤603C中,控制虚拟对象完成相应动作,例如,下蹲或者下趴,在步骤604C中,控制虚拟对象在步骤602C的基础上保持连续射击操作,并同步执行步骤605C,控制虚拟对象在步骤603C的基础上保持下蹲或者保持下趴,在步骤606C中,判断触发操作是否停止,即手指是否松开,当触发操作停止时,执行步骤607C,在步骤607C中,停止执行射击操作并将动作复原为站立。
在一些实施例中,触发操作是针对目标连接按钮的持续性的操作,例如,持续性的按压操作,响应于触发操作产生移动轨迹,根据移动轨迹的方向和角度,同步更新虚拟场景的视野角度;响应于触发操作被释放,停止更新虚拟场景的视野角度,相关技术中视野改变是通过图1中方向按钮302实现的,本申请实施例中复用连接按钮,通过在连接按钮上进行拖动,实现视野角度的更新,从而降低了用户对战过程中的操作难度,提升人机交互效率以及操作效率。
作为示例,参见图6A,图6A是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,当虚拟道具处于连射开火模式下时,在步骤601A中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤602A中,控制虚拟对象执行射击操作,并同步执行步骤603A,在步骤603A中,控制虚拟对象完成相应动作,例如,下蹲或者下趴,在步骤604A中,控制虚拟对象在步骤602A的基础上保持连续射击操作,并同步执行步骤605A,控制虚拟对象在步骤603A的基础上保持下蹲或者保持下趴,在步骤606A中,判断针对连接按钮的触发操作是否产生移动轨迹,即是否拖动手指,当未拖动手指时,执行步骤605A以及步骤604A,当拖动手指时,执行步骤607A,在步骤607A中,在步骤605A以及步骤604A的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤608A中,判断触发操作是否停止,即手指是否松开,当触发操作未停止时,执行步骤607A,当触发操作停止时,执行步骤609A,在步骤609A中,停止射击、动作复原为站立、且视角停止移动。
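连射开火模式下针对连接按钮的处理流程(按住期间持续射击并保持姿态,拖动时按移动轨迹更新视野角度,松开后停止射击并复原为站立)可以用如下Python代码示意(事件名称与状态字段均为假设):

```python
# 示意:连射开火模式下连接按钮的事件处理
def continuous_fire_session(events):
    state = {"firing": False, "posture": "站立", "view": 0.0}
    log = []
    for ev in events:
        kind = ev[0]
        if kind == "press":          # 触发连接按钮:开始连续射击并完成动作
            state["firing"], state["posture"] = True, ev[1]
        elif kind == "drag" and state["firing"]:
            state["view"] += ev[1]   # 根据移动轨迹的角度同步更新视野角度
        elif kind == "release":      # 触发操作被释放:停止射击并复原为站立
            state["firing"], state["posture"] = False, "站立"
        log.append(dict(state))      # 记录每个事件之后的状态
    return log

log = continuous_fire_session([("press", "蹲姿"), ("drag", 15.0), ("release",)])
print(log[-1])  # {'firing': False, 'posture': '站立', 'view': 15.0}
```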
在一些实施例中,目标动作按钮的工作模式包括手动模式和锁定模式;其中,手动模式用于在触发操作释放后停止触发目标连接按钮,锁定模式用于在触发操作释放后继续自动触发目标动作按钮;步骤103中控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作,可以通过以下技术方案实现:当触发操作控制目标动作按钮进入手动模式时,在触发操作未被释放的期间,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作;当触发操作控制目标动作按钮进入锁定模式时,在触发操作未被释放的期间、以及触发操作被释放之后的期间,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作,通过锁定模式可以解放用户的双手,即便触发操作被释放,仍然可以继续保持攻击以及执行相应的动作,有效提高用户的操作效率。
作为示例,在触发操作被释放之后的期间,可以在持续设定时间内保持连续攻击后停止攻击,还可以连续进行设定次数的攻击后停止攻击,或者在再次接收到针对锁定模式的触发操作时,停止控制虚拟对象使用攻击道具持续进行攻击操作,且当执行动作完成后的姿态与执行动作前的姿态不同时,恢复虚拟对象执行动作前的姿态。
作为示例,本申请实施例提供的虚拟场景的对象控制方法中连接按钮可以自动持续触发,即连接按钮除了具有手动模式,还具有锁定模式,在锁定模式中,当连接按钮被触发时,虚拟对象能自动重复执行复合动作(例如单发射击操作以及跳跃操作),以降低操作难度,以连接按钮关联的攻击操作是单发射击操作为例,响应于针对连接按钮的锁定触发操作,自动重复执行单发射击操作并自动重复执行跳跃操作,例如,在用户按压连接按钮到达预设时长时,将按压操作确定为锁定触发操作,连接按钮被锁定,即使用户松开手指后虚拟对象仍然维持连接按钮所对应的动作,例如,持续进行单发射击并持续跳跃,响应于用户再次点击连接按钮的操作,连接按钮被解锁,虚拟对象解除连接按钮对应的动作,例如,停止进行单发射击并停止跳跃,连接按钮锁定可以有利于虚拟对象持续性执行攻击以及动作,从而提高操作效率,尤其针对单次性攻击以及单次性动作,通过锁定连接按钮,可以实现自动连续攻击,从而提高操作效率。
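上述"按压时长达到预设值则进入锁定模式,松开后仍自动持续触发,再次点击则解锁"的切换逻辑,可以用如下Python代码示意(时长阈值与返回的提示文本均为假设):

```python
# 示意:连接按钮手动模式/锁定模式的切换
LOCK_PRESS_SECONDS = 1.0  # 假设的锁定判定时长(预设时长)

class ConnectionButtonMode:
    def __init__(self):
        self.locked = False

    def on_press_release(self, press_seconds: float) -> str:
        if self.locked:              # 锁定状态下的再次点击:解锁并停止动作
            self.locked = False
            return "解锁,停止攻击与动作"
        if press_seconds >= LOCK_PRESS_SECONDS:
            self.locked = True       # 长按达到预设时长:进入锁定模式
            return "锁定,持续攻击与动作"
        return "手动模式,释放即停止"

btn = ConnectionButtonMode()
print(btn.on_press_release(1.2))  # 锁定,持续攻击与动作
print(btn.on_press_release(0.1))  # 解锁,停止攻击与动作
```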
在一些实施例中,当虚拟场景处于按钮设置状态时,虚拟场景处于按钮设置状态表征虚拟场景不处于对战状态,从而用户可以安心设置按钮,响应于针对至少一个连接按钮的选中操作,按照目标显示方式显示每个被选中的连接按钮;其中,目标显示方式显著于未被选中的连接按钮的显示方式;针对每个被选中的连接按钮执行以下处理:当连接按钮处于禁用状态时,响应于针对连接按钮的开启操作,隐藏连接按钮的禁用图标,并将连接按钮标记为开启状态;当连接按钮处于开启状态时,响应于针对连接按钮的禁用操作,针对连接按钮显示禁用图标,并将连接按钮标记为禁用状态,通过用户的个性化设置对连接按钮的可用状态进行设置以及提示,从而提高人机交互效率以及个性化程度,能够提高用户的操作效率。
作为示例,参见图8,图8是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,在步骤801中,接收针对目标连接按钮的开关设置逻辑操作,在步骤802中,显示目标连接按钮的开关选项,同时执行步骤803,在步骤803中,对连接按钮的外框进行高亮显示,并显示连接指引线,在步骤804中,判断是否接收到针对空白区域的点击操作,若未接收到针对空白区域的点击操作,则继续执行步骤802和步骤803,若接收到针对空白区域的点击操作,执行步骤805,在步骤805中,隐藏开关选项,并执行步骤806,在步骤806中,对连接按钮的外框取消高亮显示,并隐藏连接指引线,在步骤802和步骤803之后,执行步骤807,在步骤807中,接收针对开关选项的点击操作,在步骤808中,判断开关选项是否为“开启”,当开关选项为“开启”时,执行步骤809,在步骤809中,将开关选项切换为“关闭”并在连接按钮上层显示禁用图标,当开关选项为“关闭”时,执行步骤810,在步骤810中,将开关选项切换为“开启”并在连接按钮上层隐藏禁用图标。
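图8中开关选项的切换逻辑("开启"切换为"关闭"并显示禁用图标,"关闭"切换为"开启"并隐藏禁用图标)可以用如下Python代码示意(状态字段名为假设):

```python
# 示意:按钮设置状态下连接按钮开关选项的切换
def toggle_switch(state: dict) -> dict:
    if state["switch"] == "开启":
        # 切换为"关闭",并在连接按钮上层显示禁用图标
        return {"switch": "关闭", "disabled_icon": True}
    # 切换为"开启",并隐藏禁用图标
    return {"switch": "开启", "disabled_icon": False}

s = {"switch": "开启", "disabled_icon": False}
s = toggle_switch(s)
print(s)  # {'switch': '关闭', 'disabled_icon': True}
```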
下面,将说明本申请实施例在一个实际的应用场景中的示例性应用。
终端运行客户端(例如单机版的游戏应用),在客户端的运行过程中输出包括有角色扮演的虚拟场景,虚拟场景是供游戏角色交互的环境,例如可以是用于供游戏角色进行对战的平原、街道、山谷等等;虚拟场景中包括虚拟对象、连接按钮、动作按钮、攻击按钮,虚拟对象可以是受用户(或称玩家)控制的游戏角色,即虚拟对象受控于真实用户,将响应于真实用户针对控制器(包括触控屏、声控开关、键盘、鼠标和摇杆等)的操作而在虚拟场景中运动,例如当真实用户向左移动摇杆时,虚拟对象将在虚拟场景中向左移动,响应于针对动作按钮的触发操作,控制虚拟对象在虚拟场景中执行动作,响应于针对攻击按钮的触发操作,控制虚拟对象在虚拟场景中进行攻击操作,响应于针对连接按钮的触发操作,控制虚拟对象执行动作并同步进行攻击操作。
下文中以攻击按钮是射击按钮,攻击操作是射击操作为例进行说明,攻击操作不局限在射击操作,攻击按钮也可以应用为使用其他攻击道具的按钮,例如,可以使用不同的攻击道具进行攻击,其中,攻击道具包括以下至少之一:枪、弓弩、手雷,将人机交互界面中所显示的攻击按钮默认与虚拟对象当前持有的攻击道具关联,当虚拟对象持有的虚拟道具从手枪切换为弓弩时,则攻击按钮关联的虚拟道具自动从手枪切换到弓弩。
参见图1,在虚拟场景的人机交互界面301的默认布局中,攻击按钮303的右侧环绕显示三个动作按钮304,三个动作按钮分别对应有下蹲动作、下趴动作以及跳跃动作,参见图5A,人机交互界面501A中还显示有连接按钮502A,在攻击按钮503A与三个动作按钮504A之间显示三个连接按钮502A,响应于用户针对连接按钮502A的触发操作,即可一键控制虚拟对象505A同时完成射击操作以及对应动作,例如,图5A中触发的连接按钮502A对应的动作按钮504A用于控制虚拟对象505A执行下蹲动作,即可一键控制虚拟对象505A同时完成射击操作以及下蹲动作,响应于用户针对攻击按钮503A的触发操作,控制虚拟对象单独执行攻击操作,响应于用户针对动作按钮504A的触发操作,控制虚拟对象单独执行下蹲动作。
作为示例,以攻击按钮为原点,还可以将攻击按钮与更多的动作按钮进行连接,例如射击按钮与开镜按钮的连接按钮,响应于针对该连接按钮的触发操作,同时进行射击操作以及开镜操作,射击按钮与探头按钮的连接按钮,响应于针对该连接按钮的触发操作,同时进行射击操作以及探头操作,射击按钮与滑铲按钮的连接按钮,响应于针对该连接按钮的触发操作,同时进行射击操作以及滑铲操作。
参见图5B,图5B是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面501B中还显示有连接按钮502B,连接按钮502B用于连接攻击按钮503B与动作按钮504B,当动作按钮504B被触发时,虚拟对象将执行下趴动作,在攻击按钮503B与动作按钮504B之间显示连接按钮502B,响应于用户针对连接按钮502B的触发操作,即可一键控制虚拟对象505B同时完成射击操作以及下趴动作,响应于用户针对攻击按钮503B的触发操作,控制虚拟对象505B单独执行攻击操作,响应于用户针对动作按钮504B的触发操作,控制虚拟对象单独执行下趴动作。
参见图5C,图5C是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面501C中还显示有连接按钮502C,在攻击按钮503C与动作按钮504C之间显示连接按钮502C,响应于用户针对连接按钮502C的触发操作,即可一键控制虚拟对象505C同时完成射击操作以及跳跃动作,响应于用户针对动作按钮504C的触发操作,控制虚拟对象单独执行跳跃动作。
参见图5E,人机交互界面501E中显示有下蹲动作按钮504-1E、下趴动作按钮504-2E以及跳跃动作按钮504-3E,人机交互界面501E中还显示有攻击按钮503E,攻击按钮503E与下蹲动作按钮504-1E之间显示有下蹲连接按钮502-1E,攻击按钮503E与下趴动作按钮504-2E之间显示有下趴连接按钮502-2E,攻击按钮503E与跳跃动作按钮504-3E之间显示有跳跃连接按钮502-3E,响应于用户针对攻击按钮503E的触发操作,控制虚拟对象505E单独执行攻击操作,响应于用户针对下蹲动作按钮504-1E的触发操作,控制虚拟对象单独执行下蹲动作,响应于用户针对下趴动作按钮504-2E的触发操作,控制虚拟对象单独执行下趴动作,响应于用户针对跳跃动作按钮504-3E的触发操作,控制虚拟对象单独执行跳跃动作。
参见图5D,用户可在自定义设置中分别控制连接按钮是否开启,人机交互界面501D中显示按钮自定义界面506D,此时表征用户可以针对人机交互界面501D中的按钮进行自定义设置,响应于用户针对任一连接按钮503D的触发操作,在连接按钮503D上方显示开启按钮502D以及关闭按钮504D,开启按钮502D以及关闭按钮504D即可控制连接按钮503D的开启与关闭,即控制连接按钮503D在对战过程中显示或者隐藏。开启按钮与关闭按钮之间只会有一个按钮处于可操作状态,参见图5D,响应于针对关闭按钮504D的触发操作,将开启按钮502D显示为可操作状态,并在连接按钮503D上显示禁用图标505D,响应于针对开启按钮502D的触发操作,将关闭按钮504D显示为可操作状态,并在连接按钮503D上隐藏禁用图标505D,在连接按钮503D上显示禁用图标505D后,响应于针对人机交互界面501D的空白区域的触发操作,隐藏开启按钮502D以及关闭按钮504D。
参见图6A,当虚拟道具处于连射开火模式下时,在步骤601A中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤602A中,控制虚拟对象执行射击操作,并同步执行步骤603A,在步骤603A中,控制虚拟对象完成相应动作,例如,下蹲或者下趴,在步骤604A中,控制虚拟对象在步骤602A的基础上保持连续射击操作,并同步执行步骤605A,控制虚拟对象在步骤603A的基础上保持下蹲或者保持下趴,在步骤606A中,判断针对连接按钮的触发操作是否产生移动轨迹,即是否拖动手指,当未拖动手指时,执行步骤605A以及步骤604A,当拖动手指时,执行步骤607A,在步骤607A中,在步骤605A以及步骤604A的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤608A中,判断触发操作是否停止,即手指是否松开,当触发操作未停止时,执行步骤607A,当触发操作停止时,执行步骤609A,在步骤609A中,停止射击、动作复原为站立、且视角停止移动。
作为示例,在武器连续射击开火模式下,接收用户点击攻击按钮与下蹲动作按钮间的连接按钮的操作,或接收用户点击攻击按钮与下趴动作按钮间的连接按钮的操作,用户点击连接按钮相当于同时触发连续射击和动作操作,开始射击同时完成对应的下蹲或下趴动作,若用户一直按住连接按钮未松开手指,则保持触发连续射击并保持动作,用户保持按住连接按钮且拖动手指即可在保持触发连续射击并保持动作的基础上同时控制视角移动,若用户未松开手指,则保持连续射击,且保持下蹲或下趴动作,若用户松开手指,则停止射击,从下蹲或下趴动作复原为站立,且视角停止移动。
参见图6B,图6B是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,当虚拟道具处于连射开火模式下时,在步骤601B中,触发攻击按钮与跳跃动作按钮之间的连接按钮,在步骤602B中,控制虚拟对象执行射击操作,并同步执行步骤603B,在步骤603B中,控制虚拟对象完成单次跳跃动作,在步骤604B中,控制虚拟对象在步骤602B的基础上保持连续射击操作,并同步执行步骤605B,控制虚拟对象在步骤603B的基础上不再跳跃、且保持动作复原为站立的状态,在步骤606B中,判断针对连接按钮的触发操作是否产生移动轨迹,即是否拖动手指,当未拖动手指时,执行步骤605B以及步骤604B,当拖动手指时,执行步骤607B,在步骤607B中,在步骤605B以及步骤604B的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤608B中,判断触发操作是否停止,即手指是否松开,当触发操作未停止时,执行步骤607B,当触发操作停止时,执行步骤609B,在步骤609B中,停止射击、且视角停止移动。
作为示例,在武器连续射击开火模式下,接收用户点击攻击按钮与跳跃动作按钮间的连接按钮的操作,用户点击连接按钮相当于同时触发连续射击和动作操作,开始射击同时完成单次跳跃动作并复原为站立状态,若用户一直按住连接按钮未松开手指,则保持触发连续射击操作,但单次跳跃动作结束后角色动作复原为站立,不再重复触发跳跃动作,用户保持按住连接按钮拖动手指即可在保持触发连续射击的基础上同时控制视角移动,若跳跃动作已结束则只在保持连续射击的基础上同时控制视角移动,若用户未松开手指,则保持连续射击,但不再触发后续的跳跃动作,若用户松开手指,则停止连续射击,且停止视角移动。
参见图7A,图7A是本申请实施例提供的虚拟场景的对象控制方法的逻辑示意图,当虚拟道具处于单发开火模式下时,在步骤701A中,触发攻击按钮与下蹲动作按钮之间的连接按钮或者触发攻击按钮与下趴动作按钮之间的连接按钮,在步骤702A中,控制虚拟对象执行单次射击操作(射出单发子弹),并同步执行步骤703A,在步骤703A中,控制虚拟对象完成相应动作,例如,下蹲或者下趴,在步骤704A中,控制虚拟对象在步骤702A的基础上不再射击,并同步执行步骤705A,控制虚拟对象在步骤703A的基础上保持下蹲或者保持下趴,在步骤706A中,判断针对连接按钮的触发操作是否产生移动轨迹,即是否拖动手指,当未拖动手指时,执行步骤705A以及步骤704A,当拖动手指时,执行步骤707A,在步骤707A中,在步骤705A以及步骤704A的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤708A中,判断触发操作是否停止,即手指是否松开,当触发操作未停止时,执行步骤707A,当触发操作停止时,执行步骤709A,在步骤709A中,动作复原为站立、且视角停止移动。
作为示例,在武器单发射击开火模式下,接收用户点击攻击按钮与下蹲动作按钮间的连接按钮的操作,或接收用户点击攻击按钮与下趴动作按钮间的连接按钮的操作,用户点击连接按钮相当于同时触发单发射击和动作操作,开始完成单发射击同时完成对应的下蹲或下趴动作,若用户一直按住连接按钮未松开手指,在单发射击完成后不再次触发射击,只保持持续触发下蹲或下趴动作,用户保持按住连接按钮拖动手指即可在单次射击并保持动作的基础上同时控制视角移动,若单发射击已完成则只在保持动作的基础上同时控制视角移动,若用户未松开手指,保持下蹲或下趴动作的同时控制视角移动,单发射击完成后停止射击,不再次触发射击,若用户松开手指,则虚拟对象的下蹲或下趴动作复原为站立动作,且视角停止移动。
参见图7B,当虚拟道具处于单发开火模式下时,在步骤701B中,触发攻击按钮与跳跃动作按钮之间的连接按钮,在步骤702B中,控制虚拟对象执行射击操作(射出单发子弹),并同步执行步骤703B,在步骤703B中,控制虚拟对象完成单次跳跃动作,在步骤704B中,控制虚拟对象在步骤702B的基础上不再射击,并同步执行步骤705B,控制虚拟对象在步骤703B的基础上不再跳跃、且保持动作复原为站立的状态,在步骤706B中,判断针对连接按钮的触发操作是否产生移动轨迹,即是否拖动手指,当未拖动手指时,执行步骤705B以及步骤704B,当拖动手指时,执行步骤707B,在步骤707B中,在步骤705B以及步骤704B的基础上控制虚拟对象的视角按照触发操作的移动轨迹进行移动,在步骤708B中,判断触发操作是否停止,即手指是否松开,当触发操作未停止时,执行步骤707B,当触发操作停止时,执行步骤709B,在步骤709B中,视角停止移动。
作为示例,在武器单发射击开火模式下,接收用户点击攻击按钮与跳跃动作按钮间的连接按钮的操作,用户点击连接按钮相当于同时触发单发射击和动作操作,开始单发射击同时完成单次跳跃动作并复原为站立状态,即使用户持续按住连接按钮,在单发射击完成后不再次触发射击,在单次跳跃动作结束后虚拟对象动作复原为站立,不再重复触发跳跃动作,用户保持按住连接按钮拖动手指即可在触发单次射击并保持动作的基础上同时控制视角移动,若单次射击与跳跃动作已结束则只控制视角移动,若用户松开手指,则停止视角移动。
参见图8,在步骤801中,接收针对目标连接按钮的开关设置逻辑操作,在步骤802中,显示目标连接按钮的开关选项,同时执行步骤803,在步骤803中,对连接按钮的外框进行高亮显示,并显示连接指引线,在步骤804中,判断是否接收到针对空白区域的点击操作,若未接收到针对空白区域的点击操作,则继续执行步骤802和步骤803,若接收到针对空白区域的点击操作,执行步骤805,在步骤805中,隐藏开关选项,并执行步骤806,在步骤806中,对连接按钮的外框取消高亮显示,并隐藏连接指引线,在步骤802和步骤803之后,执行步骤807,在步骤807中,接收针对开关选项的点击操作,在步骤808中,判断开关选项是否为“开启”,当开关选项为“开启”时,执行步骤809,在步骤809中,将开关选项切换为“关闭”并在连接按钮上层显示禁用图标,当开关选项为“关闭”时,执行步骤810,在步骤810中,将开关选项切换为“开启”并在连接按钮上层隐藏禁用图标。
作为示例,接收针对目标连接按钮的开关设置逻辑操作后,人机交互界面处于可进行布局设置状态中,响应于针对任意一个连接按钮的触发操作,对应连接按钮上方显示开关选项,同时被触发的连接按钮的外框高亮显示,且显示连接指引线,此时响应于针对空白区域的触发操作即可隐藏开关选项,同时之前被触发的连接按钮的外框取消高亮显示并隐藏指引线,响应于针对开关选项的触发操作,若开关选项为“开启”,则将开关选项切换为“关闭”,同时连接按钮上层显示禁用图标或者不显示连接按钮,代表连接按钮功能未被开启,对战中不可被使用或者对战中不可被感知,连接按钮的开关设置可以批量设置或者有针对性地设置,响应于针对开关选项的触发操作,若开关选项为“关闭”,则将开关选项切换为“开启”,连接按钮上隐藏禁用图标,代表连接按钮功能激活,对战中可被使用或者对战中可被感知。
在一些实施例中,本申请实施例提供的虚拟场景的对象控制方法提供动作按钮的调整功能,在虚拟场景的对战过程中,提供动作按钮的替换功能,将动作按钮关联的动作替换为其他动作,以便灵活切换各种动作,人机交互界面中显示有连接按钮,连接按钮用于连接攻击按钮与动作按钮,攻击按钮默认与虚拟对象当前持有的虚拟道具关联,响应于针对动作按钮的替换操作,显示待替换的多个候选键位内容,动作按钮的键位内容为下蹲动作,响应于针对待替换的多个候选动作的选择操作,将选择的候选键位内容更新至动作按钮以替换下蹲动作,即支持将原始键位内容为下蹲的动作按钮的键位内容替换为下趴,还可以替换为探头,可以实现射击操作与探头操作的组合攻击方式,从而可以在不占用过多显示区域的情况下,实现多种动作组合,从而实现多种组合攻击方式。
在一些实施例中,本申请实施例提供的虚拟场景的对象控制方法提供防止误触碰的功能,通过设定的按压次数、按压时间、按压压力来确认本次触发操作是有效触发操作,例如,当针对连接按钮A的触发操作的按压次数大于对应连接按钮A的动作按钮的设定按压次数时,或者,当针对连接按钮A的触发操作的按压时间大于对应连接按钮A的动作按钮的设定按压时间时,或者,当针对连接按钮A的触发操作的按压压力大于对应连接按钮A的动作按钮的设定按压压力时,控制虚拟对象执行连接按钮A对应的复合动作,从而防止用户对连接按钮进行误触碰。
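上述通过设定的按压次数、按压时间、按压压力确认有效触发的防误触逻辑,可以用如下Python代码示意(各项设定值均为假设数值):

```python
# 示意:防止误触碰的有效触发判定,任一参数超过对应设定值即视为有效触发
def is_valid_trigger(press_count, press_seconds, pressure,
                     min_count=2, min_seconds=0.3, min_pressure=0.5):
    return (press_count > min_count or       # 按压次数大于设定按压次数
            press_seconds > min_seconds or   # 按压时间大于设定按压时间
            pressure > min_pressure)         # 按压压力大于设定按压压力

print(is_valid_trigger(1, 0.1, 0.2))  # False:三项均未超过设定值,判定为误触
print(is_valid_trigger(1, 0.4, 0.2))  # True:按压时间超过设定值,判定为有效触发
```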
在一些实施例中,本申请实施例提供的虚拟场景的对象控制方法提供了连接按钮的多种形态,参见图9A,人机交互界面901A中还显示有连接按钮902A,连接按钮902A用于连接攻击按钮903A与动作按钮904A,连接按钮902A设置在攻击按钮903A和动作按钮904A之间,并与攻击按钮903A和动作按钮904A的显示区域存在部分重叠,参见图9B,图9B是本申请实施例提供的虚拟场景的对象控制方法的显示界面示意图,人机交互界面901B中还显示有连接按钮902B,连接按钮902B用于连接攻击按钮903B与动作按钮904B,连接按钮902B设置在攻击按钮903B和动作按钮904B之间,并与攻击按钮903B和动作按钮904B的显示区域不存在重叠,连接按钮902B通过线条与攻击按钮903B和动作按钮904B连接。
在一些实施例中,本申请实施例提供的虚拟场景的对象控制方法提供了连接按钮的不同显示时机,例如,连接按钮可以一直显示,例如,连接按钮可以按需显示,即连接按钮从不显示状态切换到显示状态,按需显示的条件包括以下至少之一:虚拟对象所属群组与其他群组发生交互;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值,例如,连接按钮可以按需突出显示,即在一直显示的情况下进行突出显示,例如,显示连接按钮的动态特效,突出显示的条件包括以下至少之一:虚拟对象所属群组与其他群组发生交互;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值。
在一些实施例中,本申请实施例提供的虚拟场景的对象控制方法中连接按钮可以自动持续触发,连接按钮具有手动模式以及锁定模式,在锁定模式中,当连接按钮被触发时,虚拟对象自动重复执行复合动作(单发射击操作以及跳跃操作),以降低操作难度,以连接按钮关联的攻击操作是单发射击操作为例,响应于针对连接按钮的锁定触发操作,自动重复执行单发射击操作并自动重复执行跳跃操作,例如,在用户按压连接按钮到达预设时长时,将按压操作确定为锁定触发操作,连接按钮被锁定,即使用户松开手指后虚拟对象仍然维持连接按钮所对应的动作,例如,持续进行单发射击并持续跳跃,响应于用户再次点击连接按钮的操作,连接按钮被解锁,虚拟对象解除连接按钮对应的动作,例如,停止进行单发射击并停止跳跃,连接按钮锁定可以有利于虚拟对象持续性执行攻击以及动作,从而提高操作效率,尤其针对单次性攻击以及单次性动作,通过锁定连接按钮,可以实现自动连续攻击,从而提高操作效率。
手动模式和锁定模式可以基于操作参数进行切换,即可以基于相同类型操作的不同操作参数来触发,以操作是按压操作为例,例如,当针对连接按钮A的触发操作的按压次数大于设定按压次数时,或者,当针对连接按钮A的触发操作的按压时间大于设定按压时间时,或者,当针对连接按钮A的触发操作的按压压力大于设定按压压力时,将连接按钮确定为处于锁定模式,即连接按钮被锁定,否则连接按钮处于手动模式,手动模式和锁定模式也可以基于不同类型的操作来触发,例如,当针对连接按钮A的触发操作为点击操作时,将连接按钮确定处于手动模式,当针对连接按钮A的触发操作为滑动操作时,将连接按钮确定处于锁定模式。
本申请实施例提供的虚拟场景的对象控制方法支持添加三个连接按钮,每个连接按钮用于连接射击按钮与一个动作按钮,例如,对应射击按钮以及下蹲动作按钮的连接按钮、对应射击按钮以及下趴动作按钮的连接按钮、对应射击按钮以及跳跃动作按钮的连接按钮,帮助用户快速一键完成原本需要同时点击两个按钮的操作,还可同时控制视角移动,低学习成本且操作易用地实现了多种攻击动作,在虚拟场景的交互领域具有较为广泛的应用前景。
为了能够降低操作学习难度,使更多用户能够快速掌握不同类型的攻击操作,本申请实施例提供的虚拟场景的对象控制方法提供了连接按钮,其连接形式是将射击按钮与三个动作按钮分别组合为三个连接按钮,点击连接按钮即同时触发射击操作以及对应的动作,达到点击一个按钮同时触发两个功能的效果,例如,点击射击按钮与跳跃动作按钮间的连接按钮,则触发虚拟对象在跳跃的同时进行射击,由于将动作与攻击结合的高阶攻击方式更直观地通过连接按钮开放给用户,更利于用户进行快速操作,完成多种攻击与动作的复合操作,并且有利于提高所有用户的操作体验,此外,连接按钮可以通过自定义设置个性化地决定将其开启或关闭,对不同的连接按钮进行组合使用,在降低操作难度的同时提高操作的灵活性。
下面继续说明本申请实施例提供的虚拟场景的对象控制装置455的实施为软件模块的示例性结构,在一些实施例中,如图3所示,存储在存储器450的虚拟场景的对象控制装置455中的软件模块可以包括:显示模块4551,配置为显示虚拟场景;其中,虚拟场景包括持有攻击道具的虚拟对象;显示模块4551,还配置为显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮;其中,每个连接按钮用于连接一个攻击按钮和一个动作按钮;控制模块4552,配置为响应于针对目标连接按钮的触发操作,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作;其中,目标动作按钮是至少一个动作按钮中与目标连接按钮连接的动作按钮,目标连接按钮是至少一个连接按钮中被选中的任意一个连接按钮。
在一些实施例中,显示模块4551,还配置为:显示与虚拟对象当前持有的攻击道具关联的攻击按钮;其中,当攻击按钮被触发时,虚拟对象使用攻击道具进行攻击操作;在攻击按钮的周围显示至少一个动作按钮;其中,每个动作按钮关联一个动作。
在一些实施例中,至少一个动作按钮的类型包括以下至少之一:与高频动作关联的动作按钮;其中,高频动作是多个候选动作中操作频率高于操作频率阈值的候选动作;与目标动作关联的动作按钮;其中,目标动作与虚拟对象在虚拟场景中的状态适配。
在一些实施例中,显示模块4551,还配置为:针对至少一个动作按钮中的每个动作按钮,显示用于连接动作按钮和攻击按钮的连接按钮;其中,连接按钮具有以下显示属性至少之一:当处于禁用状态时连接按钮包括禁用图标,当处于可用状态时连接按钮包括可用图标。
在一些实施例中,显示模块4551,还配置为:针对至少一个动作按钮中的目标动作按钮,显示用于连接目标动作按钮和攻击按钮的连接按钮;其中,目标动作按钮关联的动作与虚拟对象在虚拟场景中的状态适配;或者,针对至少一个动作按钮中的目标动作按钮,基于第一显示方式显示用于连接目标动作按钮和攻击按钮的连接按钮,并针对至少一个动作按钮中除目标动作按钮的其他动作按钮,基于第二显示方式显示连接其他动作按钮和攻击按钮的连接按钮。
在一些实施例中,显示模块4551,还配置为:获取虚拟对象的交互数据以及虚拟场景的场景数据;基于交互数据以及场景数据,调用神经网络模型预测复合动作;其中,复合动作包括攻击操作以及目标动作;将与目标动作关联的动作按钮作为目标动作按钮。
在一些实施例中,显示模块4551,还配置为:确定虚拟场景的相似历史虚拟场景;其中,相似历史虚拟场景与虚拟场景的相似度大于相似度阈值;确定相似历史虚拟场景中的最高频动作;其中,最高频动作是多个候选动作中操作频率最高的候选动作;将最高频动作关联的动作按钮作为目标动作按钮。
在一些实施例中,每个连接按钮用于连接一个攻击按钮和一个动作按钮的方式包括:连接按钮分别与一个攻击按钮和一个动作按钮部分重合;连接按钮的显示区域通过连接标识分别与一个攻击按钮和一个动作按钮连接。
在一些实施例中,在显示至少一个连接按钮之前,显示模块4551,还配置为:确定满足自动显示至少一个连接按钮的条件;其中,条件包括以下至少之一:虚拟对象的群组与其他群组的其他虚拟对象之间发生交互;虚拟对象与其他群组的其他虚拟对象的距离小于距离阈值。
在一些实施例中,在显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮之后,显示模块4551,还配置为:响应于针对任意一个动作按钮的替换操作,显示多个候选动作;其中,多个候选动作与至少一个动作按钮关联的动作均不同;响应于针对多个候选动作的选择操作,将与任意一个动作按钮关联的动作替换为被选中的候选动作。
在一些实施例中,攻击道具处于单次攻击模式;控制模块4552,还配置为:控制虚拟对象执行一次目标动作按钮关联的动作,当执行动作完成后的姿态与执行动作前的姿态不同时,恢复虚拟对象执行动作前的姿态,以及从控制虚拟对象执行目标动作按钮关联的动作开始,控制虚拟对象使用攻击道具进行一次攻击操作。
在一些实施例中,触发操作是针对目标连接按钮的持续性的操作;在恢复虚拟对象执行动作前的姿态之前,控制模块4552,还配置为:保持执行动作完成后的姿态直至触发操作被释放;当触发操作产生移动轨迹时,根据移动轨迹的方向和角度,同步更新虚拟场景的视野角度;响应于触发操作被释放,停止更新虚拟场景的视野角度。
在一些实施例中,攻击道具处于连续攻击模式;控制模块4552,还配置为:当执行动作完成后的姿态与执行动作前的姿态不同时,控制虚拟对象执行一次目标动作按钮关联的动作,并保持执行动作完成后的姿态;当执行动作完成后的姿态与执行动作前的姿态相同时,控制虚拟对象执行一次目标动作按钮关联的动作;从控制虚拟对象执行目标动作按钮关联的动作开始,控制目标对象使用攻击道具持续进行攻击操作;当执行动作完成后的姿态与执行动作前的姿态不同时,响应于触发操作被释放,恢复虚拟对象执行动作前的姿态,并停止控制虚拟对象使用攻击道具持续进行攻击操作;当执行动作完成后的姿态与执行动作前的姿态相同时,响应于触发操作被释放,停止控制虚拟对象使用攻击道具持续进行攻击操作。
在一些实施例中,控制模块4552,还配置为:响应于触发操作产生移动轨迹,根据移动轨迹的方向和角度,同步更新虚拟场景的视野角度;响应于触发操作被释放,停止更新虚拟场景的视野角度。
在一些实施例中,目标动作按钮的工作模式包括手动模式和锁定模式;其中,手动模式用于在触发操作释放后停止触发目标连接按钮,锁定模式用于在触发操作释放后继续自动触发目标动作按钮;控制模块4552,还配置为当触发操作控制目标动作按钮进入手动模式时,在触发操作未被释放的期间,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作;当触发操作控制目标动作按钮进入锁定模式时,在触发操作未被释放的期间、以及触发操作被释放之后的期间,控制虚拟对象执行目标动作按钮关联的动作,并控制虚拟对象使用攻击道具同步进行攻击操作。
在一些实施例中,当虚拟场景处于按钮设置状态时,显示模块4551,还配置为:响应于针对至少一个连接按钮的选中操作,按照目标显示方式显示每个被选中的连接按钮;其中,目标显示方式显著于未被选中的连接按钮的显示方式;针对每个被选中的连接按钮执行以下处理:当连接按钮处于禁用状态时,响应于针对连接按钮的开启操作,隐藏连接按钮的禁用图标,并将连接按钮标记为开启状态;当连接按钮处于开启状态时,响应于针对连接按钮的禁用操作,针对连接按钮显示禁用图标,并将连接按钮标记为禁用状态。
本申请实施例提供了一种计算机程序产品,该计算机程序产品包括计算机程序或计算机可执行指令,该计算机可执行指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机可执行指令,处理器执行该计算机可执行指令,使得该电子设备执行本申请实施例上述的虚拟场景的对象控制方法。
本申请实施例提供一种计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将使得处理器执行本申请实施例提供的虚拟场景的对象控制方法,例如,如图4A-4C示出的虚拟场景的对象控制方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以被存储在保存其它程序或数据的文件的一部分中,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个电子设备上执行,或者在位于一个地点的多个电子设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个电子设备上执行。
综上所述,通过本申请实施例显示攻击按钮和动作按钮,并显示用于连接一个攻击按钮和一个动作按钮的连接按钮,响应于针对目标连接按钮的触发操作,控制虚拟对象执行目标动作按钮关联的动作并使用攻击道具同步进行攻击操作,通过布局连接按钮使得动作与攻击操作能够同时执行,相当于使用单个按钮同时实现多个功能,从而能够提升用户操作效率。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (20)

  1. 一种虚拟场景的对象控制方法,所述方法由电子设备执行,所述方法包括:
    显示虚拟场景;其中,所述虚拟场景包括持有攻击道具的虚拟对象;
    显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮;其中,每个所述连接按钮用于连接一个所述攻击按钮和一个所述动作按钮;
    响应于针对目标连接按钮的触发操作,控制所述虚拟对象执行目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作;其中,所述目标动作按钮是所述至少一个动作按钮中与所述目标连接按钮连接的动作按钮,所述目标连接按钮是所述至少一个连接按钮中被选中的任意一个连接按钮。
  2. 根据权利要求1所述的方法,其中,所述显示攻击按钮和至少一个动作按钮,包括:
    显示与所述虚拟对象当前持有的攻击道具关联的攻击按钮;其中,当所述攻击按钮被触发时,所述虚拟对象使用所述攻击道具进行所述攻击操作;
    在所述攻击按钮的周围显示至少一个动作按钮;其中,每个所述动作按钮关联一个动作。
  3. 根据权利要求1所述的方法,其中,所述至少一个动作按钮的类型包括以下至少之一:
    与高频动作关联的动作按钮;其中,所述高频动作是多个候选动作中操作频率高于操作频率阈值的候选动作;
    与目标动作关联的动作按钮;其中,所述目标动作与所述虚拟对象在所述虚拟场景中的状态适配。
  4. 根据权利要求1所述的方法,其中,所述显示至少一个连接按钮,包括:
    针对所述至少一个动作按钮中的每个所述动作按钮,显示用于连接所述动作按钮和所述攻击按钮的连接按钮;
    其中,所述连接按钮具有以下显示属性至少之一:当处于禁用状态时所述连接按钮包括禁用图标,当处于可用状态时所述连接按钮包括可用图标。
  5. 根据权利要求1所述的方法,其中,所述显示至少一个连接按钮,包括:
    针对所述至少一个动作按钮中的目标动作按钮,显示用于连接所述目标动作按钮和所述攻击按钮的连接按钮;其中,所述目标动作按钮关联的动作与所述虚拟对象在所述虚拟场景中的状态适配;或者,
    针对所述至少一个动作按钮中的目标动作按钮,基于第一显示方式显示用于连接所述目标动作按钮和所述攻击按钮的连接按钮,并针对所述至少一个动作按钮中除所述目标动作按钮的其他所述动作按钮,基于第二显示方式显示连接所述其他动作按钮和所述攻击按钮的连接按钮。
  6. 根据权利要求5所述的方法,其中,所述方法还包括:
    获取所述虚拟对象的交互数据以及所述虚拟场景的场景数据;
    基于所述交互数据以及所述场景数据,调用神经网络模型预测复合动作;其中,所述复合动作包括所述攻击操作以及目标动作;
    将与所述目标动作关联的动作按钮作为所述目标动作按钮。
  7. 根据权利要求5所述的方法,其中,所述方法还包括:
    确定所述虚拟场景的相似历史虚拟场景;其中,所述相似历史虚拟场景与所述虚拟场景的相似度大于相似度阈值;
    确定所述相似历史虚拟场景中的最高频动作;其中,所述最高频动作是多个候选动作中操作频率最高的候选动作;
    将所述最高频动作关联的动作按钮作为所述目标动作按钮。
  8. 根据权利要求1所述的方法,其中,每个所述连接按钮用于连接一个所述攻击按钮和一个所述动作按钮的方式包括:
    所述连接按钮分别与一个所述攻击按钮和一个所述动作按钮部分重合;
    所述连接按钮的显示区域通过连接标识分别与一个所述攻击按钮和一个所述动作按钮连接。
  9. 根据权利要求1所述的方法,其中,在显示至少一个连接按钮之前,所述方法还包括:
    确定满足自动显示所述至少一个连接按钮的条件;其中,所述条件包括以下至少之一:所述虚拟对象的群组与其他群组的其他虚拟对象之间发生交互;所述虚拟对象与所述其他群组的其他虚拟对象的距离小于距离阈值。
  10. 根据权利要求1所述的方法,其中,在显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮之后,所述方法还包括:
    响应于针对任意一个动作按钮的替换操作,显示多个候选动作;其中,所述多个候选动作与所述至少一个动作按钮关联的动作均不同;
    响应于针对所述多个候选动作的选择操作,将与所述任意一个动作按钮关联的动作替换为被选中的候选动作。
  11. 根据权利要求1所述的方法,其中,
    所述攻击道具处于单次攻击模式;
    所述控制所述虚拟对象执行所述目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作,包括:
    控制所述虚拟对象执行一次所述目标动作按钮关联的动作,当执行所述动作完成后的姿态与执行所述动作前的姿态不同时,恢复所述虚拟对象执行所述动作前的姿态,以及
    从控制所述虚拟对象执行所述目标动作按钮关联的动作开始,控制所述虚拟对象使用所述攻击道具进行一次攻击操作。
  12. 根据权利要求11所述的方法,其中,
    所述触发操作是针对所述目标连接按钮的持续性的操作;
    在恢复所述虚拟对象执行所述动作前的姿态之前,所述方法还包括:
    保持执行所述动作完成后的姿态直至所述触发操作被释放;
    当所述触发操作产生移动轨迹时,根据所述移动轨迹的方向和角度,同步更新所述虚拟场景的视野角度;
    响应于所述触发操作被释放,停止更新所述虚拟场景的视野角度。
  13. 根据权利要求1所述的方法,其中,
    所述攻击道具处于连续攻击模式;
    所述控制所述虚拟对象执行所述目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作,包括:
    当执行所述动作完成后的姿态与执行所述动作前的姿态不同时,控制所述虚拟对象执行一次所述目标动作按钮关联的动作,并保持执行所述动作完成后的姿态;
    当执行所述动作完成后的姿态与执行所述动作前的姿态相同时,控制所述虚拟对象执行一次所述目标动作按钮关联的动作;
    从控制所述虚拟对象执行所述目标动作按钮关联的动作开始,控制所述目标对象使用所述攻击道具持续进行攻击操作;
    所述方法还包括:
    当执行所述动作完成后的姿态与执行所述动作前的姿态不同时,响应于所述触发操作被释放,恢复所述虚拟对象执行所述动作前的姿态,并停止控制所述虚拟对象使用所述攻击道具持续进行攻击操作;
    当执行所述动作完成后的姿态与执行所述动作前的姿态相同时,响应于所述触发操作被释放,停止控制所述虚拟对象使用所述攻击道具持续进行攻击操作。
  14. 根据权利要求13所述的方法,其中,所述触发操作是针对所述目标连接按钮的持续性的操作,所述方法还包括:
    响应于所述触发操作产生移动轨迹,根据所述移动轨迹的方向和角度,同步更新所述虚拟场景的视野角度;
    响应于所述触发操作被释放,停止更新所述虚拟场景的视野角度。
  15. 根据权利要求1所述的方法,其中,
    所述目标动作按钮的工作模式包括手动模式和锁定模式;其中,所述手动模式用于在所述触发操作释放后停止触发所述目标连接按钮,所述锁定模式用于在所述触发操作释放后继续自动触发所述目标动作按钮;
    所述控制所述虚拟对象执行所述目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作,包括:
    当所述触发操作控制所述目标动作按钮进入所述手动模式时,在所述触发操作未被释放的期间,控制所述虚拟对象执行所述目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作;
    当所述触发操作控制所述目标动作按钮进入锁定模式时,在所述触发操作未被释放的期间、以及所述触发操作被释放之后的期间,控制所述虚拟对象执行所述目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作。
  16. 根据权利要求1所述的方法,其中,当所述虚拟场景处于按钮设置状态时,所述方法还包括:
    响应于针对至少一个连接按钮的选中操作,按照目标显示方式显示每个被选中的连接按钮;其中,所述目标显示方式显著于未被选中的连接按钮的显示方式;
    针对每个被选中的连接按钮执行以下处理:
    当所述连接按钮处于禁用状态时,响应于针对所述连接按钮的开启操作,隐藏所述连接按钮的禁用图标,并将所述连接按钮标记为所述开启状态;
    当所述连接按钮处于开启状态时,响应于针对所述连接按钮的禁用操作时,针对所述连接按钮显示所述禁用图标,并将所述连接按钮标记为所述禁用状态。
  17. 一种虚拟场景的对象控制装置,所述装置包括:
    显示模块,配置为显示虚拟场景;其中,所述虚拟场景包括持有攻击道具的虚拟对象;
    所述显示模块,还配置为显示攻击按钮和至少一个动作按钮,并显示至少一个连接按钮;其中,每个所述连接按钮用于连接一个所述攻击按钮和一个所述动作按钮;
    控制模块,配置为响应于针对目标连接按钮的触发操作,控制所述虚拟对象执行目标动作按钮关联的动作,并控制所述虚拟对象使用所述攻击道具同步进行攻击操作;其中,所述目标动作按钮是所述至少一个动作按钮中与所述目标连接按钮连接的动作按钮,所述目标连接按钮是所述至少一个连接按钮中被选中的任意一个连接按钮。
  18. 一种电子设备,所述电子设备包括:
    存储器,用于存储计算机可执行指令;
    处理器,用于执行所述存储器中存储的计算机可执行指令时,实现权利要求1至16任一项所述的虚拟场景的对象控制方法。
  19. 一种计算机可读存储介质,存储有计算机可执行指令,所述计算机可执行指令被处理器执行时实现权利要求1至16任一项所述的虚拟场景的对象控制方法。
  20. 一种计算机程序产品,包括计算机程序或计算机可执行指令,所述计算机程序或计算机可执行指令被处理器执行时实现权利要求1至16任一项所述的虚拟场景的对象控制方法。
PCT/CN2022/120775 2021-10-21 2022-09-23 虚拟场景的对象控制方法、装置、电子设备、计算机程序产品及计算机可读存储介质 WO2023065964A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023571193A JP2024519364A (ja) 2021-10-21 2022-09-23 仮想シーンのオブジェクト制御方法、装置、電子機器及びコンピュータプログラム
US18/214,903 US20230330536A1 (en) 2021-10-21 2023-06-27 Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111227167.8A CN113926181A (zh) 2021-10-21 2021-10-21 虚拟场景的对象控制方法、装置及电子设备
CN202111227167.8 2021-10-21
CN202111672352.8 2021-12-31
CN202111672352.8A CN114210047B (zh) 2021-10-21 2021-12-31 虚拟场景的对象控制方法、装置及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/214,903 Continuation US20230330536A1 (en) 2021-10-21 2023-06-27 Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2023065964A1 true WO2023065964A1 (zh) 2023-04-27

Family

ID=79280889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120775 WO2023065964A1 (zh) 2021-10-21 2022-09-23 虚拟场景的对象控制方法、装置、电子设备、计算机程序产品及计算机可读存储介质

Country Status (4)

Country Link
US (1) US20230330536A1 (zh)
JP (1) JP2024519364A (zh)
CN (2) CN113926181A (zh)
WO (1) WO2023065964A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113926181A (zh) * 2021-10-21 2022-01-14 腾讯科技(深圳)有限公司 Object control method and apparatus for virtual scene, and electronic device
CN114053712B (zh) * 2022-01-17 2022-04-22 中国科学院自动化研究所 Action generation method, apparatus, and device for virtual object
CN114146420B (zh) * 2022-02-10 2022-04-22 中国科学院自动化研究所 Resource allocation method, apparatus, and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107835148A (zh) * 2017-08-23 2018-03-23 杭州电魂网络科技股份有限公司 Game character control method, apparatus, and system, and game client
CN109568949A (zh) * 2018-09-20 2019-04-05 厦门吉比特网络技术股份有限公司 Aerial stable attack method and apparatus for a game
US20200353355A1 (en) * 2018-04-17 2020-11-12 Tencent Technology (Shenzhen) Company Limited Information object display method and apparatus in virtual scene, and storage medium
CN113926181A (zh) * 2021-10-21 2022-01-14 腾讯科技(深圳)有限公司 Object control method and apparatus for virtual scene, and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008229290A (ja) * 2007-03-22 2008-10-02 Tsutomu Ishizaka Battle system
CN106730810B (zh) * 2015-11-19 2020-02-18 网易(杭州)网络有限公司 Game button switching method and apparatus for mobile intelligent terminal
CN109364476B (zh) * 2018-11-26 2022-03-08 网易(杭州)网络有限公司 Game control method and apparatus
CN110141869A (zh) * 2019-04-11 2019-08-20 腾讯科技(深圳)有限公司 Operation control method and apparatus, electronic device, and storage medium
CN110201391B (zh) * 2019-06-05 2023-04-07 网易(杭州)网络有限公司 Control method and apparatus for virtual character in game
CN110743166A (zh) * 2019-10-22 2020-02-04 腾讯科技(深圳)有限公司 Skill button switching method and apparatus, storage medium, and electronic apparatus
CN111921188A (zh) * 2020-08-21 2020-11-13 腾讯科技(深圳)有限公司 Virtual object control method, apparatus, terminal, and storage medium
CN111921194A (zh) * 2020-08-26 2020-11-13 腾讯科技(深圳)有限公司 Display method, apparatus, device, and storage medium for virtual environment picture
CN113350779A (zh) * 2021-06-16 2021-09-07 网易(杭州)网络有限公司 Action control method and apparatus for game virtual character, storage medium, and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107835148A (zh) * 2017-08-23 2018-03-23 杭州电魂网络科技股份有限公司 Game character control method, apparatus, and system, and game client
US20200353355A1 (en) * 2018-04-17 2020-11-12 Tencent Technology (Shenzhen) Company Limited Information object display method and apparatus in virtual scene, and storage medium
CN109568949A (zh) * 2018-09-20 2019-04-05 厦门吉比特网络技术股份有限公司 Aerial stable attack method and apparatus for a game
CN113926181A (zh) * 2021-10-21 2022-01-14 腾讯科技(深圳)有限公司 Object control method and apparatus for virtual scene, and electronic device
CN114210047A (zh) * 2021-10-21 2022-03-22 腾讯科技(深圳)有限公司 Object control method and apparatus for virtual scene, and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
5V5: "King of Glory right-click mobile attack simulator Android version v3.65.1.42", SOYOHUI, 21 February 2020 (2020-02-21), XP093059208, Retrieved from the Internet <URL:https://www.soyohui.com/game/79265/> [retrieved on 20230629] *
JIN MO: "Reloaded: How to merge the attack buttons? Free your fingers and start full weapon attack with one click", BAIDU, 3 February 2020 (2020-02-03), XP093059216, Retrieved from the Internet <URL:https://baijiahao.baidu.com/s?id=1657519241708584141&wfr=spider&for=pc> [retrieved on 20230629] *

Also Published As

Publication number Publication date
JP2024519364A (ja) 2024-05-10
CN113926181A (zh) 2022-01-14
CN114210047B (zh) 2023-07-25
US20230330536A1 (en) 2023-10-19
CN114210047A (zh) 2022-03-22

Similar Documents

Publication Publication Date Title
WO2023065964A1 (zh) Object control method and apparatus for virtual scene, electronic device, computer program product, and computer-readable storage medium
CN112684970B (zh) Adaptive display method and apparatus for virtual scene, electronic device, and storage medium
US11241615B2 (en) Method and apparatus for controlling shooting in football game, computer device and storage medium
CN112306351B (zh) Position adjustment method, apparatus, device, and storage medium for virtual button
WO2022105523A1 (zh) Virtual object control method and apparatus, device, storage medium, and program product
US11803301B2 (en) Virtual object control method and apparatus, device, storage medium, and computer program product
US20240061566A1 (en) Method and apparatus for adjusting position of virtual button, device, storage medium, and program product
CN114344896A (zh) Co-photographing processing method, apparatus, device, and storage medium based on virtual scene
US20230330525A1 (en) Motion processing method and apparatus in virtual scene, device, storage medium, and program product
CN113827970A (zh) Information display method and apparatus, computer-readable storage medium, and electronic device
CN116688502A (zh) Position marking method, apparatus, device, and storage medium in virtual scene
US11995311B2 (en) Adaptive display method and apparatus for virtual scene, electronic device, storage medium, and computer program product
CN114146414A (zh) Virtual skill control method, apparatus, device, storage medium, and program product
WO2024060924A1 (zh) Interactive processing method and apparatus for virtual scene, electronic device, and storage medium
WO2024021792A1 (zh) Information processing method, apparatus, device, storage medium, and program product for virtual scene
CN115120976A (zh) Virtual object control method and apparatus, electronic device, and storage medium
CN114100123A (zh) Game scene presentation method, apparatus, device, and medium in shooting game
CN117599415A (zh) Interaction control method, apparatus, device, and storage medium
CN116939075A (zh) Method and apparatus for operating an electronic device
CN117180732A (zh) Prop processing method, apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882575

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 11202307436T

Country of ref document: SG

WWE Wipo information: entry into national phase

Ref document number: 2023571193

Country of ref document: JP