CN114146414A - Virtual skill control method, device, equipment, storage medium and program product - Google Patents

Virtual skill control method, device, equipment, storage medium and program product

Info

Publication number
CN114146414A
Authority
CN
China
Prior art keywords
virtual
virtual object
chaotic
skill
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111657056.0A
Other languages
Chinese (zh)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of CN114146414A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual skill control method, apparatus, device, computer-readable storage medium, and computer program product. The method includes: presenting, in an interface of a virtual scene, a first virtual object having a target skill; in response to a release instruction for the target skill, presenting a chaotic induction area corresponding to the target skill, where the chaotic induction area is used to transform the interactive configuration of virtual objects located inside it; and when at least one second virtual object exists in the chaotic induction area, controlling the interactive configuration of the second virtual object to be transformed, so that the first virtual object is controlled to interact with the second virtual object after its interactive configuration is transformed. Through the application, gameplay diversity can be enriched.

Description

Virtual skill control method, device, equipment, storage medium and program product
Claim of priority
This application claims priority to Chinese patent application No. 202111403280.7, filed on November 24, 2021, and entitled "Method, apparatus, device, storage medium, and program product for controlling virtual skills".
Technical Field
The present application relates to computer human-computer interaction technologies, and in particular, to a method, an apparatus, a device, a computer-readable storage medium, and a computer program product for controlling virtual skills.
Background
Display technologies based on graphics processing hardware have expanded the channels through which the environment is perceived and information is obtained. In particular, display technologies for virtual scenes can realize, according to actual application requirements, diversified interactions between virtual objects controlled by users or by artificial intelligence, and have a variety of typical application scenarios; in a game, for example, they can simulate a battle process between virtual objects in a virtual scene.
Taking a game scene as an example, players can use various virtual props or virtual skills to perform interactive operations in the game scene. Since different virtual props or virtual skills have different functions, in the related art players equip the virtual props or virtual skills that best improve their interactive capability, and unless a player actively changes interactive configurations such as these virtual props or virtual skills, the system cannot change the player's interactive configuration, so the gameplay is monotonous.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment, a computer-readable storage medium, and a computer program product for controlling virtual skills, which can enrich gameplay diversity.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for controlling virtual skills, which comprises the following steps:
presenting a first virtual object with target skills in an interface of a virtual scene;
responding to a release instruction aiming at the target skill, and presenting a chaotic induction area corresponding to the target skill;
the chaotic induction area is used for transforming the interactive configuration of the virtual object in the chaotic induction area;
when at least one second virtual object exists in the chaotic induction area, the interaction configuration of the second virtual object is controlled to be changed, so that the first virtual object is controlled to interact with the second virtual object after the interaction configuration is changed.
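For illustration only, the following Python sketch outlines the flow summarized above: present the chaotic induction area in response to a release instruction and transform the interactive configuration of any second virtual object found inside it. All class, function, and parameter names, the circular area shape, and the dictionary-based configuration are assumptions made for this sketch and are not taken from the application.

```python
# Minimal sketch of the described control flow; names and data shapes are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class VirtualObject:
    name: str
    position: Tuple[float, float]
    config: Dict[str, str] = field(default_factory=dict)  # interactive configuration

@dataclass
class ChaosZone:
    center: Tuple[float, float]
    radius: float

    def contains(self, pos: Tuple[float, float]) -> bool:
        dx, dy = pos[0] - self.center[0], pos[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius * self.radius

def on_release_instruction(release_pos: Tuple[float, float],
                           second_objects: List[VirtualObject],
                           transform: Callable[[Dict[str, str]], Dict[str, str]]) -> ChaosZone:
    """Present the chaotic induction area and transform the second objects inside it."""
    zone = ChaosZone(center=release_pos, radius=10.0)   # present the area for the target skill
    for obj in second_objects:
        if zone.contains(obj.position):                 # a second virtual object exists in the area
            obj.config = transform(obj.config)          # transform its interactive configuration
    return zone
```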
An embodiment of the present application provides a control device for virtual skills, including:
the first presentation module is used for presenting a first virtual object with target skills in an interface of a virtual scene;
the second presentation module is used for responding to a release instruction aiming at the target skill and presenting a chaotic induction area corresponding to the target skill;
the chaotic induction area is used for transforming the interactive configuration of the virtual object in the chaotic induction area;
and the transformation control module is used for, when at least one second virtual object exists in the chaotic induction area, controlling the interactive configuration of the second virtual object to be transformed, so as to control the first virtual object to interact with the second virtual object after the interactive configuration is transformed.
In the foregoing solution, before the presenting the chaotic induction region corresponding to the target skill, the apparatus further includes: the instruction receiving module is used for presenting skill controls corresponding to the target skills; when the skill control is in an activated state, a release instruction for the target skill is received in response to a trigger operation for the skill control.
In the above scheme, the instruction receiving module is further configured to present a prop icon corresponding to the target skill; responding to the triggering operation aiming at the prop icon, and controlling the first virtual object to assemble a virtual prop corresponding to the target skill; and when the first virtual object successfully assembles the virtual prop, presenting a skill control corresponding to the target skill.
In the above scheme, the apparatus further comprises: the third presentation module is used for presenting state indication information used for indicating the skill control activation progress; the instruction receiving module is further configured to present the skill control in a target display style when the state indicating information indicates that the skill control is in an activated state.
In the above scheme, the second presenting module is further configured to determine a target area with a target position as a center as the chaotic induction area corresponding to the target skill, and present the chaotic induction area; wherein the target position is one of the following positions: a location at which the first virtual object is located, a skill release location for the target skill.
In the above scheme, the second presenting module is further configured to display an area enclosing frame in a target display style in the interface of the virtual scene when the target position is the position of the first virtual object, where an area in the area enclosing frame is a chaotic induction area corresponding to the target skill; the device further comprises: and the movement control module is used for responding to a movement instruction of the first virtual object, controlling the first virtual object to move in the virtual scene, and controlling the area bounding box to move synchronously along with the movement of the first virtual object.
In the foregoing solution, before the presenting the chaotic induction region corresponding to the target skill, the apparatus further includes: the position determining module is used for presenting a position identifier for selecting the skill release position when the target position is the skill release position corresponding to the target skill; controlling the position identifier to move in the virtual scene in response to a movement instruction for the position identifier; determining the position of the location identifier in the virtual scene as the skill release position in response to a location determination instruction for the location identifier.
In the above scheme, the apparatus further comprises: the recovery control module is used for presenting a virtual support prop corresponding to the target skill in the chaotic induction area, and the virtual support prop is used for controlling the display duration of the chaotic induction area; and when an area disappearing instruction triggered based on the virtual support prop is received, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
In the above scheme, the recovery control module is further configured to present a remaining effective duration of the target skill; and when the remaining effective duration is lower than a duration threshold or zero, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
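As a hedged illustration of the two recovery schemes above, the sketch below tracks a remaining effective duration for the area and restores the saved interactive configurations once the duration falls to the threshold; the class name, the tick-based timing, and the dictionary snapshots are assumptions for this sketch only.

```python
# Hypothetical sketch: lifetime tracking for the chaotic induction area and configuration recovery.
from typing import Dict

class TimedChaosZone:
    def __init__(self, duration_s: float, threshold_s: float = 0.0):
        self.remaining = duration_s          # remaining effective duration that is presented
        self.threshold = threshold_s
        self._saved: Dict[int, dict] = {}    # object id -> interactive configuration before transform

    def capture(self, obj_id: int, config: dict) -> None:
        """Remember an object's configuration before it is transformed."""
        self._saved.setdefault(obj_id, dict(config))

    def tick(self, dt: float) -> bool:
        """Advance time; True means the area presentation should be cancelled."""
        self.remaining = max(0.0, self.remaining - dt)
        return self.remaining <= self.threshold

    def restore_all(self, configs: Dict[int, dict]) -> None:
        """Recover the interactive configuration of every transformed second object."""
        for obj_id, original in self._saved.items():
            if obj_id in configs:
                configs[obj_id].clear()
                configs[obj_id].update(original)
        self._saved.clear()
```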
In the above solution, the transformation control module is further configured to determine an interaction relationship between the second virtual object and the first virtual object, where the interaction relationship is used to indicate whether the second virtual object and the first virtual object belong to the same camp; and when the interaction relation indicates that the second virtual object and the first virtual object belong to different camps, controlling to transform the interaction configuration of the second virtual object.
In the foregoing solution, before the presenting the chaotic induction region corresponding to the target skill, the apparatus further includes: a fourth presentation module, configured to present, in the interface of the virtual scene, the second virtual object equipped with a virtual prop that is in a first prop configuration; the transformation control module is further configured to, when the number of the second virtual objects is one, control the virtual prop of the second virtual object to be changed from the first prop configuration to a second prop configuration, so as to transform the interactive configuration of the second virtual object.
In the foregoing solution, the transformation control module is further configured to, when the number of the second virtual objects is at least two, and the virtual props equipped for the second virtual objects include main props and prop accessories, control random exchange of the main props or the prop accessories between any two second virtual objects in the at least two second virtual objects.
In the above scheme, the apparatus further comprises: an accessory screening module to perform the following for each of the second virtual objects: when the virtual prop comprises a plurality of prop accessories and the prop accessories are randomly exchanged, acquiring interactive data of each prop accessory; wherein the interaction data comprises at least one of: the use frequency, the interactive score and the interactive grade; and selecting a target prop accessory from the plurality of prop accessories according to the interactive data to serve as the prop accessory to be transformed of the second virtual object.
In the foregoing scheme, the transformation control module is further configured to, when the number of the second virtual objects is at least two and the second virtual objects are equipped with virtual items or virtual skills, control the at least two second virtual objects to exchange the equipped virtual items or virtual skills according to usage preferences of the second virtual objects, so that an adaptation degree between the usage preferences of the second virtual objects and the exchanged virtual items or virtual skills is lower than an adaptation degree threshold.
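A minimal sketch of the three exchange branches summarized in the preceding paragraphs follows: a single second object has its prop switched to a second configuration, several second objects randomly swap main props or their most-used accessories, and a preference-aware swap hands each object equipment it is poorly adapted to. The function names, the accessory scoring keys (usage, score, grade), and the adaptation callback are assumptions for this sketch, not terms from the application.

```python
# Hypothetical sketch of the interactive-configuration exchange branches; names are illustrative.
import random
from typing import Callable, Dict, List

def transform_single(obj: Dict, second_prop_config: str) -> None:
    """One second object: change its virtual prop from the first to a second prop configuration."""
    obj["prop_config"] = second_prop_config

def pick_accessory_to_swap(accessories: List[Dict]) -> Dict:
    """Choose the accessory the player relies on most, using assumed interaction data."""
    return max(accessories, key=lambda a: (a["usage"], a["score"], a["grade"]))

def random_exchange(objects: List[Dict]) -> None:
    """At least two second objects: randomly exchange main props or prop accessories between a pair."""
    a, b = random.sample(objects, 2)
    if random.random() < 0.5:
        a["main_prop"], b["main_prop"] = b["main_prop"], a["main_prop"]
    else:
        acc_a = pick_accessory_to_swap(a["accessories"])
        acc_b = pick_accessory_to_swap(b["accessories"])
        a["accessories"].remove(acc_a); b["accessories"].remove(acc_b)
        a["accessories"].append(acc_b); b["accessories"].append(acc_a)

def preference_aware_swap(objects: List[Dict],
                          adaptation: Callable[[Dict, str], float],
                          threshold: float) -> None:
    """Give each object equipment whose fit with its usage preference is below the threshold."""
    pool = [o["main_prop"] for o in objects]
    for o in objects:
        bad_fits = [g for g in pool if adaptation(o, g) < threshold]
        if bad_fits:
            choice = random.choice(bad_fits)
            pool.remove(choice)
            o["main_prop"] = choice
```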
In the above scheme, the apparatus further comprises: a mode setting module, configured to present an interactive configuration mode setting icon of the virtual scene; in response to an enabling operation for the interactive configuration mode setting icon, set the interactive configuration mode of the virtual scene to an anti-interference interactive configuration mode; and when the first virtual object is in the chaotic induction area corresponding to the target skill of the second virtual object, control the interactive configuration of the first virtual object to be kept unchanged.
In the above scheme, the recovery control module is further configured to perform virtual object detection on the chaotic induction area; and when the second virtual object is detected to leave the chaotic induction area, controlling to recover the interactive configuration of the second virtual object.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control method of the virtual skill provided by the embodiment of the application when the executable instruction stored in the memory is executed.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the control method for virtual skills provided in the embodiment of the present application.
The embodiment of the present application provides a computer program product, which includes a computer program or instructions; when the computer program or the instructions are executed by a processor, the control method for virtual skills provided in the embodiment of the present application is implemented.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, the interactive configuration of the second virtual object in the chaotic induction area is controlled to be changed by using the chaotic induction area released by the target skill, so that the playing method diversity is enriched; and because the transformation of the interactive configuration of the second virtual object has the characteristics of randomness, uncertainty or unpredictability, the interactive capability of the second virtual object is greatly weakened when the second virtual object is not used or the transformed interactive configuration cannot be used, and the interaction between the first virtual object and the second virtual object is controlled under the condition, so that the number of times of controlling the first virtual object and the second virtual object to execute interactive operation can be reduced, the human-computer interaction efficiency is improved, and the use experience of a user is improved.
Drawings
Fig. 1A is a schematic view of an application mode of a control method of virtual skills provided in an embodiment of the present application;
fig. 1B is a schematic view of an application mode of a control method of virtual skills according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a method for controlling virtual skills according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a display of a skill control provided by an embodiment of the present application;
fig. 5 is a schematic flow chart of a method for controlling virtual skills according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a display of a chaotic induction zone according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating detection of a virtual object according to an embodiment of the present application;
fig. 8 is a schematic display view of a prop accessory provided in an embodiment of the present application;
FIG. 9 is an exchange diagram of an interaction configuration provided by an embodiment of the present application;
FIG. 10 is an exchange diagram of an interaction configuration provided by an embodiment of the present application;
fig. 11 is an exchange diagram of an interaction configuration provided in an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first" and "second" are used merely to distinguish similar objects and do not represent a particular ordering of those objects. It is understood that "first" and "second" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) The client, an application program running on a terminal to provide various services, such as a video playback client or a game client.
2) In response to, a term used to indicate the condition or state on which a performed operation depends; when the dependent condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on a terminal, and the virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, the virtual scene may include sky, land, sea, and the like, the land may include environmental elements such as desert, city, and the like, and the user may control the virtual object to move in the virtual scene.
4) Virtual objects, the appearance of various people and objects in the virtual scene that can interact, or movable objects in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as a character, animal, etc., displayed in a virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
5) The scene data, which represents feature data in the virtual scene, may include, for example, the position of the virtual object in the virtual scene, the time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values of various states of the game virtual object, such as a life value and a magic value.
The embodiment of the application provides a method and a device for controlling virtual skills, a terminal device, a computer-readable storage medium, and a computer program product, which can enrich gameplay diversity. To make the method easier to understand, an exemplary implementation scenario of the method for controlling virtual skills provided in the embodiment of the present application is described first. The virtual scene in this method may be output entirely by a terminal device, or output cooperatively by a terminal device and a server.
In some embodiments, the virtual scene may also be an environment for game characters to interact with, for example, game characters to play against in the virtual scene, and the two-way interaction may be performed in the virtual scene by controlling actions of the game characters, so that the user can relieve life stress during the game.
In one implementation scenario, referring to fig. 1A, fig. 1A is an application mode schematic diagram of the control method for virtual skills provided in the embodiment of the present application. This mode is applicable to applications in which the calculation of data related to the virtual scene 100 can be completed entirely by relying on the computing capability of the graphics processing hardware of the terminal device 400, such as a game in standalone/offline mode, where the output of the virtual scene is completed through various types of terminal device 400, such as a smartphone, a tablet computer, or a virtual reality/augmented reality device. As an example, types of graphics processing hardware include a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU).
When the visual perception of the virtual scene 100 is formed, the terminal device 400 calculates and displays required data through the graphic computing hardware, completes the loading, analysis and rendering of the display data, and outputs a video frame capable of forming the visual perception on the virtual scene at the graphic output hardware, for example, a two-dimensional video frame is displayed on a display screen of a smart phone, or a video frame realizing a three-dimensional display effect is projected on a lens of augmented reality/virtual reality glasses; in addition, in order to enrich the perception effect, the terminal device 400 may also form one or more of auditory perception, tactile perception, motion perception, and taste perception by means of different hardware.
As an example, the terminal device 400 runs a client 410 (e.g. a standalone version of a game application), and outputs the virtual scene 100 including role playing during the running process of the client 410, where the virtual scene 100 may be an environment for game role interaction, such as a plain, a street, a valley, and the like for game role battle; the first virtual object 110 is included in the virtual scene, and the first virtual object 110 may be a game character controlled by a user (or a player), that is, the first virtual object 110 is controlled by a real user, and will move in the virtual scene in response to an operation of the real user on a controller (including a touch screen, a voice control switch, a keyboard, a mouse, a joystick, and the like), for example, when the real user moves the joystick to the right, the virtual object 110 will move to the right in the virtual scene 100, and may also remain stationary in place, jump, and control the virtual object 110 to perform a shooting operation, and the like.
For example, the terminal device 400 presents the first virtual object 110 with the target skill in the interface of the virtual scene 100; responding to a release instruction aiming at the target skill, and presenting a chaotic induction area 120 corresponding to the target skill; the chaotic induction area 120 is used for changing the interaction configuration of the virtual object in the chaotic induction area; when at least one second virtual object 130 exists in the chaotic induction area 120, controlling and transforming the interaction configuration of the second virtual object 130 so as to control the first virtual object 110 to interact with the second virtual object 130 after the interaction configuration is transformed; because the transformation of the interactive configuration of the second virtual object has the characteristics of randomness, uncertainty or unpredictability, the interactive capability of the second virtual object is greatly weakened when the second virtual object is not used or the transformed interactive configuration cannot be used, and the interaction between the first virtual object and the second virtual object is controlled under the condition, so that the number of times of executing interactive operation between the first virtual object and the second virtual object can be reduced for achieving a certain interaction purpose, the human-computer interaction efficiency is improved, and the use experience of a user is improved.
In another implementation scenario, referring to fig. 1B, fig. 1B is a schematic view of an application mode of the control method for virtual skills provided in this embodiment, which is applied to a terminal device 400 and a server 200, and is adapted to complete virtual scene calculation depending on the calculation capability of the server 200 and output an application mode of a virtual scene at the terminal device 400. Taking the example of forming the visual perception of the virtual scene 100, the server 200 performs calculation of display data (e.g., scene data) related to the virtual scene and sends the calculated display data to the terminal device 400 through the network 300, the terminal device 400 relies on graphics computing hardware to complete loading, parsing and rendering of the calculated display data, and relies on graphics output hardware to output the virtual scene to form the visual perception, for example, a two-dimensional video frame may be presented on a display screen of a smartphone, or a video frame realizing a three-dimensional display effect may be projected on a lens of augmented reality/virtual reality glasses; for perception in the form of a virtual scene, it is understood that an auditory perception may be formed by means of a corresponding hardware output of the terminal device 400, for example using a microphone, a tactile perception using a vibrator, etc.
As an example, the terminal device 400 runs a client 410 (e.g. a standalone version of a game application), and outputs the virtual scene 100 including role playing during the running process of the client 410, where the virtual scene 100 may be an environment for game role interaction, such as a plain, a street, a valley, and the like for game role battle; the first virtual object 110 is included in the virtual scene, and the first virtual object 110 may be a game character controlled by a user (or a player), that is, the first virtual object 110 is controlled by a real user, and will move in the virtual scene in response to an operation of the real user on a controller (including a touch screen, a voice control switch, a keyboard, a mouse, a joystick, and the like), for example, when the real user moves the joystick to the right, the virtual object 110 will move to the right in the virtual scene 100, and may also remain stationary in place, jump, and control the virtual object 110 to perform a shooting operation, and the like.
For example, the terminal device 400 presents the first virtual object 110 with the target skill in the interface of the virtual scene 100; responding to a release instruction aiming at the target skill, and presenting a chaotic induction area 120 corresponding to the target skill; the chaotic induction area 120 is used for changing the interaction configuration of the virtual object in the chaotic induction area; when at least one second virtual object 130 exists in the chaotic induction area 120, controlling and transforming the interaction configuration of the second virtual object 130, so as to control the first virtual object 110 to interact with the second virtual object 130 with the transformed interaction configuration in a target time period after the transformation of the interaction configuration of the second virtual object 130; because the transformation of the interactive configuration of the second virtual object has the characteristics of randomness, uncertainty or unpredictability, the interactive capability of the second virtual object is greatly weakened when the second virtual object is not used or the transformed interactive configuration cannot be used, and the interaction between the first virtual object and the second virtual object is controlled under the condition, so that the number of times of executing interactive operation between the first virtual object and the second virtual object can be reduced for achieving a certain interaction purpose, the human-computer interaction efficiency is improved, and the use experience of a user is improved.
In some embodiments, the terminal device 400 may implement the control method of virtual skills provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e. a program that needs to be installed in an operating system to run, such as a shooting game APP (i.e. the client 410 described above); an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program described above may be any form of application, module, or plug-in.
Taking the computer program as an application program as an example, in actual implementation the terminal device 400 installs and runs an application program supporting virtual scenes. The application program may be any one of a First-Person Shooting game (FPS), a third-person shooting game, a virtual reality application program, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to carry out activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building a virtual building. Illustratively, the virtual object may be a virtual character, such as a simulated character or an animated character.
In other embodiments, the embodiments of the present application may also be implemented by Cloud Technology (Cloud Technology), which refers to a hosting Technology for unifying resources of hardware, software, network, and the like in a wide area network or a local area network to implement calculation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied on the basis of the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support, because the background services of a technical network system require a large amount of computing and storage resources.
For example, the server 200 in fig. 1B may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The structure of the terminal apparatus 400 shown in fig. 1A is explained below. Referring to fig. 2, fig. 2 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present application, where the terminal device 400 shown in fig. 2 includes: at least one processor 420, memory 460, at least one network interface 430, and a user interface 440. The various components in the terminal device 400 are coupled together by a bus system 450. It is understood that the bus system 450 is used to enable connected communication between these components. The bus system 450 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 450 in fig. 2.
The Processor 420 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 440 includes one or more output devices 441, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 440 also includes one or more input devices 442 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display screen, camera, other input buttons and controls.
The memory 460 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 460 may optionally include one or more storage devices physically located remote from processor 420.
The memory 460 may include volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 460 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 460 may be capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 461 comprising system programs for handling various basic system services and performing hardware related tasks, such as framework layer, core library layer, driver layer, etc., for implementing various basic services and handling hardware based tasks;
a network communication module 462 for reaching other computing devices via one or more (wired or wireless) network interfaces 430, exemplary network interfaces 430 including: Bluetooth, Wireless Fidelity (Wi-Fi), and Universal Serial Bus (USB), etc.;
a presentation module 463 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 441 (e.g., display screens, speakers, etc.) associated with user interface 440;
an input processing module 464 for detecting one or more user inputs or interactions from one of the one or more input devices 442 and translating the detected inputs or interactions.
In some embodiments, the control means for virtual skills provided by embodiments of the present application may be implemented in software, and fig. 2 shows the control means for virtual skills 465 stored in the memory 460, which may be software in the form of programs and plug-ins, etc., and includes the following software modules: a first presentation module 4651, a second presentation module 4652 and a transformation control module 4653, which are logical and thus may be arbitrarily combined or further split according to the implemented functions, the functions of which will be described below.
In other embodiments, the virtual skill control Device provided in the embodiments of the present Application may be implemented in hardware, and for example, the virtual skill control Device provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the control method of the virtual skill provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The following describes a control method of virtual skills provided in an embodiment of the present application in detail with reference to the accompanying drawings. The method for controlling virtual skills provided in this embodiment of the present application may be executed by the terminal device 400 in fig. 1A alone, or may be executed by the terminal device 400 and the server 200 in fig. 1B in a cooperation manner.
Next, a control method for individually executing the virtual skills provided in the embodiment of the present application by the terminal device 400 in fig. 1A is described as an example. Referring to fig. 3, fig. 3 is a schematic flow chart of a method for controlling virtual skills according to an embodiment of the present application, and the steps shown in fig. 3 will be described.
It should be noted that the method shown in fig. 3 can be executed by various forms of computer programs running on the terminal device 400, and is not limited to the client 410 described above, but may also be the operating system 461, software modules and scripts described above, so that the client should not be considered as limiting the embodiments of the present application.
Step 101: the terminal equipment presents a first virtual object with target skills in an interface of a virtual scene.
Here, an application client supporting a virtual scene may be installed on the terminal device, and when a user opens the application client on the terminal and the terminal runs the application client, the terminal presents a screen of the virtual scene (such as a shooting game scene), and the user may control the first virtual object to perform an interactive operation in the virtual scene. In an actual application, the first virtual object is an avatar in a virtual scene corresponding to a user account currently logged in the application client, for example, the first virtual object may be a virtual object controlled by a user entering the virtual scene of a game, and of course, the virtual scene may further include other virtual objects, and may be controlled by other users or controlled by a robot program.
In the virtual scene, a user can trigger an interaction control instruction aiming at the first virtual object through the human-computer interaction interface to control the first virtual object to execute interaction operation. For example, the first virtual object may hold at least one virtual prop or be equipped with at least one virtual skill, etc., and the virtual prop may be any prop used when the virtual object interacts, such as a virtual shooting prop, a virtual bow, a virtual slingshot, a virtual nunchakus, a virtual whip, etc.; the virtual skill can be a protection skill, an attack skill and the like, and the user can control the first virtual object to perform interactive operation in the virtual scene based on the assembled virtual prop or virtual skill. In the embodiment of the application, a target skill for releasing the chaotic induction area is provided for the first virtual object, the chaotic induction area released by the target skill can change the interaction configuration of the virtual object in the chaotic induction area so as to change the interaction capability of the corresponding virtual object, and under the application scenario, the terminal device can present the first virtual object with the target skill in the picture of the virtual scene.
Step 102: and responding to a release instruction aiming at the target skill, and presenting a chaotic induction area corresponding to the target skill.
The chaotic induction area is used for changing the interaction configuration of virtual objects located inside it. For a virtual object in the chaotic induction area, the interaction configuration state of the virtual object can be changed, such as the motion state of the virtual object or the disposition state of a virtual prop held by the virtual object. The motion state of the virtual object includes, but is not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, attacking, throwing, sliding shovel, and resting. Different types of virtual props correspond to different disposition states. For example, when the virtual prop is a shooting prop such as a machine gun, a pistol, or a rifle, the corresponding disposition state can be a holding state, an aiming state, a firing state, or a storage state; when the virtual prop is a throwing prop such as a grenade or a sticky grenade, the corresponding disposition state can be a holding state, a throwing state, or a storage state. The storage state is a state in which the virtual object is controlled to carry the corresponding virtual prop on its back or place it in a backpack, so the virtual object cannot be controlled to use a virtual prop in the storage state. In addition, the virtual prop itself, or a prop accessory of the virtual prop, of a virtual object in the chaotic induction area can also be controlled to be transformed; for example, the virtual object is controlled to change from being equipped with virtual prop 1 to being equipped with virtual prop 2, or, when the virtual prop equipped by the virtual object includes a plurality of prop accessories fitted at the same position, such as scope 1, scope 2, and scope 3, the virtual prop is controlled to change from being fitted with scope 1 to being fitted with scope 3.
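As a hedged illustration of what transforming an interaction configuration can involve, the sketch below enumerates the configuration states named in this paragraph and applies one random change; the state lists, the uniform random choice, and all names are assumptions made for the sketch rather than details from the application.

```python
# Hypothetical sketch of a single interactive-configuration transformation; names are illustrative.
import random

MOTION_STATES = ["adjust_posture", "crawl", "walk", "run", "ride", "jump",
                 "attack", "throw", "sliding_shovel", "rest"]
SHOOTING_PROP_STATES = ["holding", "aiming", "firing", "stored"]

def transform_configuration(config: dict) -> dict:
    """Randomly change one aspect of a virtual object's interaction configuration."""
    new_config = dict(config)
    aspect = random.choice(["motion", "prop_state", "prop", "accessory"])
    if aspect == "motion":
        new_config["motion"] = random.choice(MOTION_STATES)
    elif aspect == "prop_state":
        # e.g. force a shooting prop from the aiming state into the stored state
        new_config["prop_state"] = random.choice(SHOOTING_PROP_STATES)
    elif aspect == "prop":
        # e.g. change from being equipped with virtual prop 1 to virtual prop 2
        new_config["prop"] = random.choice(new_config.get("prop_pool", ["prop_2"]))
    else:
        # e.g. among accessories fitted at the same position, replace scope 1 with scope 3
        new_config["scope"] = random.choice(new_config.get("scope_pool", ["scope_3"]))
    return new_config
```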
In this way, the chaotic induction area released through the target skill changes the interactive configuration of the virtual objects inside it, which enriches gameplay diversity. Moreover, because the transformation of the interaction configuration is random, uncertain, or unpredictable, the interaction capability of a virtual object in the chaotic induction area is greatly weakened when it is unfamiliar with, or unable to use, the transformed configuration. Controlling the first virtual object to interact with the virtual objects in the chaotic induction area under this condition reduces the number of interactive operations that the first virtual object and the second virtual object must be controlled to perform to achieve a given interaction purpose, improves human-computer interaction efficiency, and improves the user experience.
In some embodiments, before the terminal device presents the chaotic induction area, a release instruction for the target skill may be received in the following manner: presenting a skill control corresponding to the target skill; and when the skill control is in an activated state, receiving a release instruction for the target skill in response to a trigger operation for the skill control.
Here, for the target skills, a corresponding skill control is provided, and when the skill control is in an activated state and a user triggers (e.g., clicks, double clicks, slides, etc.) the skill control, the terminal device receives a release instruction in response to the trigger operation, and controls the first virtual object to release the target skills in response to the release instruction.
In some embodiments, the terminal device may present the skill control corresponding to the target skill by: presenting a prop icon corresponding to the target skill; responding to the triggering operation aiming at the prop icon, and controlling the first virtual object to assemble the virtual prop corresponding to the target skill; and when the first virtual object successfully assembles the virtual prop, presenting a skill control corresponding to the target skill.
Here, the first virtual object may be controlled to have the target skill by controlling the first virtual object to assemble the virtual prop corresponding to the target skill. In practical applications, the virtual item may be obtained when the first virtual object is controlled to interact in the virtual scene (for example, when an interaction result meets an obtaining condition of the virtual item or is found during the interaction), or may be obtained before the first virtual object is controlled to enter the virtual scene (for example, before a game is opened). When the first virtual object possesses the virtual prop, the terminal device presents a corresponding prop icon, and the user can control the first virtual object to assemble the virtual prop corresponding to the target skill by triggering the prop icon. And when the first virtual object is successfully assembled with the virtual prop, the terminal equipment presents a skill control corresponding to the target skill, so that the user can control the first virtual object to release the target skill based on the skill control.
In some embodiments, the terminal device may present status indication information indicating the progress of the skill control activation; and when the state indication information indicates that the skill control is in an activated state, presenting the skill control by adopting the target display style.
In practical application, the display style of the skill control can differ between the activated state and the deactivated state. For example, the skill control in the deactivated state is displayed in gray scale together with state indication information indicating its activation progress; when the state indication information indicates that the cooldown of the skill control is finished and the deactivated state changes into the activated state, the skill control in the activated state is highlighted. Further, the skill control in the activated state and the skill control in the deactivated state may be indicated by different identifiers; for example, the skill control in the deactivated state may be marked with a disabled identifier.
Referring to fig. 4, fig. 4 is a display schematic diagram of a skill control provided in the embodiment of the present application. While the skill control 401 is being activated, state indication information of the activation progress is presented, for example an activation progress bar at 7%; when the activation percentage of the progress bar reaches 100%, the skill control is activated and highlighted.
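The passages above describe the life cycle of the skill control: it fills an activation progress bar, becomes triggerable once activated, and produces a release instruction when triggered. A small sketch follows, with hypothetical names and a linear cooldown assumption.

```python
# Hypothetical sketch of the skill-control state machine; a linear cooldown is assumed.
from typing import Callable

class SkillControl:
    def __init__(self, cooldown_s: float):
        self.cooldown = cooldown_s
        self.elapsed = 0.0

    @property
    def progress(self) -> float:
        """Activation progress shown by the state indication information (0.0 to 1.0)."""
        if self.cooldown <= 0:
            return 1.0
        return min(1.0, self.elapsed / self.cooldown)

    @property
    def activated(self) -> bool:
        return self.progress >= 1.0     # shown in the highlighted target display style when True

    def tick(self, dt: float) -> None:
        self.elapsed += dt

    def on_trigger(self, release_skill: Callable[[], None]) -> bool:
        """Only an activated control turns a trigger operation into a release instruction."""
        if self.activated:
            release_skill()
            self.elapsed = 0.0          # restart the cooldown after the skill is released
            return True
        return False
```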
In some embodiments, the terminal device may present the chaotic induction region corresponding to the target skill by: determining a target area taking a target position as a center as a chaotic induction area corresponding to a target skill, and presenting the chaotic induction area; wherein the target position is one of the following positions: the location where the first virtual object is located, the skill release location of the target skill.
Here, the chaotic induction area may be an area centered on the first virtual object, or an area centered on the release position of the target skill. The size and the existence duration of the chaotic induction area may be set; for example, the size or the existence duration of the chaotic induction area is related to the grade of the virtual object, such that the higher the grade of the first virtual object is, the larger the corresponding chaotic induction area is and the longer it lasts.
In some embodiments, the terminal may present the chaotic induction region in the following manner: when the target position is the position of the first virtual object, displaying an area bounding frame in a target display style in the interface of the virtual scene, where the area inside the area bounding frame is the chaotic induction area corresponding to the target skill. Correspondingly, the terminal device may also control the area bounding frame to move synchronously in the following manner: in response to a movement instruction for the first virtual object, the first virtual object is controlled to move in the virtual scene, and the area bounding frame is controlled to move synchronously with the movement of the first virtual object.
Here, when the chaotic induction area is an area centered on the first virtual object, if the first virtual object moves in the virtual scene, the chaotic induction area moves along with the movement of the first virtual object, so that after the target skills are released, the first virtual object can adjust the position of the chaotic induction area by adjusting the position thereof according to real-time virtual scene data, so that more second virtual objects exist in the chaotic induction area, and the interaction configuration of more second virtual objects is changed, thereby improving the conversion efficiency.
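For the object-centered case just described, a minimal sketch is given below; the square bounding frame, the class name, and the 2D coordinates are assumptions for illustration.

```python
# Hypothetical sketch: an object-centered area whose bounding frame follows the first virtual object.
from typing import Tuple

class FollowingZone:
    def __init__(self, owner_pos: Tuple[float, float], half_size: float):
        self.center = owner_pos
        self.half_size = half_size

    def on_owner_moved(self, new_pos: Tuple[float, float]) -> None:
        """The area bounding frame moves synchronously with the first virtual object."""
        self.center = new_pos

    def contains(self, pos: Tuple[float, float]) -> bool:
        """Detect whether a second virtual object's position lies inside the area."""
        return (abs(pos[0] - self.center[0]) <= self.half_size and
                abs(pos[1] - self.center[1]) <= self.half_size)
```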
In some embodiments, before presenting the chaotic induction zone corresponding to the target skill, the terminal device may determine the skill release position by: when the target position is a skill release position corresponding to the target skill, presenting a position identifier for selecting the skill release position; controlling the position identifier to move in the virtual scene in response to the movement instruction for the position identifier; and determining the position of the position identifier in the virtual scene as a skill release position in response to the position determination instruction for the position identifier.
Here, when the chaotic induction area is an area centered on a skill release position of a target skill, the first virtual object may select the release position of the target skill according to real-time virtual scene data, for example, a movement instruction may be triggered by dragging a position identifier, the terminal device receives the movement instruction in response to a dragging operation for the position identifier, and displays a process of moving the position identifier in a virtual scene in response to the movement instruction, and in response to a releasing operation for the dragging operation (or the movement instruction), the terminal device receives a corresponding position determination instruction, and determines a release position (that is, a position of the position identifier in the virtual scene) corresponding to the releasing operation as the skill release position; therefore, the user can select the skill release position according to actual requirements, and the applicability of the target skill is improved. Of course, in practical application, the terminal device may also automatically determine the skill release position of the target skill according to the real-time virtual scene data.
Step 103: when at least one second virtual object exists in the chaotic induction area, the interaction configuration of the second virtual object is controlled to be transformed, so that the first virtual object is controlled to interact with the second virtual object after the interaction configuration is transformed.
In some embodiments, the terminal device may control transforming the interactive configuration of the second virtual object by: determining an interaction relation between the second virtual object and the first virtual object, wherein the interaction relation is used for indicating whether the second virtual object and the first virtual object belong to the same camp; and when the interaction relation indicates that the second virtual object and the first virtual object belong to different camps, controlling to transform the interaction configuration of the second virtual object.
Here, in practical applications, the objects targeted by the interactive configuration transformation in the chaotic induction region can be set. For example, the interactive configuration of a second virtual object that is in a different camp from the first virtual object (a hostile relationship) is transformed, while the interactive configuration of a second virtual object in the same camp as the first virtual object (a friendly relationship) is not transformed (i.e., remains unchanged). For example, the terminal device performs virtual object detection on the chaotic induction region in real time, and when it detects that a second virtual object enters the chaotic induction region, it obtains the interactive relationship between the second virtual object and the first virtual object; when the interactive relationship indicates that the second virtual object and the first virtual object belong to the same camp, the interactive configuration of the second virtual object is controlled to be kept unchanged, and when the interactive relationship indicates that the second virtual object and the first virtual object belong to different camps, the interactive configuration of the second virtual object is controlled to be transformed.
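A hedged sketch of the per-entry decision just described (all names are illustrative): a second object detected entering the area is transformed only when its camp differs from the first object's, and its original configuration is remembered so it can be recovered later.

```python
# Hypothetical sketch: camp check when a second object enters the chaotic induction area.
from typing import Callable, Dict

def on_object_entered_zone(first_obj, second_obj,
                           transform: Callable[[dict], dict],
                           saved_configs: Dict[int, dict]) -> None:
    if second_obj.camp == first_obj.camp:
        return                                               # same camp: configuration kept unchanged
    saved_configs[id(second_obj)] = dict(second_obj.config)  # remember it for later recovery
    second_obj.config = transform(second_obj.config)         # different camp: transform it
```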
In this way, the interaction configuration of a second virtual object in a hostile relationship is transformed. Because the transformation of the interaction configuration is random, uncertain and unpredictable, the interaction capability of the second virtual object in the chaotic induction area is greatly weakened when it is not accustomed to, or cannot use, the transformed interaction configuration. In this case, controlling the first virtual object to interact with the second virtual object in the chaotic induction area can reduce the number of interactive operations the first virtual object and the second virtual object must perform to achieve a given interaction purpose, which improves human-computer interaction efficiency and the user experience.
It should be noted that, when the first virtual object and a second virtual object are both in the chaotic induction area and belong to different camps, the terminal device may detect the interaction configuration of the second virtual object. When it detects that the second virtual object has an interaction configuration capable of improving the interaction capability of the first virtual object, it controls an exchange of interaction configurations between the first virtual object and the second virtual object: the interaction configuration of the second virtual object that can improve the interaction capability of the first virtual object (such as a prop accessory) is exchanged to the first virtual object, and the interaction configuration of the first virtual object that can weaken the interaction capability of the second virtual object is exchanged to the second virtual object. This strengthens the interaction capability of the first virtual object while weakening that of the second virtual object, further widening the gap in interaction capability between the first virtual object and the second virtual object (which are in a hostile relationship) in the chaotic induction area. In this case, controlling the first virtual object to interact with the second virtual object in the chaotic induction area can reduce the number of interactive operations needed to achieve a given interaction purpose, which improves human-computer interaction efficiency and the user experience.
In some embodiments, the terminal device presents an interaction configuration mode setting icon of the virtual scene; in response to an enabling operation on the interaction configuration mode setting icon, the interaction configuration mode of the virtual scene is set to an anti-interference interaction configuration mode; and when the first virtual object is in the chaotic induction area, the interaction configuration of the first virtual object is controlled to remain unchanged, or, when the first virtual object is in a chaotic induction area corresponding to a target skill of a second virtual object, the interaction configuration of the first virtual object is controlled to remain unchanged.
Here, when the first virtual object is in the chaotic induction area, the first virtual object may be controlled to enable the anti-interference interaction configuration mode in order to prevent its own interaction configuration from being changed; that is, after the anti-interference interaction configuration mode is enabled, the interaction configuration of the first virtual object in the chaotic induction area is not affected by the chaotic induction area. This further widens the gap in interaction capability between the first virtual object and a second virtual object (in a hostile relationship) in the chaotic induction area. In this case, controlling the first virtual object to interact with the second virtual object in the chaotic induction area can reduce the number of interactive operations needed to achieve a given interaction purpose, which improves human-computer interaction efficiency and the user experience.
In some embodiments, before presenting the chaotic induction area corresponding to the target skill, the terminal device presents, in the interface of the virtual scene, a second virtual object equipped with a virtual prop, where the virtual prop is in a first prop configuration. Correspondingly, the terminal device may control transforming the interaction configuration of the second virtual object in the following way: when the number of second virtual objects is one, controlling the virtual prop of the second virtual object to change from the first prop configuration to a second prop configuration, so as to transform the interaction configuration of the second virtual object.
Here, when the number of second virtual objects in the chaotic induction area is one, the configuration state of the second virtual object itself may be changed. For example, the interaction configuration of the second virtual object may be transformed by changing the configuration of its equipped virtual prop: before the second virtual object enters the chaotic induction area, the configuration state of the virtual prop is an aiming state (the first prop configuration), and after the second virtual object enters the chaotic induction area, the configuration state of the virtual prop changes from the aiming state to a storage state (the second prop configuration). In addition, the motion state of the second virtual object may be changed: for example, before entering the chaotic induction area the motion state is a crawling state, and after entering it changes from the crawling state to a standing state. The chaotic induction area therefore catches the second virtual object off guard and easily confuses it, reducing its interaction capability within a target time period after the interaction configuration is transformed.
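A minimal sketch, assuming hypothetical state names, of how a single second virtual object's prop configuration and motion state could be switched on entering the area:

    AIMING, STORED = "aiming", "stored"
    CRAWLING, STANDING = "crawling", "standing"

    class SecondVirtualObject:
        def __init__(self):
            self.prop_state = AIMING        # first prop configuration
            self.motion_state = CRAWLING

        def on_enter_chaotic_area(self) -> None:
            # Switch the equipped prop from the aiming state to the storage state
            # (first prop configuration -> second prop configuration) ...
            if self.prop_state == AIMING:
                self.prop_state = STORED
            # ... and flip the motion state, e.g. crawling -> standing.
            if self.motion_state == CRAWLING:
                self.motion_state = STANDING

    obj = SecondVirtualObject()
    obj.on_enter_chaotic_area()
    print(obj.prop_state, obj.motion_state)   # stored standing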
In some embodiments, the terminal device may control transforming the interactive configuration of the second virtual object by: when the number of the second virtual objects is at least two, and the virtual props equipped by the second virtual objects comprise main props and prop accessories, the main props or the prop accessories are controlled to be randomly exchanged between any two second virtual objects in the at least two second virtual objects.
In practical applications, the virtual prop of a second virtual object includes a main prop and prop accessories. The main props (such as primary and secondary weapons) and prop accessories in the virtual props of different second virtual objects may differ, and each virtual prop may carry multiple types of prop accessories, so random exchange of main props or prop accessories can be controlled between different second virtual objects. During an exchange, any prop accessory in the virtual props of different second virtual objects may be exchanged (in which case the main props do not change), or the main props may be exchanged (in which case the prop accessories do not change). For example, before second virtual object 1 enters the chaotic induction area, the scope in its equipped virtual prop is scope 1; after second virtual object 1 enters the chaotic induction area, scope 1 in its virtual prop is controlled to be exchanged with scope 3 configured for second virtual object 2, which is also in the chaotic induction area. That is, second virtual object 1 changes from being equipped with scope 1 to being equipped with scope 3, and second virtual object 2 changes from being equipped with scope 3 to being equipped with scope 1. Because the exchanged prop accessory does not match the original main prop, or the exchanged main prop does not match the original prop accessories, the interaction capability of the virtual props of the exchanged second virtual objects is greatly weakened.
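The random exchange of a main prop or a prop accessory between two second virtual objects could be sketched as follows; the dictionary layout and the function exchange_between are assumptions made for illustration, not the implementation of this application:

    import random

    def exchange_between(obj_a: dict, obj_b: dict, rng: random.Random) -> None:
        # Each object is modelled as {"main": ..., "accessories": {slot: accessory}}.
        # Either the main props are swapped (accessories stay), or one randomly
        # chosen accessory slot is swapped (main props stay).
        if rng.random() < 0.5:
            obj_a["main"], obj_b["main"] = obj_b["main"], obj_a["main"]
        else:
            slot = rng.choice(list(obj_a["accessories"].keys() & obj_b["accessories"].keys()))
            obj_a["accessories"][slot], obj_b["accessories"][slot] = (
                obj_b["accessories"][slot], obj_a["accessories"][slot])

    rng = random.Random(42)
    obj1 = {"main": "rifle", "accessories": {"scope": "scope 1"}}
    obj2 = {"main": "smg",   "accessories": {"scope": "scope 3"}}
    exchange_between(obj1, obj2, rng)
    print(obj1, obj2)   # one of main prop or scope has been swapped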
In some embodiments, the terminal device may further perform the following processing for each second virtual object to screen the prop accessory to be exchanged: when the virtual prop includes multiple prop accessories and prop accessories are to be randomly exchanged, obtaining the interaction data of each prop accessory, where the interaction data includes at least one of usage frequency, interaction score and interaction grade; and selecting a target prop accessory from the multiple prop accessories according to the interaction data as the prop accessory of the second virtual object to be transformed.
Here, when the virtual prop includes multiple prop accessories and prop accessories are randomly exchanged, the interaction data of the second virtual object for each prop accessory may be sorted in descending order, and the prop accessory corresponding to the highest-ranked interaction data in the sorted result is determined as the target prop accessory of the second virtual object to be transformed. In this way, the prop accessory that serves the second virtual object best is swapped away, which greatly reduces the interaction capability of the second virtual object.
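A small sketch of the descending sort over interaction data used to pick the prop accessory to swap away; usage frequency is assumed here as the interaction data, and the function name is illustrative:

    def pick_accessory_to_swap(accessories: dict) -> str:
        # accessories maps accessory name -> interaction data, e.g. usage frequency.
        # Sort in descending order and take the accessory the object relies on most,
        # so that swapping it away weakens the object as much as possible.
        ranked = sorted(accessories.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[0][0]

    usage = {"scope": 120, "muzzle": 35, "stock": 10}
    print(pick_accessory_to_swap(usage))   # scope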
In some embodiments, the terminal device may control transforming the interactive configuration of the second virtual object by: when the number of the second virtual objects is at least two and the second virtual objects are equipped with the virtual props or the virtual skills, controlling the at least two second virtual objects to exchange the equipped virtual props or the virtual skills according to the use preference of each second virtual object, so that the adaptation degree between the use preference of each second virtual object and the exchanged virtual props or virtual skills is lower than the adaptation degree threshold value.
For example, taking the exchange of virtual props: assume the virtual props equipped by the second virtual objects include virtual prop 1, virtual prop 2 and virtual prop 3. Through a neural network model, the usage preference of each second virtual object (that is, its preference for and proficiency with each virtual prop) is predicted from the role of the second virtual object in the virtual scene or from the virtual props it has historically used. Based on the usage preference of a second virtual object, the adaptation degree between each of virtual prop 1, virtual prop 2 and virtual prop 3 and that preference is determined, and the virtual prop with the highest adaptation degree is selected for interaction configuration transformation. For example, virtual prop 1, which second virtual object 1 is best at using, is exchanged to second virtual object 2, which is least good at using virtual prop 1; virtual prop 2, which second virtual object 2 is best at using, is exchanged to second virtual object 3, which is least good at using virtual prop 2; and virtual prop 3, which second virtual object 3 is best at using, is exchanged to second virtual object 1, which is least good at using virtual prop 3. In this way, virtual props or virtual skills are exchanged among players according to their different usage preferences, so that after the exchange each player is not accustomed to, or cannot use, the new virtual prop or virtual skill. Controlling the first virtual object to interact with the second virtual objects in the chaotic induction area in this situation can reduce the number of interactive operations needed to achieve a given interaction purpose, which improves human-computer interaction efficiency and the user experience. A sketch of such a preference-based reassignment is given below.
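The following is a hedged sketch of a preference-based reassignment; the greedy strategy, the function reassign_by_preference and the numeric adaptation scores are illustrative assumptions rather than the scheme claimed by this application:

    def reassign_by_preference(preferences: dict, threshold: float) -> dict:
        # preferences[obj][prop] is an adaptation score (0..1) predicted from play history.
        # Greedily hand each object the remaining prop it is least adapted to, and
        # only accept assignments whose adaptation stays below the threshold.
        remaining = set(next(iter(preferences.values())).keys())
        assignment = {}
        for obj, prefs in preferences.items():
            prop = min(remaining, key=lambda p: prefs[p])
            if prefs[prop] < threshold:
                assignment[obj] = prop
                remaining.remove(prop)
        return assignment

    prefs = {
        "obj1": {"prop1": 0.9, "prop2": 0.4, "prop3": 0.1},
        "obj2": {"prop1": 0.2, "prop2": 0.9, "prop3": 0.5},
        "obj3": {"prop1": 0.5, "prop2": 0.1, "prop3": 0.9},
    }
    print(reassign_by_preference(prefs, threshold=0.3))
    # e.g. {'obj1': 'prop3', 'obj2': 'prop1', 'obj3': 'prop2'}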
In some embodiments, the terminal device presents a virtual support prop corresponding to the target skill in the chaotic induction area; and when an area disappearance instruction triggered on the basis of the virtual support prop is received, the presentation of the chaotic induction area corresponding to the target skill is cancelled, and the interaction configuration of the second virtual object is controlled to be restored.
Here, a virtual support prop is set in the chaotic induction area as the support of the target skill: when the virtual support prop is destroyed, the chaotic induction area disappears. That is, the virtual support prop controls the display duration of the chaotic induction area, and the chaotic induction area and the virtual support prop exist or disappear together. After the chaotic induction area disappears, the interaction configuration of a second virtual object that was originally in the chaotic induction area is restored to the interaction configuration it had before the transformation.
In some embodiments, the terminal device may also present the remaining effective duration of the target skill; and when the remaining effective duration is lower than the duration threshold or zero, canceling the chaotic induction area corresponding to the target skill, and controlling to recover the interactive configuration of the second virtual object.
In practical applications, the target skill has a certain effective duration after release; that is, the chaotic induction area generated by releasing the target skill has a corresponding effective duration (for example, 5 seconds). When the remaining effective duration drops to zero or falls below a duration threshold (for example, 0.5 seconds), the target skill expires and the corresponding chaotic induction area disappears. After the chaotic induction area disappears, the interaction configuration of a second virtual object that was originally in the chaotic induction area is restored to the interaction configuration it had before the transformation.
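A minimal sketch of the countdown and expiry handling, assuming a per-frame tick, the illustrative 0.5-second threshold mentioned above, and hypothetical dictionary fields:

    def area_should_disappear(remaining_seconds: float, threshold: float = 0.5) -> bool:
        # The chaotic induction area is cancelled once the remaining effective
        # duration drops to zero or below the duration threshold.
        return remaining_seconds <= 0 or remaining_seconds < threshold

    def tick(area: dict, dt: float) -> None:
        # Called every frame: count down and, on expiry, remove the area and
        # restore the saved configurations of the objects inside it.
        area["remaining"] -= dt
        if area_should_disappear(area["remaining"]):
            area["active"] = False
            for obj in area["objects_inside"]:
                obj["config"] = obj["saved_config"]

    area = {"remaining": 0.6, "active": True,
            "objects_inside": [{"config": "swapped", "saved_config": "original"}]}
    tick(area, 0.2)   # remaining 0.4 < 0.5 -> area disappears, config restored
    print(area["active"], area["objects_inside"][0]["config"])   # False original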
In some embodiments, the terminal device may further perform virtual object detection on the chaotic induction area, and when it is detected that the second virtual object leaves the chaotic induction area, control to restore the interactive configuration of the second virtual object.
Here, after the second virtual object leaves the chaotic induction region, the interaction configuration of the second virtual object can be controlled to be restored to the interaction configuration before entering the chaotic induction region.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. Taking a virtual scene as an example of a game, the embodiment of the present application provides a chaotic skill (i.e., the above target skill) capable of releasing a chaotic induction region and performing interactive configuration transformation on a virtual object in the chaotic induction region, where the chaotic skill requires a player to first equip a corresponding virtual prop and enter the game for use, referring to fig. 5, fig. 5 is a flowchart of a control method of the virtual skill provided in the embodiment of the present application, and the method includes:
step 201: the terminal equipment controls the first virtual object to equip the virtual prop corresponding to the chaotic skill.
Here, after the user controls the first virtual object to equip the virtual item corresponding to the chaotic skill through the terminal device, the skill control corresponding to the chaotic skill is presented in the game interface.
Step 202: and judging whether the skill control is in an activated state.
Here, the skill control in the inactive state is not available, and when the skill control is determined to be in the inactive state, step 201 is executed; when it is determined that the skill control is in the activated state, step 203 is performed.
Step 203: the skill control in the activated state is highlighted.
Step 204: and judging whether a trigger operation aiming at the skill control is received.
Here, when a trigger operation for the skill control is received, step 205 is executed; otherwise step 203 is performed.
Step 205: and responding to the triggering operation aiming at the skill control, and presenting a chaotic induction area corresponding to the chaotic skill.
Here, the chaotic induction area may be an area whose center is the position of the first virtual object or the skill release position of the chaotic skill, and whose radius equals a target distance, as shown in fig. 6. Fig. 6 is a display schematic diagram of the chaotic induction area provided in the embodiment of the present application. In fig. 6, a supporting object (that is, the virtual support prop) corresponding to the chaotic skill is presented at the center of the chaotic induction area; when an area disappearance instruction triggered on the basis of the supporting object is received, the chaotic induction area corresponding to the chaotic skill is cancelled, that is, as the supporting object is destroyed and disappears, the corresponding chaotic induction area also disappears synchronously.
Step 206: and judging whether the second virtual object enters the chaotic induction area or not.
Here, the terminal device may perform virtual object detection on the chaotic induction area in real time to determine whether a second virtual object is in the chaotic induction area. Referring to fig. 7, which is a detection schematic diagram of a virtual object provided in the embodiment of the present application, the distance D between the second virtual object and the supporting object in the chaotic induction area is calculated; when the distance D is greater than the radius R of the chaotic induction area, it is determined that the second virtual object has not entered the chaotic induction area, and step 205 is executed; when the distance is less than or equal to the radius of the chaotic induction area, it is determined that the second virtual object has entered the chaotic induction area, and step 207 is executed.
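The detection illustrated by fig. 7 reduces to a distance comparison; a minimal sketch (the function name is_inside_area and the 2D coordinates are assumptions made for illustration):

    import math

    def is_inside_area(obj_pos: tuple, support_pos: tuple, radius: float) -> bool:
        # A second virtual object is inside the chaotic induction area when its
        # distance D to the supporting object is not greater than the radius R.
        dx = obj_pos[0] - support_pos[0]
        dy = obj_pos[1] - support_pos[1]
        return math.hypot(dx, dy) <= radius

    print(is_inside_area((3.0, 4.0), (0.0, 0.0), radius=6.0))   # True  (D = 5 <= 6)
    print(is_inside_area((8.0, 6.0), (0.0, 0.0), radius=6.0))   # False (D = 10 > 6)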
Step 207: and when the number of the second virtual objects is at least two, controlling to randomly exchange the interaction configuration of any two second virtual objects.
Here, in practical applications, the virtual prop of a second virtual object includes a main prop and prop accessories. The main props (such as primary and secondary weapons) and prop accessories in the virtual props of different players may differ, and each virtual prop may carry multiple types of prop accessories; random exchange of main props or prop accessories between different second virtual objects can be controlled. Referring to fig. 8, which is a schematic diagram of prop accessories provided in the embodiment of the present application, a main prop (such as a primary or secondary weapon) may be fitted with prop accessories; a main prop fitted with prop accessories gains many additional attributes and functions, and its interaction capability is greatly improved. As shown in fig. 8, the prop accessories with which virtual prop 801 may be equipped include a muzzle, a barrel, a scope, a stock and the like. During an exchange, any prop accessory in the virtual props of different second virtual objects may be exchanged (in which case the main props are not exchanged), or the main props in the virtual props may be exchanged (in which case the prop accessories are not exchanged). Because the exchanged prop accessory does not match the original main prop, or the exchanged main prop does not match the original prop accessories, the interaction capability of the virtual props of the exchanged second virtual objects is greatly weakened.
In a practical implementation, a linear congruential method may be used to randomly select the main prop or a prop accessory to be exchanged. The linear congruential method is a software algorithm that, starting from a random number seed, generates a random sequence with the following recurrence:

a_{n+1} = (b * a_n + c) mod m, with a_0 = d

where a_n denotes the n-th random number, d is the seed, and b, c and m are positive integer constants. Because of the modulo-m operation the random numbers are periodic; the length of the period is determined by m, and a larger period is better. In practical applications, different seeds produce different random sequences, and the current timestamp may be used as the seed (for example, a time()-based seed) to generate the random sequence.
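A minimal Python sketch of the linear congruential generator described above; the constants b, c and m below are common example values, not values specified by this application, and the timestamp seeding mirrors the approach mentioned above:

    import time

    def lcg_sequence(seed: int, b: int = 1103515245, c: int = 12345,
                     m: int = 2**31, count: int = 5) -> list:
        # Linear congruential generator: a_{n+1} = (b * a_n + c) mod m, a_0 = seed.
        # The modulo-m operation makes the sequence periodic; a larger m gives a
        # longer period.
        a = seed % m
        out = []
        for _ in range(count):
            a = (b * a + c) % m
            out.append(a)
        return out

    # Using the current timestamp as the seed, as described above.
    print(lcg_sequence(int(time.time())))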
Based on this randomization principle, when multiple second virtual objects are in the chaotic induction area, or the main prop of the virtual prop of each second virtual object carries multiple prop accessories, virtual props may be randomly selected in the chaotic induction area for exchange, or target prop accessories may be selected from the prop accessories for exchange, so as to control random exchange of main props or prop accessories between different second virtual objects. For example, if second virtual object A and second virtual object B are present and random exchange of prop accessories is determined, prop accessory 1 of second virtual object A is exchanged with prop accessory 1 of second virtual object B, and the main props of second virtual object A and second virtual object B are not exchanged. If second virtual object A, second virtual object B and second virtual object C are present and random exchange of main props is determined, the main prop of second virtual object A is exchanged to second virtual object B, the main prop of second virtual object B is exchanged to second virtual object C, and the main prop of second virtual object C is exchanged to second virtual object A, while the prop accessories of second virtual object A, second virtual object B and second virtual object C are not exchanged.
Referring to fig. 9, which is an exchange schematic diagram of the interaction configuration provided in the embodiment of the present application, after the main props of second virtual object A and second virtual object B are exchanged (the prop accessories they carry are not exchanged at the same time), the exchanged main prop is not adapted to the original prop accessories, so the interaction capability of the virtual prop held by each of the two players is weakened after the exchange.
In practical applications, in addition to exchanging main props or prop accessories between different second virtual objects, virtual props or virtual skills may also be exchanged between different second virtual objects. Different virtual props or virtual skills have different cooling times, and the cooling time is counted down independently by the terminal device corresponding to each second virtual object. Therefore, after virtual props or virtual skills are exchanged, if the cooling time of the received virtual prop or virtual skill is long, it cannot be used until its cooling time has counted down to zero, which reduces the interaction capability of the second virtual object within a target time period after the interaction configuration is transformed.
Referring to fig. 10 and 11, which are schematic diagrams of the exchange of interaction configurations provided in the embodiment of the present application: in fig. 10, before second virtual object 1 enters the chaotic induction area, the cooling time of its virtual prop is 10 seconds; after it enters the chaotic induction area and its virtual prop is exchanged with the virtual prop of second virtual object 2 in the chaotic induction area (whose cooling time is 30 seconds), second virtual object 1 can use the virtual prop only after waiting 30 seconds. In fig. 11, before second virtual object 1 enters the chaotic induction area, its virtual prop 1, virtual prop 2 and virtual prop 3 are all in an activated state; after second virtual object 1 enters the chaotic induction area, its virtual prop 1 is exchanged for virtual prop 4 of second virtual object 2, and since virtual prop 4 is in an inactivated state, second virtual object 1 cannot use virtual prop 4, which reduces its interaction capability.
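A hedged sketch of the cooldown behaviour shown in fig. 10, where the exchanged prop keeps its own remaining cooldown; the function name and dictionary layout are assumptions introduced for illustration:

    def swap_props_with_cooldown(obj1: dict, obj2: dict) -> None:
        # Each object holds {"prop": name, "cooldown": seconds_left}.
        # The prop is exchanged together with its remaining cooldown, which each
        # terminal tracks independently, so the receiver may have to wait longer
        # before the exchanged prop can be used again.
        obj1["prop"], obj2["prop"] = obj2["prop"], obj1["prop"]
        obj1["cooldown"], obj2["cooldown"] = obj2["cooldown"], obj1["cooldown"]

    a = {"prop": "prop A", "cooldown": 10}
    b = {"prop": "prop B", "cooldown": 30}
    swap_props_with_cooldown(a, b)
    print(a)   # {'prop': 'prop B', 'cooldown': 30} -> must now wait 30 seconds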
Step 208: and judging whether the second virtual object leaves the chaotic induction area or not.
Here, when the second virtual object leaves the chaotic induction area or the chaotic induction area disappears, step 209 is executed; otherwise, step 207 is executed.
Step 209: and controlling the second virtual object to restore the interactive configuration.
Here, when the second virtual object leaves the chaotic induction area or the chaotic induction area disappears, the interaction configuration of the second virtual object is controlled to be restored to the interaction configuration it had before entering the chaotic induction area.
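A minimal sketch of saving and restoring the interaction configuration around the chaotic induction area; the class ChaoticAreaTracker is a hypothetical name introduced for illustration:

    class ChaoticAreaTracker:
        # Remembers each object's configuration on entry so it can be restored
        # when the object leaves the area or the area disappears.
        def __init__(self):
            self._saved = {}

        def on_enter(self, obj_id: str, current_config: dict) -> None:
            self._saved[obj_id] = dict(current_config)   # snapshot before transforming

        def on_leave(self, obj_id: str) -> dict:
            return self._saved.pop(obj_id)               # configuration to restore

    tracker = ChaoticAreaTracker()
    tracker.on_enter("obj1", {"scope": "scope 1"})
    # ... the configuration is transformed while the object is inside the area ...
    print(tracker.on_leave("obj1"))   # {'scope': 'scope 1'}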
In this way, the chaotic induction area released by the chaotic skill is used to control the transformation of the interaction configuration of second virtual objects in the chaotic induction area, enriching the diversity of gameplay. Because the transformation of the interaction configuration of a second virtual object is random, uncertain and unpredictable, the interaction capability of the second virtual object is greatly weakened when it is not accustomed to, or cannot use, the transformed interaction configuration. Controlling the first virtual object to interact with the second virtual object in this situation can reduce the number of interactive operations the first virtual object and the second virtual object must perform to achieve a given interaction purpose, which improves human-computer interaction efficiency and the user experience.
Continuing with the exemplary structure of the virtual skill control device 465 as a software module provided by embodiments of the present application, in some embodiments, the software modules stored in the virtual skill control device 465 of the memory 460 of fig. 2 may include:
a first presenting module 4651, configured to present, in an interface of a virtual scene, a first virtual object with a target skill; a second presenting module 4652, configured to, in response to a release instruction for the target skill, present a chaotic induction area corresponding to the target skill, where the chaotic induction area is used to transform the interaction configuration of a virtual object in the chaotic induction area; a transformation control module 4653, configured to, when at least one second virtual object exists in the chaotic induction area, control transforming the interaction configuration of the second virtual object, so as to control the first virtual object to interact with the second virtual object whose interaction configuration has been transformed.
In some embodiments, before presenting the chaotic induction zone corresponding to the target skill, the apparatus further comprises: the instruction receiving module is used for presenting skill controls corresponding to the target skills; when the skill control is in an activated state, a release instruction for the target skill is received in response to a trigger operation for the skill control.
In some embodiments, the instruction receiving module is further configured to present a prop icon corresponding to the target skill; responding to the triggering operation aiming at the prop icon, and controlling the first virtual object to assemble a virtual prop corresponding to the target skill; and when the first virtual object successfully assembles the virtual prop, presenting a skill control corresponding to the target skill.
In some embodiments, the apparatus further comprises: the third presentation module is used for presenting state indication information used for indicating the skill control activation progress; the instruction receiving module is further configured to present the skill control in a target display style when the state indicating information indicates that the skill control is in an activated state.
In some embodiments, the second presenting module is further configured to determine a target area centered at a target position as the chaotic induction area corresponding to the target skill, and present the chaotic induction area; wherein the target position is one of the following positions: a location at which the first virtual object is located, a skill release location for the target skill.
In some embodiments, the second presenting module is further configured to display an area enclosing frame in the interface of the virtual scene in a target display style when the target position is the position of the first virtual object, where an area in the area enclosing frame is a chaotic induction area corresponding to the target skill; the device further comprises: and the movement control module is used for responding to a movement instruction of the first virtual object, controlling the first virtual object to move in the virtual scene, and controlling the area bounding box to move synchronously along with the movement of the first virtual object.
In some embodiments, before presenting the chaotic induction zone corresponding to the target skill, the apparatus further comprises: the position determining module is used for presenting a position identifier for selecting the skill release position when the target position is the skill release position corresponding to the target skill; controlling the position identifier to move in the virtual scene in response to a movement instruction for the position identifier; determining the position of the location identifier in the virtual scene as the skill release position in response to a location determination instruction for the location identifier.
In some embodiments, the apparatus further comprises: the recovery control module is used for presenting a virtual support prop corresponding to the target skill in the chaotic induction area, and the virtual support prop is used for controlling the display duration of the chaotic induction area; and when an area disappearing instruction triggered based on the virtual support prop is received, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
In some embodiments, the recovery control module is further configured to present a remaining effective duration of the target skill; and when the remaining effective duration is lower than a duration threshold or zero, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
In some embodiments, the transformation control module is further configured to determine an interaction relationship between the second virtual object and the first virtual object, the interaction relationship being used to indicate whether the second virtual object and the first virtual object belong to the same camp; and when the interaction relation indicates that the second virtual object and the first virtual object belong to different camps, controlling to transform the interaction configuration of the second virtual object.
In some embodiments, before presenting the chaotic induction area corresponding to the target skill, the apparatus further comprises: a fourth presentation module, configured to present, in an interface of the virtual scene, the second virtual object equipped with a virtual prop, where the virtual prop is in a first prop configuration; the transformation control module is further configured to, when the number of the second virtual objects is one, control the virtual prop of the second virtual object to change from the first prop configuration to a second prop configuration, so as to transform the interaction configuration of the second virtual object.
In some embodiments, the transformation control module is further configured to, when the number of the second virtual objects is at least two and the virtual props provided for the second virtual objects include main props and prop accessories, control random exchange of the main props or the prop accessories between any two second virtual objects of the at least two second virtual objects.
In some embodiments, the apparatus further comprises: an accessory screening module to perform the following for each of the second virtual objects: when the virtual prop comprises a plurality of prop accessories and the prop accessories are randomly exchanged, acquiring interactive data of each prop accessory; wherein the interaction data comprises at least one of: the use frequency, the interactive score and the interactive grade; and selecting a target prop accessory from the plurality of prop accessories according to the interactive data to serve as the prop accessory to be transformed of the second virtual object.
In some embodiments, the transformation control module is further configured to, when the number of the second virtual objects is at least two and the second virtual objects are equipped with virtual props or virtual skills, control the at least two second virtual objects to exchange the equipped virtual props or virtual skills according to usage preferences of the respective second virtual objects, so that a degree of adaptation between the usage preferences of the respective second virtual objects and the exchanged virtual props or virtual skills is lower than an adaptation threshold.
In some embodiments, the apparatus further comprises: a mode setting module, configured to present an interaction configuration mode setting icon of the virtual scene; set, in response to an enabling operation on the interaction configuration mode setting icon, the interaction configuration mode of the virtual scene to an anti-interference interaction configuration mode; and control, when the first virtual object is in the chaotic induction area corresponding to the target skill of the second virtual object, the interaction configuration of the first virtual object to remain unchanged.
In some embodiments, the recovery control module is further configured to perform virtual object detection on the chaotic induction region; and when the second virtual object is detected to leave the chaotic induction area, controlling to recover the interactive configuration of the second virtual object.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the virtual skill control method described in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to execute a control method of virtual skills provided by embodiments of the present application, for example, a method as shown in fig. 3.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (20)

1. A method of controlling a virtual skill, the method comprising:
presenting a first virtual object with target skills in an interface of a virtual scene;
responding to a release instruction aiming at the target skill, and presenting a chaotic induction area corresponding to the target skill;
the chaotic induction area is used for transforming the interactive configuration of the virtual object in the chaotic induction area;
when at least one second virtual object exists in the chaotic induction area, the interaction configuration of the second virtual object is controlled to be changed, so that the first virtual object is controlled to interact with the second virtual object after the interaction configuration is changed.
2. The method of claim 1, wherein prior to presenting the chaotic induction zone corresponding to the target skill, the method further comprises:
presenting a skill control corresponding to the target skill;
when the skill control is in an activated state, a release instruction for the target skill is received in response to a trigger operation for the skill control.
3. The method of claim 2, wherein the presenting the skill control corresponding to the target skill comprises:
presenting a prop icon corresponding to the target skill;
responding to the triggering operation aiming at the prop icon, and controlling the first virtual object to assemble a virtual prop corresponding to the target skill;
and when the first virtual object successfully assembles the virtual prop, presenting a skill control corresponding to the target skill.
4. The method of claim 2, wherein the method further comprises:
presenting status indication information for indicating the skill control activation progress;
the presenting of the skill control corresponding to the target skill comprises:
and when the state indication information indicates that the skill control is in an activated state, presenting the skill control in a target display style.
5. The method of claim 1, wherein said presenting the chaotic induction zone corresponding to the target skill comprises:
determining a target area taking a target position as a center as a chaotic induction area corresponding to the target skill, and presenting the chaotic induction area;
wherein the target position is one of the following positions: a location at which the first virtual object is located, a skill release location for the target skill.
6. The method of claim 5, wherein said presenting the chaotic induction zone comprises:
when the target position is the position of the first virtual object, displaying a region enclosure frame in a target display mode in an interface of the virtual scene, wherein a region in the region enclosure frame is a chaotic induction region corresponding to the target skill;
the method further comprises the following steps:
and in response to a movement instruction for the first virtual object, controlling the first virtual object to move in the virtual scene, and controlling the area bounding box to move synchronously along with the movement of the first virtual object.
7. The method of claim 5, wherein prior to presenting the chaotic induction zone corresponding to the target skill, the method further comprises:
when the target position is a skill release position corresponding to the target skill, presenting a position identifier for selecting the skill release position;
controlling the position identifier to move in the virtual scene in response to a movement instruction for the position identifier;
determining the position of the location identifier in the virtual scene as the skill release position in response to a location determination instruction for the location identifier.
8. The method of claim 1, wherein the controlling transforming the interactive configuration of the second virtual object comprises:
determining an interaction relationship between the second virtual object and the first virtual object, the interaction relationship being used for indicating whether the second virtual object and the first virtual object belong to the same camp;
and when the interaction relation indicates that the second virtual object and the first virtual object belong to different camps, controlling to transform the interaction configuration of the second virtual object.
9. The method of claim 1, wherein prior to presenting the chaotic induction zone corresponding to the target skill, the method further comprises:
presenting the second virtual object equipped with a virtual prop in an interface of the virtual scene, wherein the virtual prop is in a first prop configuration;
the controlling transforming the interactive configuration of the second virtual object includes:
when the number of the second virtual objects is one, controlling the virtual prop of the second virtual object to be changed from the first prop configuration to a second prop configuration so as to change the interactive configuration of the second virtual object.
10. The method of claim 1, wherein the controlling transforming the interactive configuration of the second virtual object comprises:
when the number of the second virtual objects is at least two and the virtual props equipped for the second virtual objects comprise main props and prop accessories, controlling random exchange of the main props or the prop accessories between any two second virtual objects in the at least two second virtual objects.
11. The method of claim 10, wherein the method further comprises:
performing the following for each of the second virtual objects:
when the virtual prop comprises a plurality of prop accessories and the prop accessories are randomly exchanged, acquiring interactive data of each prop accessory;
wherein the interaction data comprises at least one of: the use frequency, the interactive score and the interactive grade;
and selecting a target prop accessory from the plurality of prop accessories according to the interactive data to serve as the prop accessory to be transformed of the second virtual object.
12. The method of claim 1, wherein the controlling transforming the interactive configuration of the second virtual object comprises:
when the number of the second virtual objects is at least two and the second virtual objects are equipped with virtual props or virtual skills, controlling the at least two second virtual objects to exchange the equipped virtual props or virtual skills according to the use preference of each second virtual object, so that the adaptation degree between the use preference of each second virtual object and the exchanged virtual props or virtual skills is lower than an adaptation degree threshold value.
13. The method of claim 1, wherein the method further comprises:
presenting virtual support props corresponding to the target skills in the chaotic induction area;
and when an area disappearing instruction triggered based on the virtual support prop is received, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
14. The method of claim 1, wherein the method further comprises:
presenting the remaining effective duration of the target skill;
and when the remaining effective duration is lower than a duration threshold or zero, canceling the presentation of the chaotic induction area corresponding to the target skill, and controlling and recovering the interactive configuration of the second virtual object.
15. The method of claim 1, wherein the method further comprises:
presenting an interactive configuration mode setting icon of the virtual scene;
in response to an enabling operation on the interaction configuration mode setting icon, setting the interaction configuration mode of the virtual scene to an anti-interference interaction configuration mode;
and when the first virtual object is in the chaotic induction area, controlling the interaction configuration of the first virtual object to be kept unchanged.
16. The method of claim 1, wherein the method further comprises:
carrying out virtual object detection on the chaotic induction area;
and when the second virtual object is detected to leave the chaotic induction area, controlling to recover the interactive configuration of the second virtual object.
17. An apparatus for controlling virtual skills, the apparatus comprising:
the first presentation module is used for presenting a first virtual object with target skills in an interface of a virtual scene;
the second presentation module is used for responding to a release instruction aiming at the target skill and presenting a chaotic induction area corresponding to the target skill;
the chaotic induction area is used for transforming the interactive configuration of the virtual object in the chaotic induction area;
and the transformation control module is used for controlling, when at least one second virtual object exists in the chaotic induction area, transformation of the interaction configuration of the second virtual object, so as to control the first virtual object to interact with the second virtual object whose interaction configuration has been transformed.
18. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of controlling virtual skills of any of claims 1 to 16 when executing executable instructions stored in said memory.
19. A computer-readable storage medium storing executable instructions for implementing the method of controlling virtual skills of any of claims 1 to 16 when executed by a processor.
20. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the method of controlling virtual skills of any of claims 1 to 16.
CN202111657056.0A 2021-11-24 2021-12-30 Virtual skill control method, device, equipment, storage medium and program product Pending CN114146414A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021114032807 2021-11-24
CN202111403280 2021-11-24

Publications (1)

Publication Number Publication Date
CN114146414A true CN114146414A (en) 2022-03-08

Family

ID=80449584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111657056.0A Pending CN114146414A (en) 2021-11-24 2021-12-30 Virtual skill control method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114146414A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023231553A1 (en) * 2022-06-02 2023-12-07 腾讯科技(深圳)有限公司 Prop interaction method and apparatus in virtual scene, electronic device, computer readable storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination